Thursday, October 3, 2013

The Cost of Automation


As finite beings, we humans are constantly required to make tradeoffs.  Do I want an extra five minutes of sleep, or do I want to fry eggs for breakfast?  Do I go in for an oil change today, or do I put it off for another week?  We calculate the costs and benefits of actions on a moment-by-moment basis, and most of the time it seems we do a pretty good job of it.

However, sometimes our gut reaction or choice can be wrong.  One too many late oil changes and the gaskets in the engine start to break down.  One too many skipped breakfasts and our body starts to complain.  Sometimes we do things because they have worked for us in the past or because they are habits.  Sometimes our internal cost/benefit calculator is wrong.  Daniel Kahneman, in his book Thinking, Fast and Slow, points out that our brains are naturally 'lazy' and that we are prone to certain cognitive biases in some situations.  To overcome this we need to use our 'slow brain' and think things through in a way that exercises our reason.  That is what I hope to do in this article.  I want to take some time to slow down and think through some of the costs and benefits of automating tests.  Too often the choice to automate or not is a gut reaction or a habitual choice, but I would like to have some more rational criteria by which we can judge the merits of automating a test.

This is all very good in theory, but it is very difficult to quantify the costs of an automated test.  There are many factors, ranging from tangibles like test creation time and time spent fixing old tests, to intangibles like the fact that the type of testing you do will influence how you test, or the effort needed to build and maintain a system in which to set up and run the tests.  The more I think about it, the less it seems possible to quantify the cost of creating an automated test with a phrase like 'this test will cost x dollars or take x hours of time.'  I think it makes sense to follow Brian Marick's thoughts on this and, instead of a direct evaluation, do a tradeoff analysis.  Automated tests will usually be more expensive than manual tests, so we want to understand how much more expensive.  We need to test (as testers, that is our job), but we don't need to be constrained to only test in one way.  Rather than directly evaluate the costs and benefits of automating a test, it would be better to look at how much more it costs to automate a particular test and how much more benefit we would get out of it.

If we follow this approach we can do a comparative analysis of automated vs. manual testing.  I have outlined some of the basic factors involved in testing below, along with a comparison of the differences in cost between automating a test and executing it manually.

Test Creation Time

Typically, more time will be spent creating an automated test.  Even if you do 'record and playback' automation you will usually need to spend a bit of time editing the script afterwards.  You will also need to add the test to the regression system in some way, which typically adds to the creation time.  There are some situations where automated tests can be faster to create (e.g., you have a large list of known inputs and you want to try them all; it may be quicker to write a script that generates a number of automated tests than to manually try all the combinations), but in general it takes more time to create an automated test than to run the same test manually.
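To make the 'large list of known inputs' case concrete, here is a minimal sketch using pytest's parametrize feature; the validate_postcode function and the input list are invented for illustration, standing in for whatever is actually under test:

    import pytest

    def validate_postcode(code):
        # Stand-in for the real function under test.
        return code.replace(" ", "").isalnum()

    # A short sample list; with hundreds of known inputs, generating
    # the tests this way beats typing each combination by hand.
    KNOWN_POSTCODES = ["90210", "10001", "K1A 0B1", "", "ABCDE"]

    @pytest.mark.parametrize("postcode", KNOWN_POSTCODES)
    def test_postcode_validation(postcode):
        # pytest creates one separate test case per input value.
        assert isinstance(validate_postcode(postcode), bool)

Running pytest on this file executes one check per input, so the list can grow without any extra hand work.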

Debugging Time

Tests fail sometimes.  Sometimes the reason is a defect.  Other times it is a misunderstanding, a code or design change the test was not aware of, or any of a number of other reasons.  This is a normal part of testing, and it means that time has to be spent figuring out why the test failed.  The cost here is again typically higher for automated tests.  Unless you explicitly 'tell' the test about any code changes that affect it, it will fail.  Humans are much better than machines at reacting to code changes or misunderstandings.  Also, when an automated test finds a defect, the test may have to be manually run or verified before the issue can be logged.
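A tiny example of the kind of brittleness I mean (the function and test here are made up): an automated check encodes one exact expectation, so an intentional wording change breaks it just as loudly as a real defect would.

    def format_greeting(name):
        # Suppose a developer later changes this to "Hello, " + name + "!"
        return "Hello " + name

    def test_greeting():
        # This check fails on ANY change to the wording, defect or not;
        # a human tester would recognize a harmless rewording and move on.
        assert format_greeting("Ada") == "Hello Ada"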

Run Environment Setup

All tests have to be run in a certain environment.  That environment includes many factors, ranging from the operating system and drivers to the hardware and user types.  Automated tests usually need a specialized environment to handle things like test selection and running, as well as results comparison, analysis and reporting.  Tools like this have to be purchased, installed and configured, or developed in house, and any of these options can carry significant costs.  However, there can also be a benefit to automated tests, since they can easily be repeated in different environments (e.g., on different OS flavors).  In general, I think the relative cost here depends on how resilient the tests are when run in different environments and on how complex the automation environment needs to be.
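As a rough sketch of the repeatability benefit, pytest fixtures can be parameterized over environments so that each test runs once per environment.  The environment labels below are invented, and real provisioning (OS images, drivers, user accounts) is of course far more involved than this suggests:

    import pytest

    # Hypothetical environment labels, for illustration only.
    ENVIRONMENTS = ["win7-ie9", "ubuntu-firefox", "osx-safari"]

    @pytest.fixture(params=ENVIRONMENTS)
    def environment(request):
        # Real code would provision the named environment here.
        return request.param

    def test_application_starts(environment):
        # The same test body is repeated automatically in every environment.
        assert environment in ENVIRONMENTS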

Speed of Feedback

One of the purposes of testing is to provide accurate feedback on the state of the product.  Feedback that is received sooner has more value for several reasons.  For example, the sooner a developer gets feedback about problems in his code, the easier it is for him to remember what he did and make the necessary changes.  If an automated test is quick enough to run against every build, it can provide very timely feedback.  It should also be noted, however, that in accordance with the first point above, it usually takes more time to automate a test than to run it manually, which means that choosing to automate can sometimes mean slower initial feedback on a new feature.
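One common way to get per-build feedback is to tag a fast subset of tests and run only that subset on every build, saving slower tests for a nightly run.  A sketch using pytest markers (the marker names and tests are invented):

    import pytest

    def start_service():
        # Stand-in for launching the real application.
        return object()

    @pytest.mark.smoke      # fast check, cheap enough to run on every build
    def test_service_starts():
        assert start_service() is not None

    @pytest.mark.nightly    # slower check, scheduled less often
    def test_long_running_report():
        assert sum(range(100000)) >= 0

Running 'pytest -m smoke' on each build executes only the fast subset, keeping the feedback loop short.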

Quality of Feedback

Feedback in and of itself is not good unless it is both truthful and useful.  We need good quality feedback.  In some ways automation is better at giving good feedback, since it will do the exact same action over and over and so can provide accurate, repeatable information.  However, it is also important to understand that automated tests only give good quality feedback in certain situations.  They can easily tell you that something has changed, but they often can't tell you what has changed, why it has changed, how it has changed, or even whether the change is for the better.
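A golden-output comparison illustrates this limitation well (the report function and expected text are invented for the sketch):

    def generate_report():
        # Stand-in for the feature under test.
        return "total: 42\nstatus: ok\n"

    EXPECTED_REPORT = "total: 42\nstatus: ok\n"

    def test_report_unchanged():
        # This can say THAT the output differs from the recorded expectation,
        # but not what changed, why it changed, or whether the new output
        # is actually better.
        assert generate_report() == EXPECTED_REPORT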

Philosophical Costs

Testing is done by testers and testers are humans.  This means we cannot ignore the human factor in testing.  Different approaches, emphases and procedures in testing will lead to different human responses.  For example, if we were to try to automate every test, it could easily lead to tests being designed for their ease of automation rather than their ability to discover problems.  Another example is measuring build validity based on automated test passing rates.  Passing rates, of course, only tell you how many tests pass, not how good each passing test is; without a better understanding of the tests being run, the passing percentage by itself tells you nothing about the quality of the product.  This means it is possible to have false confidence in product quality.  It can also mean that testers (perhaps just subconsciously) may write tests to pass rather than to find defects.  An automated test and a manual test are quite different, and the tester (as a normal human being) will approach them differently.

There are of course risk factors associated with manual testing as well.  For example, humans tune out when we are bored or comfortable, which can lead us to miss obvious things if we have been through a particular area of the product many times before.  I don't think either type of testing is clearly worse than the other in this regard, but I do think it is important to understand that choosing one approach or the other will have an effect on how the tester approaches the test.

Putting it all Together

So where do we go from here?  We have some general ideas about comparisons between automated and manual tests, but how does that work in the trenches of day-to-day testing?  If I have a feature to test, how do I decide whether or not to automate?

One important consideration I see in all this is that we should not have a 'default' approach to testing.  In other words, we should not just assume that a test will be automated unless it can't be (or vice versa); we should start with what we need to test and then decide from there which approach makes the most sense.  We could use a checklist like the following to help start the thought process.

Should I automate this test?  (These questions should only be asked AFTER you have decided what you are testing.)
  1. How much more time will it take to automate?
  2. How long do I think it will take before this test breaks? In other words, how many times can I run it before I will need to touch it again?
  3. Is this an important test to run in different environments (cross platform etc.)?  Will it be robust to environment changes?
  4. Is it more important to have quick feedback now or consistent feedback in future builds?
  5. Am I trying to check a simple fact that should not change?  
  6. What could I miss if I automate this instead of run it manually?
Although it obviously would not make sense to do this level of analysis for every test we create, I think running through a checklist like this on occasion could help clarify when we should or should not automate something.  It may also suggest which categories of tests lend themselves better to automation.

Thursday, July 4, 2013

Pleasures and Pitfalls of Automation

I remember the excitement I felt when I first started to get my mind around the idea of testing using automation.  I had started a testing job right out of university, and after a while I transitioned to doing some cleanup work on the regression system.  After figuring the system out and glimpsing some of the power of automation, I started to get really excited about this idea of test automation.  There seemed to be so much about it that promised to do away with all the parts of my job I was growing to dislike.  No more boring, monotonous work!  No more manually executing the same test cases from one build to the next.  No more putting features into 'beta' mode because testing hadn't gotten to them yet.  Automation seemed like the answer to all the problems we as an under-resourced (of course) test team faced.  However, several years later, the automation we have tried to put in place hasn't lived up to all our expectations.  We still have to do boring work.  We might not have to re-run as many test cases by hand any more, but how many times have we updated the same test case in the automation system?  And we still have features going into beta due to lack of testing resources.  What happened?  Why is automation failing us?

I suspect that part of the answer lies in a lack of understanding of what automation is good at.  Are we perhaps trying to automate the wrong things?  Are we trying to automate too much?  Or maybe we haven't committed enough resources to developing a more robust system?  Where is the balance between automation and manual testing?  In order to help myself answer some of these questions, I would like to first list some of the reasons automation is good, along with some of the reasons it is bad.  Hopefully a list like this will help clarify when a test should be automated and when it should not be.

Benefits:
  • Machines don't get bored.  Humans do.
  • Machines don't make mistakes.  They will repeat the exact same actions over and over.
  • Machines are really, really fast at computation and comparison.
  • Better protection against product regressions.
  • Can provide faster feedback.
Pitfalls:
  • Can be expensive to maintain.
  • Machines don't make mistakes.  They will repeat the exact same actions over and over (yes, I know that is on the benefits list as well... sometimes automation misses things that would be obvious to a human tester).
  • Can give a false sense of confidence.
  • Can take some time and training for people to be able to use the automation system.
I'm sure there are many more items that could go on each list, and many of the items could be broken down into several parts as well.  These are just some of the obvious ones I could come up with after a few minutes of thought.

Although my initial enthusiasm for automated testing may have waned a bit over the years, I still feel that it is a useful tool in the tester's tool pouch, but I think I still have a lot to learn about how to properly wield it.  Next time I will try to look a little more closely at how we can measure the costs and benefits of an automated test.

Wednesday, July 3, 2013

Getting Started

I have been in the software testing industry for over 5 years now.  In that time I have been part of a team that transitioned from development practices following the waterfall approach to practices following a more agile approach, and I have been struggling to understand how best to use my skills.  Since Agile emphasizes the 'test early and often' approach to software development, this has meant an increased focus on automated testing, and my work has transitioned more and more towards it.

I have enjoyed the challenges that come with this new role, and as I have become more involved in this type of testing I have seen that there can be many benefits to automating tests.  However, I have also started to see more and more that there are significant pitfalls that can come with automated testing.  I have been trying to pin down exactly when automation is useful and when it should not be used.  Since I have found over the years that the best way to learn new things is to try to teach or explain them to others, I have decided to post my thoughts here as a way of working through and clarifying them.  I am hopeful that through the process of organizing my thoughts and writing them down I will get more clarity on how to apply this in my day-to-day work.