Many organizations today are moving toward automated testing for the right reasons, but unfortunately the vast majority experience a spectacular failure. At Meetups, or when talking to potential customers who have failed at one or more automation implementations, I continue to hear puzzlement over why they failed. I often hear about the great diligence they put into planning and the talented people they hired to build the automation, which begs the question: why are they all failing?
After working with a number of clients to not only install an automated testing framework but also implement their agile process, I have consistently observed what I term a “killer dozen” of automation implementation killers. Over the next few weeks I will discuss each of these issues in more detail.
1. Unrealistic Planning and Expectations
2. Training
3. Standalone Teams
4. Starting Automation in the Wrong Place
5. Implementing in a Silo
6. Failure to Deal with Culture Issues
7. Choosing the Wrong Tool Set
8. Lack of Transparency
9. Not Truly Integrating with your Agile Practice
10. Failure to Take a Leap of Faith
11. Sorry, QA does not own Automation
Unrealistic Planning and Expectations – It never fails: shortly after I meet with a client who is planning to implement automated testing, the planning team rolls out a vision of having “x” number of tests automated by one date and “y” number by another. My very first question is always the same: what baseline are you using to determine how many test cases can be written by these dates? The answer is always the same as well: they either have no baseline or simply took a best guess. The problem is that they are now set up for failure; once they miss the planned number of test cases, the organization will start to lose faith in the automation. Determining how many test cases can be written should use the same practice scrum teams use for planning: after an initial ramp-up you can start to measure velocity, and you can start pointing test cases in order to attach realistic timelines to test case completion.
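To make the velocity approach concrete, here is a minimal sketch. All the point values and sprint numbers are illustrative assumptions, not figures from any real project:

```python
import math

# Hypothetical point values assigned to the remaining test cases
# during a pointing session (illustrative numbers only).
backlog_points = [3, 5, 2, 8, 3, 5, 1, 2, 5, 3]

# Points actually completed in the last few sprints, measured
# after the initial ramp-up period.
recent_velocity = [8, 11, 10]

avg_velocity = sum(recent_velocity) / len(recent_velocity)  # ~9.7 points/sprint
remaining = sum(backlog_points)                             # 37 points
sprints_needed = math.ceil(remaining / avg_velocity)        # 4 sprints

print(f"{remaining} points remaining at {avg_velocity:.1f} points/sprint "
      f"-> roughly {sprints_needed} sprints")
```

The point is the mechanism, not the numbers: a date projected from measured velocity has a defensible baseline behind it, where a date picked up front does not.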
Training – A simple truth exists in the software industry today that most organizations fail to fully understand: their traditional QA staff does not possess the skill set required to work with today’s agile automation test tools, and they are not going to find individuals with these skills through their traditional recruiting process. I am often called in by clients to participate in interviews with potential automation candidates, but the interviews always end the same way: after one or two questions I can determine that the candidate’s supposed expertise in automated testing is limited to record-and-playback automation. Organizations must at some point come to the realization that true automated testing requires, at a minimum, a set of basic skills that includes programming knowledge, object-oriented concepts, development tools, and unit testing concepts.
Standalone Teams – You’re an organization that has moved to the agile methodology, and after a period of time you have figured out that you need to implement automated testing in order to become truly agile. Unfortunately, many organizations then decide to create a standalone automation team. They quickly forget that one of the tenets of the agile methodology is that we focus on teams and no longer silo individuals by their function; if QA is no longer solely responsible for quality, then why would there be a separate automation team responsible for all automated testing? There are three distinct problems with having a standalone automation team in an agile environment. First, the team will never be able to keep up with test case maintenance, because they will always be automating after the fact. Second, a standalone automation team conflicts with the basic agile tenet of testing the software during the iteration and having releasable software at the end of the iteration. Functional automation should be done by the agile team as part of the iteration; if an organization is using the correct tools, following the correct automation method (Page Objects), and using processes like BDD, there is no reason not to be doing the automation during the iteration. The last issue with automating test cases after the fact through a standalone team is this: how can you have a truly automated CI/CD process if the automated test cases are built after the agile team finishes their work? You don’t have self-testing code.
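To illustrate the Page Object method mentioned above, here is a minimal sketch. The `LoginPage` class and its locators are hypothetical, and a stub stands in for a real Selenium WebDriver so the example runs standalone:

```python
class StubDriver:
    """Stands in for a Selenium WebDriver so the sketch is self-contained."""
    def __init__(self):
        self.filled = {}
        self.clicked = []

    def fill(self, locator, text):
        self.filled[locator] = text

    def click(self, locator):
        self.clicked.append(locator)


class LoginPage:
    """Page object: locators and page actions live here, not in the tests."""
    USERNAME = "#username"      # hypothetical CSS locators
    PASSWORD = "#password"
    SUBMIT = "#login-button"

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.fill(self.USERNAME, user)
        self.driver.fill(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)


# Tests talk to the page object, never to raw locators, so a UI change
# touches one class instead of every test case.
driver = StubDriver()
LoginPage(driver).login("qa_user", "secret")
```

This encapsulation is what makes in-iteration automation maintainable: when the login screen changes, the agile team updates `LoginPage` once rather than hunting through hundreds of scripts.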
Starting Automation in the Wrong Place – Let me start off by saying that if you are initiating a project to implement automated testing, do not go the route of attempting to automate all of your existing manual test cases. Trying to automate your old manual test cases is a recipe for disaster: organizations need to understand that these old manual test cases are often outdated and, more importantly, were not written in a format usable for automated tests. We suggest that a better plan is to first build a high-level suite of Smoke tests, for three reasons. First, if you are using automation tools such as Selenium, this allows you to build out your Page Objects. Second, Smoke tests are relatively easy to write and are a great place for new automation engineers to start learning to write automated tests. Last, putting together a fairly comprehensive high-level suite of Smoke tests gets you a big bang for your buck fairly quickly.
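As a sketch of what a high-level Smoke suite might look like, here is a hedged example using Python’s `unittest`. The check functions are stubs of my own invention; a real suite would drive the deployed application, for instance through the Page Objects described earlier:

```python
import unittest

# Stubbed critical-path checks; in a real suite these would exercise
# the running application (e.g., via Selenium Page Objects).
def app_is_reachable():
    return True

def user_can_log_in():
    return True


class SmokeSuite(unittest.TestCase):
    """High-level checks only: does each critical path respond at all?"""

    def test_application_reachable(self):
        self.assertTrue(app_is_reachable())

    def test_user_can_log_in(self):
        self.assertTrue(user_can_log_in())


# Run the suite programmatically so the sketch is self-contained.
suite = unittest.TestLoader().loadTestsFromTestCase(SmokeSuite)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The value of a suite like this is breadth, not depth: a handful of shallow checks over every critical path tells you quickly whether a build is even worth deeper testing.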
When deciding what should be automated next, organizations should borrow from the general philosophy the software industry has taken toward TDD (Test-Driven Development): don’t try to go back and write tests for legacy code; instead, concentrate on new development and on changes to existing code, and bring overall coverage up that way. After creating a suite of Smoke tests, we suggest that organizations then concentrate on automating within their agile iterations, so that they too are focusing on new development and changes to legacy code.