9 Questions Developers Ask About Test Automation Tools & Maintenance


In an effort to increase investment in our software delivery teams and grow our test automation knowledge, my previous blog compiled a list of questions and answers that I have faced internally while scaling test automation adoption at Trax. In part 2 of the series, I will explore questions about test automation tool features and the requirements for establishing and maintaining efficient automation.

Make sure that the tool supports what you want to cover:

  • Are you testing web apps?
  • Do you plan to run DB queries as part of your tests? Is it SQL/Mongo/other?
  • Are you performing API requests and validating responses to assure that your product functions properly?
  • Do you want to automate mobile native flows?
  • What are your other tech stack support needs for scaling test automation?

Conduct a POC on your own scenarios, and don’t rely on marketing materials, which usually demo automation of a simple login flow. Take a test case of average complexity and see whether the tool suits what you want to accomplish.

To establish a stable automation project, framework reliability is crucial. Create several scenarios as part of the tool evaluation, and make sure that the only reason for failures and unexpected behavior is that you missed something, not that the tool doesn’t support scaling up. Find companies and users who already use the tool and get their impressions. What worked for them? What didn’t? Check how this relates to your expectations and requirements.

The tool should allow creating scenarios in an easy yet scalable way. During your POC, evaluate the following tool features:

  • Creation and usage of re-usable components (functions)
  • Using parameters rather than hard-coded values to make functions more generic
  • Support for self-explanatory naming conventions, which makes reports easier to follow and new tests faster to create
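To make the first two points concrete, here is a minimal sketch of a reusable, parameterized component. The `driver` interface and element names are invented for illustration; a small fake driver stands in for whatever your tool actually exposes:

```python
def login(driver, username: str, password: str) -> None:
    """Reusable login component: parameterized, so every test can call it
    with different credentials instead of hard-coding values."""
    driver.fill("username_field", username)
    driver.fill("password_field", password)
    driver.click("login_button")

class FakeDriver:
    """Stand-in driver, used only to make this sketch self-contained."""
    def __init__(self):
        self.actions = []
    def fill(self, element, value):
        self.actions.append(("fill", element, value))
    def click(self, element):
        self.actions.append(("click", element))

driver = FakeDriver()
# The same component can now be reused across tests with different data:
login(driver, "qa_user", "s3cret")
```

The point is the shape, not the API: once `login` exists, no test needs to repeat those three steps or embed credentials.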

In my opinion, so-called “code-less automation testing”, which promises to create automation in the blink of an eye, is too good to be true. This approach is usually based on the “record and replay” technique: you click the record button, manually perform the flow in the application, and the test is ready to be replayed. While it sounds like a charm (fast and effortless), remember that there is no way to do automation by recording alone, because:

  • First, such tests are too specific, which is a clear disadvantage in the long term (and, as we already know, automation should remain efficient over time). Think about a functional change in your application flow: if you have several recorded scenarios in that area, you will need to adjust each test separately.
  • Second, only keyboard and mouse actions can be recorded, whereas a good test is much more than that. Validations, conditions, loops, DB calls, API requests, and much more must be specified manually.

Recording is only a helpful starting point, which must be followed by enhancing the test to become more dynamic, so you can keep moving fast even after you have hundreds of tests.
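To illustrate the difference (a hypothetical sketch, not any specific tool’s API): a recording is a flat list of clicks, while the enhanced version adds the parameters, validations, and loops a recorder cannot capture:

```python
# All a recorder can capture is a flat list of raw UI actions:
recorded_steps = [
    ("click", "new_item_button"),
    ("fill", "item_name", "Widget"),
    ("click", "save_button"),
]

def create_item(driver, name: str) -> None:
    """The same flow, enhanced by hand: parameterized and followed by a
    validation -- things a pure record-and-replay test cannot express."""
    driver.click("new_item_button")
    driver.fill("item_name", name)
    driver.click("save_button")
    assert name in driver.visible_items(), f"expected {name!r} to appear after save"

class FakeDriver:
    """Stand-in driver so the sketch runs without a real browser."""
    def __init__(self):
        self._items = []
        self._pending = None
    def click(self, element):
        if element == "save_button" and self._pending is not None:
            self._items.append(self._pending)
            self._pending = None
    def fill(self, element, value):
        self._pending = value
    def visible_items(self):
        return list(self._items)

driver = FakeDriver()
# A data-driven loop -- another thing recording alone cannot produce:
for name in ["Widget", "Gadget"]:
    create_item(driver, name)
```

One parameterized function now covers what would otherwise be two separately recorded (and separately maintained) scenarios.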

As can be seen, an efficient automation project requires investment. However, while it cannot be effortless, it will prove beneficial over time.

To clarify, use the coding part only when you must, and note that in most cases, especially in the long run, you should be able to create tests faster by simply pulling in existing functions that you created earlier in your automation project.

As I mentioned in the first part of this series, to provide useful feedback, automation reports must be easy to understand. I personally prefer a visual output to a long list of log lines. The following tool features ensure a clear indication of test failures and, as a result, lead to faster recovery:

  • Self-explanatory error messages, containing both the expected and the actual results
  • Marking failed steps in red to make them stand out among other steps
  • Screenshots of the application state, both as it was expected to be and as it actually appeared. In addition to taking screenshots at the time of a failure, screenshots of succeeded steps are sometimes very helpful for troubleshooting, as they help track at which stage the flow went in an unexpected direction.
  • Fetching browser logs is yet another useful feature for investigating failures, as they might provide additional insight by revealing potential network issues
  • Displaying a list of test parameters and their values at any point of test execution is quite handy for troubleshooting
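As a generic sketch of the first point (not tied to any tool), a tiny assertion helper can guarantee that every failure message names the check and shows both the expected and the actual value:

```python
def check_equal(actual, expected, label: str) -> None:
    """Fail with a self-explanatory message that names the check and
    shows both the expected and the actual result."""
    if actual != expected:
        raise AssertionError(
            f"{label}: expected {expected!r}, but got {actual!r}"
        )

# Example failure (the endpoint and status codes are made up):
try:
    check_equal(actual=404, expected=200, label="GET /orders status code")
except AssertionError as err:
    message = str(err)
# message reads: "GET /orders status code: expected 200, but got 404"
```

Compare that with a bare `assert actual == expected`, which tells the reader nothing about what was being checked or what the values were.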

Ticketing system

  • Some tools offer a quite handy feature that creates a ticket from the failure report by automatically transforming the test into a list of steps to reproduce the scenario
  • Embedding the link to the report into the ticket relieves the QA bottleneck. Letting anyone simply click the link and replay the test to easily reproduce the buggy flow is extremely helpful, e.g.: a product manager can better understand the issue and define its priority more precisely, a developer can make sure the test passes after the fix, etc.
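As a rough sketch of that idea (the field names and URL here are invented), a failed test can be turned into a reproduce-me ticket body with the replay link embedded:

```python
def build_ticket(test_name: str, steps: list, failed_step: int, report_url: str) -> str:
    """Turn a failed test into a ticket body: numbered steps to reproduce,
    the failing step marked, and a link back to the replayable report."""
    lines = [f"Bug found by automated test: {test_name}", "", "Steps to reproduce:"]
    for i, step in enumerate(steps, start=1):
        marker = "  <-- FAILED HERE" if i == failed_step else ""
        lines.append(f"{i}. {step}{marker}")
    lines += ["", f"Replay the flow: {report_url}"]
    return "\n".join(lines)

ticket = build_ticket(
    test_name="checkout_with_coupon",
    steps=["Open cart", "Apply coupon SAVE10", "Proceed to payment"],
    failed_step=2,
    report_url="https://reports.example.com/runs/1234",
)
```

Anyone reading the ticket, product manager or developer, gets both human-readable steps and a one-click way to replay the failure.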

CI/CD

  • Setting your CI/CD tool to trigger the relevant test executions in the relevant environment will help ensure that new code does not introduce new bugs
  • To reflect the status of your applications to your key stakeholders, publish daily dashboards based on automation pipelines’ statuses along with a short summary of areas where regressions were detected and a list of tickets that were reported
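The daily summary can be as simple as aggregating per-test pipeline results by area, as in this sketch (the result records are invented sample data):

```python
from collections import Counter

def summarize(results: list) -> str:
    """Aggregate per-test results into one status line per application area,
    suitable for a daily stakeholder dashboard."""
    by_area = {}
    for r in results:
        by_area.setdefault(r["area"], Counter())[r["status"]] += 1
    lines = []
    for area, counts in sorted(by_area.items()):
        total = sum(counts.values())
        lines.append(f"{area}: {counts['passed']}/{total} passed")
    return "\n".join(lines)

summary = summarize([
    {"area": "checkout", "status": "passed"},
    {"area": "checkout", "status": "failed"},
    {"area": "login", "status": "passed"},
])
```

A regression then shows up as an area whose pass ratio dropped, which is exactly the signal stakeholders need alongside the list of reported tickets.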

There is another important aspect which is essential for an automation project to succeed. I call it “around-automation maintenance”. It relates to technical framework maintenance, such as configuring infrastructure, managing code libraries, setting up Docker containers, handling synchronization issues, and so on. There are two ways to approach it:

  • Open source tools. While they are invaluable to our software development community, it is important to remember that the time required to maintain them must also be considered.
  • Commercial tools, on the other hand, can do this part on your behalf

To put it simply, commercial tools will charge you for handling framework maintenance, while with free tools you will need to do it yourself (ultimately, “free” tools are not as free as they might seem).

Consider this point while evaluating an automation tool, because it is a significant part of an automation project in terms of required effort. Since many companies deal with similar framework maintenance issues, it might make sense to consider a commercial tool which takes care of this indirect automation “headache”. The hard work to build and maintain the framework cannot be skipped, but does it make sense to “reinvent the wheel” from scratch, spending time and human resources (= money), if you can instead invest in a solution which lets you start improving the quality of your company’s products right away? There is no single truth here. To find the right answer for your particular case, simply remember that you’ll need to invest either way, and consider which investment is more expensive for you: tool cost or team resources.

We at Trax decided to take what I call “around-automation maintenance” out of our scope by delegating it to external experts, and chose an automation platform called Testim. At a very high level, the effort is divided between Trax and Testim in such a way that we focus on covering our business flows and Testim takes care of all the rest. Time has proven that while they deal with many of the above-mentioned indirect automation issues, we gain precious time and provide higher value to our company.

If you plan to automate tests for web applications, keep in mind that defining and maintaining UI element identifiers (a.k.a. locators) is yet another significant ongoing investment. Consider choosing a framework which uses AI to do this hard work for you. If the tool has an embedded self-healing mechanism, it will help your tests tolerate non-functional UI changes, which will save significant maintenance time and effort.
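A very simplified way to picture self-healing (real tools use much richer signals than this, and the DOM here is faked as a dictionary): store several locators per element and fall back when the primary one no longer matches:

```python
def find(dom: dict, locators: list) -> dict:
    """Try each stored locator in order; a non-functional UI change that
    breaks the primary locator is 'healed' by falling back to the next one."""
    for locator in locators:
        if locator in dom:
            return dom[locator]
    raise LookupError(f"no locator matched: {locators}")

# After a redesign the button's id changed, but its data-test attribute survived:
dom_after_redesign = {"data-test=submit": {"tag": "button", "text": "Submit"}}
element = find(dom_after_redesign, ["id=submit-btn", "data-test=submit"])
```

A test built on this idea keeps passing through the redesign, whereas a test pinned to the single id would fail and demand manual maintenance.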

Last but not least — adoption simplicity:

  • Installation process: how easy is the tool setup and maintenance?
  • Onboarding: how soon can you start releasing efficient tests?
  • Documentation: how clear and up-to-date is it?
  • Support: how can your idle time be shortened should you need assistance with the tool?

Key takeaways

  • Remember that fully code-less tools are too good to be true; efficient automation projects cannot be effortless
  • Keep in mind that open source tools are not free if you consider the total cost of ownership
  • Make sure the tool you evaluate meets your needs
  • Examine the tool for its maturity and obtain feedback from other users’ experience
  • Explore tool scalability
  • Ensure that the tool provides self-explanatory reports
  • Verify its integration with other development life-cycle related tools used in your company
  • Solution adoption simplicity will determine how soon the tool can start bringing value to your company and for how long its ROI will last

Coming next — stay tuned!

  • Re-usability
  • Independence
  • Effort sharing
  • Achieving the best potential coverage at the lowest cost

Katya is an Automation Developer at Trax with nearly 20 years of experience in business development in the areas of software development, implementation, application engineering, and automated testing. In her spare time, Katya enjoys photography.

