5 Questions Developers Ask About Test Automation Best Practices

(Photo by Olav Ahrens Røtne on Unsplash)

Early in my career, I made the mistake of assuming that developers understand test automation; not all of them do. But in my experience as an automation developer working closely with product developers, most of them are eager to learn. In an effort to grow our software delivery teams' investment in, and knowledge of, test automation, in my previous blogs [7 Questions Developers Ask about Test Automation, 9 Questions Developers Ask About Test Automation Tools & Maintenance] I compiled the questions I have faced internally while scaling test automation adoption at Trax, along with my responses. In this last part of the trilogy, I will close with the best practices I have learned for creating and scaling efficient automation.

Q1. Can test maintenance be avoided?

Applications evolve, and as they change, tests can break because their expected behaviour has become outdated, requiring adjustments to make them pass again. Test maintenance is therefore inevitable. Having said that, we can definitely decrease the maintenance effort if, while creating tests, we always keep in mind that we should have to change them as little as possible. Here are two examples of how this can be achieved:

  • Follow the KISS principle — Keep It Simple, Stupid
    1. Each test should cover a single functional unit (long tests complicate report analysis)
    2. Turn every logical unit (one or more steps) that can be reused in other tests into a reusable component, so it remains a single place of change. Remember the example of a functional change in an application flow? With reusable components, the change is applied in only one place, seamlessly updating all related tests.
  • Create independent/isolated tests
    Make sure each test takes care of all its prerequisites (pre-loading relevant data into the database, navigating to the correct page, etc.). Do not assume that the previous test in a suite created the necessary state: the order of tests can change, and a failure in the previous test should not impact the current test's result (relevance is one of automation's core values).
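To illustrate the two principles above, here is a minimal pytest-style sketch; the `login` flow and session data are hypothetical placeholders, not a real API:

```python
# Hypothetical reusable component: one place that encapsulates the login
# flow. If the flow changes, only this function needs updating, and every
# test that uses it is fixed at once.
def login(session, username, password):
    session["user"] = username
    session["authenticated"] = (password == "secret")
    return session

# Each test builds its own prerequisites (a fresh session) rather than
# depending on state left behind by a previous test, so test order and
# earlier failures cannot affect its result.
def test_login_succeeds():
    session = login({}, "alice", "secret")
    assert session["authenticated"]

def test_login_fails_with_wrong_password():
    session = login({}, "alice", "wrong")
    assert not session["authenticated"]

test_login_succeeds()
test_login_fails_with_wrong_password()
```

Because each test starts from an empty session, deleting, reordering or parallelizing tests has no effect on any other test's outcome.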

Remember, time spent on maintenance is time that could instead be spent adding new coverage. Investing at the test development stage will help you maintain scale by ensuring automation stability over time, instead of getting bogged down fixing numerous tests as time goes by.

Q2. Why is it so crucial for automation to be reliable and relevant?

Unreliable, outdated or irrelevant tests cause people to lose trust in automation which makes it useless. I strongly recommend following these key principles:

  • Prevent false alarms and false negatives
    Before releasing a test, make sure not only that it passes when it is expected to pass, but also that it fails when it is expected to fail (mock data, change parameter values or the test flow to simulate a failure). False negatives are our worst enemies: they quietly let tests pass without revealing the issue!
  • Keep your dashboard as “green” as possible. This will make spotting regressions easier.
    1. Request that regressions reported by automation be fixed as early as possible. Although some failures may indicate low-priority bugs, they might block the flow from testing other areas, which leads to untested functionality (hidden bugs).
    2. In case of intentional application changes, adjust the expected state in the corresponding tests as soon as possible.
    3. If a test reports an issue that is not planned to be fixed, delete the test or create a workaround. There is no need to report a failure if it doesn’t lead to action items.

Q3. Should a test stop once an error has been detected?

If a failure doesn’t block the whole scenario, the test should report the issue and continue, accumulating all errors that occur so that the report presents exhaustive information: reveal as many issues as you can.
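This pattern is often called "soft assertions". A minimal sketch of an error collector (the class name and API here are illustrative, though libraries such as pytest-check offer ready-made equivalents):

```python
# Soft assertions: record every failure, keep executing, and report all
# of them at the end of the test instead of stopping at the first one.
class ErrorCollector:
    def __init__(self):
        self.errors = []

    def check(self, condition, message):
        if not condition:
            self.errors.append(message)  # record the failure and continue

    def report(self):
        # Fail once, with an exhaustive list of everything that went wrong.
        if self.errors:
            raise AssertionError("; ".join(self.errors))

errors = ErrorCollector()
errors.check(2 + 2 == 4, "arithmetic is broken")
errors.check("title" in {"title": "Home"}, "page title missing")
errors.report()  # raises only if any check above failed
```

Use hard assertions only for conditions that genuinely block the rest of the scenario, such as a failed login before a checkout flow.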

Q4. What are some key concepts for correct prioritizing among numerous application areas that need to be tested?

There are new features, there are new bugs, and there are more product developers than automation people — how do we keep up with ongoing development while making sure we always have maximal coverage? Let me assure you, there will always be a gap. The question is what will be covered and what won’t. Automating 100% of testing coverage should not be your goal. Choose your battles. Weighing investment against benefit is what should guide such decisions. Here are some guiding thoughts for achieving the best potential coverage at the lowest cost:

  • Concentrate on functionality before checking visual appearance (your customer would most likely prefer a working button even if it has an unexpected colour).
  • Start with commonly used scenarios. Consult with your Product Owner, Professional Services, Support — anyone who can provide insights on customers’ usage.
  • Determine which kind of bugs should block the release and automate these areas first.
  • Consider automation investment as opposed to manual effort. It is often worthwhile to leave certain scenarios in manual coverage.
  • Fixing a malfunctioning test precedes adding a new test (remember the importance of automation reliability).

Q5. How should an automation project be managed in terms of knowledge sharing?

Quite similar to how any development project is managed:

  • Hold internal learning sessions with the other automation contributors on the same project. To avoid duplicate work, ask everyone to present the new reusable components they developed and to share detected bugs, preventing duplicate tickets. Grow together by raising challenges, troubleshooting and suggesting solutions as a team. Record some of these sessions for future onboarding.
  • Test design and code reviews
    Invest in test design and code reviews just as you would for software development, for the exact same reasons. The one difference between an automation project and any other development project is that automation has no QA of its own, so have another pair of eyes on your tests.

Key takeaways

  • Keep your tests modular, independent and isolated to reduce the required maintenance
  • Report exhaustive information about application malfunctions, yet only issues that lead to action items
  • Manage an automation project like any other software project
  • Always think how you can achieve best potential coverage at lowest cost

Closing thought

To be efficient, test automation requires investment. However, when approached correctly, it makes a significant impact on delivering higher quality software in a shorter time.

I would like to express my gratitude to Tristan Lombard for his enthusiastic encouragement and generous help facilitating knowledge sharing across the testing community all over the world.

Katya Aronov, Automation developer at Trax