Test automation is often viewed as the holy grail: better quality, lower costs and tests that run on every feature, as if by magic, all by themselves! But that’s not always how things turn out.
Test automation requires a bird’s-eye view of the project(s), plus analysis, workshops with the business line, and the whole endeavour orchestrated by best practices, which we’ll get into later. The effort required to leverage automated testing shouldn’t be underestimated, as it must factor in time for:
- groundwork;
- development;
- set-up;
- results analysis;
- and maintenance.
So it’s easy to see that automating every feature of an application takes more time than analysing and testing it manually. Put that way, automation seems pointless. So is there any real advantage in automating software testing?
Automation is great, but when’s the right time?
Test automation is used to confirm critical application paths and regressions so as to track down anomalies as early on in the process as possible and curb the cost of fixes further down the line. Arguably, automation makes sense the instant we tackle the subject of regression. With large-scale projects, any changes, additions or fixes can occasionally produce anomalies that interfere with other previously validated features. We call this regression.
To track down regressions as early as possible, automated non-regression tests can be set up and run before code is deployed on the end machines. Setting up automated tests saves time, money and resources. Whether or not these tests should be implemented depends on specific criteria:
- Firstly, the project’s scale: for small, non-critical applications or applications with a low rate of change, non-regression tests can be performed manually; for major applications, automated testing avoids wasting time running the same tests over and over.
- Secondly, the number of tests and repetitive tasks: a person who’s used to repeating an action tens, hundreds or even thousands of times will operate much faster than someone who is not. On the other hand, the risk of that person making mistakes rises with the rate of repetition. If too many repetitive tests need to be run, the best decision is to automate them, since a robot executes the same steps identically every time.
- The third criterion is the project’s duration and complexity: for a short project lasting less than 2 to 3 months, allocating resources to implement test automation may not be the best choice. The time spent analysing, setting up the environment and writing test cases can cost more than simply assigning the tests to a functional tester. It may therefore be necessary to determine how often automated testing is required and assess its quality/cost ratio.
To sum up, test automation is necessary whenever:
- Several people are involved in a joint development.
- The project’s duration and complexity call for a reliable test basis.
- The project is considered critical.
- The project demands high quality standards.
- There are regressions with almost every deployment.
- There is a need to check that data is sent to another application (application intercommunication).
Automation according to AUSY:
We’d like to suggest some best practices we believe are essential to successful test automation.
1. model your project
AUSY initially recommends going through a business path modelling stage followed by a test-oriented modelling stage. Modelling makes it possible to determine which communication mode is best so that expectations match up to final results, and more importantly, so that the automatic tests reflect these expectations. This involves forging a common language and dynamic documentation to facilitate teamwork among project players. Modelling provides a macro view of business applications and the critical application paths that illustrate the user experience.
Business path modelling can then be extended to test paths by consulting with business experts and users to determine hot spots or recurrent regressions.
To facilitate modelling workshops, our choices are based on the following tools:
- Enterprise Architect, developed by Sparx Systems, is a comprehensive modelling solution. We use this tool as a platform to exchange the files and methods we use in workshops with the client.
- draw.io is a diagramming tool that comes with built-in BPMN templates. It’s very easy to use and we recommend it for its simplicity, its price and its Jira integration capability.
- Yest, coupled with Jira, facilitates interaction between the business line and the test teams. Yest lets the PO or business analyst create test paths from Jira, which are then refined in Yest by a tester who enriches the data sets and adds the necessary test functionalities. Yest is then used to build the automated testing framework through keywords, whether existing or yet to be developed, and to run the tests automatically.
Once the model has been completed, it’s easy to adapt it to automation. Automation specialists know how to write consistent automated test paths that avoid the trap of automating unnecessary or unstable software components.
This step in the modelling process greatly facilitates the choice of tests to automate and tests to keep manual. It saves a great deal of time, and we attach great importance to it. One of the most common mistakes is trying to automate every single functional test, which can lead to wasted time and unnecessary costs. Raising in-house teams’ awareness of this aspect is one of our biggest concerns.
2. use BDD (Behaviour Driven Development)
To facilitate test automation, AUSY recommends using BDD (Behaviour Driven Development). AUSY’s teams rely on this agile methodology, aimed at designing functional tests using a widely understood natural language (Gherkin). The three major principles of BDD are:
- Involving non-technical teams in the project through the use of language that’s easy to understand and use. No advanced programming or development language skills required.
- Automated scenarios to describe user behaviour, as shown in the example below:
- Thanks to Gherkin, end-users can describe what they want in very precise terms. This gives the technical teams a better grasp of what’s needed. Tools like RobotFramework enable developers to turn that language into executable test code.
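A scenario of the kind described above, written in Gherkin, might look like this (a hypothetical login example; the feature and step wording are illustrative, not taken from a specific project):

```gherkin
Feature: User login
  As a registered user
  I want to log in to the application
  So that I can access my personal dashboard

  Scenario: Successful login with valid credentials
    Given I am on the login page
    When I enter a valid username and password
    And I click the "Log in" button
    Then I am redirected to my dashboard
```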
In our case, for example, with RobotFramework:
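A hypothetical sketch of how such a scenario is wired to executable keywords in RobotFramework (the keyword names, locators and URL are assumptions for illustration; the keywords called inside them are standard SeleniumLibrary keywords):

```robotframework
*** Settings ***
Library    SeleniumLibrary

*** Test Cases ***
Successful Login With Valid Credentials
    Given I Am On The Login Page
    When I Enter A Valid Username And Password
    Then I Am Redirected To My Dashboard

*** Keywords ***
I Am On The Login Page
    Open Browser    https://example.com/login    chrome

I Enter A Valid Username And Password
    Input Text        name:username    demo_user
    Input Password    name:password    demo_password
    Click Button      id:login-submit

I Am Redirected To My Dashboard
    Wait Until Location Contains    /dashboard
    Close Browser
```

RobotFramework strips the Given/When/Then prefixes when matching keywords, so the test case reads like the Gherkin scenario while each step maps to executable code.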
The link between the described behaviour and functions has been made, and the test can now be run.
AUSY has chosen to rely on this method because it enhances interaction among the players, fosters better teamwork, provides a more comprehensive overview of the project and ultimately improves the quality of the deliverables through the various iteration phases.
Coupling automation as early as possible with BDD and modelling makes it possible to better understand the end product at each step, gain confidence and quality, stabilize versions, and provide trouble-free upgrades.
3. technical project analysis and feasibility
Another best practice is the careful choice of collaborative test management tools and of the automation robot itself.
a. List every technology involved
Some automation tools and frameworks only work for web apps, others only for fat clients and some can’t do API or SOAP testing. So it’s important to carefully analyse all technologies and determine which components are essential for automation, depending on the work environment.
b. Determine the best automation tools for your project’s technical specs
While juggling with several tools is possible, it requires people with multiple skills. Plus, configuration concerns inherent to each tool can arise and multiply the potential number of snag points.
As far as the choice of robot is concerned, AUSY recommends the use of RobotFramework because it also supports web app, API and fat client testing. In addition, a plethora of existing Python libraries can be used or developed if necessary without too much complexity. Plus, RobotFramework uses the highly popular, open and permissive Python language and relies on a large community to solve all kinds of problems. Lastly, using the Python language can make up for a lack of human resources. Python developers with a perfect command of the language can then be rapidly hired and trained to use best automation practices.
We’ll take another look at RobotFramework and its advantages in greater detail in a future article.
4. the scripting stage
Once the test management tools and the robot are selected, it’s time for development, whether in Java, Python, Gherkin, SQL etc. Whatever language you choose, this is the point where script maintainability and automation are determined. It’s now important to maintain a structure that’s consistent with the project and the test structure: Campaign -> Test plan -> Test suite -> Test case -> Test.
Scripts and associated artefacts must be developed so that the test can be run in line with the results expected as well as in line with the acceptance criteria. To successfully complete this step, AUSY recommends following best development practices such as:
a. Organising automated testing like a development project
Organising the tests and the associated naming schemes is a key issue when designing automated tests. As the volume of tests grows rapidly, it is crucial to keep things orderly. In addition, building tests into a development process also means trying out these automated tests in a unit test phase, which is all the easier to implement in a dev-type project hierarchy.
b. Variabilize your data and objects
The various objects we need to interact with multiply as we progress through each step of the automation path.
All too often, data embedded directly in the code leads to an enormous loss of time in maintenance, as well as issues when running repetitions on different data sets. At the same time, it is widely accepted that a separate test case cannot be written for each different data set.
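The principle of separating data from test logic can be sketched in a few lines of Python. This is a minimal illustration, not AUSY’s framework: the `apply_discount` function is a hypothetical feature under test, and the data sets would normally live in an external file.

```python
# Hypothetical feature under test (stand-in for real application logic).
def apply_discount(price, rate):
    return round(price * (1 - rate), 2)

# Data sets kept outside the test logic: easy to extend, easy to maintain.
# In practice these would be loaded from CSV, JSON or a test-management tool.
TEST_DATA = [
    {"price": 100.0, "rate": 0.10, "expected": 90.0},
    {"price": 59.99, "rate": 0.00, "expected": 59.99},
    {"price": 20.0,  "rate": 0.50, "expected": 10.0},
]

def run_data_driven_tests(cases):
    """Run the same test case over every data set; return the failing cases."""
    failures = []
    for case in cases:
        actual = apply_discount(case["price"], case["rate"])
        if actual != case["expected"]:
            failures.append((case, actual))
    return failures
```

One test case, replayed over every data set: adding a new scenario means adding a row of data, not writing a new script.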
● Choice of attribute(s)
i. Web Apps
Possibilities include identifiers based on ID, CSS, specific tag attributes and, of course, XPaths. Recommendations clearly favour IDs, which are stable and not subject to frequent change. When no IDs are available, there are the other, less stable attributes, and the risk of having to perform significant maintenance on these objects is high.
That leaves the XPath solution. It is not popular with developers because XPath, as originally used, relied on an absolute tree structure: the slightest change (addition of a column, a menu, a CTA, etc.) required maintenance on the objects. Today, XPath usage makes it possible to locate objects directly through an attribute or a text string, or relative to a table, and to use that as an entry point for checking the corresponding values in the nth column without having to maintain every object.
ii. Fat clients
For fat clients we use an inspector (Inspect.exe, provided with the Windows SDK) that retrieves all the information on an object, including its name, class or ID.
All these choices remain stable and require very little maintenance.
Sometimes the DOM (Document Object Model) has to be carefully analysed to determine whether the degree of complexity is necessary or whether an alternative solution can be implemented. Xpath is sometimes the only feasible solution when every attribute produces several results. In these instances, it may be necessary to contact the developer team to add IDs to the tags to avoid unstable dependencies. Making changes to the development process is sometimes necessary.
We recommend keeping an up-to-date file with each object identifier, the application version it applies to, and a small screenshot, to quickly locate and maintain the object in question.
The robustness of an object identifier is measured by its ability to withstand code changes. In web apps, front-end development is often the cause of the maintenance required to keep automated tests functional. We can enhance the robustness of our object identifiers either by adding an ID or a new attribute to the tags used for automation (very time-consuming) or by improving our XPaths. A recorded XPath is often an absolute path which is, as previously stated, highly unstable and therefore not recommended.
For instance: if we capture the search field on the site https://www.google.fr with a recorder, we typically get an absolute XPath along the lines of /html/body/div[1]/div[3]/form/div[2]/input (an illustrative path; the exact structure varies by version). The slightest change on this page and our XPath has to be redone, whereas by analysing the attributes of the relevant field we find a class, a name, a type, a title, a label, etc.
Now all we have to do is determine and check whether these pieces of information are unique. There is only one text-type input field with the name attribute "q", so we can choose these two items to define a robust XPath, since it’s highly unlikely that two unique identifiers will be changed at the same time. The result: //input[@type="text"]|//input[@name="q"]
N.B.: the pipe | between the two expressions indicates OR, so the locator matches if either expression does.
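Checking attribute uniqueness can itself be automated. A minimal Python sketch using only the standard library; the HTML fragment is a stand-in for the real page, with illustrative attribute values:

```python
from html.parser import HTMLParser

# Stand-in for the page under test (illustrative markup, not the real site).
HTML = """
<form>
  <input type="hidden" name="source" value="hp">
  <input type="text" name="q" title="Search" class="search-box">
  <input type="submit" value="Search">
</form>
"""

class AttributeCounter(HTMLParser):
    """Count <input> tags carrying a given (attribute, value) pair."""
    def __init__(self, attr, value):
        super().__init__()
        self.attr, self.value, self.count = attr, value, 0

    def handle_starttag(self, tag, attrs):
        if tag == "input" and dict(attrs).get(self.attr) == self.value:
            self.count += 1

def count_matches(attr, value):
    counter = AttributeCounter(attr, value)
    counter.feed(HTML)
    return counter.count
```

If `count_matches("name", "q")` returns 1, the attribute is unique on the page and safe to use as a locator; a count above 1 flags an ambiguous identifier before it breaks a test.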
In terms of maintainability, the automation specialists’ nightmare is having to spend too much time editing previously written test cases “simply” to make something that once worked work again.
Beyond the irony of regression affecting non-regression tests, maintainability has to be designed in from the foundations:
- Variabilize every object;
- Separate the files where the objects are located, e.g. one general file with the page menu and URLs, another for the home page;
- Use an understandable naming scheme;
- Organise your project properly so that every part or object can be located easily later on.
The risk of not classifying everything is ending up with variables in the tests that are wrong because they haven’t been clearly categorised and defined. Solving this type of problem results in a significant loss of time.
5. using stubs or mocks
We recommend using stubs or mocks to create a closed environment and to test integration as soon as possible. Python libraries can be used to create mocks for API testing. In more complex cases, a stub solution developed in Java, Python or any other language should be considered in order to be able to run isolated tests.
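As an illustration of mocking an outbound API call so a test runs in a closed environment, here is a minimal Python sketch using the standard library’s unittest.mock; the OrderService class and its endpoint are hypothetical, not from a real project:

```python
from unittest.mock import Mock

class OrderService:
    """Service under test: fetches an order through an injected HTTP client."""
    def __init__(self, http_client):
        self.http = http_client

    def order_total(self, order_id):
        response = self.http.get(f"/orders/{order_id}")
        return sum(item["price"] for item in response["items"])

def test_order_total_with_mock():
    # The mock stands in for the real API: no network, deterministic data.
    fake_http = Mock()
    fake_http.get.return_value = {"items": [{"price": 10.0}, {"price": 2.5}]}
    service = OrderService(fake_http)
    assert service.order_total(42) == 12.5
    # The mock also records the call, so we can check the right endpoint was hit.
    fake_http.get.assert_called_once_with("/orders/42")
```

Because the dependency is injected, the same service runs unchanged against the real client in integration and against the mock in isolated tests.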
6. collaborative information and results sharing
Finally, AUSY recommends using collaborative software or tools (Jira/Xray) so that results can be shared with all players, and potentially the client. This gives everyone working on the project an overview of the results, and an instantaneous view of what’s been accomplished. Once again, there are considerable time savings to be had.
Advantages of automation
The take-away from this article is that test automation has a lot of pluses. Needless to say, testing as early as possible and throughout development improves software quality and efficiency, and cuts the cost and time spent on regression fixes. The benefits of automation lie in better test coverage, more regular and wider-scale testing, less risk of error, and less time between the moment a need is expressed and production launch.
Obviously, you have to have what it takes to handle all the steps and adjustments required for it to work smoothly. AUSY stands out for the way it supports its customers step by step through their digital transformation and test automation projects. To achieve this, AUSY relies on the complementary skills of its pool of test coaches. Long-standing experience allows this team to deliver an immediate response to a large number of complex issues, saving you a significant amount of project time. Their coaching expertise and adaptability are big assets in training your teams, getting them up to speed on automation best practices and supporting them, as per the customer’s requirements, until they are fully autonomous.