This is a repost of a recent Gartner report. The original report can be found here
–
RESEARCH FROM GARTNER
The Eight Essentials When Moving to Automated Software Testing
When implemented correctly, automated testing can deliver strong benefits, helping organizations deliver higher-quality software at a faster pace. This research provides application leaders with eight essentials for realizing improvements in delivering high-quality software.
Key Challenges
- Many organizations are not prepared for the impact that moving to automated software testing will have on development practices, organizational changes and key financial elements.
- Building automated tests is a complex process that requires expertise – it can’t be easily simplified by just acquiring the tools required for automation.
- Most types of testing can benefit from automation, but automation does not fully eliminate the need for manual testing and is not always the most effective way to find software defects.
- Application development teams tend to start their automation at the graphical user interface (GUI) level. However, GUI tests are expensive to build, slow to execute, less effective and fragile to maintain.
Recommendations
Application leaders who are modernizing application development by improving developer effectiveness should:
- Treat test automation like a software development project that requires developing and managing the automation code and its related components, by dedicating time and skilled resources to test automation efforts.
- Develop the right strategy for automation by setting simple but clear goals for test automation.
- Select test automation solutions that will match teams’ skills and fit the development style, application profile and the different needs of application development (AD) teams.
- Adopt a layered test strategy by using the test automation pyramid as a way to organize test automation efforts based on where they provide the greatest benefit.
Introduction
The shift to agile development methods and DevOps continues to intensify as a result of business demands for IT organizations to deliver high-quality software at an increasingly faster pace. Further, as applications under development become more complex in nature, constantly changing to include new technologies and new feature sets, manual testing can’t keep up. This has driven the need for increased levels of test automation. Test automation can deliver strong benefits that include:
- Shorter test cycle times, as well as shorter development and delivery times.
- Improved regression coverage.
- Freeing up resources from repetitive testing tasks for other, higher-value activities.
- Elimination of human error that can occur when manually testing.
- Improved resource utilization (as tests can be run at any time).
- A consistent testing process, especially when test automation efforts are connected to continuous integration efforts.
Regardless of whether a traditional or agile development approach is applied, embarking on test automation needs careful business preparation and a thorough understanding of where it can or can’t deliver benefits. Setting the right expectations regarding what test automation can provide and what is needed for it to be successful is key. Unrealistic expectations cause many automation initiatives to fail.
Moving to automated software testing requires the adoption of best practices such as behavior-driven development (BDD), developing with testability in mind, and continuous integration and testing. They are key elements to improving the overall development process. Many Gartner clients consider the procurement of tools as a primary step in transitioning from manual testing to test automation, but building automated tests is a complex process and can’t be simplified solely by acquiring the required tools for automation. When done well, the merits of test automation are irrefutable, but when done badly, AD teams can end up wasting time, effort and money. This research outlines eight things to consider when preparing for test automation initiatives (see Figure 1). The following recommendations will increase your likelihood of succeeding in adopting test automation as an integral part of your development and testing approach.
Figure 1. Eight Essentials for Automated Software Testing. Source: Gartner (November 2017)
Analysis
Start Your Test Automation Initiatives With a Clear Set of Goals
It is important to define a simple – but clear and balanced – set of measurable goals when automating tests. This will help in identifying the right approach to automation and selecting the right tools. Shifting from a model that is characterized by mostly manual testing to one that leverages tools to automate increasingly large portions of the application under test will cause uncertainty and friction in the organization. Too often, test automation is seen as some kind of “magic bullet” that will dramatically reduce the cost of testing, or it is assumed that 100% coverage is readily obtainable or even desirable. The fundamental purpose of testing is to find defects early and reduce quality risks by not allowing defects to propagate. The primary goal of automation is to remove repetitive, error-prone manual tasks so that people can focus on adding value, and to ensure that tests are executed regularly without the need for human intervention. Automation will help teams find defects earlier, resulting in increased confidence in meeting test and project deadlines. When an automated regression test suite is run as part of an automated build process, regression bugs can be found early and feedback provided immediately each time new code is checked in. This enables programmers to troubleshoot problems quickly, thus preventing low-quality builds.
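As an illustration of wiring a regression suite into the build, here is a minimal sketch, assuming Python, pytest and a hypothetical `regression` test marker; the names and flags are examples, not a prescribed setup:

```python
# Illustrative build-gate script: run the automated regression suite on every
# check-in and report failure to the CI server via the exit code.
# Assumes pytest is installed and tests are tagged with a "regression" marker.
import subprocess
import sys


def run_regression_suite() -> int:
    """Run the regression-marked tests and return pytest's exit code."""
    result = subprocess.run(
        ["pytest", "-m", "regression", "-q", "--maxfail=5"],
        check=False,
    )
    return result.returncode


if __name__ == "__main__":
    # A non-zero exit code tells the build server to mark the build as failed,
    # giving programmers immediate feedback on regressions.
    sys.exit(run_regression_suite())
```

A continuous integration server would typically invoke a script like this (or call the test runner directly) on every commit, so the feedback loop described above stays automatic.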
Another important goal of automation is to free up testers from having to repeatedly execute regression tests manually. That freedom will allow your team to focus on more insightful exploratory testing. If automation is applied to the steps required to set up and run exploratory testing, more time can be allocated to investigating potentially weak parts of the system.
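For example, the setup work before an exploratory session can be scripted. The sketch below assumes a hypothetical staging environment and admin API; the endpoints and payloads are purely illustrative:

```python
# Sketch of automating the setup steps before an exploratory-testing session:
# the tester still explores manually, but no longer creates accounts and seed
# data by hand. Endpoints and fields below are hypothetical.
import requests

BASE_URL = "https://staging.example.com/api"  # placeholder environment


def seed_exploratory_scenario() -> None:
    # Create a disposable user account to explore with.
    requests.post(
        f"{BASE_URL}/users",
        json={"name": "explorer", "role": "customer"},
        timeout=10,
    ).raise_for_status()
    # Pre-load orders of varying sizes so edge cases are reachable quickly.
    for amount in (0.0, 19.99, 10_000.0):
        requests.post(
            f"{BASE_URL}/orders",
            json={"user": "explorer", "amount": amount},
            timeout=10,
        ).raise_for_status()


if __name__ == "__main__":
    seed_exploratory_scenario()
```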
Setting test automation goals enables the whole team to experience the benefits that can be achieved from automation. Understand where the benefits will come from and the type of benefits you can expect, which may include acceleration of the code-and-test process, repeatability and improved cycle time for regression testing, higher test efficiency via flexible test management processes, and prevention of the inaccuracies that manual processes inherently introduce. If affected parties see a benefit, they will embrace automation. However, if automation is seen as just another form of management control, it will not be successful. Therefore, list goals and develop an implementation timetable that provides for incremental improvement.
Automation Requires Significant Investment That May Not Pay Off Immediately
Test automation requires significant initial investment, skilled employees and commitment. Evaluate the current state of manual tests and determine the most effective approach for transitioning these testing assets to the new paradigm of automation. The existing structure of a manual test may not be suited for automation, in which case a complete rewrite of the test to support automation may be necessary. Alternatively, relevant components of existing manual tests (such as input values, expected results, navigational path and test data) may be extracted and reused for automation. It takes time and research to decide which test frameworks to use and whether to build them in-house or use third-party commercial or open-source tools. Additional hardware, software or other computing resources will probably be required. Whether testing is manual or automated, it is important to understand the cost elements of both (see Table 1). Some cost factors apply to both manual and automated testing. These costs generally increase when automated testing is introduced, but the additional investment driven by automation initiatives can be equally beneficial and justified for manual testing.
Table 1. Cost Considerations
| Cost Factors Associated With Test Automation | Cost Factors Associated With Both Test Automation and Manual Testing |
|---|---|
| Tool license cost | Test case (design, creation and maintenance) |
| Tool training cost | Test data (design, creation and maintenance) |
| Hardware, software and other computing resources (additional or upgrade) | Test execution or implementation |
| Test automation environment (design, creation and maintenance) | Test results analysis and reporting |
Source: Gartner (November 2017)
Testers’ skills vary. Some are domain experts, having come from the end-user community or worked as business analysts, while others have strong technical skills, enabling them to better understand the underlying system architecture. Designing, implementing and maintaining an automated test environment is technical in nature and, as such, requires individuals with strong programming skills and technical backgrounds. Moreover, automating other types of testing, such as GUI or API/service-layer testing, requires knowledge and understanding of technologies such as the Document Object Model (DOM), JavaScript, the REST architectural style and web services concepts. Make sure that enough support is provided in terms of necessary training and resources. Provide an environment where programmers and testers can collaborate closely on test automation, and make effective use of programmers to fill skills gaps in building test automation.
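As a brief illustration of what an API/service-layer test involves, here is a minimal sketch using Python, pytest and the requests library; the endpoint, fields and expected values are hypothetical:

```python
# Minimal API/service-layer test sketch. The service URL, resource and JSON
# fields are placeholders for illustration only.
import requests

BASE_URL = "https://api.example.com"  # hypothetical service under test


def test_get_order_returns_expected_fields():
    response = requests.get(f"{BASE_URL}/orders/1042", timeout=10)
    # Verify the HTTP status code and the response content type.
    assert response.status_code == 200
    assert response.headers["Content-Type"].startswith("application/json")
    # Verify a few business-relevant fields in the JSON payload.
    order = response.json()
    assert order["id"] == 1042
    assert order["status"] in {"open", "shipped", "closed"}
```

Tests at this layer exercise business logic below the GUI, which is why they tend to be faster and less fragile than GUI-level tests.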
Test automation is an activity in which every stakeholder is expected to participate. As the team shifts to automation, roles will become more specialized. Changing the makeup of the test team is essential if automation is to be successful, and educating the team early about the intended changes will help reduce anxiety over both roles and practices. When addressed correctly, the shift toward automation should get everybody on the team very excited and ready to participate in the organizational and technical change.
Focus Automation on the Areas With the Greatest Benefit
Not every scenario is suitable for test automation and automation is not always the most effective way to find all software defects. Some tests do not justify investment in automation. Table 2 expands on the key factors to consider before deciding whether or not the effort to automate is justified.
Table 2. Factors to Consider – When to Automate or Not
| Factors | Consider Automation When | Caution |
|---|---|---|
| Frequency of testing | Tests are run regularly as part of a major or minor release cycle, or are used in regression testing and repeated at least twice. | It may not be efficient to automate a test if it is a one-off, or if it is run only once a year and the application under test changes within the year. |
| Complexity of automation | Complex steps are error-prone, tedious and time-consuming to execute, but repeatable in nature. | Certain test scripts may not be cost-effective to automate – for example, test scripts that would require developing substantial program code, API calls that would need to interact with unavailable external systems,* or cases where the application being tested is incompatible with existing automated test solutions. |
| Life cycle stage of application under test | Older systems are being refactored – reusing test data would be possible, and recoding the automated environment to make it compatible with the new architecture would be necessary. This is particularly true for legacy systems that have been wrapped in an API; an automated API test can then be used to validate the new service. | It is not recommended to automate tests for a system that is nearing the end of its life cycle, because there is little value to be gained from such a short-lived automation initiative. |
| Legacy systems | A test automation strategy is in place that focuses on characterization tests and on building tests that are harnessed around an existing code base and refactoring.1 (A minimal characterization-test sketch follows the table.) | Legacy code may have entwined database access, input/output, business logic and presentation code, and it may not be clear where to hook into the code to automate a test. Take a pragmatic and incremental approach to building test suites for legacy code; focus on the most active and high-risk areas of the code first. Build a new test for all new functionality and escaped defects. |
*May be addressed with service virtualization for simplification.
1. “Unit Testing Legacy Code,” ZeaLake; M. Feathers, “Working Effectively With Legacy Code,” Prentice Hall, 2004.
Source: Gartner (November 2017)
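As referenced in Table 2, a characterization test pins down the current behavior of legacy code so it can be refactored safely. Below is a minimal sketch, assuming pytest; the legacy routine is a stand-in defined inline to keep the example self-contained, and the expected value represents output recorded from the existing system rather than a specification:

```python
# Characterization-test sketch for legacy code: record what the system does
# today and assert that refactoring does not change it. The function below is
# a hypothetical stand-in for an untouched legacy routine.


def legacy_invoice_total(quantity, unit_price, tax_rate):
    # Imagine this is existing legacy code whose intent is unclear.
    subtotal = quantity * unit_price
    return round(subtotal + subtotal * tax_rate, 2)


def test_characterizes_current_invoice_behavior():
    # Expected value recorded from a run of the existing system for these inputs.
    assert legacy_invoice_total(3, 19.99, 0.07) == 64.17
```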
Strike a balance between the cost of automation and the amount of valuable time consumed by manually doing the test – if it’s easy to do manually, and automating wouldn’t be quick, just keep it manual.
Don’t Try to Automate Everything – Manual Testing Still Has a Place
Almost all types and levels of testing can be automated. However, while there are multiple benefits of test automation, simply trying to automate tests without understanding when automation is or is not useful will not work. Usability testing and exploratory testing are examples of testing that need human intelligence and intervention, and they do not justify an investment in automation. Automation scripts may be useful for setting up scenarios and speeding up the subsequent execution of usability or exploratory testing. However, assessing the usability of an application, or learning more about the software for future improvement, can’t be automated.
Test automation is not a substitute for human testers. Teams will need to leverage both manual and automated testing to ensure software quality prior to launch. In mobile testing, for example, simply performing automated tests will not work, because mobile testing requires a significant amount of manual testing and testing in real-life environments. For example, testing device-specific functions, such as location data and other environmental sensor data, is hard to do in a lab situation and so requires manual testing in the field. A similar case applies to Internet of Things (IoT) applications. While automation is a fundamental need, do not expect all testing to be handled this way. Employ a mixture of manual testing and other services for in-the-field testing.
Test Automation Is More Than Just Functional Testing at the GUI Level
Most teams start their automation effort with GUI functional tests, but because GUI components tend to change frequently, these tests are expensive to build, slow to execute and fragile to maintain. To ensure that you find problems early and often, a combination of different testing levels and types is needed to achieve the desired level of quality and to mitigate the risks associated with defects. Think about automating not just GUI functional testing, but also other types of tests, such as:
- Static testing
- Build verification tests (BVT) as part of the continuous integration environment
- Unit or component tests
- API and web service tests
- Performance, load and security tests
The concept of the test automation pyramid, generally credited to Mike Cohn of Mountain Goat Software (see Figure 2 and Note 1), is a good way to organize your test automation efforts based on where they do the most good. The lowest tier of the pyramid is mainly made up of unit tests and component tests, which represent the bulk of the automated tests; concentrating tests at this level increases execution speed and reduces test implementation costs. The top tier consists of the tests done through the GUI and represents the smallest automation effort. User acceptance tests and exploratory testing are represented as tests done manually.
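To make the bottom tier concrete, here is a minimal unit-test sketch, assuming pytest; the function under test is a toy example included only to keep the snippet self-contained:

```python
# Bottom-tier sketch: a fast, isolated unit test. Tests like these are cheap
# to run on every build and make up the broad base of the pyramid.
import pytest


def apply_discount(price: float, percent: float) -> float:
    """Toy production function, defined here only to keep the example self-contained."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


def test_apply_discount_happy_path():
    assert apply_discount(100.0, 15) == 85.0


def test_apply_discount_rejects_invalid_percentage():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```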
Because each type of test (unit, API/service and UI tests) has its own purpose, associated cost and ROI, a solid test automation strategy should detail the amount of time your teams should spend on each area. It should also be recognized that goals will vary based on the type of project, just as the type of testing will vary. New custom development versus packaged software implementations or maintenance work requires a different balance of activities. Focus efforts on the highest-value tests – this allows you to establish the ROI.
Figure 2. Test Automation Pyramid. Adapted from the model devised by Mike Cohn. Source: Gartner (November 2017)
As companies practice continuous integration, they tend to incorporate a greater use of test automation that enables testing to be shifted further upstream (shift-left testing). This includes automatic execution of static analysis, execution of unit test fixtures and the use of functional automation. These companies recognize that a consistent daily cycle will drive more consistent success and earlier identification of defects, and that continuous attention to technical excellence and good design enhances agility.
Test Automation Requires a Good Test Design and Regular Maintenance
Test design skills have a huge impact on whether automation pays off right away. Poor practices produce tests that are hard to understand and maintain, and may produce hard-to-interpret results or false failures that take time to research.
- False positives – It is expected that some tests will fail when there actually isn’t any defect in the application. Anything that changes the parameters of the tests, or doesn’t fit into the existing script, has the potential to produce a false positive, and it can be time-consuming to investigate and confirm false positives.
- Maintenance overhead – As the application or system changes, you need to continually assess the usefulness and efficacy of each test in order for test automation to work. New scripts must be written to cater for new features and directions, and teams have to write and maintain these tests. This is where BDD can help: because tests are tied directly to the acceptance criteria, you are less likely to build tests that break easily (see the sketch after this list).
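To illustrate tying a test directly to an acceptance criterion (this is a plain test-first sketch rather than a full BDD toolchain such as Cucumber), assume pytest and a hypothetical shopping-cart user story:

```python
# User story (hypothetical): "As a shopper, I want items I add to my cart to
# update the cart total, so that I know what I will pay."
# Acceptance criterion: Given an empty cart, when I add an item priced 25.00,
# then the cart total is 25.00.


class Cart:
    """Toy domain object, included only to make the example self-contained."""

    def __init__(self):
        self._items = []

    def add_item(self, name: str, price: float) -> None:
        self._items.append((name, price))

    @property
    def total(self) -> float:
        return sum(price for _, price in self._items)


def test_adding_an_item_updates_the_cart_total():
    # Given an empty cart
    cart = Cart()
    # When I add an item priced 25.00
    cart.add_item("notebook", 25.00)
    # Then the cart total is 25.00
    assert cart.total == 25.00
```

Because the test mirrors the acceptance criterion rather than GUI details, it only needs to change when the requirement itself changes.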
Unreliable tests are a major reason why teams ignore or lose confidence in automated tests. Teams with inadequate training and skills might decide that the return on their automation investment isn’t worth their time. Automation brings an entirely new development process into the existing testing process, which requires managing the automation code and related components. Version control tools are essential for backing up tests and maintaining a history of both the tests and the product versions they align to. Good test design practices produce simple, well-designed, continually refactored and maintainable tests. Libraries of test modules and objects build up over time and make automating new tests quicker. Linking test cases to product requirements and specifying requirements in a user story format will help build better test cases, as the user story outlines the role or permission, the feature or function to be performed, and the success criteria.
Choose Tools That Match Your Team’s Skills, Practices and Application Targets
Today’s testing market offers a variety of solutions for automated testing. There are both commercially available and open-source tools. Most agile teams gain value from applying a mix of commercial and open-source tools, while others prefer a customized scripting approach to automate necessary tasks, generate test data or drive other tools. Because of the breadth of test automation solutions available in the market, the products available from various vendors often differ markedly in terms of their focus and functional scope (see the Gartner Recommended Reading section). No single solution from a single vendor will meet all your testing needs. Therefore, it is important to select the right tools that take technologies and application targets into account, fit your team’s development style and practices, and match the skills of your resources.
Before implementing a new set of tools, it is essential to have a well-managed testing process in place. Examine the maturity of the current testing process, as well as the degree to which specific weaknesses in the process can be potentially optimized with an automated testing solution. Understand the specific demands of your team and organization, and the objectives it is trying to achieve – do you need to support waterfall, agile and mixed methodologies? What roles will need to be involved? What is the status of your current toolset? Do you need to support integration? Will your current staff need training in new techniques?
In other words, put in place a plan for evaluating tools, set up build processes and explore different test approaches.
Ensure Stability of Your Codebase Before Automation
When it comes to automated testing, the application must be well-developed and stable enough to withstand the demands of testing. The importance of keeping the codebase stable is increased for agile projects, where the output of each iteration is potentially shippable. The key challenge is to architect the application for testability. Although the actual code and implementation tend to change frequently in agile development, the intent of the code rarely changes. If little thought is given to testing while designing the system, it might be difficult and expensive to find a way to automate tests. Programmers and testers need to work together to organize test code around the application’s intent and create a testable application. Test-driven development, if used as a mechanism to write the tests, not only creates a good regression suite, but can also be used to design high-quality, robust code.
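As a small illustration of the test-first rhythm (hypothetical requirement and function names), the tests below are written first to express the intent, and then just enough code is written to make them pass:

```python
# TDD-style sketch: the tests express the intent -- usernames are normalized
# to lowercase with surrounding whitespace removed -- and double as regression
# tests once the implementation exists.


def normalize_username(raw: str) -> str:
    """Implementation written after (and driven by) the tests below."""
    return raw.strip().lower()


def test_normalize_username_trims_and_lowercases():
    assert normalize_username("  Alice ") == "alice"


def test_normalize_username_leaves_clean_input_unchanged():
    assert normalize_username("bob") == "bob"
```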
Evidence
Gartner has had more than 1,200 client interactions (inquiries and one-on-one meetings) discussing test automation best practices, initiatives, successes and failures, between 2015 and the present.
Source: Gartner Research Note G00343603, Joachim Herschmann, Thomas E. Murphy, 16 November 2017
Note 1 Automated Test Pyramid
The test automation pyramid has shown up in several forms. It is generally credited to Mike Cohn at Mountain Goat Software. It also appears in the book “Agile Testing” by Lisa Crispin and Janet Gregory. All of the versions differ in terms of the number of layers and their names, but the basic concept is consistent with the version presented in this research.