Ten years ago, performance testing sat on the last-minute task list before software went live into production. End users were less demanding about user experience in those days, so performance testing was planned late in the project life cycle, once the application could be tested in a stable and representative environment. With agile, continuous delivery, or DevOps, this approach is no longer acceptable. Application performance, as part of the overall user experience, is now a key aspect of application quality. "Old school" sequential projects with static qualification, implementation, and test phases that put off performance testing until the end of the project carry a performance risk that today's application quality standards no longer tolerate.

Agile and DevOps involve updating the project organization and require close collaboration between teams. In these methodologies, the project life cycle is organized into several sprints, with each sprint delivering a part of the application. In this environment, the performance testing process should follow the workflow below.

## Establishing a performance testing strategy

As the first and most important step of performance testing, a strategy should be defined at an early stage of the project life cycle, covering the performance testing scope, the load policy, and the service level agreements. Performance testing is complex and time-consuming, with many aspects requiring human action (test design, test script maintenance, interpretation of test results), so it needs automation at every step of the test cycle in order to test faster and continuously. It is never possible to test everything, so conscious decisions about where to focus the depth and intensity of testing must be made to save time without extending delivery deadlines.

## Risk-based testing

Risk assessment provides a mechanism for prioritizing the test effort.
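As an illustration of risk-based prioritization (the scoring scheme, area names, and weights here are my own, not from the article), the idea is often reduced to a likelihood-times-impact score that orders the parts of the application by how much test effort they deserve:

```python
# Minimal sketch of risk-based test prioritization (illustrative only):
# each area of the application gets a likelihood-of-failure rating and a
# business-impact rating, and test effort follows the combined score.

def risk_score(likelihood: int, impact: int) -> int:
    """Classic risk matrix: both inputs rated from 1 (low) to 5 (high)."""
    return likelihood * impact

# Hypothetical application areas with hypothetical ratings.
areas = [
    {"name": "checkout", "likelihood": 4, "impact": 5},
    {"name": "search", "likelihood": 3, "impact": 4},
    {"name": "static help pages", "likelihood": 1, "impact": 1},
]

# Test the riskiest areas intensely; deliberately test the rest lightly.
prioritized = sorted(
    areas,
    key=lambda a: risk_score(a["likelihood"], a["impact"]),
    reverse=True,
)
for area in prioritized:
    score = risk_score(area["likelihood"], area["impact"])
    depth = "intense" if score >= 12 else "light"
    print(f"{area['name']}: score {score} -> {depth} testing")
```

The cut-off between "intense" and "light" testing is an arbitrary example; in practice it would come from the performance testing strategy.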
Risk assessment helps determine where to direct the most intense and deep test efforts and where to deliberately test lightly, conserving resources for the riskiest areas. By testing only the riskiest aspects of a system, risk-based testing can surface significant problems more quickly and earlier in the process. With a methodology like DevOps, the number of releases increases while the releases themselves become smaller and smaller, and a smaller release makes the risk easier to measure. Focus only on the meaningful parts of the application.

## Component testing

In a modern project life cycle, the only way to include performance validation at an early stage is to test individual components after each build and run end-to-end performance testing once the application is assembled. Since the goal is to test performance early, listing all the important components helps define a performance testing automation strategy. Once a component has been coded, it makes sense to test it separately to detect regressions and measure its response time and the maximum calls per second it can handle. Most applications have many dependencies, so testing a single component can be a challenge if you have to wait for all of those dependencies to be available. Implementing service virtualization lets you validate each component without being affected by the other projects that are currently deploying or enhancing their systems.

## Validate the user experience

Once the application is assembled, the testing objectives change: at some point, the quality of the user experience needs to be validated. Measuring the user experience is possible by combining two solutions: load testing software (NeoLoad) and a browser-based or mobile testing tool.
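Going back to component testing: the pattern of exercising one component in isolation while its dependencies are virtualized can be sketched in plain Python (the component, the stub, and the measurement loop below are hypothetical examples, not from any specific tool):

```python
import time

# Hypothetical component under test: it depends on an external
# inventory service to compute a price quote.
def price_quote(item_id: str, inventory_service) -> float:
    stock = inventory_service(item_id)  # call to the external dependency
    return 9.99 if stock > 0 else 0.0

# Service virtualization in miniature: a stub that answers instantly and
# deterministically, so the component can be tested for regressions and
# throughput before the real inventory service is stable or deployed.
def stub_inventory_service(item_id: str) -> int:
    return 42

# Measure the response time and calls/s of the isolated component.
calls = 1000
start = time.perf_counter()
for _ in range(calls):
    price_quote("sku-123", stub_inventory_service)
elapsed = time.perf_counter() - start
print(f"avg response time: {elapsed / calls * 1000:.3f} ms, "
      f"throughput: {calls / elapsed:.0f} calls/s")
```

Real service virtualization tools replace the stub with a simulated service over the network, but the principle is the same: the component's own performance is measured without waiting on its dependencies.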
It is important to perform end-to-end testing, but it is equally important not to expand its scope unnecessarily: executing more tests, especially during the end-to-end phase, can hurt productivity. The best approach is to focus on what matters by selecting a subset of end-to-end tests (cf. the performance testing strategy above).

## Reduce the maintenance time of your scenarios

Even in continuous delivery or DevOps, load testing a functionally unstable system does not make sense, because you will only generate exceptions in the application and prove that it behaves strangely under unstable conditions. Functional testing needs to pass before any load testing, even at the component/API level. Reusing or converting functional scenarios is a relevant way to reduce the cost of creating and maintaining your performance testing assets.

## Reporting a green light for deployment

Component testing and end-to-end testing will be automated by continuous integration servers or dedicated release automation products. Any testing activity needs to report a status in those products (based on parameters such as response time, user experience, hits per second, errors, and the behaviour of the infrastructure) to enable or block the next step of a pipeline. Reporting a status for functional testing is straightforward, because the aim of each test scenario is to validate a requirement.

## DevOps will limit end-to-end testing

With DevOps, it is important to continuously validate the performance of the application without slowing the pace of delivery. That is why end-to-end testing will be run less frequently (depending on the risk, of course), with the focus shifting to performance regression at the code level.
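The status reporting described above, where a test run enables or blocks the next pipeline step, comes down to comparing measured results against agreed thresholds and exiting nonzero on failure. A minimal sketch (the metric names, SLA values, and measured numbers are invented for illustration):

```python
import sys

# Hypothetical SLAs agreed in the performance testing strategy.
slas = {"avg_response_ms": 500, "error_rate_pct": 1.0}

# Hypothetical results collected from a test run.
measured = {"avg_response_ms": 320, "error_rate_pct": 0.4}

def gate(measured: dict, slas: dict) -> list:
    """Return the list of violated SLAs; an empty list means green light."""
    return [name for name, limit in slas.items() if measured[name] > limit]

violations = gate(measured, slas)
if violations:
    # Nonzero exit code: the CI server blocks the next pipeline step.
    print("RED: SLA violations:", ", ".join(violations))
    sys.exit(1)
print("GREEN: all SLAs met, deployment may proceed")
```

A CI server or release automation product only needs the exit code; the printed summary is for the build log.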
Contributed by Henrik Rexed, Performance Engineer, Neotys