Thread starter: AlexQin

Test Quality in CI/CD – Expert Roundup

31# | Posted by the thread starter on 2018-1-30 21:38
Continuous deployment matters because many software companies release code quickly, and it's important to release quickly while still guaranteeing the quality of the software. For this reason, it's important to test at each level of development, and CI tools let you test effectively and in depth. I use Jenkins, which is an amazing tool that helps teams adhere to the CI/CD process. Jenkins lets you build pipelines, and in each pipeline you can integrate the quality process, executing automated tests at each level (unit tests, integration tests, and functional tests).
The pipelines are written in Groovy, and you can add stages, for example:
  • Stage 1 = Deployment to an environment (QA or dev)
  • Stage 2 = Unit tests
  • Stage 3 = Integration tests
  • Stage 4 = Functional tests
  • Stage 5 = Deployment to production
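A declarative Jenkinsfile for a pipeline like this might be sketched as below; the deploy script and the Maven/TestNG group commands are placeholders for illustration, not taken from the post:

```groovy
pipeline {
    agent any
    stages {
        stage('Deploy to QA') {
            steps { sh './deploy.sh qa' }            // hypothetical deploy script
        }
        stage('Unit Tests') {
            steps { sh 'mvn test -Dgroups=unit' }    // assumes a Maven + TestNG project
        }
        stage('Integration Tests') {
            steps { sh 'mvn test -Dgroups=integration' }
        }
        stage('Functional Tests') {
            steps { sh 'mvn test -Dgroups=functional' }
        }
        stage('Deploy to Production') {
            steps { sh './deploy.sh prod' }
        }
    }
}
```

If any `sh` step fails, Jenkins marks the stage as failed and the later stages do not run, which is exactly the fail-fast behavior described below.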

It's important to mention that this whole process can be automated (that's the main idea). In the deployment-to-QA stage, for example, you can configure a webhook in GitLab, GitHub, or whatever platform you are using. The webhook listens for commits from the development team; when it detects one, a job is triggered that deploys to the environment. After the deploy ends, the tests are triggered: first the unit tests, and if everything is working fine, the pipeline passes to the next stage (integration tests).
You can execute some stages in parallel if you want, which shortens both the deployment and the time your tests take. If a test fails, the pipeline ends; for example, if a unit test fails, the pipeline stops there. You can add many stages, but it's important to design the pipeline correctly so that the automated tests actually help you guarantee the quality of your project.
As a test framework, I use Java with Selenium WebDriver and TestNG for my test cases. TestNG helps me write test cases at each level, and I use Selenium WebDriver to run functional tests against web applications. TestNG lets you divide your tests into groups, so a single project can hold different sets of groups (unit tests, integration tests, functional tests, etc.). This functionality makes the framework easy to integrate into pipelines.
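For illustration, the group split can be driven from a testng.xml suite file like the sketch below (the suite, test, and package names are made up); test methods carry the matching group annotation, e.g. `@Test(groups = {"unit"})`:

```xml
<!-- One project, one suite; each <test> runs only the included group -->
<suite name="quality-pipeline">
  <test name="unit-tests">
    <groups>
      <run><include name="unit"/></run>
    </groups>
    <packages>
      <package name="com.example.tests"/>
    </packages>
  </test>
  <test name="integration-tests">
    <groups>
      <run><include name="integration"/></run>
    </groups>
    <packages>
      <package name="com.example.tests"/>
    </packages>
  </test>
</suite>
```

A CI stage can then select the group it needs without splitting the test code into separate projects.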
I use Selenium WebDriver when the tests I have to perform are on a website. Selenium interacts with the user interface of a website, simulating all the actions a user would perform and testing the functionality of the system (functional tests).
Selenium works against the HTML, but what about the rendering of the site? How can we guarantee that users see the site correctly? If you want to test the rendering of a website, you can use an amazing tool named Galen, a framework that lets you specify how a site has to look on mobile devices and desktops. For example, you can define that a button has to be centered in the middle of the page when the site is viewed on a mobile device, and that the same button has to be displayed to the right when it is viewed on a desktop. Another tool I use is SikuliX, for automating tests where the app is not a web application.
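A Galen layout spec for the button example might look roughly like the sketch below; the object name, CSS selector, and the mobile/desktop tags are illustrative assumptions, not taken from the post:

```
@objects
    submit-button   css   #submit

= Button placement =
    @on mobile
        submit-button:
            centered horizontally inside screen

    @on desktop
        submit-button:
            inside screen 20px right
```

Galen runs the page at the screen sizes mapped to each tag and reports any element that violates its layout rules.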

Contributed by José A. Wolff, autoweb

32# | Posted by the thread starter on 2018-1-31 13:36
In the web development world, a combination of testing + CI/CD is of special importance. This is because of the divergences in the browser implementations and target platforms (desktop, mobile). These divergences lead to increased complexity of the maintenance and quality assurance processes – every feature needs to be verified and known to work on every supported browser/platform.
Managing this complexity without clear information about the current state of your codebase means blindly accepting the risk of delivering a bad or broken user experience. The only real answer to this problem is test-driven development and test-driven quality assurance. The results of running the test suite reflect the current state of the codebase and can be used to make informed decisions throughout the project lifecycle. With the test suite, you basically know whether the current source can be safely pushed to production right now.
The second piece of the puzzle is CI/CD systems, which automate the whole infrastructure of your project. These tools require a certain effort to set up; there's no universal solution, as every project's deployment is unique. But the reward is tremendous: deployment becomes a trivial "after lunch" routine, and new features and bug fixes reach users much faster.
At Bryntum, we accept no compromises on the quality of our products, and every product is tested nightly in every supported browser. This focus on quality led us to create the Siesta testing tool, aimed at modern JavaScript-heavy web apps and seamless cross-browser automation. We definitely recommend ranking that latter capability highly when evaluating testing tools; otherwise, you may find yourself limited to certain browsers only.
Recently we also launched the RootCause service for tracking user sessions and reproducing JavaScript errors on the end-user side. It provides valuable insight into how your system actually behaves "in the wild" on end users' devices. RootCause enables a significantly faster bug-fixing cycle: information about errors that happened on a user's machine is available within a few seconds.
Contributed by Nickolay Platnov, Bryntum AB

33# | Posted by the thread starter on 2018-1-31 13:38
Ten years ago, performance testing sat on the last-minute task list before software went live into production. In those days, end users were less demanding when it came to user experience, and performance testing was planned late in the project life cycle so the application could be tested in a stable, representative environment. With agile, continuous delivery, or DevOps, this approach is no longer acceptable. Application performance, as part of the overall user experience, is now a key aspect of application quality. "Old school" sequential projects with static qualification/implementation/test phases that put off performance testing until the end of the project face a performance risk that today's application quality standards no longer tolerate.
Agile and DevOps involve updating the project organization and require close collaboration between the teams. In these methodologies, the project life cycle is organized into several sprints, with each sprint delivering a part of the application. In this environment, the performance testing process should follow the workflow below:
Establishing a performance testing strategy
As the first and most important step of performance testing, a strategy should be defined early in the project life cycle, covering the performance testing scope, the load policy, and the service-level agreements.
Performance testing is complex and time-consuming, with many aspects requiring human action (test design, test script maintenance, interpretation of test results), so it needs automation at every step of the test cycle in order to test faster and continuously. It is never possible to test everything, so conscious decisions must be made about where to focus the depth and intensity of testing, to save time without extending delivery deadlines.
Risk-based testing
Risk assessment provides a mechanism for prioritizing the test effort. It helps determine where to direct the most intense and deep testing and where to deliberately test lightly, conserving resources for the high-intensity areas. Risk-based testing can surface significant problems faster and earlier in the process by testing only the riskiest aspects of a system. With a methodology like DevOps, the number of releases increases while each release becomes smaller, which makes the risk of each release easier to measure. You should focus only on the meaningful parts of the application.
Component testing
In a modern project life cycle, the only way to include performance validation at an early stage is to test individual components after each build and run end-to-end performance testing once the application is assembled. Since the goal is to test performance early, listing all the important components will help define a performance testing automation strategy. Once a component has been coded, it makes sense to test it separately, to detect regressions and measure both its response time and the maximum calls per second it can handle.
Most applications have many dependencies, so testing a single component can be a challenge because you would have to wait for all of those dependencies. To validate the code anyway, implementing service virtualization helps you test each component without being affected by the other projects that are currently deploying or enhancing their systems.
Validate the user experience
Once the application is assembled, the testing objectives change: at some point, the quality of the user experience needs to be validated. Measuring the user experience is possible by combining two solutions: load testing software (NeoLoad) and a browser-based or mobile testing tool. It is important to perform end-to-end testing, but equally important not to increase its scope unnecessarily; executing more tests, especially during the end-to-end phase, can hurt productivity. The best approach is to focus on the right things by performing a selection of end-to-end tests (cf. the performance testing strategy above).
Reduce the maintenance time of your scenarios
Even in continuous delivery or DevOps, testing the performance of a functionally unstable system does not make sense, because you will only generate exceptions in the application; you would merely prove that it behaves strangely in an unstable state. Functional testing needs to be done before any load testing (even for component/API testing). Reusing or converting functional scenarios is a relevant way to reduce the creation and maintenance effort of your performance testing assets.
Reporting a green light for deployment
Component testing and end-to-end testing will be automated by continuous integration servers or specific release-automation products. Any testing activity needs to report a status to those products (based on parameters such as response time, user experience, hits per second, errors, and the behavior of the infrastructure) in order to enable or disable the next step of a pipeline.
Reporting a status in functional testing is straightforward, because the aim of each test scenario is to validate a requirement.
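As a minimal sketch of such a pass/fail gate (the class name, method, and threshold values below are invented for illustration, not part of any product mentioned here), a pipeline step could compare measured metrics against the agreed SLAs and report a go/no-go status:

```java
public class QualityGate {
    // Hypothetical SLA thresholds; real values come from your performance testing strategy.
    static final double MAX_AVG_RESPONSE_MS = 500.0;
    static final double MAX_ERROR_RATE = 0.01; // 1% of requests

    /** True when measured metrics satisfy the SLA, i.e. the next pipeline stage may run. */
    static boolean greenLight(double avgResponseMs, double errorRate) {
        return avgResponseMs <= MAX_AVG_RESPONSE_MS && errorRate <= MAX_ERROR_RATE;
    }

    public static void main(String[] args) {
        System.out.println(greenLight(420.0, 0.002)); // within SLA -> prints true
        System.out.println(greenLight(900.0, 0.002)); // too slow   -> prints false
    }
}
```

A CI server can turn this boolean into an exit code, so a red status blocks the deployment stage automatically.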
DevOps will limit end-to-end testing
With DevOps, it's important to continuously validate the performance of the application without slowing the pace of delivery. That is why end-to-end testing will be run less frequently (depending on the risk, of course), with the focus shifting to performance regression at the code level.

Contributed by Henrik Rexed, Performance Engineer, Neotys

34# | Posted by the thread starter on 2018-1-31 13:38
Quality assurance has not always been an integral part of software development teams, but now it is essential. Organizations tend to have separate QA teams to assess whether the business requirements are met in full. DevOps practitioners usually explore the potential of this setup by integrating core QA functions into the dev teams, nurturing a holistic growth environment with a focus on quality. But the question remains: how can this benefit you?
Scope: Releasing a high-quality product is one of the fundamental aims of DevOps; a quality-driven environment is necessary to achieve business goals. Software quality in today's fast-paced development environments often refers to exhaustive test coverage of your code in the form of unit tests, sanity tests, functional tests, system tests, and integration tests. Test quality is the most critical component, as it offers a clear understanding of the total percentage of the product that is tested.
One of the fundamental aspects of adopting CI/CD practices is implementing automated tests that run once code has been committed. A continuous testing cycle allows a regression test suite to run after the basic unit tests have completed, which usually saves developers time waiting for feedback on software usability. The process involves assessing the results of tests performed on the product from the code level up to the usability level, which is the foundation for test quality.
Evaluation: Test quality measurement is not derived from coverage alone; assessing how thoroughly a test suite exercises a given program is not enough to determine test quality. Measuring the completeness of a software product is a complex process that incorporates evaluating everything: unit testing, smoke testing, code, requirements, structural, architectural, and functional (white-, black-, and broken-box) coverage, analysis of temporal behavior, regression, integration, and usability testing.
Tools and methods:
  • Cobertura – widely adopted statistical coverage measurement tool, mainly for Java-based software.
  • Coverage.py – Python-oriented coverage analysis tool.
  • Selenium – open-source functional testing tool.
  • UFT – functional and regression testing tool offered by HPE.
  • Comet – coverage measurement tool often used in heavy industrial testing.
Contributed by MSys Technologies

36# | Posted by the thread starter from mobile on 2018-2-1 21:40
good job
