OP: AlexQin

Test Quality in CI/CD – Expert Roundup

11#
 OP | Posted 2018-1-24 11:15
Test automation is an essential part of CI/CD, but it must be extremely robust. Unfortunately, tests running in live environments (integration and end-to-end) often suffer rare but pesky “interruptions” that, if unhandled, will cause tests to fail.
These interruptions could be network blips, web pages not fully loaded, temporarily downed services, or any environment issues unrelated to product bugs. Interruptive failures are problematic because they (a) are intermittent and thus difficult to pinpoint, (b) waste engineering time, (c) potentially hide real failures, and (d) cast doubt over process/product quality.
CI/CD magnifies even the rarest issues. If an interruption has only a 1% chance of happening during a test, then considering binomial probabilities, there is a 63% chance it will happen after 100 tests, and a 99% chance it will happen after 500 tests. Keep in mind that it is not uncommon for thousands of tests to run daily in CI – Google Guava had over 286K tests back in July 2012!
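The arithmetic behind those figures is a quick binomial check, P = 1 − (1 − p)^n:

```python
# Probability that a 1%-per-test interruption strikes at least once in n runs.
def p_at_least_one(p_single: float, n: int) -> float:
    return 1 - (1 - p_single) ** n

for n in (100, 500):
    print(f"{n} tests: {p_at_least_one(0.01, n):.0%}")
# 100 tests: 63%
# 500 tests: 99%
```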
It is impossible to completely avoid interruptions – they will happen. Therefore, it is imperative to handle interruptions at multiple layers:
  • Secure the platform upon which the tests run. Make sure system performance is healthy and those network connections are stable.
  • Add failover logic to the automated tests. Any time an interruption happens, catch it as close to its source as possible, pause briefly, and retry the operation(s). Do not catch every type of error, though: pinpoint specific interruption signatures to avoid false positives. Build failover logic into the framework rather than implementing it for one-off cases; aspect-oriented programming can help tremendously here. Repeating failed tests in their entirety also works and may be easier to implement, but takes much more time to run.
  • Log any interruptions and recovery attempts as warnings. Do not neglect to report them, because they could indicate legitimate problems, especially if patterns appear. It may be difficult to differentiate interruptions from legitimate bugs, or certain retry attempts might take too long to be practical. When in doubt, just fail the test – that’s the safer approach.
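A minimal sketch of this kind of framework-level failover, assuming hypothetical transient error types and a generic test operation (not any specific framework’s API):

```python
import logging
import time

# Hypothetical "interruption signatures": the only errors we allow a retry for.
# Catching anything broader risks masking real product bugs.
TRANSIENT_ERRORS = (ConnectionError, TimeoutError)

def with_failover(operation, retries=2, pause_seconds=1.0):
    """Run `operation`, retrying only on known transient interruptions."""
    for attempt in range(retries + 1):
        try:
            return operation()
        except TRANSIENT_ERRORS as exc:
            if attempt == retries:
                raise  # out of retries: fail the test -- the safer approach
            # Report every recovery attempt as a warning so patterns surface.
            logging.warning("Interruption %r; retry %d of %d",
                            exc, attempt + 1, retries)
            time.sleep(pause_seconds)
```

A test step would then be wrapped as `with_failover(lambda: page.click(button))` rather than scattering try/except blocks through individual tests.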


Contributed by Andrew Knight, LexisNexis

12#
 OP | Posted 2018-1-24 11:16
A couple of years ago I worked on a big product where we used Specification by Example (Behaviour Driven Development) extensively. There were 5 developers on the team, a product owner, and a business analyst. We worked for several years on the product. We started to use Specification by Example to solve communication problems between the product owner and the developers – we needed a way to bridge the communication gap, and Specification by Example proved very effective.
After a while, we began to automate our scenarios, creating an executable specification. We added a Living Documentation (that’s what started my involvement with Pickles, the open source Living Documentation generator) and integrated the results of the automated test runs into that Living Documentation. We had a pretty cool automated build with virtual machines where we deployed the software and ran our battery of automated scenarios. Productivity reached an all-time high. The number of user stories that were rejected by the product owner at the end of the iteration became zero.
Gradually, problems started to appear in our setup. We simply had too many scenarios: we began to focus on quantity of scenarios, not on the quality. The scenarios became more technical and less easy to read, so they lost their power to explain the workings of the system. The scenarios took a long time to run, so the running time for the whole suite increased to several hours. Due to timeouts, on average, 0.5 percent of the scenarios might fail – but we had 400 scenarios so there was a failure in every run. The value of our automated verification setup decreased severely.
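A quick back-of-the-envelope check shows why: with 400 scenarios each failing 0.5% of the time, an all-green run becomes the exception rather than the rule.

```python
# With 400 scenarios and a 0.5% timeout-driven failure rate each,
# compute the expected failures per run and the odds of a clean run.
n, p = 400, 0.005
p_clean_run = (1 - p) ** n
print(f"Expected failures per run: {n * p:.0f}")
print(f"Chance of an all-green run: {p_clean_run:.0%}")
# Expected failures per run: 2
# Chance of an all-green run: 13%
```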
What I learned from this: when doing automated scenario verification, focus on quality and not on quantity. If you want lots of tests, write good unit tests that run in the blink of an eye. But for your integration tests, or end-to-end tests, or scenario verifications, pick a small set of important scenarios and make sure they run reliably and reasonably fast. That way you will get the most value from those tests in the long run.

Contributed by Dirk Rombauts, Pickles*
*The open source Living Documentation generator

13#
 OP | Posted 2018-1-24 11:16
CI/CD is like a chain, and testing is one of its essential links. There are many types of testing and one of them is performance testing. If we want to be sure that our application meets its SLA requirements, we have to run performance tests with every build. A good practice is to automate these tests and put them into our CI pipeline. A good performance testing tool, dedicated environment, and trending reports are crucial elements in this process.
Thanks to the cloud, container technologies, and virtualization it is quite simple to prepare a testing environment even before each test run. The tricky part, however, is to integrate the test tool with our CI tool to easily run load tests. We need to run test scripts, collect data, and lastly, display results. Due to differences between test and production environments, we usually do not focus on absolute numbers but rather relative comparisons. Choosing the right load test tool could save us a lot of work.
At SmartMeter.io we are aware of this. Therefore, reporting in SmartMeter.io is as simple as possible (there are literally “one-click” reports). Reports also contain trend analysis in clear graphs and tables, independent of any plug-in or CI tool, which makes it possible to use our favorite CI tool or one preferred by our client. If we want to be sure that the metrics of our application meet our business SLA, we can use the acceptance criteria provided as a core component of SmartMeter.io. Every report tells us which criteria passed or failed. Any failed criterion marks the whole test run as failed, so any CI tool can recognize that the load tests failed. There is no need to check every report; instead, you can focus your work on the things that matter.
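The pass/fail gating described here can be sketched generically; the criterion names and thresholds below are hypothetical illustrations, not SmartMeter.io’s actual API:

```python
# Hypothetical SLA acceptance criteria evaluated against load-test metrics.
# Any failed criterion marks the whole run as failed so CI can react.
CRITERIA = {
    "p95_response_ms": lambda v: v <= 800,
    "error_rate":      lambda v: v <= 0.01,
    "throughput_rps":  lambda v: v >= 200,
}

def evaluate(metrics: dict) -> bool:
    failed = [name for name, ok in CRITERIA.items() if not ok(metrics[name])]
    for name in failed:
        print(f"FAILED criterion: {name} = {metrics[name]}")
    return not failed  # False -> CI marks the load-test stage red
```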

Contributed by Martin Krutak, SmartMeter.io

14#
 OP | Posted 2018-1-25 10:21
The essential component required to enable Continuous Integration/Continuous Delivery is automated testing. This is, of course, a correct statement, but only in part. Yes, automated testing is an essential component for enabling Continuous Integration/Continuous Delivery. However, the problem with automated testing is that no matter how good the automated testing solution is, the creation of the automated test scripts or scenarios will always lag behind the completion of development work.
Often this is because the automated test scripts cannot actually be written until the development work is completed, i.e. the field or button needs to be added to the screen before an automated test can be written to test it. Because of this delay, automated testing lags behind development by as much as one, two, or even more sprints. In the worst case, due to project schedule pressures, automated testing is abandoned for a period of time to meet a milestone or deadline, with the idea that it will be reinstated for the next phase of delivery. Of course, all this does is build up a bigger automated testing debt, which has to be paid off in the next phase.
Testing needs to lead development, not lag behind it, for us to remove that lag and debt build-up. Enter Test Driven Development (TDD) or Acceptance Driven Development (ADD).  With these approaches, the requirements are written in the form of the automated tests that will be used to test the system and deem it acceptable.  Developers make changes to the system based on these definitions and acceptance criteria.
Once the automated tests and regression tests pass, the developer knows his work is completed and the system can be delivered immediately and continuously.  There is no lag at all between development and testing because automated testing scripts are written as part of the requirements definition process. The biggest change we need to make to enable Continuous Integration/Continuous Delivery is for testing to lead development, not lag behind it.
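The test-first flow can be illustrated with a minimal sketch; the `apply_discount` function and its acceptance criterion below are hypothetical:

```python
import unittest

# In TDD the acceptance criterion is written first, as an executable test.
# The developer then implements `apply_discount` until the test passes:
def apply_discount(price: float, percent: float) -> float:
    return round(price * (1 - percent / 100), 2)

class DiscountAcceptance(unittest.TestCase):
    def test_ten_percent_discount(self):
        # Requirement, written before the code: 10% off 50.00 yields 45.00
        self.assertEqual(apply_discount(50.00, 10), 45.00)
```

Run with `python -m unittest`; the test fails until the requirement is implemented, and a green run signals the work is complete.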
With this in mind, we can say that the essential elements required to enable Continuous Integration/Continuous Delivery are automated testing and a Test or Acceptance Driven development and testing approach.  Only when these two components are used in combination can the dream of Continuous Integration and Continuous Delivery become a reality.  Visionary companies in this space like AutotestPro offer solutions which combine TDD and automated testing so that testing does not lag behind development; instead, testing leads the development.

Contributed by Paul Chorley, Co-Founder and Managing Director, AutotestPro Ltd

15#
 OP | Posted 2018-1-25 10:21
DevOps testing is the portion of the DevOps Pipeline that is responsible for the continuous assessment of incremental change. DevOps test engineers are the team members accountable for testing in a DevOps environment.  This role can be played by anyone in the DevOps environment such as QA personnel, developers, or infrastructure and security engineers for their respective areas. The DevOps test engineer can be anyone who is trusted to do the testing.
In a non-DevOps environment, independent testers from the QA team test the products passed on to them by developers, and QA then passes the product on to operations. In a DevOps environment, there is a need to pass on error-free code in small chunks (e.g. microservices). This means testing more frequently throughout the process, end-to-end across the development and deployment cycles. The time for each increment in DevOps is very short.
The combination of short increments and the spreading of tests across the end-to-end pipeline requires fast, automated tests and end-to-end test results coordination. Everyone in IT needs to learn to test, both manually and with automation, and to know how to read the test results.
Testing maturity is a key differentiator of DevOps maturity:
  • Many organizations automate integrations, builds, and delivery processes but have trouble with the subtleness of test orchestration and automation
  • There is a vital role for testing architects and testing teams to offer their expertise in test design, test automation, and test case development with DevOps
  • Whether the organization is using a test-driven development methodology, behavior-based test creation, or model-based testing, testing is a vital part of the overall DevOps process — not only to verify code changes work and integrate well — but to ensure the changes do not mess up the product
  • Testing is an integral part of product development and delivery

There needs to be constant testing so that error-free code can be merged into the main trunk and deployable code can come out of the CI/CD pipeline. This requires people to plan for the environment, choose the right tools, and design the orchestration to suit the need.
Effective DevOps testing requires Development, QA, and IT Operations teams to harmonize their cultures into a common collaborative culture focused on common goals. This culture requires leaders to sponsor, reinforce, and reward collaborative team behaviors and to invest in the training, infrastructure, and tools needed for effective DevOps testing.
A DevOps testing strategy has the following components:
  • DevOps testing is integrated into the DevOps infrastructure
  • DevOps testing emphasizes orchestration of the test environment
  • DevOps tests are automated as much as possible
  • DevOps testing goal is to accelerate test activities as early in the pipeline as possible

The 5 Tenets of DevOps Testing are:
  • Shift Left
  • Fail Often
  • Relevance
  • Test Fast
  • Fail Early

The test strategy also requires designing the application with a loosely coupled architecture. It is very important to have a good design before moving on to automation. Test result analysis is another key activity to ensure that proper testing takes place with the right coverage.
Some examples of open source DevOps testing tools are Jenkins and Robot Framework.
Examples of commercially licensed tools are CloudBees, Electric Cloud, and TeamCity. For further detailed learning, the DevOps Test Engineer course is recommended.

Contributed by Niladri Choudhuri, Xellentro

16#
 OP | Posted 2018-1-25 10:22
If you’ve been working as a tester for any length of time, you can’t have failed to notice the shift
towards CI/CD in many projects and organizations. Businesses, projects, and operations teams all want to try and take advantage of at least some of the perceived benefits of being able to quickly and consistently release new builds to production, at the push of a button. In the meantime, testers will likely have found that the CI/CD model has a big impact on how they need to approach testing.
A typical CI/CD pipeline has development, QA, staging, and production environments, where certain tests are run to ensure that the code which has been written is safe to push ahead. Automated tests are the most important part of any CI/CD pipeline. Without proper automated tests that run fast, have good coverage, and produce no erroneous results, there can be no successful CI/CD pipeline. The automated tests are usually divided into multiple “suites”, each with its own objective.
The list below gives a small overview:
  • Unit tests: This is the suite that is run first, often by developers themselves before they add their changes to the repository. Unit tests normally test individual classes or functions.
  • Integration tests: After unit tests come integration tests. These tests make sure that the modules integrated together work properly as an application. Ideally, these tests are run on environments that are similar to the production environment.
  • System tests: These tests should test the entire system in an environment as close as possible to the real production environment.

Testing in a development environment
In the development environment, smoke testing is done. Smoke testing, also known as “Build Verification Testing”, is a type of software testing that comprises a non-exhaustive set of tests aiming to ensure the most important functions run properly. The results of this testing are used to decide if a build is stable enough to proceed with further testing.
To implement smoke tests, the testing team develops a set of test cases that are run whenever a new release is provided by the development team. It is more productive and efficient if the smoke test suite is automated, or it can be a combination of manual and automated testing. To ensure quality awareness, the smoke test cases are communicated to the development team ahead of time, so that they are aware of the quality expectations. Keep in mind that a smoke test suite is a “shallow and wide” approach to testing.
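A smoke suite along these lines might look like the following sketch; the `App` facade and its features are hypothetical stand-ins for calls against a freshly deployed build:

```python
import unittest

# Hypothetical application facade; in a real suite these calls would hit
# the deployed build (e.g. over HTTP).
class App:
    def login(self, user, pwd):
        return user == "demo" and pwd == "demo"
    def search(self, term):
        return [f"result for {term}"]
    def checkout(self, items):
        return {"status": "ok", "count": len(items)}

class SmokeSuite(unittest.TestCase):
    """Shallow and wide: one happy-path check per critical feature."""
    def setUp(self):
        self.app = App()
    def test_login(self):
        self.assertTrue(self.app.login("demo", "demo"))
    def test_search(self):
        self.assertTrue(self.app.search("ci"))
    def test_checkout(self):
        self.assertEqual(self.app.checkout(["item"])["status"], "ok")
```

Run with `python -m unittest` on each new build; a red result stops the build before deeper testing begins.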
Testing in a QA Environment
In a QA Environment, regression testing is done. Regression testing is the type of testing carried out to ensure that changes made in the fixes or any enhancement changes are not impacting the previously working functionality. The regression packs are a combination of scripted tests that have been derived from the requirement specifications for previous versions of the software as well as random or ad-hoc tests. A regression test pack should, at a minimum, cover the basic workflow of typical use case scenarios.
Best practices for Testers in CI/CD
  • Perform the standard actions defined in the testing procedure and check the desired responses for correctness. Any failure of the system to comply with the set of desired responses becomes a clear indicator of system regression
  • Carefully analyze every defect against previous test scenarios to avoid a slip in regression testing
  • Ensure that the regression tests are correct and not outdated

Testing in a Stage Environment
In the staging environment (similar to the production environment), performance testing is done. Any application performance test result depends on the test environment configuration.
Performance testing is often an afterthought, performed in haste late in the development cycle, or only in response to user complaints. It’s crucial to have a common definition of the types of performance tests that should be executed against your applications, such as single user tests, load tests, peak load tests, and stress tests. It is best practice to include performance testing in development unit tests and to perform modular and system performance tests.
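For illustration, a minimal single-user timing check might look like this; the operation under test and the 100 ms budget are hypothetical:

```python
import statistics
import time

# Stand-in for the operation under test; a real check would exercise
# an actual endpoint or code path.
def handle_request():
    time.sleep(0.001)

# Collect response times, then assert the 95th percentile against a budget.
samples = []
for _ in range(50):
    start = time.perf_counter()
    handle_request()
    samples.append((time.perf_counter() - start) * 1000)  # milliseconds

p95 = statistics.quantiles(samples, n=20)[-1]  # 95th percentile
assert p95 < 100, f"p95 {p95:.1f} ms exceeds the 100 ms budget"
```

Tracking the same percentile build over build gives the trending data that makes performance regressions visible early.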
Testing in a Production Environment
In a production environment, sanity testing is done. Sanity tests are usually unscripted and help identify missing dependent functionalities. They are used to determine whether a section of the application still works after a minor change. The goal of sanity testing is not to find defects but to check system health. An excellent approach is to create a daily sanity checklist for production testing that covers all the main functionalities of the application. Sanity testing should be conducted on stable builds to ascertain that new functionality works and bugs have been fixed, confirming the application is ready for complete testing; it is performed by testers only.
Conclusion
This blog post points out which environments are part of the CI/CD pipeline and how each is configured for the successful deployment of an application. It also explains the testing types and approaches used in each environment, along with their best practices.

Contributed by Devendra Date, DevOpsTech Solutions Pvt Ltd

17#
 OP | Posted 2018-1-25 10:22
Implementing Continuous Integration (CI) provides software development teams the ability to adopt a regular software release schedule with an automated error detection process for a more agile, safe, and low-cost DevOps approach.
When applying this approach in data management, automated testing is important for some of the same reasons as it enables teams to execute with the 3 testing drivers: agility, accessibility, and accuracy.
Data Agility Testing
By leveraging modern data management tools, the data ingestion process can be deployed at a more rapid pace (with metadata driven workflows or drag-and-drop code generation). Agility testing helps ensure proper front-end configuration is in place, which may appear daunting, but with appropriate environment access and testing jobs, the process can be quite simple. Data agility gives teams accurate data ingestion to produce data for use or storage immediately.
Data Accessibility Testing
To start, this process tests database connections and file URLs for accuracy. In advanced models, data dictionaries and glossaries are also checked for valid entries against ingested data. This driver forces governance practices to be in place before ingestion for fewer deployment and activation problems.
Data Accuracy Testing
This testing takes place downstream of the ingestion process ensuring validation, transformation, and business rule logic is applied. It’s often considered the most difficult testing to visualize and implement at the right scope.
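A sketch of such downstream accuracy checks, applied after ingestion; the field names and business rules below are hypothetical:

```python
# Validation and business-rule logic applied to ingested rows.
ROWS = [
    {"id": 1, "amount": 120.0, "currency": "USD"},
    {"id": 2, "amount": -5.0,  "currency": "USD"},  # violates a business rule
]

RULES = {
    "amount_positive": lambda r: r["amount"] >= 0,
    "currency_known":  lambda r: r["currency"] in {"USD", "EUR"},
}

def violations(rows):
    """Return (row id, rule name) for every rule a row breaks."""
    return [(r["id"], name) for r in rows
            for name, ok in RULES.items() if not ok(r)]

print(violations(ROWS))  # [(2, 'amount_positive')]
```

Running such checks in CI after every ingestion deployment keeps rule regressions from reaching consumers of the data.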
Tackling CI may seem complex on the surface, but by following these 3 testing drivers, teams can ingest, transform, and apply business rules faster and with fewer issues while reducing manual touch points. Once CI is configured and test-driven development is in place, you will probably wonder what you ever did without it.

Contributed by Robert Griswold, Vice President of Professional Services for TESCHGlobal

18#
 OP | Posted 2018-1-26 09:54
When it comes to CI/CD systems, properly designing the overall structure of your system can often more effectively test your applications than poorly designed systems with excellent tests.
When you start designing your CI/CD pipelines, the first thing to do is break apart your application into as many logically independent components as possible. For example, your application might have a frontend, backend, and middleware layer. Your first instinct might be to create a pipeline for each component, but you usually want more granular control.
For example, let’s say that you deploy your software in a Docker container. You’ll want to independently test your software (ex situ), and also test the software inside your Docker container (in situ). This allows you to catch errors specific to your code, and errors related to your deployment platform. In this instance, the ex situ and in situ tests require their own pipelines to comprehensively test both the application and its deployment.
The next thing you need to consider is upstream dependencies. Upstream dependencies are triggers that cause a pipeline to execute. As a rule, each pipeline should have one primary upstream dependency, and zero or more secondary upstream dependencies. A primary upstream dependency is usually source code (but not always). Understanding the entire set of upstream dependencies for a given pipeline will make sure that your application always has the latest code, and will identify upstream changes that break functionality.
For example, let’s say that you have source code deployed as a Docker container. The git repository that contains your source code is a primary upstream dependency. When there are new commits to the repository, it triggers a pipeline that tests the individual code. This pipeline might also subscribe to another pipeline that contains dependencies, such as security scanning software.
After you understand how to structure your upstream dependencies, you need to consider downstream dependencies. Downstream dependencies are additional pipelines that are triggered when a pipeline successfully executes. When you start designing the structure of your CI/CD system, you need to take into account all downstream dependencies for every pipeline.
For example, if multiple pieces of software depend on a common module, anytime that common module’s pipeline executes, it should trigger those additional pieces of software. This guarantees that each source code component has the most recent version of all libraries, and it will identify any problems as early as possible.
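This trigger relationship can be modeled as a toy dependency graph; the pipeline names below are hypothetical:

```python
# Downstream dependencies: when a pipeline succeeds, every pipeline
# registered under it is triggered as well.
DOWNSTREAM = {
    "common-module": ["frontend", "backend", "reports"],
    "frontend": [],
    "backend": [],
    "reports": [],
}

def run(pipeline, executed=None):
    """Run a pipeline, then trigger everything downstream of it."""
    executed = executed if executed is not None else []
    executed.append(pipeline)              # build + test this pipeline
    for dep in DOWNSTREAM.get(pipeline, []):
        run(dep, executed)                 # success triggers downstream builds
    return executed

print(run("common-module"))
# ['common-module', 'frontend', 'backend', 'reports']
```

A change to the common module thus rebuilds and retests every dependent application automatically, surfacing breakages as early as possible.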
Let’s go over an example to showcase how designing the high-level actions and organization of your CI/CD system will enable you to structurally find problems, even with suboptimal testing.
An engineer at our hypothetical organization issues a patch to fix a security vulnerability. After he pushes the code to git, this triggers a pipeline which successfully tests the patch. This then triggers 5 additional pipelines, because each of these pipelines depends on the new patch. 4 of these pipelines complete successfully, but one fails. The successful pipelines push their build products to QA. An engineer notices that the last pipeline is now red, and sees that the security patch worked, but it exposed an integer overflow bug in a Java library that needs to be patched. The engineer patches the library, and the CI/CD system automatically builds, tests, and deploys that code to QA.
This pipeline structure enabled our hypothetical organization to deploy the security patch to 4 applications in QA automatically, and show engineers exactly what is wrong with the last bit of code. Without understanding the proper upstream and downstream dependencies, applying the software security patches would have been extremely time-consuming, and it would not have identified the fact that one of the apps has a bug that directly conflicts with the patch.
This CI/CD system is able to effectively test large sets of highly dependent applications simply because of how it is structured.

Contributed by David Widen, Pre-Sales Systems Engineer, BoxBoat Technologies

19#
 OP | Posted 2018-1-26 09:54
The setup could be fast, but the maintenance may be hard—this is the truth of Continuous Testing.
Here is a straightforward story of continuous testing started from scratch. The tester first climbed through some learning curves in scripting an automated test, maybe an API test, maybe a UI test. He gained hands-on experience in using test frameworks and libraries (e.g. Selenium, Mocha, Calabash, etc.). Then he found some ways to run the tests in a clean environment, understood the concept of building an image and running in a container, say using Docker. After some trial and error…Pass, Pass, Pass. Great!
The boss saw his work and said, “Let’s run all the tests once an hour, send me an alert when any test fails.” He went to Google to search some keywords: Pipeline and Scheduler. Jenkins, GitLab, Heroku—lots of systems are providing a pipeline service. By choosing any system, he could run the tests all at once right after the deployment stage. A schedule is even handier to trigger the pipeline periodically. At the same time, he saved the test results to some kind of database, so he could use those results to compare the records with previous runs. Finally, when a test failure was detected, the program would send an email to the boss.
From then on, the tester no longer needed to repeat the same set of manual tests every day, every few hours, or overnight, marking ticks and crosses on a long, long list, getting bored and easily making mistakes. But one day, sad news came: the website broke with NO alert sent. Okay…let’s check what went wrong. The test script? The built image? The pipeline? The test runner? The scheduler? The machine? The database? The alert sender? That’s why I said the maintenance may be hard. Yet, if your project is going to be a long one, it’s really worth it.
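The moving parts of that hourly job can be sketched as follows; the suite runner, results store, and alert sender are all hypothetical stand-ins:

```python
import datetime

# Stand-in for executing the full test suite in a clean container.
def run_suite():
    return {"passed": 41, "failed": 1}

# Stand-in for the alert sender (e.g. an email to the boss).
def send_alert(msg):
    print("ALERT:", msg)

history = []  # stand-in for the results database used for comparisons

def hourly_check():
    """The scheduled pipeline step: run tests, persist, alert on failure."""
    result = run_suite()
    history.append((datetime.datetime.now(), result))
    if result["failed"]:
        send_alert(f"{result['failed']} test(s) failed")
    return result
```

Each box in this sketch (runner, store, alerter, scheduler) is a separate thing that can break, which is exactly why the maintenance burden grows.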
Contributed by Joyz Ng, QA Engineer, Oursky

20#
 OP | Posted 2018-1-26 09:54
Continuous Delivery is a key practice to help teams keep pace with business demands. True Continuous Delivery is hard to do—it is impossible without confidence in the quality and fitness of the software—the kind of confidence only deterministic and repeatable automated tests can supply.
It is common for teams who have historically been dependent on manual testing to start with an inverted test pyramid in their automated build pipeline. This is better than no tests at all, but the fragility of these types of tests tends to surface quickly. False positives, false negatives, and little improvement in escaped production defect counts tend to erode trust in the test suite and, ultimately, leads to the removal of the tests.
I recommend that my teams start with a solid core of fast running, highly isolated unit tests. Every language has a popular unit testing framework which gives teams a high level of confidence in the quality and correctness of their code in mere seconds. This should be the first quality gate in a Continuous Delivery pipeline. If these tests fail there is no point in moving on to the next stage. Unit tests show that the code does things correctly.
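For illustration, a first-gate unit test might look like this; `slugify` is a hypothetical example of the kind of fast, isolated pure function such tests cover:

```python
import unittest

# A pure function under test: no I/O, no shared state, so the test
# runs in microseconds and never flakes.
def slugify(title: str) -> str:
    return "-".join(title.lower().split())

class SlugifyTest(unittest.TestCase):
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Continuous Delivery Rocks"),
                         "continuous-delivery-rocks")
```

Thousands of tests in this style finish in seconds, which is what lets them serve as the pipeline's first quality gate.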
Building upon the unit test suite should be a thin layer of integration tests. This proves that components play nice together and increases confidence that the application behaves the way that the customer expects. These tests should be executed one level below the UI – increasing their stability and minimizing their execution time. I encourage teams to use Behavior Driven Development style tests, which has the goal of proving that the code does the correct thing.
Given its popularity, it is typical for teams to jump straight into a Gherkin-based BDD framework, but the unit testing framework with which the team is already comfortable can be just as effective. Debates over tooling are endless. Ultimately, the ‘right tool’ is the one that gives the team the highest level of confidence that the code is fit for production with the least amount of friction.

Contributed by Nick Korbel, Booked Scheduler
