Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation
7%
There should be two tasks for a human being to perform to deploy software into a development, test, or production environment: to pick the version and environment and to press the “deploy” button.
7%
All aspects of each of your testing, staging, and production environments, specifically the configuration of any third-party elements of your system, should be applied from version control through an automated process.
8%
A working software application can be usefully decomposed into four components: executable code, configuration, host environment, and data. If any of them changes, it can lead to a change in the behavior of the application.
14%
You should, as part of your deployment script, ensure that the messaging bus you are configured to use is actually up and running at the address configured, and that the mock order fulfillment service your application expects to use in the functional testing environment is working. At the very least, you could ping all external services.
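A minimal sketch of such a check in Python, assuming hypothetical hostnames and ports for the messaging bus and the mock fulfillment service (the real deployment script would read these from the environment's version-controlled configuration):

```python
import socket

def check_service(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical dependencies of the functional testing environment.
REQUIRED_SERVICES = {
    "messaging-bus": ("localhost", 5672),
    "mock-fulfillment": ("localhost", 8081),
}

def verify_environment() -> list:
    """Return the names of any required services that are unreachable."""
    return [name for name, (host, port) in REQUIRED_SERVICES.items()
            if not check_service(host, port)]
```

A deployment script would call `verify_environment()` immediately after deploying and fail the deployment if the returned list is non-empty.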
15%
It should always be cheaper to create a new environment than to repair an old one.
15%
Your automated environment provisioning system should be able to establish, or reestablish, any given baseline that has existed in the recent history of your project.
25%
high-quality software is only possible if testing becomes the responsibility of everybody involved in delivering software and is practiced right from the beginning of the project and throughout its life.
26%
you should only build your binaries once, during the commit stage of the build.
35%
The lowest layer is the operating system. Next is the middleware and any other software your application depends on. Once both these layers are in place, they will need some specific configuration applied to prepare them for the deployment of your application. Only after this has been added can we deploy our software—the deployable binaries, any services or daemons, and their own associated configuration.
36%
Finally, it bears reiterating that scripts are first-class parts of your system. They should live for its entire life. They should be version-controlled, maintained, tested, and refactored, and be the only mechanism that you use to deploy your software.
41%
Initially, the analysts will work closely with testers and the customer to define acceptance criteria.
41%
Once the acceptance criteria have been defined, just before the requirement is to be implemented, the analyst and tester sit with the developers who will do the implementation, along with the customer if available. The analyst describes the requirement and the business context in which it exists, and goes through the acceptance criteria. The tester then works with the developers to agree on a collection of automated acceptance tests that will prove that the acceptance criteria have been met.
43%
avoid the lure of obtaining a dump of production data to populate your test database for your acceptance tests
43%
if your software supports multiple users who have independent accounts, use the features of your application to create a new account at the start of every test, as shown in the example in the previous section. Create some simple test infrastructure in your application driver layer to make the creation of new accounts trivially simple. Now when your test is run, any activities and resulting state belonging to the account associated with the test is independent of activities taking place in other accounts.
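One way to sketch that test infrastructure in Python — the `AccountDriver` helper and the `create_account` API call are hypothetical stand-ins for whatever your application driver layer actually exposes:

```python
import uuid

class AccountDriver:
    """Hypothetical driver-layer helper that creates an isolated account
    for each acceptance test through the application's own API."""

    def __init__(self, api_client):
        self.api = api_client

    def create_test_account(self, prefix: str = "test") -> str:
        # A unique name keeps this test's state independent of every
        # other test's activities and resulting state.
        username = f"{prefix}-{uuid.uuid4().hex[:8]}"
        self.api.create_account(username, password="secret")  # assumed API call
        return username
```

Each test then starts by calling `create_test_account()` and performs all of its interactions under that account.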
45%
We often choose to run a small selection of new smoke tests designed to assert that our environment is configured as we expect and that the communications channels between the various components of our system are correctly in place and working as intended.
45%
Sometimes, prepopulating the application with “seed data” or using some back door into the application to populate it with test data is a valid approach, but you should treat such back doors with a degree of skepticism, since it is all too easy for this test data to not be quite the same as that created by the normal operation of the application, which invalidates the correctness of subsequent testing.
47%
For example, one approach to managing an NFR, such as auditability, is to say something like “All important interactions with the system should be audited,” and perhaps create a strategy for adding relevant acceptance criteria to the stories involving the interactions that need to be audited. An alternative approach is to capture requirements from the perspective of an auditor. What would a user in that role like to see? We simply describe the auditor’s requirements for each report they want to see.
49%
A good strategy is to take some existing acceptance tests and adapt them to become capacity tests. If your acceptance tests are effective, they will represent realistic scenarios of interaction with your system, and will be robust in the face of change in the application. The properties that they lack are: the ability to scale up so you can apply serious load to the application, and a specification of a measure of success.
50%
Whenever you are writing capacity tests, it is important to start by implementing a simple no-op stub of the application, interface, or technology under test so you can show that your test can run at the speeds that it needs to and correctly assert a pass when the other end is doing no work.
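A minimal sketch of that calibration step in Python: the no-op stub stands in for the system under test, so any throughput measured here is the ceiling imposed by the test harness itself, not by the application:

```python
import time

def noop_stub(request):
    """Stand-in for the application under test: does no work, always succeeds."""
    return {"status": "ok"}

def calibrate(iterations: int = 100_000) -> float:
    """Drive the no-op stub and return the achieved requests per second,
    proving the harness can generate the required load and correctly
    assert a pass when the other end is doing no work."""
    start = time.perf_counter()
    for i in range(iterations):
        response = noop_stub({"id": i})
        assert response["status"] == "ok"
    elapsed = time.perf_counter() - start
    return iterations / elapsed
```

If `calibrate()` cannot reach your target load, the capacity test needs a faster harness before its results mean anything.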
51%
When deployment to production occurs, the same process should be followed as for any other deployment. Fire up your automated deployment system, give it the version of your software to deploy and the name of the target environment, and hit go. This same process should also be used for all subsequent deployments and releases.
52%
make your smoke tests verify that you are pointing at the right things. For example, you could have a test double service return the environment it expects to talk to as a string, and have the smoke tests check that the string your application gets back from an external service matches the environment it is deploying to.
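A sketch of that smoke check in Python — `get_environment()` is an assumed diagnostic endpoint on the external service (or its test double), not a standard API:

```python
def smoke_check_environment(service_client, expected_env: str) -> None:
    """Ask a downstream service which environment it believes it serves,
    and fail fast if it does not match the environment we deployed to."""
    reported = service_client.get_environment()  # assumed diagnostic call
    if reported != expected_env:
        raise RuntimeError(
            f"Mis-wired deployment: external service reports {reported!r}, "
            f"but this deployment targets {expected_env!r}"
        )
```

Run one such check per external dependency as part of the post-deployment smoke suite.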
54%
In every system, there comes a moment when a critical defect is discovered and has to be fixed as soon as possible. In this situation, the most important thing to bear in mind is: Do not, under any circumstances, subvert your process.
54%
As an application developer, you want to give your users options. However, in the case of upgrading, users have no understanding of why they might want to delay the upgrade.
54%
The upgrade process might break the application, thinks the development team, so we should give the user a choice on this matter. But, if the upgrade process is indeed flaky, the user would of course be correct never to upgrade. If the upgrade process is not flaky, then there is no point in providing the choice: The upgrade should happen automatically. So in fact, giving users a choice simply tells them that the developers have no confidence in the upgrade process.
54%
if the upgrade process fails, the application should automatically revert to the previous version and report the failure to the development team. They can then fix the problem and roll out a new version which will (hopefully) upgrade correctly.
55%
A “build and deployment expert” is an antipattern. Every member of the team should know how to deploy, and every member of the team should know how to maintain the deployment scripts.
55%
Deployment scripts should incorporate tests to ensure that the deployment was successful. These should be run as part of the deployment itself. They shouldn’t be comprehensive unit tests, but simple smoke tests that make sure the deployed units are working.
57%
The deployment system forms an integral part of the application—it should be tested and refactored with the same care and attention as the rest of the application, and kept in version control.
59%
If you can’t keep vital configuration information in versioned storage and thus manage changes to it in a controlled manner, the technology will become an obstacle to delivering high-quality results.
60%
On one of our projects, we kept a “pain-register,” a diary of time lost on inefficient technology, which after a month easily demonstrated the cost of struggling with technology that slowed down delivery.
64%
While unit tests should not, by definition, require a database in order to run, any kind of meaningful acceptance tests running against a database-using application will require the database to be correctly initialized. Thus, part of your acceptance test setup process should be creating a database with the correct schema to work with the latest version of the application and loading it with any test data necessary to run the acceptance tests.
65%
Simply create a table in your database that contains its version number. Then, every time you make a change to the database, you need to create two scripts: one that takes the database from a version x to version x + 1 (a roll-forward script), and one that takes it from version x + 1 to version x (a roll-back script).
65%
At deployment time, you can then use a tool which looks at the version of the database currently deployed and the version of the database required by the version of the application that is being deployed. The tool will then work out which scripts to run to migrate the database from its current version to the required version, and run them on the database in order.
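A minimal sketch of such a migration tool in Python with SQLite — the version table, the roll-forward scripts, and their contents are all illustrative, not a prescribed schema:

```python
import sqlite3

# Hypothetical roll-forward scripts, keyed by the version they upgrade FROM.
FORWARD = {
    1: "ALTER TABLE customer ADD COLUMN email TEXT",
    2: "CREATE TABLE audit_log (id INTEGER PRIMARY KEY, message TEXT)",
}

def current_version(conn) -> int:
    """Read (or initialize) the version number stored in the database."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    row = conn.execute("SELECT version FROM schema_version").fetchone()
    if row is None:
        conn.execute("INSERT INTO schema_version VALUES (1)")
        return 1
    return row[0]

def migrate(conn, target: int) -> None:
    """Run the roll-forward scripts, in order, from the deployed version
    up to the version required by the application being deployed."""
    version = current_version(conn)
    while version < target:
        conn.execute(FORWARD[version])
        version += 1
        conn.execute("UPDATE schema_version SET version = ?", (version,))
    conn.commit()
```

A corresponding `BACKWARD` map of roll-back scripts, applied in reverse order, would let the same tool downgrade the schema.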
65%
“technical debt” applies to database design. There is an inevitable cost to any design decision. Some costs are obvious, for example the amount of time it takes to develop a feature. Some costs are less obvious, such as the cost of maintaining code in the future.
65%
If we make design choices that are suboptimal, we are in effect borrowing from the future. As with any debt, there are interest payments to be made. For technical debt, the interest is paid in the form of maintenance.
66%
If you are releasing frequently, you do not need to migrate your database for every release of your application. When you do need to migrate your database, instead of having the application work only with the new version of the database, you must ensure it works with both the new version and the current version.
66%
for all tests, whether manual or automated. What data will allow us to simulate common interactions with the system? What data represents edge cases that will prove that our application works for unusual inputs? What data will force the application into error conditions so that we can evaluate its response under those circumstances?
66%
We run unit tests to protect ourselves from the effects of inadvertently making a change that breaks our application. We run acceptance tests to assert that the application delivers the expected value to users. We perform capacity testing to assert that the application meets our capacity requirements. Perhaps we run a suite of integration tests to confirm that our application communicates correctly with services it depends on.
67%
The more these tests are tied to the specifics of the implementation, the worse they are at performing that role. The problem is that when you need to refactor the implementation of some aspect of your system, you want the test to protect you. If the tests are too tightly linked to the specifics of the implementation, you will find that making a small change in implementation results in a bigger change in the tests that surround it. Instead of defending the behavior of the system, and so facilitating necessary change, tests that are too tightly coupled to the specifics of the implementation …
67%
tight coupling in tests is often the result of overelaborate test data.
67%
If you find yourself working hard to establish the data for a particular test, it is a sure indicator that your design needs to be better decomposed. You need to split the design into more components and test each independently…
67%
isolate the code creating test instances of such commonly used data structures and share them between many different test cases. We may have a CustomerHelper or CustomerFixture class that will simplify the creation of Customer objects for our tests, so they are created in a consistent manner with a collection of standard default values for each Customer.
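A sketch of such a fixture class in Python — the `Customer` fields and their defaults are invented for illustration; the point is that tests override only the values that matter to them:

```python
import dataclasses

@dataclasses.dataclass
class Customer:
    # Hypothetical fields with standard default values shared by all tests.
    name: str = "Test Customer"
    email: str = "test@example.com"
    country: str = "GB"
    loyalty_points: int = 0

class CustomerFixture:
    """Shared helper so every test creates Customers consistently,
    overriding only the fields relevant to the case under test."""

    @staticmethod
    def create(**overrides) -> Customer:
        return dataclasses.replace(Customer(), **overrides)
```

A test that cares only about loyalty points writes `CustomerFixture.create(loyalty_points=10)` and stays decoupled from every other field.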
67%
When considering how to set up the state of the application for an acceptance test, it is helpful to distinguish between three kinds of data. 1. Test-specific data: This is the data that drives the behavior under test. It represents the specifics of the case under test. 2. Test reference data: There is often a second class of data that is relevant for a test but actually has little bearing upon the behavior under test. It needs to be there, but it is part of the supporting cast, not the main player. 3. Application reference data: Often, there is data that is irrelevant to the behavior under …
67%
we recommend using the application’s API to put it into the correct state.
67%
Your acceptance tests will also serve as tests of your application’s API.
68%
These datasets, including the minimal dataset required to start the application, should also be used by developers in their environments. On no account should developers use production datasets in their environments.
68%
Employing a component-based design is often described as encouraging reuse and good architectural properties such as loose coupling. This is true, but it also has another important benefit: It is one of the most efficient ways for large teams of developers to collaborate.
68%
There are four strategies to employ in order to keep your application releasable in the face of change: • Hide new functionality until it is finished. • Make all changes incrementally as a series of small changes, each of which is releasable. • Use branch by abstraction to make large-scale changes to the codebase. • Use components to decouple parts of your application that change at different rates.
69%
The bigger the apparent reason to branch, the more you shouldn’t branch.
69%
If some part of the codebase needs to be changed, you first find the entry point to this part—a seam—and put in an abstraction layer which delegates to the current implementation. You then develop the new implementation alongside the existing one. Which implementation gets used is decided by a configuration option that can be modified at deploy time or even run time.
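A minimal sketch of branch by abstraction in Python — the `PricingService` seam, both implementations, and the `USE_NEW_PRICING` switch are all hypothetical names:

```python
import os

class LegacyPricing:
    """The current implementation, still behind the abstraction."""
    def price(self, sku: str) -> int:
        return 100  # stand-in for the existing logic

class NewPricing:
    """The replacement, developed alongside the existing implementation."""
    def price(self, sku: str) -> int:
        return 100  # stand-in for the new logic

class PricingService:
    """The abstraction layer inserted at the seam. Callers depend only on
    this class; a deploy-time (or runtime) switch picks the implementation."""

    def __init__(self, use_new=None):
        if use_new is None:
            # Assumed deploy-time configuration option.
            use_new = os.environ.get("USE_NEW_PRICING") == "1"
        self._impl = NewPricing() if use_new else LegacyPricing()

    def price(self, sku: str) -> int:
        return self._impl.price(sku)
```

Once the new implementation is complete and proven, the switch, the abstraction's delegation to `LegacyPricing`, and eventually `LegacyPricing` itself can all be deleted.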