Kindle Notes & Highlights
by Sam Newman
Read between November 1 and December 24, 2017
The calling system would POST a BatchRequest, perhaps passing in a location where a file can be placed with all the data. The customer service would return an HTTP 202 response code, indicating that the request was accepted but has not yet been processed. The calling system could then poll the resource, waiting until it retrieves a 201 Created status, indicating that the request has been fulfilled, and then the calling system could go and fetch the data.
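A minimal sketch of that request/poll flow, assuming a hypothetical /batchRequests endpoint and Location headers pointing at the status and results resources (none of these names come from the book):

```python
import time
import requests

SERVICE = "http://customer-service"  # hypothetical base URL

# Kick off the batch: POST a BatchRequest, pointing at where results can go.
resp = requests.post(
    f"{SERVICE}/batchRequests",
    json={"resultsLocation": "s3://exports/customers-batch"},
)
assert resp.status_code == 202  # accepted, but not yet processed

status_url = resp.headers["Location"]  # the resource to poll

# Poll until the service reports 201 Created, then go fetch the data.
while True:
    poll = requests.get(status_url)
    if poll.status_code == 201:  # request fulfilled
        data = requests.get(poll.headers["Location"])  # fetch the results
        break
    time.sleep(5)  # back off between polls
```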
An alternative option is to have a standalone program that directly accesses the database of the service that is the source of data, and pumps it into a reporting database.
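As a rough illustration of such a data pump, a sketch that copies rows from a service’s database into a reporting database (the table names, columns, and the sqlite3 stand-in driver are all invented for the example):

```python
import sqlite3  # stand-in; a real pump would use the services' actual DB drivers

def pump_customers(source_db: str, reporting_db: str) -> None:
    src = sqlite3.connect(source_db)
    dst = sqlite3.connect(reporting_db)
    # Read directly from the source service's own database...
    rows = src.execute("SELECT id, email, created_at FROM customers").fetchall()
    # ...and push the data into the reporting database's table.
    dst.executemany(
        "INSERT OR REPLACE INTO customers_report VALUES (?, ?, ?)", rows
    )
    dst.commit()
```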
To start with, the data pump should be built and managed by the same team that manages the service.
have one schema in the reporting database for each service, using things like materialized views to create the aggregated view.
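To make that concrete, a hedged sketch in PostgreSQL terms, driven from Python via psycopg2; the schema, table, and view names are made up:

```python
import psycopg2

conn = psycopg2.connect("dbname=reporting")
with conn, conn.cursor() as cur:
    # Each service pumps into its own schema in the reporting database...
    cur.execute("CREATE SCHEMA IF NOT EXISTS customer_service")
    cur.execute("CREATE SCHEMA IF NOT EXISTS order_service")
    # ...and a materialized view stitches them into the aggregated view.
    cur.execute("""
        CREATE MATERIALIZED VIEW IF NOT EXISTS customer_orders AS
        SELECT c.id, c.email, count(o.id) AS order_count
        FROM customer_service.customers c
        JOIN order_service.orders o ON o.customer_id = c.id
        GROUP BY c.id, c.email
    """)
```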
Like data pumps, though, with this pattern we still have a coupling to the destination reporting schema (or target system).
we are moving more and more toward generic eventing systems capable of routing our data to multiple different places depending on need.
CI without some verification that our code behaves as expected isn’t CI.
A red build means the last change possibly did not integrate. You need to stop all further check-ins that aren’t involved in fixing the build in order to get it passing again.
Organizations using this approach often fall back to just deploying everything together, which we really want to avoid.
The approach I prefer is to have a single CI build per microservice, with each microservice in its own source code repository, mapped to its own CI build. When making a change, I run only the build and tests I need to.
The tests for a given microservice should live in source control with the microservice’s source code too.
each microservice will live in its own source code repository.
A solution to this problem is to have different stages in our build, creating what is known as a build pipeline: one stage for the faster tests, one for the slower tests, as sketched below.
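A toy illustration of the staging idea: stages run in order, and a failure in the fast stage prevents the slower stage from ever running (the stage names and test commands are invented):

```python
import subprocess
import sys

# Each stage is a name plus the command that runs it; fast feedback first.
PIPELINE = [
    ("fast-tests", ["pytest", "tests/unit", "-q"]),
    ("slow-tests", ["pytest", "tests/integration", "-q"]),
]

for name, cmd in PIPELINE:
    print(f"=== stage: {name} ===")
    if subprocess.run(cmd).returncode != 0:
        sys.exit(f"stage '{name}' failed; later stages not run")
```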
Tools that fully support CD allow you to define and visualize these pipelines, modeling the entire path to production for your software.
we want to ensure we can release our services independently of each other.
Chef, Puppet, and Ansible all support multiple common technology-specific build artifacts too.
Chocolatey NuGet has extended these ideas.
We also want to avoid keeping our machines around for too long, as we don’t want to allow for too much configuration drift.
When we want to deploy our software, we spin up an instance of this custom image, and all we have to do is install the latest version of our service.
This problem is often called configuration drift — the code in source control no longer reflects the configuration of the running host.
create a single artifact, and manage configuration separately. This could be a properties file that exists for each environment.
when dealing with a larger number of microservices, is to use a dedicated system for providing configuration.
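One way to picture both options: the build produces a single artifact, and a small loader reads an INI-style properties file chosen per environment; a dedicated configuration system could slot in behind the same function. The file layout, the [service] section, and the DEPLOY_ENV variable are all illustrative:

```python
import configparser
import os

def load_config(env: str) -> dict:
    # One artifact for every environment; only this file changes per environment.
    parser = configparser.ConfigParser()
    parser.read(f"config/{env}.properties")  # e.g. config/staging.properties
    return dict(parser["service"])

config = load_config(os.environ.get("DEPLOY_ENV", "dev"))
db_url = config["database_url"]
```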
There are some challenges with this model, though. First, it can make monitoring more difficult.
Deployment of services can be somewhat more complex too.
small upside in improving simplicity is more than outweighed by the fact that we have given up one of the key benefits of microservices: striving for independent release of our software.
If you do adopt the multiple-services-per-host model, make sure you keep hold of the idea that each service should be deployed independently.
The idea is that the application container your services live in gives you benefits in terms of improved manageability, such as clustering support to handle grouping multiple instances together, monitoring tools, and the like.
Whether or not you decide to have multiple services per host as a deployment model, I would strongly suggest looking at self-contained deployable microservices as artifacts.
For .NET, this is possible with things like Nancy, and Java has supported this model for years.
With the single-service-per-host model shown in Figure 6-8, we avoid side effects of multiple services living on a single host, making monitoring and remediation much simpler.
In my opinion, if you don’t have a viable PaaS available, then this model does a very good job of reducing a system’s overall complexity.
Having a single-service-per-host model is significantly easier to reason about and can help reduce complexity.
Platform as a Service
When using a platform as a service (PaaS), you are working at a higher level of abstraction than a single host.
When PaaS solutions work well, they work very well indeed. However, when they don’t quite work for you, you often don’t have much control in terms of getting under the hood to fix things.
the smarter the PaaS solutions try to be, the more they go wrong.
One of the pushbacks against the single-service-per-host setup is the perception that the amount of overhead to manage these hosts will increase.
Ideally, developers should have access to exactly the same tool chain as is used for deployment of our production services so as to ensure that we can spot problems early on.
In the first three months of this exercise, REA was able to move just two new microservices into production, with the development team taking full responsibility for the entire build, deployment, and support of the services. In the next three months, between 10 and 15 services went live in a similar manner. By the end of the 18-month period, REA had 60 to 70 services.
The problem is that the hypervisor here needs to set aside resources to do its job.
The more hosts the hypervisor manages, the more resources it needs.
Each container is effectively a subtree of the overall system process tree. These containers can have physical resources allocated to them, something the kernel handles for us.
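As a flavor of what the kernel handles for us, a sketch that caps a container’s CPU through the cgroup v2 filesystem interface (the group name is made up, and this requires root on a cgroup-v2 host):

```python
import os

CGROUP = "/sys/fs/cgroup/my-container"  # hypothetical group for one container

os.makedirs(CGROUP, exist_ok=True)

# Allow 50ms of CPU time per 100ms period, i.e. half a core.
with open(os.path.join(CGROUP, "cpu.max"), "w") as f:
    f.write("50000 100000")

# Any process whose PID is written to cgroup.procs is now constrained;
# its children join the same subtree of the process tree automatically.
with open(os.path.join(CGROUP, "cgroup.procs"), "w") as f:
    f.write(str(os.getpid()))
```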
Due to the lighter-weight nature of containers, we can have many more of them running on the same hardware than would be possible with VMs.
We have our builds for our services create Docker applications, and store them in the Docker registry, and away we go.
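That build step can be as plain as shelling out to the Docker CLI; the image name and registry below are placeholders:

```python
import subprocess

IMAGE = "registry.example.com/catalog-service:1.2.3"  # placeholder name

# Build the image from the service's Dockerfile and push it to the registry.
subprocess.run(["docker", "build", "-t", IMAGE, "."], check=True)
subprocess.run(["docker", "push", IMAGE], check=True)
```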
Think of it as a simple PaaS that works on a single machine. If you want tools to help you manage services across multiple Docker instances across multiple machines, you’ll need to look at other software that adds these capabilities.
Docker with an appropriate scheduling layer sits between IaaS and PaaS solutions; the term containers as a service (CaaS) is sometimes used to describe this.
A common example of this is the smoke test suite, a collection of tests designed to be run against newly deployed software to confirm that the deployment worked.
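A smoke test suite can be very small; a sketch against a hypothetical /health endpoint and a read-only catalog call on a freshly deployed service:

```python
import sys
import requests

BASE = sys.argv[1]  # e.g. http://catalog-service.staging:8080

def test_service_is_up():
    # The deployment worked if the service answers at all.
    resp = requests.get(f"{BASE}/health", timeout=5)
    assert resp.status_code == 200

def test_can_read_reference_data():
    # A read-only call that also exercises the service's database connection.
    resp = requests.get(f"{BASE}/catalog/items?limit=1", timeout=5)
    assert resp.status_code == 200

if __name__ == "__main__":
    test_service_is_up()
    test_can_read_reference_data()
    print("smoke tests passed")
```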