Building Microservices: Designing Fine-Grained Systems
Kindle Notes & Highlights
Read between November 1 - December 24, 2017
34%
The calling system would POST a BatchRequest, perhaps passing in a location where a file can be placed with all the data. The customer service would return an HTTP 202 response code, indicating that the request was accepted but has not yet been processed. The calling system could then poll the resource waiting until it retrieves a 201 Created status, indicating that the request has been fulfilled, and then the calling system could go and fetch the data.
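The accept-then-poll flow described above can be sketched as a small helper. The function and the stubbed status sequence are illustrative, not from the book; in practice `fetch_status` would issue a real HTTP GET against the batch-request resource.

```python
import time

def poll_until_created(fetch_status, interval=0.0, max_attempts=10):
    """Poll a batch-request resource until it reports 201 Created.

    fetch_status: callable returning the current HTTP status code
    (202 while the batch is still being processed, 201 once done).
    Returns True if the batch completed within max_attempts polls.
    """
    for _ in range(max_attempts):
        status = fetch_status()
        if status == 201:
            return True          # batch fulfilled; caller can now fetch the data
        if status != 202:
            raise RuntimeError(f"unexpected status {status}")
        time.sleep(interval)     # wait before polling again
    return False

# Stub standing in for the customer service: 202 twice, then 201.
responses = iter([202, 202, 201])
print(poll_until_created(lambda: next(responses)))  # True
```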
34%
An alternative option is to have a standalone program that directly accesses the database of the service that is the source of data, and pumps it into a reporting database,
34%
To start with, the data pump should be built and managed by the same team that manages the service.
34%
have one schema in the reporting database for each service, using things like materialized views to create the aggregated view.
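A minimal sketch of the schema-per-service reporting idea, using SQLite's `ATTACH` to stand in for real database schemas and a `TEMP VIEW` to stand in for the materialized view you would create in, say, Postgres (`CREATE MATERIALIZED VIEW`). All table and column names are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("ATTACH ':memory:' AS customers")   # customer service's schema
conn.execute("ATTACH ':memory:' AS orders")      # order service's schema

conn.execute("CREATE TABLE customers.customer (id INTEGER, name TEXT)")
conn.execute("CREATE TABLE orders.purchase (customer_id INTEGER, total REAL)")
conn.execute("INSERT INTO customers.customer VALUES (1, 'Ada')")
conn.execute("INSERT INTO orders.purchase VALUES (1, 9.5), (1, 2.5)")

# The aggregated reporting view joins across the per-service schemas.
# (TEMP view, because a non-TEMP SQLite view cannot reference attached
# databases; a real reporting database would not have that restriction.)
conn.execute("""
    CREATE TEMP VIEW spend_by_customer AS
    SELECT c.name, SUM(p.total) AS total_spend
    FROM customers.customer AS c
    JOIN orders.purchase AS p ON p.customer_id = c.id
    GROUP BY c.name
""")
print(conn.execute("SELECT * FROM spend_by_customer").fetchall())  # [('Ada', 12.0)]
```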
35%
Like data pumps, though, with this pattern we still have a coupling to the destination reporting schema (or target system).
35%
we are moving more and more toward generic eventing systems capable of routing our data to multiple different places depending on need.
36%
CI without some verification that our code behaves as expected isn’t CI.
36%
A red build means the last change possibly did not integrate. You need to stop all further check-ins that aren’t involved in fixing the builds to get it passing again.
37%
Organizations using this approach often fall back to just deploying everything together, which we really want to avoid.
37%
The approach I prefer is to have a single CI build per microservice,
37%
source code repository, mapped to its own CI build. When making a change, I run only the build and tests I need to.
37%
The tests for a given microservice should live in source control with the microservice’s source code too,
37%
each microservice will live in its own source code repository,
37%
solution to this problem is to have different stages in our build, creating what is known as a build pipeline. One stage for the faster tests, one for the slower tests.
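The fast-stages-first pipeline idea can be sketched as a tiny stage runner; the stage names and callables are hypothetical, not from the book. The point is simply that cheap checks run first, so a broken build fails before the expensive stages are reached.

```python
def run_pipeline(stages):
    """Run build stages in order, stopping at the first failure.

    stages: list of (name, callable) pairs; each callable returns True
    on success. Returns the list of stage names that actually ran.
    """
    ran = []
    for name, stage in stages:
        ran.append(name)
        if not stage():
            break   # fail fast; don't waste time on slower stages
    return ran

# Hypothetical two-stage pipeline: fast tests first, then slow tests.
print(run_pipeline([
    ("fast-tests", lambda: True),
    ("slow-tests", lambda: True),
]))  # ['fast-tests', 'slow-tests']
```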
37%
Tools that fully support CD allow you to define and visualize these pipelines, modeling the entire path to production for your software.
37%
we want to ensure we can release our services independently of each other,
38%
Chef, Puppet, and Ansible all support multiple common technology-specific build artifacts too.
38%
Chocolatey NuGet has extended these ideas,
39%
We also want to avoid keeping our machines around for too long, as we don’t want to allow for too much configuration drift
39%
When we want to deploy our software, we spin up an instance of this custom image, and all we have to do is install the latest version of our service.
39%
This problem is often called configuration drift — the code in source control no longer reflects the configuration of the running host.
40%
create one single artifact, and manage configuration separately. This could be a properties file that exists for each environment,
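The single-artifact-plus-external-configuration approach can be sketched with the standard `configparser` module; the environment names, file contents, and keys here are invented for illustration. The same deployed artifact reads a different properties file depending on the target environment.

```python
import configparser

# Per-environment properties (inlined as strings here; on disk these
# would be files like service-dev.properties, service-prod.properties).
ENV_FILES = {
    "dev":  "db.host = localhost\n",
    "prod": "db.host = db.internal.example\n",
}

def load_config(environment):
    parser = configparser.ConfigParser()
    # Java-style properties files have no [section] header, so add one.
    parser.read_string("[service]\n" + ENV_FILES[environment])
    return dict(parser["service"])

print(load_config("prod"))  # {'db.host': 'db.internal.example'}
```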
40%
when dealing with a larger number of microservices, is to use a dedicated system for providing configuration,
41%
There are some challenges with this model, though. First, it can make monitoring more difficult.
41%
Deployment of services can be somewhat more complex too,
41%
small upside in improving simplicity is more than outweighed by the fact that we have given up one of the key benefits of microservices: striving for independent release of our software.
41%
If you do adopt the multiple-services-per-host model, make sure you keep hold of the idea that each service should be deployed independently.
41%
The idea is that the application container your services live in gives you benefits in terms of improved manageability, such as clustering support to handle grouping multiple instances together, monitoring tools, and the like.
41%
Whether or not you decide to have multiple services per host as a deployment model, I would strongly suggest looking at self-contained deployable microservices as artifacts.
41%
For .NET, this is possible with things like Nancy, and Java has supported this model for years.
41%
With a single-service-per-host model shown in Figure 6-8, we avoid side effects of multiple services living on a single host, making monitoring and remediation much simpler.
42%
In my opinion, if you don’t have a viable PaaS available, then this model does a very good job of reducing a system’s overall complexity.
42%
Having a single-service-per-host model is significantly easier to reason about and can help reduce complexity.
42%
Platform as a Service
When using a platform as a service (PaaS), you are working at a higher level of abstraction than a single host.
42%
When PaaS solutions work well, they work very well indeed. However, when they don’t quite work for you, you often don’t have much control in terms of getting under the hood to fix things.
42%
the smarter the PaaS solutions try to be, the more they go wrong.
42%
One of the pushbacks against the single-service-per-host setup is the perception that the amount of overhead to manage these hosts will increase.
42%
Ideally, developers should have access to exactly the same tool chain as is used for deployment of our production services so as to ensure that we can spot problems early on.
42%
In the first three months of this exercise, REA was able to move just two new microservices into production, with the development team taking full responsibility for the entire build, deployment, and support of the services. In the next three months, 10–15 services went live in a similar manner. By the end of the 18-month period, REA had 60–70 services.
43%
The problem is that the hypervisor here needs to set aside resources to do its job.
43%
The more hosts the hypervisor manages, the more resources it needs.
43%
Each container is effectively a subtree of the overall system process tree. These containers can have physical resources allocated to them, something the kernel handles for us.
43%
Due to the lighter-weight nature of containers, we can have many more of them running on the same hardware than would be possible with VMs.
44%
We have our builds for our services create Docker applications, and store them in the Docker registry, and away we go.
44%
Think of it as a simple PaaS that works on a single machine. If you want tools to help you manage services across multiple Docker instances across multiple machines, you’ll need to look at other software that adds these capabilities.
44%
Docker with an appropriate scheduling layer sits between IaaS and PaaS solutions — the term containers as a service (CaaS) is
51%
A common example of this is the smoke test suite, a collection of tests designed to be run against newly deployed software to confirm that the deployment worked.
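A smoke test suite of the kind described above can be sketched as follows. The endpoint paths are invented, and `check_health` is a stub; in a real deployment it would make an HTTP call to each newly deployed service's health endpoint.

```python
# Hypothetical post-deployment smoke test: hit each service's health
# endpoint and fail fast if any is down.
ENDPOINTS = ["/customers/health", "/orders/health"]

def run_smoke_tests(endpoints, check_health):
    failures = [e for e in endpoints if not check_health(e)]
    if failures:
        raise AssertionError(f"smoke test failed for: {failures}")
    return True

print(run_smoke_tests(ENDPOINTS, lambda endpoint: True))  # True
```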