Building Microservices: Designing Fine-Grained Systems
Read between August 6 - September 13, 2022
32%
With continuous deployment on the other hand, all check-ins have to be validated using automated mechanisms (tests for example), and any software that passes these verification checks is deployed automatically, without human intervention.
32%
Build a deployment artifact for your microservice once. Reuse this artifact everywhere you want to deploy that version of your microservice. Keep your deployment artifact environment-agnostic—store environment-specific configuration elsewhere.
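One way to honor this in practice (a minimal sketch, not the book's example): keep the artifact free of environment-specific values and have the service look them up at startup, for instance from environment variables. The variable names below are hypothetical.

```python
import os

# The artifact contains only the lookup logic; the values differ per environment
# and are supplied by whatever deploys the service (CI pipeline, orchestrator, etc.).
class Settings:
    def __init__(self) -> None:
        self.database_url = os.environ["DATABASE_URL"]            # environment-specific
        self.payment_service_url = os.environ["PAYMENT_SERVICE_URL"]
        self.log_level = os.environ.get("LOG_LEVEL", "INFO")      # sensible default

settings = Settings()
```

The same build can then be promoted unchanged from test to staging to production; only the injected configuration changes.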
33%
if you choose to reuse code via libraries, then you must be OK with the fact that these changes cannot be rolled out in an atomic fashion, or else we undermine our goal of independent deployability.
33%
By having all the source code in the same repository, you allow for source code changes to be made across multiple projects in an atomic fashion, and for finer-grained reuse of code from one project to the next.
33%
the ability to reuse code easily and to make changes that impact multiple different projects is often cited as the major reason for adopting this pattern.
34%
With strong ownership, some code is owned by a specific group of people. If someone from outside that group wants to make a change, they have to ask the owners to make the change for them. Weak ownership still has the concept of defined owners, but people outside the ownership group are allowed to make changes, although any of these changes must be reviewed and accepted by someone in the ownership group.
35%
Due to the way that relational databases work, it’s more difficult to scale writes by adding additional machines (typically sharding models are required, which adds additional complexity), so moving read-only traffic to these read replicas can often free up more capacity on the write node to allow for more scaling.
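A minimal sketch of that split, assuming a single write node plus read replicas (the connection strings are hypothetical): read-only queries are spread across the replicas, everything else goes to the primary.

```python
import random

# Hypothetical connection strings; in practice these come from configuration.
PRIMARY = "postgresql://primary.db.internal/orders"
READ_REPLICAS = [
    "postgresql://replica-1.db.internal/orders",
    "postgresql://replica-2.db.internal/orders",
]

def connection_for(read_only: bool) -> str:
    """Send read-only traffic to a replica to free capacity on the write node."""
    if read_only and READ_REPLICAS:
        return random.choice(READ_REPLICAS)
    return PRIMARY
```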
36%
With a rolling upgrade, your microservice isn’t totally shut down before the new version is deployed; instead, instances of your microservice are slowly ramped down as new instances running new versions of your software are ramped up.
39%
If one part of your system can dynamically scale but the other parts of your system don’t, then you might find that this mismatch can cause significant headaches.
40%
Assuming that the same team manages all these functions, and that conceptually it remains a single “service,” I’d be OK with them still using the same database, as Figure 8-20 shows.
40%
Over time, though, if the needs of each aggregate function diverge, I’d be inclined to look to separate out their data usage, as seen in Figure 8-21, especially if you start to see coupling in the data tier impair your ability to change them easily.
40%
The overall system uses a mix of Lambda and EC2 instances—with the EC2 instances often being used in situations in which Lambda function invocations would be too expensive.
40%
First, there’s a set of machines that the workloads will run on called the nodes. Second, there’s a set of controlling software that manages these nodes and is referred to as the control plane.
42%
Deployment is what happens when you install some version of your software into a particular environment (the production environment is often implied). Release is when you make a system or some part of it (for example, a feature) available to users.
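A feature flag is one common way to keep the two apart; a minimal sketch follows (the flag store and checkout functions are hypothetical): the new code path is deployed to production, but the feature is only released when the flag is switched on.

```python
def existing_checkout(basket):
    return {"flow": "existing", "items": basket}

def new_checkout(basket):
    return {"flow": "new", "items": basket}

# Hypothetical in-memory flag store; a real system would use a feature-flag service.
FEATURE_FLAGS = {"new_checkout_flow": False}  # code is deployed, feature not yet released

def checkout(basket):
    # Flipping the flag releases the feature to users without another deployment.
    if FEATURE_FLAGS["new_checkout_flow"]:
        return new_checkout(basket)
    return existing_checkout(basket)
```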
42%
with a canary rollout the idea is that a limited subset of our customers see new functionality.
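A minimal sketch of picking that subset, assuming each request carries a stable customer ID (the hashing scheme is illustrative): a small, deterministic percentage of customers is routed to the new functionality.

```python
import zlib

CANARY_PERCENTAGE = 5  # roughly 5% of customers see the new functionality

def in_canary(customer_id: str) -> bool:
    """Place a customer in or out of the canary cohort deterministically."""
    bucket = zlib.crc32(customer_id.encode()) % 100
    return bucket < CANARY_PERCENTAGE
```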
42%
With a parallel run you do exactly that—you run two different implementations of the same functionality side by side, and send each request for that functionality to both implementations.
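A minimal sketch of a parallel run, with hypothetical old and new implementations: both are invoked for each request, any divergence is recorded for analysis, and only the trusted (old) result is returned to the caller.

```python
import logging

logger = logging.getLogger("parallel_run")

def old_price_calculation(order):
    return round(sum(order["items"]) * 1.20, 2)  # existing, trusted implementation

def new_price_calculation(order):
    return round(sum(order["items"]) * 1.20, 2)  # candidate implementation under evaluation

def calculate_price(order):
    trusted = old_price_calculation(order)
    candidate = new_price_calculation(order)
    if candidate != trusted:
        # Record the mismatch; the caller never sees the candidate's answer yet.
        logger.warning("parallel run mismatch: trusted=%s candidate=%s", trusted, candidate)
    return trusted
```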
44%
For a system comprising a number of microservices, a service test would test an individual microservice’s capabilities.
45%
For example, in Figure 9-8, a successful build for any of the four microservices would end up triggering the shared end-to-end tests stage.
45%
The more moving parts there are, the more brittle our tests may be and the less deterministic they are.
45%
A test suite with flaky tests can become a victim of what Diane Vaughan calls the normalization of deviance—the idea that over time we can become so accustomed to things being wrong that we start to accept them as being normal and not a problem.
45%
One solution I’ve seen here is to designate certain end-to-end tests as being the responsibility of a given team, even though they might cut across microservices being worked on by multiple different teams.
45%
Although it is unfortunately still a common organizational pattern, I see significant harm done whenever a team is distanced from writing tests for the code it wrote in the first place.
45%
When end-to-end tests slow down our ability to release small changes, they can end up doing more harm than good.
45%
By versioning together changes made to multiple services, we effectively embrace the idea that changing and deploying multiple services at once is acceptable. It becomes the norm; it becomes OK. In doing so, we cede one of the main advantages of a microservice architecture: the ability to deploy one service by itself, independently of other services.
45%
Ideally, if you want your teams to be able to develop and test in an independent fashion, they should have their own test environments too.
46%
With contract tests, a team whose microservice consumes an external service writes tests that describe how it expects an external service will behave.
46%
With CDCs, the consumer team ensures that these contract tests are shared with the producer team to allow the producer team to ensure that its microservice meets these expectations.
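A minimal sketch of the consumer side of such a contract, written as an ordinary test (tools such as Pact formalize and share these; the fields below are hypothetical): the consumer states exactly what it relies on, and the producer team runs the same expectations against its own service in its build.

```python
# Expectations the consuming team has of the (hypothetical) Customer service's response.
EXPECTED_FIELDS = {"id": str, "email": str, "loyalty_tier": str}

def check_customer_contract(response_body: dict) -> None:
    """Fail if the producer's response no longer matches what this consumer relies on."""
    for field, field_type in EXPECTED_FIELDS.items():
        assert field in response_body, f"missing field: {field}"
        assert isinstance(response_body[field], field_type), f"wrong type for field: {field}"

# The consumer runs this against a stub; the producer runs it against the real service.
check_customer_contract({"id": "c-123", "email": "sam@example.com", "loyalty_tier": "gold"})
```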
47%
I strongly suggest looking at your CFRs as early as possible and reviewing them regularly.
48%
The observability of a system is the extent to which you can understand the internal state of the system from external outputs.
48%
Monitoring, on the other hand, is something we do. We monitor the system. We look at it.
48%
Observability is the extent to which you can understand what the system is doing based on external outputs. Logs, events, and metrics might help you make things observable, but be sure to focus on making the system understandable rather than throwing in lots of tools.
49%
you should view implementing a log aggregation tool as a prerequisite for implementing a microservice architecture.
49%
Before you do anything else to build out your microservice architecture, get a log aggregation tool up and running. Consider it a prerequisite for building a microservice architecture. You’ll thank me later.
49%
Once you have log aggregation, get correlation IDs in as soon as possible. Easy to do at the start and hard to retrofit later, they will drastically improve the value of your logs.
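A minimal sketch of threading a correlation ID through your logs, assuming the ID arrives on an inbound header (the header name here is illustrative) and is generated when absent; every log line and downstream call then carries the same ID, so aggregated logs for a single user action can be stitched together.

```python
import logging
import uuid

logging.basicConfig(format="%(message)s", level=logging.INFO)
logger = logging.getLogger("orders")

def handle_request(headers: dict) -> dict:
    # Reuse the caller's correlation ID if present, otherwise start a new one.
    correlation_id = headers.get("X-Correlation-ID", str(uuid.uuid4()))
    logger.info("correlation_id=%s msg=%s", correlation_id, "placing order")
    # Propagate the same ID on downstream calls so their logs can be joined up too.
    return {"X-Correlation-ID": correlation_id}

handle_request({})                               # generates a fresh ID
handle_request({"X-Correlation-ID": "abc-123"})  # reuses the caller's ID
```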
50%
The secret to knowing when to panic and when to relax is to gather metrics about how your system behaves over a long-enough period of time that clear patterns emerge.
50%
There are a number of ways to describe cardinality, but you can think of it as the number of fields that can be easily queried in a given data point.
51%
An SLA describes not only what the users can expect but also what happens if the system doesn’t reach this level of acceptable behavior.
51%
SLOs define what the team signs up to provide.
51%
An SLI is a measure of something our software does.
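A small worked example of how the two relate, using made-up numbers: the SLI is what you measure, the SLO is the target the team signs up to meet.

```python
# Hypothetical figures for one week of traffic.
total_requests = 1_200_000
successful_requests = 1_198_900

availability_sli = successful_requests / total_requests  # the measurement (~0.99908)
availability_slo = 0.999                                  # the target the team commits to

print(f"SLI={availability_sli:.5f} SLO={availability_slo} met={availability_sli >= availability_slo}")
```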
56%
This advice includes recommendations to use password managers and long passwords, to avoid the use of complex password rules, and—somewhat surprisingly—to avoid mandated regular password changes.
57%
Find ways to build regular restoration of backups into your software development process—for example, by using production backups to build your performance test data.
57%
Being able to rebuild your microservice and recover its data in an automated fashion helps you recover in the wake of an attack and also has the advantage of making your deployments easier across the board, having positive benefits for development, test, and production operations activities.
57%
When operating in a zero-trust environment, you have to assume that you are operating in an environment that has already been compromised.
58%
For securing passwords, you should absolutely be using a technique called salted password hashing. This ensures that passwords are never held in plain text, and that even if an attacker brute-forces one hashed password, they cannot then automatically read other passwords.
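A minimal sketch using only Python's standard library (PBKDF2 here; dedicated schemes such as bcrypt, scrypt, or Argon2 are also widely used): each password gets its own random salt, so cracking one hash reveals nothing about the others, and identical passwords produce different hashes.

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # illustrative work factor; tune to your hardware

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, hash); store both, never the plain-text password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, expected)

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
```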
58%
The advantages to being frugal with data collection are manifold. First, if you don’t store it, no one can steal it. Second, if you don’t store it, no one (e.g., a governmental agency) can ask for it either!
58%
One solution is to use a separate security appliance to encrypt and decrypt data. Another is to use a separate key vault that your service can access when it needs a key.
59%
Encrypt data when you first see it. Only decrypt on demand, and ensure that decrypted data is never stored anywhere.
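A minimal sketch of that flow using the `cryptography` package's Fernet recipe; for illustration the key is generated in place, whereas in practice it would come from a key vault or security appliance as described above.

```python
from cryptography.fernet import Fernet

# Illustration only: a real service fetches the key from a vault, never holds it in code.
key = Fernet.generate_key()
fernet = Fernet(key)

def store_card_number(card_number: str) -> bytes:
    """Encrypt as soon as the data is first seen; only ciphertext is ever persisted."""
    return fernet.encrypt(card_number.encode())

def read_card_number(ciphertext: bytes) -> str:
    """Decrypt on demand; the plain text goes to the caller and is not written anywhere."""
    return fernet.decrypt(ciphertext).decode()

stored = store_card_number("4111 1111 1111 1111")
assert read_card_number(stored) == "4111 1111 1111 1111"
```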
59%
In the context of security, authentication is the process by which we confirm that a party is who they say they are.
59%
Authorization is the mechanism by which we map from a principal to the action we are allowing them to do.
59%
When a client talks to a server using mutual TLS, the server is able to authenticate the client, and the client is able to authenticate the server—this is a form of service-to-service authentication.
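A minimal sketch of the server side of mutual TLS using Python's standard ssl module (certificate paths are hypothetical): the server presents its own certificate and also insists that the client present one signed by a CA it trusts.

```python
import ssl

# Hypothetical paths; the certificates are issued by a CA both services agree to trust.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="server.crt", keyfile="server.key")  # proves the server's identity
context.load_verify_locations(cafile="trusted-ca.crt")                # CA used to validate client certificates
context.verify_mode = ssl.CERT_REQUIRED                               # reject clients without a valid certificate
```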