If the API Server is the brains of the cluster then the cluster store is its memory. The config and state of the cluster are persistently stored here. It is the only stateful component of the cluster and is vital to its operation - no cluster store, no cluster!
Things like the node controller, the endpoints controller, and the namespace controller tend to sit in loops and watch for changes – the aim of the game being to make sure the actual state of the cluster matches the desired state.
the master is made up of lots of small specialized services. These include the API server, the cluster store, the controller manager, and the scheduler.
1. We declare the desired state of our application (microservice) in a manifest file.
2. We feed that to the Kubernetes API server.
3. Kubernetes implements it on the cluster.
4. Kubernetes implements watch loops to make sure the cluster doesn’t vary from desired state.
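A minimal sketch of that workflow (the object names, image, and file name here are illustrative, not the book's own examples): a Deployment manifest declaring two replicas, fed to the API server with kubectl apply.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deploy
spec:
  replicas: 2                  # desired state: two Pods at all times
  selector:
    matchLabels:
      app: hello-world
  template:                    # Pod template the watch loops enforce
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-ctr
        image: nginx
        ports:
        - containerPort: 80

$ kubectl apply -f deploy.yml

If a Pod dies, the watch loops notice that observed state has drifted from the declared two replicas and start a replacement.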
If you’re running multiple containers in it, they all share the same Pod environment - things like the IPC namespace, shared memory, volumes, network stack etc. As an example, this means that all containers in the same Pod will share the same IP address (the Pod’s IP).
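Because the containers share the Pod's network namespace, one container can reach another over localhost. A sketch (the images, names, and command are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: shared-env-pod
spec:
  containers:
  - name: web
    image: nginx               # listens on port 80 inside the Pod
    ports:
    - containerPort: 80
  - name: sidecar
    image: busybox
    # shares the web container's network stack, so localhost:80 reaches nginx
    command: ["sh", "-c", "while true; do wget -qO- http://localhost:80; sleep 5; done"]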
Pods are also the minimum unit of scaling in Kubernetes. If you need to scale your app, you do so by adding or removing Pods.
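With the Pods managed by a higher-level object, scaling is a one-liner (the Deployment name below is hypothetical):

$ kubectl scale deployment hello-deploy --replicas=5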
Services only send traffic to healthy Pods. This means if your Pods are failing health-checks they will not receive traffic from the Service.
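One common way Kubernetes decides a Pod is ready for traffic is a readiness probe. A sketch, assuming an HTTP app on port 80 (the path and timings are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
  labels:
    app: hello-world
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
    readinessProbe:            # the Pod only receives Service traffic while this passes
      httpGet:
        path: /
        port: 80
      periodSeconds: 10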
All containers in a Pod share the same cgroup limits, have access to the same volumes, the same memory, the same IPC namespaces, and more. The Pod holds all the namespaces - any containers running in it simply join and share them.
the apiVersion field specifies the version of the API that we’ll be using. v1 has been around since 2015, includes an extensive Pod schema and is stable.
the kind field tells Kubernetes what kind of object to deploy - in this example we’re asking to deploy a Pod.
the spec section is where we define what’s in the Pod.
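Pulling those three fields together, a minimal single-container Pod manifest might look like this sketch (the names and image are illustrative):

apiVersion: v1                 # the stable v1 API discussed above
kind: Pod                      # the kind of object to deploy
metadata:
  name: hello-pod
  labels:
    app: hello-world
spec:                          # what's in the Pod
  containers:
  - name: hello-ctr
    image: nginx
    ports:
    - containerPort: 80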
The template section is effectively an embedded Pod spec that defines a single-container Pod and includes the same app=hello-world label specified in the Replication Controller’s selector above.
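As a sketch of the shape being described (this manifest is illustrative, not the book's exact example):

apiVersion: v1
kind: ReplicationController
metadata:
  name: hello-rc
spec:
  replicas: 3
  selector:
    app: hello-world           # the Replication Controller's selector
  template:                    # embedded Pod spec
    metadata:
      labels:
        app: hello-world       # same label as the selector above
    spec:
      containers:
      - name: hello-ctr
        image: nginx
        ports:
        - containerPort: 80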
every Service gets its own stable IP address, DNS name, and port.
Service uses labels to dynamically associate with a set of Pods.
However, for a Pod to match a Service the Pod must match all of the values in the Service’s selector.
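For example, in this sketch (the second label is invented for illustration) a Pod labelled only app=hello-world would not match, because it lacks env=prod:

apiVersion: v1
kind: Service
metadata:
  name: hello-svc
spec:
  selector:                    # a Pod must carry ALL of these labels to match
    app: hello-world
    env: prod
  ports:
  - port: 8080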
kubectl expose is the imperative way to create a new Service object.
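For example (the object names and port are illustrative):

$ kubectl expose deployment hello-deploy --name=hello-svc --port=8080 --type=NodePort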
By default, cluster-wide ports (NodePort values) are in the range 30000-32767.
ClusterIP - gives the Service a stable IP address internally within the cluster. This is the default, and it does not make the Service available outside of the cluster.
NodePort - builds on top of ClusterIP and adds a cluster-wide TCP or UDP port, making the Service available outside of the cluster.
LoadBalancer - builds on top of NodePort and integrates with cloud-native load-balancers.
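A NodePort Service manifest might look like this sketch (the names and port values are illustrative; the nodePort just has to fall inside the range above):

apiVersion: v1
kind: Service
metadata:
  name: hello-svc
spec:
  type: NodePort               # ClusterIP is the default; LoadBalancer builds on NodePort
  selector:
    app: hello-world
  ports:
  - port: 8080                 # stable port inside the cluster
    targetPort: 8080           # port the container listens on
    nodePort: 30001            # cluster-wide port in the 30000-32767 range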
every Service gets an Endpoints object. This holds a list of all the Pods the Service matches and is dynamically updated as Pods come and go. We can see Endpoints with the normal kubectl commands (Endpoints get the same name as the Service they relate to).

$ kubectl get ep hello-svc
Deployments manage Replica Sets, and Replica Sets manage Pods.
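You can see all three layers of that hierarchy with the normal kubectl commands (the object names in the output will vary):

$ kubectl get deployments
$ kubectl get replicasets
$ kubectl get pods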
A moment ago we used the --record flag on the kubectl apply command that we used to perform the rolling update. This is an important flag that will maintain a revision history of the Deployment.
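Assuming a Deployment named hello-deploy (hypothetical), that revision history can then be inspected like so:

$ kubectl apply -f deploy.yml --record
$ kubectl rollout history deployment hello-deploy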