The Kubernetes Book
13%
If the API Server is the brains of the cluster then the cluster store is its memory. The config and state of the cluster are persistently stored here. It is the only stateful component of the cluster and is vital to its operation - no cluster store, no cluster!
13%
Things like the node controller, endpoints controller, and namespace controller. They tend to sit in loops and watch for changes, the aim of the game being to make sure the actual state of the cluster matches the desired state.
14%
the master is made up of lots of small specialized services. These include the API server, the cluster store, the controller manager, and the scheduler.
16%
We declare the desired state of our application (microservice) in a manifest file. We feed that to the Kubernetes API server. Kubernetes implements it on the cluster. Kubernetes implements watch loops to make sure the cluster doesn’t vary from the desired state.
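A minimal sketch of that workflow, assuming the desired state is written in a manifest file called app.yml (a hypothetical name) that describes a Deployment:

$ kubectl apply -f app.yml      # feed the manifest to the API server
$ kubectl get deployments       # check what Kubernetes has implemented on the cluster

From then on, the watch loops keep reconciling the actual state of the cluster against the desired state recorded in app.yml.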
19%
If you’re running multiple containers in a Pod, they all share the same Pod environment - things like the IPC namespace, shared memory, volumes, network stack etc. As an example, this means that all containers in the same Pod will share the same IP address (the Pod’s IP).
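A hedged sketch of a two-container Pod; the names and images (web, nginx, sidecar, busybox) are illustrative rather than the book's example. Because both containers share the Pod's network stack and IP, the sidecar can reach the web container over localhost:

apiVersion: v1
kind: Pod
metadata:
  name: multi-ctr-pod
spec:
  containers:
  - name: web
    image: nginx:1.25
    ports:
    - containerPort: 80
  - name: sidecar
    image: busybox:1.36
    # same Pod, same network namespace, so localhost reaches the web container
    command: ["sh", "-c", "while true; do wget -qO- http://localhost:80 > /dev/null; sleep 5; done"]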
20%
Pods are also the minimum unit of scaling in Kubernetes. If you need to scale your app, you do so by adding or removing Pods.
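For example, if the Pods are managed by a Deployment, scaling is a matter of changing the replica count; the Deployment name below is hypothetical:

$ kubectl scale deployment hello-deploy --replicas=5

The declarative alternative is to edit the replicas field in the manifest and re-apply it.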
23%
Services only send traffic to healthy Pods. This means if your Pods are failing health-checks they will not receive traffic from the Service.
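Health is usually signalled with a readiness probe on the container; Pods that fail it are removed from the Service's list of endpoints. A minimal, assumed container-spec fragment (the /healthz path and port are placeholders):

    readinessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10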
51%
All containers in a Pod share the same cgroup limits; they have access to the same volumes, the same memory, the same IPC namespaces and more. The Pod holds all the namespaces - any containers it runs just join them and share them.
55%
the apiVersion field specifies the version of the API that we’ll be using. v1 has been around since 2015, includes an extensive Pod schema and is stable.
55%
the kind field tells Kubernetes what kind of object to deploy - in this example we’re asking to deploy a Pod.
56%
the spec section. This is where we define what’s in the Pod.
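Putting the apiVersion, kind, and spec fields together, a minimal single-container Pod manifest might look like the following; the name, label, and image are placeholders rather than the book's exact example:

apiVersion: v1              # stable API version that includes the Pod schema
kind: Pod                   # the kind of object to deploy
metadata:
  name: hello-pod
  labels:
    app: hello-world
spec:                       # what's in the Pod
  containers:
  - name: hello-ctr
    image: nginx:1.25
    ports:
    - containerPort: 80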
60%
The template section is effectively an embedded Pod spec that defines a single-container Pod and includes the same app=hello-world label specified in the Replication Controller's selector above.
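Roughly what that looks like in a ReplicationController manifest; the object and container names are assumed, but the key point is that the label in the Pod template matches the label in the selector:

apiVersion: v1
kind: ReplicationController
metadata:
  name: hello-rc
spec:
  replicas: 3
  selector:
    app: hello-world        # the Replication Controller's selector
  template:                 # embedded Pod spec for a single-container Pod
    metadata:
      labels:
        app: hello-world    # same label as the selector above
    spec:
      containers:
      - name: hello-ctr
        image: nginx:1.25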
62%
every Service gets its own stable IP address, DNS name, and port.
62%
Service uses labels to dynamically associate with a set of Pods.
63%
However, for a Pod to match a Service the Pod must match all of the values in the Service’s selector.
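For example, a Service with a two-label selector (the labels here are hypothetical) only matches Pods carrying both labels; a Pod labelled only app=hello-world would not receive traffic from it:

apiVersion: v1
kind: Service
metadata:
  name: hello-svc
spec:
  selector:
    app: hello-world
    env: prod               # Pods must carry both labels to match
  ports:
  - port: 8080
    targetPort: 8080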
70%
kubectl expose is the imperative way to create a new Service object.
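A sketch of the imperative form, assuming an existing Deployment called hello-deploy:

$ kubectl expose deployment hello-deploy --name=hello-svc --port=8080 --target-port=8080 --type=NodePort

The declarative alternative is to write the Service manifest yourself and feed it to the API server with kubectl apply.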
72%
By default, cluster-wide ports (NodePort values) are between 30,000 and 32,767.
75%
ClusterIP: This will give the Service a stable IP address internally within the cluster and is the default. It will not make the Service available outside of the cluster.
NodePort: This builds on top of ClusterIP and adds a cluster-wide TCP or UDP port. This makes the Service available outside of the cluster.
LoadBalancer: This builds on top of NodePort and integrates with cloud-native load-balancers.
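A hedged NodePort example tying the Service type and the port range together; the chosen nodePort value and the names are illustrative:

apiVersion: v1
kind: Service
metadata:
  name: hello-svc
spec:
  type: NodePort            # builds on ClusterIP and adds a cluster-wide port
  selector:
    app: hello-world
  ports:
  - port: 8080              # stable port on the Service's ClusterIP
    targetPort: 8080        # port the Pods are listening on
    nodePort: 30001         # cluster-wide port in the 30,000-32,767 range
    protocol: TCP

Changing type to LoadBalancer would keep all of this and additionally provision a cloud-native load-balancer in front of it.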
78%
every Service gets an Endpoint object. This holds a list of all the Pods the Service matches and is dynamically updated as Pods come and go. We can see Endpoints with the normal kubectl commands (Endpoints get the same name as the Service they relate to).

$ kubectl get ep hello-svc
83%
Deployments manage Replica Sets, and Replica Sets manage Pods.
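A minimal Deployment sketch (names and image assumed); applying it creates the Deployment, which creates a ReplicaSet, which in turn creates and maintains three Pods:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deploy
spec:
  replicas: 3                 # the ReplicaSet keeps three Pods running
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-ctr
        image: nginx:1.25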
96%
A moment ago we used the --record flag on the kubectl apply command that we used to perform the rolling update. This is an important flag that will maintain a revision history of the Deployment.
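A sketch of that, assuming the manifest is called deploy.yml and the Deployment is named hello-deploy:

$ kubectl apply -f deploy.yml --record
$ kubectl rollout history deployment hello-deploy

The --record flag annotates the Deployment with the command that was run, which is what shows up in the CHANGE-CAUSE column of the rollout history.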