Containers are lightweight and designed to be disposable: they are expected to fail at some point, and this is considered a feature, not a flaw.
How do we manage containers that are bound to fail? How do we work our way through them? How do we make sure the application doesn’t break when a container fails? This is where Kubernetes comes in.
Kubernetes makes sure that whenever a container crashes, a replacement container is created to maintain the integrity of the application. Kubernetes also takes care of orchestrating the containers – that is, connecting all the containers so that we have a functional architecture.
Three things that Kubernetes takes care of are –
- Orchestration – How do we get all the containers to talk to each other?
- Every container has its own IP address, which allows us to locate it. These IP addresses are transient – that is, they keep changing. Every time a container is destroyed, its IP address might be assigned to a newly created container.
- Let’s say there are two containers: one handles the frontend and the other is a MySQL database container. The two communicate with each other using their IP addresses. Suppose the MySQL container crashes and we create a new MySQL container to maintain the integrity of the application. The new container will have a different IP address, so we need a reliable way for the frontend container to keep communicating with the newly spun-up MySQL container. This is where Kubernetes helps.
- Scheduling – Allowing the application to scale
- Let’s say our application needs 5 MySQL containers and 2 of them fail – here Kubernetes makes sure that 2 new containers are spun up to maintain the count of 5.
- Kubernetes can also create additional containers when the load on the existing ones increases – it can scale based on load.
- Isolation – Protecting containers from the failure of other containers
- Let’s say we’re running 7000–8000 containers on our system and a few of them fail. Kubernetes makes sure that the failure of those containers does not affect the other containers running on the machine.
Kubernetes has 5 resource types that can be configured using a JSON or YAML file, or using OpenShift management tools – here they are:
- Pods
- Docker works with containers, whereas Kubernetes works with “pods”. The pod is the basic unit of work in Kubernetes.
- A pod is a collection of one or more containers sharing the same resources – such as an IP address or persistent storage volumes.
- PS – A pod containing a single container is often referred to simply as a container.
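As a sketch, a single-container pod could be defined in YAML like this (the names, image, and password value are illustrative, not from the original text):

```yaml
# A minimal pod running a single MySQL container
apiVersion: v1
kind: Pod
metadata:
  name: mysql-pod        # illustrative name
  labels:
    app: mysql           # label other resources can select on
spec:
  containers:
  - name: mysql
    image: mysql:8.0
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: "example-password"   # for illustration only; use a Secret in practice
    ports:
    - containerPort: 3306
```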
- Services
- Kubernetes uses services to enable communication between pods.
- A service is a single IP:Port combination that provides stable access to a pool of pods.
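This is how the frontend-to-MySQL problem above gets solved: instead of tracking pod IPs, the frontend talks to one stable address. A hedged sketch of such a service (names assumed, matching the illustrative `app: mysql` label):

```yaml
# A service giving the frontend a stable IP:Port for the MySQL pods,
# selected by label rather than by (transient) pod IP
apiVersion: v1
kind: Service
metadata:
  name: mysql-service    # illustrative name
spec:
  selector:
    app: mysql           # routes traffic to any pod carrying this label
  ports:
  - port: 3306           # stable port exposed by the service
    targetPort: 3306     # port the container listens on
```

When a MySQL pod is destroyed and replaced, the new pod carries the same label, so the service keeps routing to it without the frontend noticing.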
- Replication Controllers
- Replication Controllers are objects that help scale out the application. They can be used to spin up more containers as and when required.
- A replication controller is a framework for defining pods that are meant to be scaled horizontally. [Horizontal scaling – adding more machines to the pool of resources (here, spinning up more containers). Vertical scaling – adding more power (CPU, RAM) to an existing machine.]
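Tying this back to the earlier example of keeping 5 MySQL containers alive, a replication controller might be sketched like this (names and image are illustrative):

```yaml
# A replication controller that keeps 5 MySQL pods running;
# if 2 fail, 2 replacements are spun up automatically
apiVersion: v1
kind: ReplicationController
metadata:
  name: mysql-rc         # illustrative name
spec:
  replicas: 5            # desired count Kubernetes maintains
  selector:
    app: mysql
  template:              # pod template used to create replacements
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:8.0
        ports:
        - containerPort: 3306
```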
- Persistent Volumes and Persistent Volume Claims
- PV and PVC are used to persist data despite container failure or explicit container destruction.
- “How do we maintain our data if we’re going to regularly create and destroy our database containers?” – PV is the answer to this question.
- For example, if our application uses MySQL database containers and for some reason we bring them down, but we want to make sure our data does not get destroyed, this is where PV enters. It makes sure our data persists.
- A Persistent Volume Claim represents a request to Kubernetes by a pod for storage.
- A Persistent Volume can be shared by multiple containers on different hosts.
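A minimal sketch of the pair (the size, path, and names are assumptions for illustration, not values from the original text):

```yaml
# A persistent volume offered by the cluster, plus a claim a pod can
# use to request it
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv         # illustrative name
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /data/mysql    # node-local path, fine for a single-node demo
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc        # illustrative name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```

A pod then mounts the claim by name (via `persistentVolumeClaim.claimName` in its volume definition), so the data outlives any individual MySQL container.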