A group of machines (physical or virtual) that run containers and are joined into a cluster is called a swarm. After joining a swarm, the machines are referred to as nodes. Docker commands are then executed on the cluster by a swarm manager. A swarm manager is simply a machine that acts as a manager for the other machines joined with it in the cluster.
So we’re going to run multiple containers on multiple machines. Now, how does the swarm manager run containers? How does it decide which machine should get how many containers? There are 2 strategies –
1. Emptiest Node – Fill the least utilized machine with containers
2. Global – Every machine should get exactly one instance of the specified container
We can instruct the swarm manager to use either of the strategies by writing instructions in a Compose File.
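As a rough sketch of what those instructions look like, here is how the two strategies map onto the deploy section of a Compose file (the service names and images below are placeholders, not part of this tutorial’s app):

```yaml
version: "3"
services:
  web:
    image: username/repo:tag   # placeholder image
    deploy:
      # Emptiest Node strategy: run a fixed number of replicas,
      # spread across the least-utilized nodes
      mode: replicated
      replicas: 5
  monitor:
    image: username/monitor:tag   # placeholder image
    deploy:
      # Global strategy: exactly one instance on every node
      mode: global
```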
These swarm managers are the only machines capable of executing instructions or authorizing other machines trying to join the swarm as workers. Machines apart from swarm managers, i.e. workers, are only present to provide capacity and do not have any authority to tell any other machine what it can and cannot do. A swarm can have multiple managers, but the more managers there are, the higher the chances of failure.
Important Note: Adding more managers does NOT mean increased scalability or higher performance. In general, the opposite is true.
To use swarms, we’ll need to enable swarm mode. Enabling/initiating swarm mode on a machine instantly makes it a swarm manager. Once this is done, Docker commands run from the swarm manager will apply to all the nodes in the swarm cluster.
Setting up a swarm is pretty simple –
docker swarm init to enable the swarm mode and make the current machine a swarm manager.
docker swarm join on other machines to have them join the swarm as workers.
Let’s first spin up a few virtual machines with the help of a hypervisor like VirtualBox.
docker-machine create --driver virtualbox myvm1
docker-machine create --driver virtualbox myvm2
docker-machine is pretty smart! Once you create the first virtual machine, it will not download all the files again for the second one. It will simply reuse the base ISO files cached from myvm1 for myvm2.
We now have 2 VMs – myvm1 and myvm2. Run
docker-machine ls to see a list of all docker machines.
Let’s initialise the swarm and add nodes. Here we’re making myvm1 a manager and myvm2 a worker. We do this as follows –
1. Instruct myvm1 via docker-machine ssh to initialize the swarm and become a swarm manager with –
docker-machine ssh myvm1 "docker swarm init --advertise-addr <myvm1 ip>"
2. Instruct myvm2 via docker-machine ssh to join the swarm as a node with –
docker-machine ssh myvm2 "docker swarm join --token <token> <ip>:<port>"
Make sure to use 2377 as the port and not 2376, which is the Docker daemon port. (Using 2376 might cause problems in the future.) It’s also alright to not give a port number at all and let Docker take the default.
docker-machine ssh myvm1 "docker node ls" shows a list of all the nodes.
Deploying the app on the swarm cluster
Just like we deployed our app on a single machine with the help of docker service and a docker-compose.yml file, we do the same thing on our swarm cluster.
Docker commands will be executed only on the swarm manager myvm1; the other nodes are simply there to provide capacity.
Now we can avoid having to type
docker-machine ssh myvm1 every time with a simple line of code –
eval $(docker-machine env myvm1), which configures your shell to talk to myvm1.
That’s it! Try running
docker-machine ls to see if it worked. You’ll observe an asterisk (*) in the myvm1 row, marking it as the active machine.
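For illustration, the output looks roughly like this (the exact columns vary by docker-machine version, and the IP addresses below are just placeholder values from VirtualBox’s usual host-only range):

```
$ docker-machine ls
NAME    ACTIVE   DRIVER       STATE     URL                         SWARM
myvm1   *        virtualbox   Running   tcp://192.168.99.100:2376
myvm2   -        virtualbox   Running   tcp://192.168.99.101:2376
```

The * under ACTIVE is what confirms your shell is now pointed at myvm1.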
Everything that follows from here is what we did in Docker Service, so I won’t be repeating it. But this is what happens after we deploy our application.
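As a quick reminder, the deployment itself is the same docker stack deploy as before, now run against the swarm (the stack name getstartedlab here is just an example, not required):

```
# Deploy the stack described in docker-compose.yml to the swarm
docker stack deploy -c docker-compose.yml getstartedlab

# List the stack's tasks (containers) and which nodes they landed on
docker stack ps getstartedlab
```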
You’ll see that some containers have been deployed on myvm1, and some on myvm2. The app can be accessed via the IP address of either of the VMs. One might wonder how both IP addresses can work, and how it knows which one to use. The thing is – a service deployed at a certain port within a swarm reserves that port for itself across the whole swarm. Every node will serve that service on that port, no matter which node’s IP address you hit. This is known as the ingress routing mesh.
Once the stack is torn down, the machines can leave the swarm.
1. A worker can leave the swarm using –
docker-machine ssh myvm2 "docker swarm leave"
2. A manager can leave the swarm using –
docker-machine ssh myvm1 "docker swarm leave --force" (it has to be done forcefully!)
We can now unset the variables that we had previously configured to allow communication between our shell and the manager, with the help of –
eval $(docker-machine env -u).
Again – Docker Tutorials to the rescue!