Docker/C3/Docker-Swarm/English
Visual Cue | Narration | |
Show slide:
Title Slide |
Hello and welcome to the Spoken Tutorial on “Docker Swarm”. | |
Show Slide:
Learning Objectives |
In this tutorial, we will learn about
| |
Show Slide:
System Requirements |
To record this tutorial, I am using
| |
Show Slide:
Pre-requisites |
To follow this tutorial,
| |
Slide:
Docker-machine installation |
Docker-machine installation
| |
Show Slide:
Docker Swarm |
Docker Swarm is a container orchestration tool.
It allows you to manage a cluster of Docker nodes as a single system. It provides tools to deploy, manage, and scale containerized applications. | |
Only narration | Let us see the execution of it. | |
Open terminal | Open the terminal.
First let us create a Docker swarm cluster. Type docker swarm init and press Enter. | |
Highlight Swarm initialised row | We can see that the swarm is initialised and the current node is made the manager.
Managers manage the swarm cluster. | |
Highlight To add a worker section | It gives us the swarm join command to add worker nodes to the swarm.
It has the token id and the manager IP as parameters. The values may differ for you, depending on your system configuration. Now copy the command. We will use it while joining the swarm. | |
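The join command typically takes this form; the token and manager IP shown here are placeholders and your actual values will differ:
docker swarm join --token <worker-token> <manager-ip>:2377
# 2377 is the default port for swarm cluster management traffic
| |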
Only narration | First let us create a worker node.
Workers run containers and execute tasks assigned by managers. | |
Type docker run -d --privileged --name | Enter the command as shown.
The docker run command creates a Docker-in-Docker worker container. | |
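For reference, the full command likely has this form, assuming the container is named worker and the official docker:dind image is used:
docker run -d --privileged --name worker docker:dind
# -d runs the container in the background; --privileged is required for Docker-in-Docker
| |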
Highlight the output | The docker dind image is pulled and downloaded to our local system. | |
Type docker exec -it worker and paste | Now we will join the worker node to the swarm cluster.
Type the command as shown and paste the copied command and press Enter. | |
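Combined, the command likely looks like this; the token and IP are placeholders taken from your own swarm init output:
docker exec -it worker docker swarm join --token <worker-token> <manager-ip>:2377
# runs the copied join command inside the worker container
| |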
Highlight the output | We can see a message that our worker node has joined the swarm. | |
Type docker node ls and press Enter | Now let us see the list of all the nodes in the swarm.
Type docker node ls and press Enter. | |
Highlight the output | We can see two nodes.
One is the leader, i.e. the manager, and the other is the worker node which we just created. | |
Only narration | Now let us see how to deploy services in a swarm cluster through an example. | |
In the terminal, enter the command as shown.
It creates the myExample service using the previously created stuser1/node-express image. A service is a set of tasks that run containerized applications. | ||
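Based on the options described in this tutorial, the command likely has this form; one replica is the default:
docker service create --name myExample --publish 3000:3000 stuser1/node-express
# creates the myExample service from the stuser1/node-express image
| |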
Highlight --publish 3000:3000 | It maps port 3000 of the container to port 3000 on the host. | |
Press Enter | Now press Enter. | |
Highlight the output | The output confirms the service has deployed successfully. | |
Type docker service ls and press Enter | Let us verify by retrieving the list of services.
Enter the command as shown. | |
Highlight myExample row. | We can see our service myExample in the list.
Here, Replicas indicates the number of tasks in the service. For now, we have only 1 task running. | |
Only narration | Let us see how to update the service. | |
Type the command as shown.
The docker service update command changes the image used by the service. Here, we are switching to the nginx image, which may not be present on the local system.
Press Enter. | ||
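The update command likely has this form:
docker service update --image nginx myExample
# replaces the service's image with nginx
| |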
Highlight the output | We can see that the service is updated successfully. | |
Type docker service ls | Enter the command as shown to get the services list. | |
Highlight myExample row | We can see that the image is updated to nginx. | |
Type docker service rollback myExample | To roll back to the previous image, enter the command as shown. | |
Highlight the output | We can see that the rollback is completed. | |
Type docker service ls and press Enter | Now verify by using docker service ls command. | |
Highlight the output | We can see that the image is updated back to stuser1/node-express. | |
Only narration | Now let us see how to scale the service. | |
Type docker service scale myExample=2 | Enter the command as shown.
This command adjusts myExample to run 2 replicas. This means there will be two instances of myExample running in the Swarm.
It may take some time for the system to pull the image and start the new task. | |
Highlight the output | We can see that now our service myExample has successfully converged to running 2 tasks. | |
Only narration | To scale down, just decrease the number of replicas in the previous command. | |
Type docker service ps myExample | To get the history of the service, enter the command as shown. | |
Highlight the running processes | We can see that 2 processes are running.
The processes shown as Shutdown are tasks that ran earlier. Processes marked Rejected failed to start, for example because the required image could not be found at the time. | |
Type docker service rm myExample | To remove the service, enter the command as shown. | |
Type docker service ls | Again we will get the list to verify.
We can see our service myExample is removed from the list. | |
Only narration | Now let us see how to share files using NFS. | |
Show Slide:
Share files using NFS |
Network File System (NFS) allows file sharing between nodes in a Swarm setup.
This setup provides shared, persistent storage for distributed applications. NFS is ideal for storing files needed by multiple services or replicas. It ensures data consistency and simplifies data management across nodes. | |
Only narration | Let us see the execution of it. | |
Type mkdir -p /srv/nfs/sharefiles | First we will create a directory for file storage.
Type the command as shown. The -p flag creates any required parent directories if they do not already exist. | |
Next we will change the ownership.
Type the command as shown. This command changes the ownership to the nobody user and the nogroup group. Enter the password if prompted. | ||
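Assuming the directory created earlier, the command is likely:
sudo chown nobody:nogroup /srv/nfs/sharefiles
# assigns ownership to the unprivileged nobody user and nogroup group
| |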
Type sudo chmod 777 /srv/nfs/sharefiles | Type the command to give read, write and execute permissions to all users.
This makes the directory accessible to every user across nodes. | |
Type sudo apt-get install -y nfs-kernel-server | Then we need to install the nfs-kernel-server package for NFS support on the manager.
For that, enter the command as shown. The -y option automatically confirms the installation prompt. | |
Type sudo nano /etc/exports | Then enter the command as shown.
This opens the exports file to define directories for sharing. | |
We’ll add our directory to the file for NFS access using this command. | ||
Highlight rw | rw shares the directory with read and write access. | |
Highlight sync | sync ensures data consistency. | |
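The exports entry likely looks like this; the host field (* here) and extra options such as no_subtree_check may differ in your setup:
/srv/nfs/sharefiles *(rw,sync,no_subtree_check)
# shares the directory with all hosts, read-write, with synchronous writes
| |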
Only narration | Then save by pressing Ctrl+S and exit by pressing Ctrl+X. | |
Type sudo exportfs -a | Then enter the command as shown.
This shares all directories in the exports file for NFS access. | |
Then we shall restart the NFS server to apply new configuration settings.
For that enter the command as shown. | ||
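On Ubuntu, restarting the NFS server is typically done with:
sudo systemctl restart nfs-kernel-server
# reloads the server with the updated exports configuration
| |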
Let us open an interactive shell in the worker container.
Enter the command as shown. | ||
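As shown later in this tutorial, the command to open the shell is:
docker exec -it worker sh
# opens an interactive shell inside the worker container
| |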
Type apk add --no-cache nfs-utils | In the interactive shell, type the command as shown.
This installs NFS utilities, which enable NFS client functionality. | |
Type mkdir -p /mnt/nfs/sharefiles | Let us create a local directory on the worker for mounting NFS share.
Enter the command as shown. The -p flag ensures that all necessary parent directories are created. | |
Then type the command as shown.
This mounts the manager’s NFS directory to the worker’s local directory. | ||
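Given the directories created earlier and the options highlighted next, the mount command likely has this form:
mount -t nfs -o nolock 10.0.2.15:/srv/nfs/sharefiles /mnt/nfs/sharefiles
# mounts the manager's exported directory onto the worker's local mount point
| |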
Highlight 10.0.2.15 | This is the manager node’s IP address.
Replace it with your manager node’s IP address. | |
Highlight nolock | nolock option disables file locking for simpler setup in basic configurations. | |
Type df -h | grep nfs and press Enter | To verify the mount, enter the command as shown. | |
Highlight the output | We get details of mounted filesystems, filtered for our NFS mount.
We can see our mounted sharefiles. | |
Type exit and press Enter | To go back to the manager node, enter the exit command. | |
Only narration | Now we have set up the basic configurations for file sharing in the worker node.
Let us create a file testfile dot txt for sharing purposes in the manager node. | |
Enter the command as shown.
This creates a testfile in the shared directory with the given text in it. | ||
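A likely form of this command, with placeholder text:
echo "Welcome to Docker Swarm" > /srv/nfs/sharefiles/testfile.txt
# writes the sample text into the shared directory on the manager
| |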
Type docker exec -it worker sh | Again let us open an interactive shell in the worker node with this command. | |
Enter the command as shown.
This reads the file from the worker node to verify shared file access of the manager node. | ||
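The command is likely:
cat /mnt/nfs/sharefiles/testfile.txt
# prints the contents of the shared file from the worker's mount point
| |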
Highlight the output | We can see the text from the testfile indicating successful sharing of files. | |
Type docker swarm leave | To leave the swarm cluster, type docker swarm leave and press Enter. | |
Highlight the output | We can see that the worker node has left the swarm. | |
Type exit and press Enter | To go back to the manager node, enter the exit command. | |
Type docker swarm leave --force | To exit the swarm cluster from the manager node, enter the command as shown.
Here we are extending the swarm leave command with hyphen hyphen force. | |
Highlight the output | We can see that the manager node has left the swarm. | |
Show Slide:
Summary |
This brings us to the end of this tutorial. Let us summarise.
In this tutorial, we have learnt about
| |
Show Slide:
Assignment |
As an assignment, please do the following
| |
Show Slide:
Assignment Observation |
myExample is scaled up to 5 replicas. | |
Show Slide:
About Spoken Tutorial project |
The video at the following link summarises the Spoken Tutorial project.
Please download and watch it. | |
Show Slide:
Spoken Tutorial Workshops |
The Spoken Tutorial Project team conducts workshops and gives certificates.
For more details, please write to us. | |
Show Slide:
Answers for THIS Spoken Tutorial |
Please post your timed queries in this forum. | |
Show Slide:
FOSSEE Forum |
For any general or technical questions on Docker, visit the FOSSEE forum and post your question. | |
Show slide:
Acknowledgement |
Spoken Tutorial Project was established by the Ministry of Education, Government of India. | |
Slide:
Thank you
|
This is Pranjal Mahajan, a FOSSEE Semester Long Intern 2024, IIT Bombay, signing off.
Thanks for joining. |