Creating a 3-Node Swarm (not local)
- To configure your environment for this section, see Installation
Set up the Swarm
- Run docker swarm init on one of the nodes, preferably node1
[node1] ~> docker swarm init
Swarm initialized: current node (qmwz0mcy9aau1inmscjdtxudy) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-3zkxqbnuw3neb64f4g5tfegxelrk409rv0hs5cmp18ru3c9s5v-826kaqfj30xl76us6gtjm2im8 10.0.3.3:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
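If you lose the join command printed above, you can reprint it at any time from a manager node. A minimal sketch (run on any manager):

```shell
# Reprint the full worker join command, including the current token.
docker swarm join-token worker

# Print only the token itself (-q / --quiet), handy for scripting.
docker swarm join-token -q worker
```

The same subcommand with `manager` instead of `worker` prints the manager join command, as shown further below.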
- Copy the join string and run it on second and third nodes
[node2] ~> docker swarm join --token SWMTKN-1-3zkxqbnuw3neb64f4g5tfegxelrk409rv0hs5cmp18ru3c9s5v-826kaqfj30xl76us6gtjm2im8 10.0.3.3:2377
This node joined a swarm as a worker.
[node1] ~> docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
qmwz0mcy9aau1inmscjdtxudy * node1 Ready Active Leader
2647pngj4a6gt8pclx6y2c1i3 node2 Ready Active
- Node2 has joined our swarm, but only as a worker.
- Workers have no access to swarm commands, as you can see below
[node2] ~> docker node ls
Error response from daemon: This node is not a swarm manager. Worker nodes can't be used to view or modify cluster state. Please run this command on a manager node or promote the current node to a manager.
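A worker can still inspect its own swarm membership locally, even though it cannot view or change cluster state. A small sketch using docker info's Go-template output:

```shell
# Run on node2: show this node's swarm state and whether it has
# control-plane access (ControlAvailable is true only on managers).
docker info --format '{{.Swarm.LocalNodeState}} {{.Swarm.ControlAvailable}}'
```

On a worker this reports the node as active in the swarm with ControlAvailable false, which is exactly why docker node ls fails there.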
- We can promote node2 to a manager from node1
[node1] ~> docker node update --role manager node2
node2
[node1] ~> docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
qmwz0mcy9aau1inmscjdtxudy * node1 Ready Active Leader
2647pngj4a6gt8pclx6y2c1i3 node2 Ready Active Reachable
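Docker also ships shorthand commands for changing a node's role, equivalent to docker node update --role; a quick sketch:

```shell
# Shorthand for: docker node update --role manager node2
docker node promote node2

# And the reverse, demoting a manager back to a worker:
docker node demote node2
```

Either form works; the update --role form shown above is just the more general one.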
- Let's add node3 as a manager from the start, using the manager join token
[node1] ~> docker swarm join-token manager
To add a manager to this swarm, run the following command:
docker swarm join --token SWMTKN-1-3zkxqbnuw3neb64f4g5tfegxelrk409rv0hs5cmp18ru3c9s5v-5fmdw14iyw0vs342v3m145o5n 10.0.3.3:2377
[node3] ~> docker swarm join --token SWMTKN-1-3zkxqbnuw3neb64f4g5tfegxelrk409rv0hs5cmp18ru3c9s5v-5fmdw14iyw0vs342v3m145o5n 10.0.3.3:2377
This node joined a swarm as a manager.
[node1] ~> docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
qmwz0mcy9aau1inmscjdtxudy * node1 Ready Active Leader
2647pngj4a6gt8pclx6y2c1i3 node2 Ready Active Reachable
yf54akcujp1duop1rcr93i3bn node3 Ready Active Reachable
[node1] ~> docker service create --replicas 3 alpine ping 8.8.8.8
z1fus356ta94n4zp52151sriv
[node1] ~> docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
z1fus356ta94 infallible_sammet replicated 3/3 alpine:latest
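Because we did not pass --name, Swarm generated the random name infallible_sammet. If you want a predictable name to use in later commands, pass --name at creation time; a sketch (the name "pinger" here is just an example, not from the original session):

```shell
# Same service as above, but with an explicit name instead of a generated one.
docker service create --name pinger --replicas 3 alpine ping 8.8.8.8
```

Then docker service ps pinger works without first looking the name up in docker service ls.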
- Let's see which tasks are running on which nodes
[node1] ~> docker node ps
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
sj3q4c97y1f6 infallible_sammet.2 alpine:latest node1 Running Running 2 minutes ago
[node1] ~> docker node ps node2
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
760trrpfmjjo infallible_sammet.3 alpine:latest node2 Running Running 2 minutes ago
[node1] ~> docker node ps node3
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
wyk95lk6f82n infallible_sammet.1 alpine:latest node3 Running Running 2 minutes ago
[node1] ~> docker service ps infallible_sammet
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
wyk95lk6f82n infallible_sammet.1 alpine:latest node3 Running Running 4 minutes ago
sj3q4c97y1f6 infallible_sammet.2 alpine:latest node1 Running Running 4 minutes ago
760trrpfmjjo infallible_sammet.3 alpine:latest node2 Running Running 4 minutes ago
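With one replica per node, the orchestrator has spread the tasks evenly. From here you can scale the service and let Swarm reschedule tasks across the nodes, then clean up; a minimal sketch:

```shell
# Grow the service from 3 to 6 replicas; Swarm distributes the new tasks.
docker service scale infallible_sammet=6

# Equivalent long form:
docker service update --replicas 6 infallible_sammet

# Remove the service (and all its tasks) when done.
docker service rm infallible_sammet
```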