Label your Docker Swarm Nodes
You can label your nodes so that specific containers are placed on specific nodes.
This comes in handy when you have different types of nodes in your swarm. One example would be:
- 1 Manager Node
- 3 Worker Nodes (1GB Memory)
- 1 Worker Node (4GB Memory)
Now we can place memory-intensive tasks on our 4GB node.
That is just one example; if you want to pin specific tasks to specific data centers, you can do that too.
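To make the idea concrete, here is a minimal sketch of the data-center scenario. The node name node-eu-1 and the label value eu-west are assumptions for illustration; the flags themselves (--label-add on docker node update, --constraint on docker service create) are the standard Docker CLI options.

```shell
# Hypothetical example: tag a node with its data center,
# then pin a service to nodes carrying that label.
docker node update --label-add datacenter=eu-west node-eu-1

docker service create \
  --name eu-service \
  --constraint node.labels.datacenter==eu-west \
  nginx
```

The constraint always references a node label as node.labels.<key>, compared with == or !=.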
Use Case
In today's use case, I have 5 nodes in my Raspberry Pi Docker Swarm cluster:
$ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
u261mbr3dkwiubga70stm4q80 * rpi-01 Ready Active Leader 19.03.5
6s48hk0kzrp2er06zltd7h4bg rpi-02 Ready Active 18.06.3-ce
81br4uaayop19edv43es1hthb rpi-03 Ready Active 18.09.1
7ixvmd6gr2vm1csnvwu6t44ei rpi-04 Ready Active 18.09.1
igx5njlbw6zt1a3bew8ub7auh rpi-05 Ready Active 19.03.5
Nodes rpi-02, rpi-03 and rpi-04 are workers with 1GB of memory each, and rpi-05 is a worker with 4GB of memory.
We want to label rpi-05 with spec=memory so we can identify the node's spec. Then, when we create a service, we will add a constraint so the task can only be scheduled on a node with the label node.labels.spec==memory.
Labeling the Node
From our manager, let's label our node rpi-05 with the key spec and the value memory:
$ docker node update --label-add spec=memory rpi-05
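A couple of related commands are worth knowing here. You can list only the nodes carrying a given label with the label filter on docker node ls, and you can remove a label again with --label-rm, which takes just the key (not key=value):

```shell
# Show only nodes that carry the spec=memory label
docker node ls --filter label=spec=memory

# Remove the label again if you no longer need it
docker node update --label-rm spec rpi-05
```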
Let's inspect the node to verify that the node has been labeled:
$ docker node inspect rpi-05
[
{
"CreatedAt": "2020-02-11T14:05:41.930599473Z",
"UpdatedAt": "2020-02-11T14:06:14.390067308Z",
"Spec": {
"Labels": {
"spec": "memory"
},
"Role": "worker",
"Availability": "active"
},
"Description": {
"Hostname": "rpi-05",
"Platform": {
"Architecture": "armv7l",
"OS": "linux"
},
...
}
]
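Rather than scanning the full JSON document, you can ask docker node inspect for just the labels using a Go template with the --format flag:

```shell
# Print only the node's labels as JSON, e.g. {"spec":"memory"}
docker node inspect --format '{{ json .Spec.Labels }}' rpi-05
```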
Deploy some Workloads
Deploy a workload that will be hosted on our high-memory node by passing a constraint flag:
$ docker service create \
--name high-memory-spec-task \
--constraint node.labels.spec==memory \
pistacks/alpine ping localhost
Verify where the task is running:
$ docker service ps high-memory-spec-task
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
05sjl7pf71h2 high-memory-spec-task.1 pistacks/alpine:latest rpi-05 Running Running 30 seconds ago
We can see that our task is running on our rpi-05 node.
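Constraints can also be changed on an already-running service with docker service update, which supports --constraint-add and --constraint-rm; Swarm then reschedules the tasks to satisfy the new placement rules:

```shell
# Move an existing service onto the high-memory node after the fact
docker service update \
  --constraint-add node.labels.spec==memory \
  high-memory-spec-task
```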
Other types of constraints can look like this:
--constraint node.labels.spec!=memory
--constraint node.role==worker
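If you deploy with docker stack deploy instead of docker service create, the same placement constraint goes under deploy.placement.constraints in the Compose file. A minimal sketch of the service above as a stack file (the file name and service name are up to you):

```yaml
# stack.yml -- deploy with: docker stack deploy -c stack.yml mystack
version: "3.7"
services:
  high-memory-spec-task:
    image: pistacks/alpine
    command: ping localhost
    deploy:
      placement:
        constraints:
          - node.labels.spec == memory
```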