I had been using Docker Compose to deploy for a while when I started noticing a slight downtime during each deployment. Moreover, if something goes wrong mid-deploy, it can be disruptive. So I finally switched to Docker Swarm. It's not very different from Docker Compose, and it handles rolling updates well.
Running Docker in swarm mode takes just a single command, `docker swarm init`, which turns the current node into a manager. We can add more nodes to the swarm later on.
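As a sketch, initializing a swarm and joining a second node might look like the following (the IP address and join token are placeholders, and these commands require a running Docker daemon):

```shell
# On the first machine: initialize the swarm and make this node a manager.
# --advertise-addr is the address other nodes will use to reach this manager.
docker swarm init --advertise-addr 192.168.1.10

# Print the command (including a join token) that workers run to join.
docker swarm join-token worker

# On the second machine: join as a worker (token and address are placeholders).
docker swarm join --token SWMTKN-1-xxxx 192.168.1.10:2377

# Back on the manager: list all nodes in the swarm.
docker node ls
```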
Exploring a simple stack
Here is a simple Docker Compose (stack) file with Caddy (a reverse proxy), a web application, and a database as a whole stack.
```yaml
services:
  caddy:
    image: caddy:latest
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data
      - caddy_config:/config
    networks:
      - app_overlay
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.role == manager
  app:
    image: registry.app:latest
    networks:
      - app_overlay
    environment:
      DB_USER_FILE: /run/secrets/pguser
      DB_PASS_FILE: /run/secrets/pgpass
      DB_NAME_FILE: /run/secrets/pgdb
    secrets:
      - pguser
      - pgpass
      - pgdb
    deploy:
      replicas: 1
  db:
    image: postgres:latest
    networks:
      - app_overlay
    volumes:
      - postgres_data:/var/lib/postgresql/data
    environment:
      POSTGRES_USER_FILE: /run/secrets/pguser
      POSTGRES_PASSWORD_FILE: /run/secrets/pgpass
      POSTGRES_DB_FILE: /run/secrets/pgdb
    secrets:
      - pguser
      - pgpass
      - pgdb
    deploy:
      replicas: 1

volumes:
  caddy_data:
  caddy_config:
  postgres_data:

secrets:
  pguser:
    file: ./secrets/postgres_user
  pgpass:
    file: ./secrets/postgres_password
  pgdb:
    file: ./secrets/postgres_db

configs:
  application_config:
    file: ./configs/application_config

networks:
  app_overlay:
    driver: overlay
    name: app_overlay_network
```
In this stack, we have three services: caddy, app, and db.
- caddy: Reverse proxy
- app: Web application
- db: Database
We can run this stack with the `docker stack deploy -c docker-compose.yaml full_stack` command, where `full_stack` is the name of the stack.
We can deploy this stack from our local machine by setting the `DOCKER_HOST` environment variable. If we have SSH access, deploying the stack is as easy as `DOCKER_HOST=ssh://user@host docker stack deploy -c docker-compose.yaml full_stack`.
- Secrets: Since they are configured with `file:`, Swarm will create the secrets from the local files at the specified paths.
- Configs: Similar to secrets, Swarm will create the configs from the local files at the specified paths.
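After deploying, we can confirm the secrets and configs were created in the swarm (these commands run against the same Docker host the stack was deployed to):

```shell
# List the secrets and configs the stack created.
docker secret ls
docker config ls

# Inspect a secret's metadata; the value itself is never shown.
docker secret inspect pguser
```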
Breaking down the stack
We can also break the stack down into smaller components. For example, caddy, app, and db can be deployed separately as individual stacks sharing an overlay network.
Example Stacks
Caddy Stack: docker-compose.caddy.yaml
```yaml
services:
  caddy:
    image: caddy:latest
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data
      - caddy_config:/config
    networks:
      - app_overlay
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.role == manager

volumes:
  caddy_data:
  caddy_config:

networks:
  app_overlay:
    driver: overlay
    name: app_overlay_network
```
Note that this stack will create the `app_overlay` network if it doesn't exist, and other stacks can attach to it as well.
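We can verify the network exists after deploying the Caddy stack (the network name comes from the `name:` field above):

```shell
# List overlay networks; app_overlay_network should appear after deploying.
docker network ls --filter driver=overlay

# Inspect the network to see its scope, subnet, and attached services.
docker network inspect app_overlay_network
```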
It’s a good idea to deploy all the dependencies first, like Database, Redis etc. which are used by the app.
Database Stack: docker-compose.db.yaml
```yaml
services:
  db:
    image: postgres:latest
    networks:
      - app_overlay
    volumes:
      - postgres_data:/var/lib/postgresql/data
    environment:
      POSTGRES_USER_FILE: /run/secrets/pguser
      POSTGRES_PASSWORD_FILE: /run/secrets/pgpass
      POSTGRES_DB_FILE: /run/secrets/pgdb
    secrets:
      - pguser
      - pgpass
      - pgdb
    deploy:
      replicas: 1

volumes:
  postgres_data:

secrets:
  pguser:
    file: ./secrets/postgres_user
  pgpass:
    file: ./secrets/postgres_password
  pgdb:
    file: ./secrets/postgres_db

networks:
  app_overlay:
    external: true
    name: app_overlay_network
```
Here, we define the app overlay network as external, so we connect to the existing network rather than creating a new one. The secrets for the database are also created in the database stack, and can then be used by the app stack.
App Stack: docker-compose.app.yaml
```yaml
services:
  app:
    image: registry.app:latest
    networks:
      - app_overlay
    environment:
      DB_USER_FILE: /run/secrets/pguser
      DB_PASS_FILE: /run/secrets/pgpass
      DB_NAME_FILE: /run/secrets/pgdb
    secrets:
      - pguser
      - pgpass
      - pgdb
    deploy:
      replicas: 1

secrets:
  pguser:
    external: true
  pgpass:
    external: true
  pgdb:
    external: true

configs:
  application_config:
    file: ./configs/application_config

networks:
  app_overlay:
    external: true
    name: app_overlay_network
```
In this final stack, we are using the secrets from the database stack; since we mark them as external, this stack expects the secrets to already be present in the swarm.
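If a secret is not created by another stack, it can also be created directly in the swarm before deploying:

```shell
# Create a secret from stdin; the trailing "-" tells docker to read stdin.
printf 'myuser' | docker secret create pguser -

# Or create it from a file.
docker secret create pgpass ./secrets/postgres_password
```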
Deploying each stack:

```shell
docker stack deploy -c docker-compose.caddy.yaml caddy_stack
docker stack deploy -c docker-compose.db.yaml db_stack
docker stack deploy -c docker-compose.app.yaml app_stack
```
For any updates, we just need to re-run the `docker stack deploy` command.
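To control how a rolling update proceeds, the service's `deploy` section can be extended with `update_config`. A sketch (the values here are illustrative, not recommendations):

```yaml
services:
  app:
    image: registry.app:latest
    deploy:
      replicas: 2
      update_config:
        parallelism: 1      # update one task at a time
        delay: 10s          # wait between task updates
        order: start-first  # start the new task before stopping the old one
        failure_action: rollback
```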
Inspecting the stack
- We can list all the stacks using the `docker stack ls` command.
- We can list all services using the `docker service ls` command (each service manages the containers, or tasks, of a stack).
- To check the logs of a service in a stack, we can use the `docker service logs service_name` command.
- To check the status of services in a stack, we can use the `docker stack ps stack_name` command.
- To diagnose why a service's tasks are failing, we can use the `docker service ps service_name` command.
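Put together, a quick inspection session might look like this (the stack and service names follow the examples above):

```shell
# See every stack and service running in the swarm.
docker stack ls
docker service ls

# Check the tasks of the app stack; a task stuck in "Preparing" or
# repeatedly "Failed" points at image-pull or startup problems.
docker stack ps app_stack

# --no-trunc shows the full error message for failed tasks.
docker service ps --no-trunc app_stack_app

# Follow the service logs.
docker service logs -f app_stack_app
```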
Conclusion
- Docker Swarm is pretty good and easy to use. It deploys stacks in a distributed environment with little effort.
- It is well suited to production use, where we can perform rolling updates without breaking anything.
- It is also very easy to scale the cluster up or down.
- We can deploy stacks from the local machine using the `DOCKER_HOST` environment variable.
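For example, scaling a service up or down is a single command (the service name `app_stack_app` assumes the stack names used above):

```shell
# Scale the app service to 3 replicas.
docker service scale app_stack_app=3

# Scale it back down.
docker service scale app_stack_app=1
```

Note that re-running `docker stack deploy` will reset the replica count to whatever the compose file specifies.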