Rapid Multi-App Deployment with Docker

Background

Our team now ships several in-house products, so we needed a delivery process that can stand up a dozen independent apps in a day.

The stack includes ELK, databases, Python backends, Java services, and Node frontends.

High-level approach

To move fast we adopted a few guiding principles:

  • Use a consistent deployment method across apps without changing their source code.
  • Cut down on manual steps.
  • Group apps by project and shared dependencies.
  • Keep environment-specific values (e.g., MySQL hosts) in editable config files.
  • Store every config in one place so deployers do not have to hunt through each app for its settings.

Given the systems we run, the concrete plan looks like this:

  • Everything ships in Docker except for specialized agents (osquery, salt-minion, Prometheus node exporters, etc.).
  • Replace manual setup (salt-master minion accept, sync_modules, seed data, etc.) with automation wherever possible.
  • Group services with docker-compose.
  • Tweak settings through the standard .env file that compose reads (see the sketch after this list).
  • Keep configs side by side in a shared directory.
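
To make the .env point concrete, here is a minimal sketch of how a compose file picks up site-specific values; the service name, image tag, and variables are illustrative, not our real files:

# run/.env
MYSQL_HOST=10.0.0.5
MYSQL_PORT=3306

# run/docker-compose-app1.yml (excerpt)
version: "3"
services:
  app1_backend:
    image: app1_backend:1.0
    environment:
      - MYSQL_HOST=${MYSQL_HOST}
      - MYSQL_PORT=${MYSQL_PORT}

docker-compose substitutes ${VAR} from the .env file in the working directory, so deployers edit one file per site and never touch the compose files themselves.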

Roles

We split responsibilities into three roles: developers, packagers, and deployers.

Developers

Engineers focus on building the service. The only packaging prep they handle is:

  • Externalize config files (or make them overridable via environment variables).
  • Ensure the app runs in Docker.
  • Document mounts required for configs and persistent data.
  • Automate seeding so no one has to run SQL by hand (sketched below).
  • Write a verification checklist for the first run.

Delivering a branch that meets those expectations is enough.
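
One possible shape for several of these items, assuming a Python backend and MySQL; paths, names, and the password are placeholders:

# docker-compose.yml (developer's local setup, excerpt)
version: "3"
services:
  backend:
    image: app1_backend:dev
    environment:
      - DB_HOST=db                      # overrides the default baked into config.py
    volumes:
      - ./config.py:/app/config.py:ro   # externalized config, documented as a mount
      - ./data:/app/data                # persistent data, likewise documented
  db:
    image: mysql:8.0
    environment:
      - MYSQL_ROOT_PASSWORD=devpass     # placeholder only
    volumes:
      # the mysql image runs any .sql/.sh in this directory on first start,
      # so seeding needs no manual SQL
      - ./seed:/docker-entrypoint-initdb.d:ro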

Packagers

Packagers integrate every service and create the deployment bundle: the goal is to turn raw code into something that can be installed on-site.

Their tasks:

  • Polish Dockerfiles (trim image sizes, remove secrets and junk data, etc.).
  • Group services and dependencies into logical units. For instance, if a project’s frontend and backend must be up together, place them in the same docker-compose.yml (a sketch follows the example layout below).
  • Author the compose files.
  • Script the packaging workflow (encryption, npm build, mvn package, unified config tweaks).
  • Write comprehensive deployment docs assuming the installer is neither the developer nor the packager.
  • Fill in customer-specific settings, build the bundle, and export the images.

Example output for apps app1_backend, app1_front, app2_api, plus MySQL and Redis:

Deployment Guide.txt

# Container images
image/db.tar # mysql + redis
image/app1.tar # app1 group
image/app2.tar # app2 group

# Config files
config/app1/app1_backend/config.py
config/app1/app1_front/config.js
config/app1/app1_front/config.yml

# Compose files
run/docker-compose-db.yml
run/docker-compose-app1.yml
run/docker-compose-app2.yml
run/.env

# Data seeds and mount points
data/init/mysql
data/app1/app1_backend
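
To make the grouping concrete, run/docker-compose-app1.yml might look roughly like this; the image tags, the port variable, and the assumption that the frontend is served by nginx are all placeholders:

# run/docker-compose-app1.yml (sketch)
version: "3"
services:
  app1_backend:
    image: app1_backend:1.0
    environment:
      - MYSQL_HOST=${MYSQL_HOST}        # site-specific value from run/.env
    volumes:
      - ../config/app1/app1_backend/config.py:/app/config.py:ro
      - ../data/app1/app1_backend:/app/data
  app1_front:
    image: app1_front:1.0
    depends_on:
      - app1_backend
    ports:
      - "${APP1_HTTP_PORT}:80"
    volumes:
      - ../config/app1/app1_front/config.js:/usr/share/nginx/html/config.js:ro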

Bundle everything into a .tar.gz for easier transfer.
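
Exporting and bundling needs nothing beyond docker save and tar; a sketch matching the layout above (tags are placeholders):

# pack.sh (sketch)
docker save -o image/db.tar mysql:8.0 redis:7                  # one tar per group
docker save -o image/app1.tar app1_backend:1.0 app1_front:1.0
docker save -o image/app2.tar app2_api:1.0
tar czf app-bundle.tar.gz "Deployment Guide.txt" image config run data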

Deployers

Deployers know their way around Docker commands and troubleshoot runtime issues alongside developers.
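
Under this scheme, an on-site install reduces to a handful of standard commands, along these lines (the bundle name is hypothetical):

tar xzf app-bundle.tar.gz
docker load -i image/db.tar
docker load -i image/app1.tar
docker load -i image/app2.tar
cd run
docker-compose -f docker-compose-db.yml up -d      # .env in this directory is read automatically
docker-compose -f docker-compose-app1.yml up -d
docker-compose -f docker-compose-app2.yml up -d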

Q&A

Why not put everything into one docker-compose.yml per machine?

Launching dozens of services at once produces a wall of logs and makes troubleshooting tough. Smaller compose files are easier to read and maintain.

Why do packagers, rather than developers, optimize Docker images?

Optimizing images takes Docker expertise. Having every developer re-learn it is inefficient; a dedicated packager can apply consistent best practices.
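
A typical example of that expertise is the multi-stage build, sketched here for a hypothetical Node frontend: the toolchain stays in the first stage and only the artifacts ship.

# Dockerfile (multi-stage sketch)
FROM node:18 AS build
WORKDIR /src
COPY . .
RUN npm ci && npm run build        # node_modules and the toolchain never leave this stage

FROM nginx:1.25-alpine             # final image carries only the built assets
COPY --from=build /src/dist /usr/share/nginx/html

Pinning exact base-image tags and maintaining a .dockerignore (to keep secrets and junk data out of the build context) belong to the same pass.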

Why ship exported images instead of source code?

In theory you could ship the source bundle, which is much smaller. But installing from source requires internet access to pull base images and dependencies. Without strict version pinning you risk compatibility issues with whatever gets downloaded onsite. Unless bandwidth is extremely limited, prebuilt Docker images deploy faster and more reliably.