Smart Scaling
dockmesh scales individual services within a stack horizontally — spinning up or tearing down replicas of the same container. Scaling works on a single host; for spreading replicas across a fleet, combine with Migration.
Manual scaling
On a stack’s detail page, each service has a Replicas input. Enter a target count and click Apply; dockmesh runs `compose up --scale <service>=<n>` under the hood and streams the progress.
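As a rough sketch, the invocation dockmesh issues can be built like this. The helper name, the `-p` project flag, and the detached `-d` flag are illustrative assumptions, not dockmesh's actual internals:

```python
def scale_command(stack: str, service: str, replicas: int) -> list[str]:
    """Build the `docker compose up --scale` invocation described above.

    Assumes the stack name maps to the compose project name (-p) and
    that the command runs detached (-d); both are illustrative guesses.
    """
    return [
        "docker", "compose", "-p", stack,
        "up", "-d",
        "--scale", f"{service}={replicas}",
    ]

# scale_command("shop", "worker", 4) yields the argv for
# `docker compose -p shop up -d --scale worker=4`
```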
Auto-scaling
Open a service’s Scaling tab and define a rule:
| Field | Example | Notes |
|---|---|---|
| Metric | CPU % or Memory % | Per-container average |
| Scale-up threshold | 80 | Breached for window duration triggers scale-up |
| Scale-down threshold | 30 | Below for window duration triggers scale-down |
| Window | 5m | How long the threshold must hold |
| Min replicas | 2 | Never scale below this |
| Max replicas | 10 | Never scale above this |
| Cooldown | 3m | Time to wait after a scaling event before the next one |
Rules are evaluated by the server every 30s using the metrics pipeline. Events are logged to the audit log.
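The rule fields above map directly onto a simple evaluation loop. The following is a minimal sketch of one 30s evaluation tick, assuming single-step scaling and all-samples-breach semantics; names and the metrics source are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Rule:
    scale_up: float = 80.0    # metric above this for the whole window -> scale up
    scale_down: float = 30.0  # metric below this for the whole window -> scale down
    window: int = 300         # 5m, in seconds
    min_replicas: int = 2
    max_replicas: int = 10
    cooldown: int = 180       # 3m, in seconds

def decide(rule: Rule, samples: list[float], replicas: int,
           since_last_event: int) -> int:
    """Return the target replica count for one evaluation tick.

    `samples` is the per-container average metric (CPU % or Memory %)
    over the last `window` seconds, oldest first.
    """
    if since_last_event < rule.cooldown:
        return replicas  # still cooling down from the previous scaling event
    if all(s > rule.scale_up for s in samples):
        return min(replicas + 1, rule.max_replicas)
    if all(s < rule.scale_down for s in samples):
        return max(replicas - 1, rule.min_replicas)
    return replicas
```

Stepping one replica at a time plus the cooldown keeps the loop from oscillating; whether dockmesh steps by one or jumps to a computed target is not specified here.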
Safety checks
Before scaling, dockmesh runs pre-flight checks and refuses to proceed if:
- The service has a `container_name:` set (Docker refuses to create a second container with the same name)
- The service has static host port bindings (`"8080:80"`) that would conflict with additional replicas; use an ephemeral port or a reverse proxy instead
- The service declares a bind-mounted volume that is not shared-safe (e.g. a SQLite database); stateful services should not be scaled horizontally
The UI shows each check with a pass/fail indicator and a remediation hint.
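Assuming the service definition is available as parsed compose YAML (a plain dict), the first two checks amount to something like this sketch. The helper name and hint wording are invented, not dockmesh's API, and long-syntax port mappings and the bind-mount check are omitted:

```python
def preflight(service: dict) -> list[str]:
    """Return remediation hints for failed checks; an empty list means safe to scale."""
    problems = []
    if "container_name" in service:
        problems.append("remove container_name: Docker cannot create a "
                        "second container with the same name")
    for port in service.get("ports", []):
        # A static binding looks like "8080:80"; a container-only port ("80")
        # gets an ephemeral host port per replica and is safe.
        if isinstance(port, str) and ":" in port:
            problems.append(f"static host port binding {port!r} would conflict "
                            "across replicas; use an ephemeral port or a "
                            "reverse proxy")
    return problems
```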
Observing scaling
The Scaling tab shows a live chart of replica count over time, overlaid with the driving metric, so you can see exactly when a scale-up fired and how the metric responded.