Understanding Container Resource Limits
Without resource limits, one container can starve all others. CPU limits, memory limits, and OOM killers explained.
Why Limits Matter
A container without resource limits can consume all available CPU and memory. When that happens, other containers on the host slow down or stall, the host may start swapping, and the kernel's OOM killer can terminate processes unpredictably, including ones you need.
Memory Limits
Setting Limits
docker run --memory 512m myapp
Or in docker-compose:
deploy:
  resources:
    limits:
      memory: 512M
    reservations:
      memory: 256M
What Happens at the Limit
When a container exceeds its memory limit, the kernel's OOM (Out of Memory) killer terminates it. Docker restarts the container only if a restart policy is configured.
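To confirm that a container was OOM-killed rather than crashing on its own, you can inspect its state. A quick sketch using the real docker inspect template fields (the container name myapp is a placeholder):

```shell
# Was the last exit caused by the kernel OOM killer? Prints true/false.
docker inspect --format '{{.State.OOMKilled}}' myapp

# Exit code 137 (128 + SIGKILL) is another telltale sign of an OOM kill.
docker inspect --format '{{.State.ExitCode}}' myapp
```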
Reservations vs Limits
Set reservations for what the app needs normally, limits for the maximum it should ever use.
CPU Limits
CPU Shares (Relative)
--cpu-shares 512 (default is 1024)
Relative weight. A container with 512 shares gets half the CPU of one with 1024 — but only when there's contention.
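One way to see the relative weighting in practice is to run two CPU-bound containers side by side. A sketch, assuming the busybox image is available; the container names and the busy loop are arbitrary:

```shell
# Two CPU-bound containers competing for the same cores.
docker run -d --name low  --cpu-shares 512  busybox sh -c 'while :; do :; done'
docker run -d --name high --cpu-shares 1024 busybox sh -c 'while :; do :; done'

# Under contention, "high" should show roughly twice the CPU% of "low".
docker stats --no-stream low high
```

With only one of the two running, either container can use all available CPU, which is the point: shares only matter when there is contention.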
CPU Quota (Absolute)
--cpus 0.5 (half a CPU core)
--cpus 2 (two CPU cores)
Hard limit regardless of whether other containers need CPU.
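In docker-compose, the same absolute cap sits alongside the memory limit. A sketch following the compose deploy.resources schema:

```yaml
deploy:
  resources:
    limits:
      cpus: "0.50"
      memory: 512M
```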
Which to Use
Use --cpus when you need a hard ceiling, for example to protect other workloads from a noisy neighbor. Use --cpu-shares when you only want to set relative priority under contention and are happy for idle CPU to be used freely.
Practical Guidelines
Lightweight Apps
(Uptime Kuma, Homepage, ntfy)
Memory: 256 MB limit, 128 MB reservation
CPU: 0.25 cores
Medium Apps
(Nextcloud, Ghost, Gitea)
Memory: 512 MB-1 GB limit
CPU: 0.5-1 core
Heavy Apps
(PostHog, GitLab, Keycloak)
Memory: 2-4 GB limit
CPU: 1-2 cores
Databases
(PostgreSQL, MySQL, Redis)
Memory: Set based on your tuning (shared_buffers, innodb_buffer_pool_size)
CPU: 1-2 cores
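As a rough illustration of how the tuning and the container limit relate: a common PostgreSQL rule of thumb sets shared_buffers to about 25% of the memory the database can use, so the container limit should be roughly four times shared_buffers. The 4x factor below is that assumption inverted, not a fixed rule:

```shell
# Hypothetical sizing sketch: invert the "shared_buffers ~= 25% of RAM"
# rule of thumb to derive a container memory limit.
SHARED_BUFFERS_MB=512                 # value from postgresql.conf (assumed)
LIMIT_MB=$((SHARED_BUFFERS_MB * 4))   # 25% inverted -> multiply by 4
echo "--memory ${LIMIT_MB}m"          # flag to pass to docker run
```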
Monitoring
Check actual usage before setting limits:
docker stats
Watch for a few days under normal load, then set limits with headroom.
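docker stats streams continuously by default. For a one-shot snapshot that is easy to log and compare over a few days, the --no-stream and --format flags help; the template placeholders below are from docker's own stats format set:

```shell
# One-shot snapshot, trimmed to the columns that matter for sizing.
docker stats --no-stream --format 'table {{.Name}}\t{{.MemUsage}}\t{{.CPUPerc}}'
```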
TinyPod
TinyPod sets appropriate resource limits for each application based on its catalog recommendations. No manual configuration needed.