Log Management for Self-Hosted Infrastructure
Centralize and search logs from all your self-hosted applications. Essential for debugging, auditing, and security monitoring.
Why Centralize Logs?
When something breaks, logs tell you why. But when you're running 20 applications across multiple containers, checking logs one by one is painful.
Centralized logging collects all logs in one place where you can search them, filter them, and alert on them.
The Log Pipeline
1. Collection
Gather logs from all sources: applications, containers, system services, reverse proxy.
2. Processing
Parse log formats, extract fields, enrich with metadata (hostname, service name, etc.).
3. Storage
Store processed logs in a searchable database.
4. Visualization
Search, filter, and create dashboards from log data.
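The first two stages of the pipeline can be sketched with a minimal Promtail configuration (Promtail is Loki's log collection agent). The file paths, hostname label, and Loki URL below are illustrative, not prescriptive:

```yaml
# promtail-config.yaml -- a minimal sketch; paths, labels, and the
# Loki URL are placeholders for your own setup
server:
  http_listen_port: 9080

positions:
  filename: /tmp/positions.yaml   # remembers how far each file has been read

clients:
  - url: http://loki:3100/loki/api/v1/push   # stage 3: ship to storage

scrape_configs:
  - job_name: system              # stage 1: collect log files
    static_configs:
      - targets: [localhost]
        labels:
          job: varlogs
          host: homelab-01        # stage 2: enrich with metadata
          __path__: /var/log/*.log
```

Stage 4 happens in Grafana, which queries Loki directly once it's added as a data source.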
Tool Options
Loki + Grafana (Recommended)
Loki is like Prometheus, but for logs: lightweight, and it integrates perfectly with Grafana dashboards. It indexes only labels rather than the full text of every log line, which keeps storage and memory costs much lower.
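A minimal way to run the pair is with Docker Compose. This is a sketch only; the image tags, ports, and volume name are assumptions you should adapt:

```yaml
# docker-compose.yml -- illustrative sketch; pin image tags to versions
# you have actually tested
services:
  loki:
    image: grafana/loki:2.9.0
    ports:
      - "3100:3100"               # Promtail and Grafana talk to this port
    volumes:
      - loki-data:/loki           # persist chunks and the index
  grafana:
    image: grafana/grafana:10.2.0
    ports:
      - "3000:3000"
    depends_on:
      - loki
volumes:
  loki-data:
```

After startup, add Loki as a data source in Grafana pointing at http://loki:3100.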
ELK Stack (Elasticsearch + Logstash + Kibana)
The traditional log management stack. Very powerful but resource-heavy. Elasticsearch alone needs 2+ GB RAM.
Graylog
Open-source log management with a good UI. Easier to set up than ELK, more features than Loki.
Essential Logs to Collect
Application Logs
Errors, warnings, and request logs from each application.
Reverse Proxy Logs
Caddy access logs show every request: URL, status code, response time, client IP.
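Caddy does not write access logs by default; you enable them per site with the log directive. A sketch, with the site name, upstream, and log path as placeholders:

```caddyfile
# Caddyfile fragment -- domain, upstream, and path are illustrative
example.com {
    reverse_proxy app:8080
    log {
        output file /var/log/caddy/access.log
        format json               # structured JSON is easy to parse downstream
    }
}
```

JSON-formatted access logs let the processing stage extract fields like status code and response time instead of regex-parsing free text.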
Container Logs
Container start/stop events, resource usage, health check results.
System Logs
SSH access attempts, package updates, kernel messages.
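On systemd hosts, Promtail can read these directly from the journal instead of tailing files (this requires a Promtail build with journal support). The label names below are illustrative:

```yaml
# Addition to Promtail's scrape_configs -- label names are placeholders
scrape_configs:
  - job_name: journal
    journal:
      max_age: 12h                # skip entries older than this on startup
      labels:
        job: systemd-journal
    relabel_configs:
      - source_labels: ['__journal__systemd_unit']
        target_label: unit        # lets you filter by e.g. ssh.service
```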
Log Retention
Logs take space. Set retention policies so old logs are deleted automatically, and keep high-value logs (security events, audit trails) longer than routine access logs.
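In Loki, retention is enforced by the compactor. A sketch of the relevant config fragment; the 31-day period and paths are assumptions to adjust:

```yaml
# Fragment of Loki's config -- the retention period is illustrative
limits_config:
  retention_period: 744h          # 31 days, expressed in hours

compactor:
  working_directory: /loki/compactor
  retention_enabled: true         # actually delete chunks past the period
```

Without retention_enabled, Loki keeps everything forever regardless of retention_period.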
Alerting on Logs
Set up alerts for conditions worth acting on: spikes in error-level logs, repeated failed SSH or application logins, and sustained 5xx responses from the reverse proxy.
Loki's alerting integrates with Grafana's notification system, giving you Slack, email, and webhook alerts.
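Loki's ruler evaluates Prometheus-style alert rules whose expressions are LogQL queries. A sketch of one rule; the job label, threshold, and filenames are illustrative:

```yaml
# rules/alerts.yaml, loaded by the Loki ruler -- values are placeholders
groups:
  - name: log-alerts
    rules:
      - alert: HighErrorRate
        # fire if more than 100 error lines appear in any 5-minute window
        expr: sum(count_over_time({job="caddy"} |= "error" [5m])) > 100
        for: 10m                  # must hold for 10 minutes before firing
        labels:
          severity: warning
        annotations:
          summary: Elevated error rate in reverse proxy logs
```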