# Shipping Data to Loki for Log Management
Now that you can collect metrics, it's time to add log management. This lesson shows how Alloy collects logs from files, Docker, and systemd, then ships them to Loki.
Learning Goals:
- Understand Loki's role in the Grafana observability stack
- Configure Alloy to collect logs from files, Docker, and journald
- Use `loki.source.*`, `loki.relabel`, and `loki.write`
- Verify log delivery in Grafana
## Why Loki for Log Management?
Loki indexes only labels (metadata), not log content, which keeps storage and index costs low. Alloy acts as the log collector and forwarder.
The course examples forward logs to the Loki destination defined in `01-global.alloy`:

```alloy
loki.write "grafana_cloud_loki" {
  endpoint {
    url = "https://logs-prod-006.grafana.net/loki/api/v1/push"

    basic_auth {
      username = "1022411"
      password = sys.env("GCLOUD_RW_API_KEY")
    }
  }
}
```
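Because `loki.write` reads the token from the environment at startup, a quick pre-flight check before (re)starting Alloy can catch a missing key early. A minimal sketch; the token value below is a placeholder, not a real key:

```shell
# Pre-flight check: the loki.write block reads GCLOUD_RW_API_KEY from the
# environment, so verify it is set before (re)starting Alloy.
export GCLOUD_RW_API_KEY="glc_example_token"   # placeholder, not a real token

if [ -z "$GCLOUD_RW_API_KEY" ]; then
  echo "GCLOUD_RW_API_KEY is not set" >&2
  exit 1
fi
echo "API key present (${#GCLOUD_RW_API_KEY} chars)"
# prints: API key present (17 chars)
```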
## Collecting Logs from Files
Use `local.file_match` to build the target list and `loki.source.file` to tail the files:

```alloy
local.file_match "system_logs" {
  path_targets = [
    { "__path__" = "/var/log/syslog",   "job" = "system", "instance" = constants.hostname },
    { "__path__" = "/var/log/auth.log", "job" = "auth",   "instance" = constants.hostname },
    { "__path__" = "/var/log/kern.log", "job" = "kernel", "instance" = constants.hostname },
  ]
}

loki.source.file "system_logs_source" {
  targets    = local.file_match.system_logs.targets
  forward_to = [loki.write.grafana_cloud_loki.receiver]
}
```
## Collecting Docker Logs
Docker log collection uses discovery and relabeling before shipping to Loki:
```alloy
discovery.docker "linux" {
  host = "unix:///var/run/docker.sock"
}

discovery.relabel "logs_integrations_docker" {
  // targets is required but left empty here; loki.source.docker consumes
  // only the exported rules.
  targets = []

  rule {
    source_labels = ["__meta_docker_container_name"]
    regex         = "/(.*)"
    target_label  = "service_name"
  }
}

loki.source.docker "default" {
  host          = "unix:///var/run/docker.sock"
  targets       = discovery.docker.linux.targets
  labels        = {"platform" = "docker"}
  relabel_rules = discovery.relabel.logs_integrations_docker.rules
  forward_to    = [loki.write.grafana_cloud_loki.receiver]
}
```
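Docker's API reports container names with a leading slash (e.g. `/my-nginx`), which is why the relabel rule's regex is `/(.*)`: it captures everything after the slash into `service_name`. A quick stand-in for that transformation, using `sed` purely for illustration:

```shell
# The relabel rule above strips Docker's leading "/" from container names,
# so "/my-nginx" becomes the label value "my-nginx". Equivalent sed:
echo "/my-nginx" | sed -E 's|^/(.*)$|\1|'
# prints: my-nginx
```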
## Collecting Journald Logs
Systemd logs are collected with `loki.source.journal`, and labels are added via `loki.relabel`:

```alloy
loki.relabel "journal" {
  // No direct receivers; only the exported rules are consumed below.
  forward_to = []

  rule {
    source_labels = ["__journal__systemd_unit"]
    target_label  = "unit"
  }
}

loki.source.journal "read" {
  forward_to    = [loki.write.grafana_cloud_loki.receiver]
  relabel_rules = loki.relabel.journal.rules
  labels        = {component = "loki.source.journal"}
}
```
## Optional: Processing Logs with `loki.process`
Use `loki.process` when you want to parse, enrich, or filter log entries:

```alloy
loki.process "local" {
  forward_to = [loki.write.grafana_cloud_loki.receiver]

  stage.json {
    expressions = { "extracted_env" = "environment" }
  }

  stage.labels {
    values = { "env" = "extracted_env" }
  }
}
```
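To see what this pipeline does to a single entry: `stage.json` pulls the `environment` field out of a JSON log line into an extracted value, and `stage.labels` promotes it to a label named `env`. A rough stand-in for the extraction step, using a hypothetical log line and grep/cut purely for illustration:

```shell
# Hypothetical JSON log line; stage.json extracts the "environment" field,
# and stage.labels then attaches it as the label env="prod".
line='{"environment":"prod","msg":"service started"}'
env=$(printf '%s' "$line" | grep -o '"environment":"[^"]*"' | cut -d'"' -f4)
echo "env=$env"
# prints: env=prod
```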
## Verifying Log Delivery
- Check Alloy's own metrics for `loki`-related series:

  ```shell
  curl http://localhost:12345/metrics | grep loki
  ```

- Check Alloy logs:

  ```shell
  journalctl -u alloy -f
  ```

- Query in Grafana Explore, e.g. `{job="system"}` or `{platform="docker"}`
## Common Pitfalls
- Missing file permissions: Alloy must be able to read log files and Docker/journal sockets.
- Label explosion: Avoid using high-cardinality labels like user IDs or request IDs.
- Service dependencies: Docker and journald sources require those services to be present.
- Auth failures: Grafana Cloud Loki requires a valid API token.
## Summary
In this lesson, you learned how to configure Grafana Alloy as a log collector for Loki. You can now:
- Collect logs from files and systemd journals
- Collect container logs with Docker discovery
- Add labels with `loki.relabel`
- Send logs to Loki using `loki.write`
With metrics from Prometheus and logs from Loki, you have a powerful observability foundation. In the next lesson, we'll add distributed tracing with Tempo.
## Quiz

**Loki Log Shipping - Quick Check**
What is Loki's primary design advantage over traditional log aggregation systems?