Collecting Metrics with Prometheus Components
Welcome to the metrics lesson. Here you'll learn how Alloy collects Prometheus metrics using exporters and prometheus.scrape, then forwards them to your configured outputs.
Learning Goals
By the end of this lesson, you'll be able to:
- Configure Prometheus components in Alloy to scrape metrics
- Set up scrape jobs for common services and exporters
- Understand how exporter targets flow into scrapes and relabeling
- Validate and test your metric collection configuration
How Metrics Collection Works in Alloy
Alloy uses a pipeline model:
- Exporters expose metrics in Prometheus format (prometheus.exporter.*).
- Scrapes collect metrics (prometheus.scrape).
- Relabeling normalizes labels (prometheus.relabel).
- Remote write ships metrics (prometheus.remote_write).
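Putting those four stages together, a minimal end-to-end pipeline might look like the sketch below (the component labels and the local remote write URL are illustrative placeholders, not part of the course files):
// Sketch: exporter -> scrape -> relabel -> remote_write
prometheus.exporter.unix "sketch" { }

prometheus.scrape "sketch" {
  targets    = prometheus.exporter.unix.sketch.targets
  forward_to = [prometheus.relabel.sketch.receiver]
}

prometheus.relabel "sketch" {
  forward_to = [prometheus.remote_write.sketch.receiver]

  rule {
    target_label = "env"
    replacement  = "dev"
  }
}

prometheus.remote_write "sketch" {
  endpoint {
    url = "http://localhost:9090/api/v1/write"
  }
}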
The course examples use a shared relabeler defined in 01-global.alloy that forwards metrics to remote write endpoints.
// 01-global.alloy
livedebugging {
  enabled = true
}

remotecfg {
  url = "https://fleet-management-prod-008.grafana.net"
  id  = "vikasjha001"
  attributes = {
    "os"       = "linux",
    "env"      = "production",
    "hostname" = constants.hostname,
  }

  basic_auth {
    username = "1064633"
    password = sys.env("GCLOUD_RW_API_KEY")
  }
}
prometheus.remote_write "grafana_cloud" {
endpoint {
url = "http://localhost:9090/api/v1/write"
}
endpoint {
url = "https://prometheus-prod-13-prod-us-east-0.grafana.net/api/prom/push"
basic_auth {
username = "1846252"
// Use the API Key you provided
password = sys.env("GCLOUD_RW_API_KEY")
}
}
}
// Update your relabeler to forward to the Cloud destination
prometheus.relabel "common_labels" {
  forward_to = [prometheus.remote_write.grafana_cloud.receiver]

  rule {
    target_label = "env"
    replacement  = "production"
  }
}
loki.write "grafana_cloud_loki" {
endpoint {
url = "https://logs-prod-006.grafana.net/loki/api/v1/push"
basic_auth {
username = "1022411"
password = sys.env("GCLOUD_RW_API_KEY")
}
}
}
otelcol.auth.basic "grafana_cloud" {
username = "1016726"
password = sys.env("GCLOUD_RW_API_KEY")
}
otelcol.exporter.otlp "grafana_cloud" {
client {
endpoint = "tempo-prod-04-prod-us-east-0.grafana.net:443"
auth = otelcol.auth.basic.grafana_cloud.handler
}
}
pyroscope.write "production_backend" {
endpoint {
url = "https://profiles-prod-001.grafana.net"
basic_auth {
username = "1064633"
password = env("GCLOUD_RW_API_KEY")
}
}
}
Example: Host Metrics (Node/Unix Exporter)
prometheus.exporter.unix "local" { }
prometheus.scrape "host" {
targets = prometheus.exporter.unix.local.targets
forward_to = [prometheus.relabel.common_labels.receiver]
}
This exports host metrics and forwards them to the shared prometheus.relabel.common_labels pipeline in 01-global.alloy.
Example: Redis Metrics
prometheus.exporter.redis "local" {
redis_addr = "localhost:6379"
}
prometheus.scrape "redis" {
targets = prometheus.exporter.redis.local.targets
forward_to = [prometheus.relabel.common_labels.receiver]
}
Example: Docker Metrics (cAdvisor)
prometheus.exporter.cadvisor "local" {
docker_host = "unix:///var/run/docker.sock"
docker_only = true
disable_root_cgroup_stats = true
storage_duration = "5m"
}
prometheus.scrape "scraper" {
targets = prometheus.exporter.cadvisor.local.targets
forward_to = [ prometheus.relabel.common_labels.receiver ]
}
Example: Traefik Metrics (Static Target + Relabel)
// Discovery block to set the address
discovery.relabel "traefik_node" {
  targets = [{
    __address__ = "localhost:8080",
  }]

  // Set a clean instance name
  rule {
    target_label = "instance"
    replacement  = "traefik-proxy"
  }
}

// Scraper block
prometheus.scrape "traefik" {
  targets = discovery.relabel.traefik_node.output
  // Pointing to your global standardization gate
  forward_to = [prometheus.relabel.common_labels.receiver]
  job_name   = "traefik"
}
Example: Blackbox Probes (Synthetic Monitoring)
prometheus.exporter.blackbox "local" {
config = "{ modules: { http_2xx: { prober: http, timeout: 5s } } }"
target {
name = "techbloomeracademy"
address = "https://techbloomeracademy.com"
module = "http_2xx"
}
target {
name = "gobotify"
address = "https://gobotify.com"
module = "http_2xx"
}
}
// Configure a prometheus.scrape component to collect blackbox metrics.
prometheus.scrape "demo" {
targets = prometheus.exporter.blackbox.local.targets
forward_to = [prometheus.relabel.common_labels.receiver]
}
Example: Collecting System Logs
local.file_match "system_logs" {
path_targets = [
{ "__path__" = "/var/log/syslog", "job" = "system", "instance" = constants.hostname },
{ "__path__" = "/var/log/auth.log", "job" = "auth", "instance" = constants.hostname },
{ "__path__" = "/var/log/kern.log", "job" = "kernel", "instance" = constants.hostname },
]
}
loki.source.file "system_logs_source" {
targets = local.file_match.system_logs.targets
forward_to = [loki.write.grafana_cloud_loki.receiver]
}
Example: Monitoring Journal Logs
loki.relabel "journal" {
forward_to = []
rule {
source_labels = ["__journal__systemd_unit"]
target_label = "unit"
}
}
loki.source.journal "read" {
forward_to = [loki.write.grafana_cloud_loki.receiver]
relabel_rules = loki.relabel.journal.rules
labels = {component = "loki.source.journal"}
}
Example: Monitoring Application Traces Using eBPF (Beyla)
// 09-ebpf.alloy
beyla.ebpf "tg_otp" {
  discovery {
    instrument {
      open_ports = "8002"
      name       = "tg_otp"
    }
  }

  ebpf {
    heuristic_sql_detect = true
  }

  output {
    traces = [otelcol.exporter.otlp.grafana_cloud.input]
  }
}
Example: GitHub Repository Metrics
// 15-github.alloy
prometheus.exporter.github "my_github_stats" {
  api_token    = sys.env("GITHUB_TOKEN")
  repositories = ["learnwithvikasjha/tba-docs", "learnwithvikasjha/gobotify"]
}

discovery.relabel "github_relabel" {
  targets = prometheus.exporter.github.my_github_stats.targets

  rule {
    target_label = "job"
    replacement  = "integrations/github"
  }
}

prometheus.scrape "github_scrape" {
  targets    = discovery.relabel.github_relabel.output
  forward_to = [prometheus.relabel.common_labels.receiver]
}
Example: Profiling All Linux Processes
// 1. Automatically find all processes running on this Linux host
discovery.process "all_linux_processes" { }

// 2. Use eBPF to profile those discovered processes
pyroscope.ebpf "all_jobs" {
  forward_to = [pyroscope.write.production_backend.receiver]
  targets    = discovery.process.all_linux_processes.targets
}
Example: Profiling Your Application
// 1. Discover all processes on the host
discovery.process "local_processes" { }

// 2. Filter for only our python script
discovery.relabel "filter_gobotify" {
  targets = discovery.process.local_processes.targets

  // Look at the command line of the process
  rule {
    source_labels = ["__meta_process_commandline"]
    regex         = ".*gobotify.*"
    action        = "keep"
  }

  // Make the name look clean in Grafana
  rule {
    action       = "replace"
    target_label = "service_name"
    replacement  = "gobotify-app"
  }
}

// 3. Attach eBPF to the filtered process
pyroscope.ebpf "gobotify_monitor" {
  forward_to = [pyroscope.write.production_backend.receiver]
  targets    = discovery.relabel.filter_gobotify.output
}
Sample Python Program to Test Pyroscope Profiling
import time
import math
import hashlib
import random


def verify_request_signature(loops=30000):
    """Simulates security overhead/signing."""
    secret = "vikas-secret-key"
    for _ in range(loops):
        hashlib.sha512(secret.encode()).hexdigest()


def normalize_sensor_data(size=150000):
    """Simulates mathematical normalization of a large dataset."""
    data = []
    for i in range(size):
        # Triggers complex math for CPU profiling
        val = (math.sin(i) * math.cos(i)) / (math.sqrt(i) + 1)
        data.append(val)
    return sum(data)


def generate_encrypted_report():
    """Simulates final packaging of data."""
    report_content = "Report_" * 1000
    # Nested call to show hierarchy
    for _ in range(10):
        hashlib.sha256(report_content.encode()).hexdigest()


def main_execution_loop():
    """Orchestrates the steps in order."""
    print("--- Starting Cycle ---")
    # Step 1: Authentication (Security Peak)
    verify_request_signature()
    # Step 2: Processing (Data Science Peak)
    normalize_sensor_data()
    # Step 3: Reporting (Output Peak)
    generate_encrypted_report()
    # Simulate a small break (gap in CPU usage)
    time.sleep(0.5)


if __name__ == "__main__":
    print("🚀 Gobotify Pipeline v2.0 is active...")
    print("Targeting service_name: gobotify-app")
    try:
        while True:
            main_execution_loop()
    except KeyboardInterrupt:
        print("\nStopping...")
Validating and Running Your Config
alloy validate config.alloy
alloy validate /path/to/configs
alloy run /path/to/configs
Verifying Metric Collection
- Alloy metrics endpoint: curl http://localhost:12345/metrics shows Alloy's own metrics and scrape stats.
- Remote write backend: verify metrics in Grafana Cloud or Prometheus.
- Exporter logs: check Alloy logs if a target is unreachable.
Always test your configuration in a non-production environment first. A misconfigured scrape job with a very short interval could overwhelm your applications with requests.
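If you want to make that interval explicit, prometheus.scrape accepts per-job timing arguments. The sketch below simply adds them to the earlier host scrape job; the values are illustrative, not course-mandated:
prometheus.scrape "host" {
  targets         = prometheus.exporter.unix.local.targets
  forward_to      = [prometheus.relabel.common_labels.receiver]
  scrape_interval = "60s" // how often each target is scraped
  scrape_timeout  = "10s" // abandon slow scrapes instead of letting requests pile up
}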
Common Pitfalls
- Missing forward_to: Scraped metrics are discarded if they aren't forwarded to a receiver.
- Wrong target format: Targets must be objects with at least __address__ (see the sketch after this list).
- Exporter dependencies missing: Redis, Docker, or GitHub exporters require the corresponding service or token.
- Permission issues: Docker and journald scrapes require access to sockets or files.
- Network problems: Firewalls or TLS misconfigurations prevent scrapes.
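For reference, a correctly shaped static target is just a map containing at least an __address__ key. The port and the extra label below are hypothetical:
prometheus.scrape "static_example" {
  targets = [
    { "__address__" = "localhost:9100", "job" = "example" },
  ]
  forward_to = [prometheus.relabel.common_labels.receiver]
}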
Summary
In this lesson, you've learned how to use Alloy's Prometheus components to collect metrics from various sources. You can now:
- Configure scrape jobs for host, Redis, Docker, Traefik, blackbox probes, and GitHub
- Use exporters and prometheus.scrape to collect metrics
- Forward metrics through the shared relabel pipeline to remote write destinations
Remember that prometheus.scrape is the entry point; downstream components do the processing and delivery.
Quiz
Prometheus Scraping in Alloy - Quick Check
What is the purpose of the `forward_to` attribute in a `prometheus.scrape` component?