Add support for Loki and Alloy · Issue #311 · stefanprodan/dockprom

Add support for Loki and Alloy #311


Open
waitesgithub opened this issue Apr 3, 2025 · 4 comments

Comments

@waitesgithub commented Apr 3, 2025

Loki now requires tsdb, so the current Loki configuration is wrong. Promtail is also now deprecated. We should look at using Loki and Alloy only. Another option is the Docker driver client, using the Loki Docker logging plugin: https://grafana.com/docs/loki/latest/send-data/docker-driver/
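For the Docker driver route, the plugin is first installed with `docker plugin install grafana/loki-docker-driver:latest --alias loki --grant-all-permissions`, then enabled daemon-wide in `/etc/docker/daemon.json`. A minimal sketch (the `loki-url` host/port is an assumption for a locally running Loki):

```json
{
  "log-driver": "loki",
  "log-opts": {
    "loki-url": "http://localhost:3100/loki/api/v1/push",
    "loki-batch-size": "400"
  }
}
```

After editing `daemon.json`, the Docker daemon must be restarted for the new default log driver to take effect.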

I highly recommend developing for Loki and Alloy. PR #266 will need rework; however, if users want to continue using Promtail with Loki, they do so at their own risk.

Grafana Alloy: https://grafana.com/docs/alloy/latest/set-up/install/docker/
Loki configuration with tsdb: https://grafana.com/docs/loki/latest/configure/examples/configuration-examples/
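For reference, the tsdb-related portion of a Loki config follows this shape (the date and paths below are placeholders, not values from this repo; see the configuration examples linked above):

```yaml
schema_config:
  configs:
    - from: 2024-01-01        # placeholder start date for this schema period
      store: tsdb             # current Loki releases expect the TSDB index
      object_store: filesystem
      schema: v13
      index:
        prefix: index_
        period: 24h

storage_config:
  tsdb_shipper:
    active_index_directory: /loki/tsdb-index
    cache_location: /loki/tsdb-cache
```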

  • Need to add a folder to start development of Loki with configuration files
  • Need to add a folder to start development of Alloy with configuration files
  • Anyone who has tested Alloy and Loki with Dockprom should contribute and provide comments; I have to keep creating my own Loki configuration every time. Refer to feat: Add Loki to collect container logs #266
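Until an official Alloy folder lands, a minimal Alloy config for shipping container logs to Loki might look like the sketch below (the `loki:3100` endpoint and the component labels are assumptions, not part of this repo):

```alloy
// Discover running containers via the Docker socket.
discovery.docker "containers" {
  host = "unix:///var/run/docker.sock"
}

// Tail logs from the discovered containers.
loki.source.docker "logs" {
  host       = "unix:///var/run/docker.sock"
  targets    = discovery.docker.containers.targets
  forward_to = [loki.write.default.receiver]
}

// Push the collected logs to Loki.
loki.write "default" {
  endpoint {
    url = "http://loki:3100/loki/api/v1/push"
  }
}
```

The container would need `/var/run/docker.sock` mounted, analogous to how Promtail was wired up in PR #266.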

Bad News Alert:
Promtail is now deprecated and will enter into Long-Term Support (LTS) beginning Feb. 13, 2025. This means that Promtail will no longer receive any new feature updates, but it will receive critical bug fixes and security fixes. Commercial support will end after the LTS phase, which we anticipate will extend for about 12 months until February 28, 2026. End-of-Life (EOL) phase for Promtail will begin once LTS ends. Promtail is expected to reach EOL on March 2, 2026, afterwards no future support or updates will be provided. All future feature development will occur in Grafana Alloy.

If you are currently using Promtail, you should plan your migration to Alloy. The Alloy migration documentation includes a migration tool for converting your Promtail configuration to an Alloy configuration with a single command.
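The migration tool mentioned above is the `alloy convert` subcommand; a single invocation translates an existing Promtail config into Alloy syntax (file names here are placeholders):

```shell
# Convert a Promtail config to Alloy syntax (requires the alloy binary)
alloy convert --source-format=promtail --output=config.alloy promtail-config.yaml
```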

@valyala commented Apr 18, 2025

Please consider the migration to VictoriaLogs. It supports the same log stream concept as Loki, but it is much easier to configure, upgrade and maintain. It doesn't break data storage format with new releases. It also doesn't break configs with new releases. See https://itnext.io/why-victorialogs-is-a-better-alternative-to-grafana-loki-7e941567c4d5

@waitesgithub (Author) commented May 9, 2025

Thanks for your feedback. Do you have a PR that shows VictoriaLogs working with dockprom? Looking forward to your screenshots. If anyone has alternatives besides Loki, please comment here.

@waitesgithub (Author) commented May 9, 2025
# Updated dockprom/docker-compose.yml with Dockprom, Vector and VictoriaLogs
version: '3.8'

networks:
  monitor-net:
    driver: bridge

volumes:
    prometheus_data: {}
    grafana_data: {}
    # Added volumes for VictoriaLogs and Vector
    victorialogs_data: {}
    vector_data: {}

services:

  prometheus:
    image: prom/prometheus:v3.1.0 # Using your specified version
    container_name: prometheus
    volumes:
      - ./prometheus:/etc/prometheus
      - prometheus_data:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/etc/prometheus/console_libraries'
      - '--web.console.templates=/etc/prometheus/consoles'
      - '--storage.tsdb.retention.time=200h' # Using your specified retention
      - '--web.enable-lifecycle'
    restart: unless-stopped
    expose:
      - 9090
    networks:
      - monitor-net
    labels:
      org.label-schema.group: "monitoring"

  alertmanager:
    image: prom/alertmanager:v0.28.0 # Using your specified version
    container_name: alertmanager
    volumes:
      - ./alertmanager:/etc/alertmanager
    command:
      - '--config.file=/etc/alertmanager/config.yml'
      - '--storage.path=/alertmanager'
    restart: unless-stopped
    expose:
      - 9093
    networks:
      - monitor-net
    labels:
      org.label-schema.group: "monitoring"

  nodeexporter:
    image: prom/node-exporter:v1.8.2 # Using your specified version
    container_name: nodeexporter
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /:/rootfs:ro
      - /etc/localtime:/etc/localtime:ro # Added for time sync
    command:
      - '--path.procfs=/host/proc'
      - '--path.rootfs=/rootfs'
      - '--path.sysfs=/host/sys'
      - '--collector.filesystem.mount-points-exclude=^/(sys|proc|dev|host|etc)($$|/)'
    restart: unless-stopped
    expose:
      - 9100
    networks:
      - monitor-net
    labels:
      org.label-schema.group: "monitoring"

  cadvisor:
    image: gcr.io/cadvisor/cadvisor:v0.51.0 # Using your specified version
    container_name: cadvisor
    privileged: true
    devices:
      - /dev/kmsg:/dev/kmsg
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:ro
      - /sys:/sys:ro
      - /var/lib/docker:/var/lib/docker:ro # Needed to collect container stats
      # - /cgroup:/cgroup:ro # uncomment if needed for your Linux distribution
      - /etc/machine-id:/etc/machine-id:ro # Added for cAdvisor
      - /etc/hostname:/etc/hostname:ro     # Added for cAdvisor
    restart: unless-stopped
    expose:
      - 8080
    networks:
      - monitor-net
    labels:
      org.label-schema.group: "monitoring"

  grafana:
    image: grafana/grafana:11.5.1 # Using your specified version
    container_name: grafana
    volumes:
      - grafana_data:/var/lib/grafana
      - ./grafana/provisioning/dashboards:/etc/grafana/provisioning/dashboards
      - ./grafana/provisioning/datasources:/etc/grafana/provisioning/datasources
    environment:
      - GF_SECURITY_ADMIN_USER=${ADMIN_USER:-admin}   # Use environment variables for credentials
      - GF_SECURITY_ADMIN_PASSWORD=${ADMIN_PASSWORD:-admin} # Use environment variables for credentials
      - GF_USERS_ALLOW_SIGN_UP=false
    restart: unless-stopped
    expose:
      - 3000
    networks:
      - monitor-net
    labels:
      org.label-schema.group: "monitoring"
    # Added dependencies for VictoriaLogs
    depends_on:
      - prometheus
      - victorialogs

  pushgateway:
    image: prom/pushgateway:v1.11.0 # Using your specified version
    container_name: pushgateway
    restart: unless-stopped
    expose:
      - 9091
    networks:
      - monitor-net
    labels:
      org.label-schema.group: "monitoring"

  caddy:
    image: caddy:2.9.1 # Using your specified version
    container_name: caddy
    ports:
      - "3000:3000"
      - "8080:8080"
      - "9090:9090"
      - "9093:9093"
      - "9091:9091"
      - "9428:9428" # Added port for VictoriaLogs
    volumes:
      - ./caddy:/etc/caddy
    environment:
      - ADMIN_USER=${ADMIN_USER:-admin}
      - ADMIN_PASSWORD=${ADMIN_PASSWORD:-admin}
      - ADMIN_PASSWORD_HASH=${ADMIN_PASSWORD_HASH:-$2a$14$1l.IozJx7xQRVmlkEQ32OeEEfP5mRxTpbDTCTcXRqn19gXD8YK1pO}
    restart: unless-stopped
    networks:
      - monitor-net
    labels:
      org.label-schema.group: "monitoring"
    # Added dependency for VictoriaLogs to be available before Caddy routes traffic
    depends_on:
      - prometheus
      - alertmanager
      - grafana
      - pushgateway
      - victorialogs # Caddy depends on VictoriaLogs if routing to it

  # Added VictoriaLogs Service
  victorialogs:
    image: victoriametrics/victoria-logs:latest # Using latest stable version
    container_name: victorialogs
    volumes:
      - victorialogs_data:/victoria-logs-data # Mount volume for storing log data
      # - ./victorialogs/config:/config # Mount config if you need a custom one
    ports:
      - 9428:9428 # Default VictoriaLogs HTTP port for ingestion and querying
    networks:
      - monitor-net
    restart: unless-stopped
    command:
      - '-storageDataPath=/victoria-logs-data'
      # Add any other VictoriaLogs command-line flags here, e.g.:
      # - '-retentionPeriod=30d' # Example: retain logs for 30 days
      # - '-logNewStreams' # Log newly created streams (useful for debugging ingestion)

  # Added Vector Service for Log Collection
  vector:
    image: timberio/vector:latest-alpine # Using latest stable alpine version
    container_name: vector
    volumes:
      - ./vector/vector.yaml:/etc/vector/vector.yaml:ro # Mount Vector configuration
      # Mount host paths needed by the docker_logs source
      # /var/lib/docker/containers is the standard location for Docker's json-file logs
      - /var/lib/docker/containers/:/var/lib/docker/containers/:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro # Access Docker API for logs and metadata
      - vector_data:/vector-data # Volume for Vector's persistent data (e.g., positions file)
      # Optional: Mount /var/log if you need to collect other host logs
      # - /var/log/:/var/log/:ro
    networks:
      - monitor-net
    restart: unless-stopped
    # Vector needs VictoriaLogs to send logs to
    depends_on:
      - victorialogs


@waitesgithub (Author) commented

# vector/vector.yaml

# Global settings (optional)
# global:
#   data_dir: /vector-data # Ensure this matches the volume mount in docker-compose.yml

# Sources: Define where logs are collected from

# Source 1: Collect logs from Docker containers
# This source reads logs directly from the Docker daemon's logging facility.
sources:
  docker_logs:
    type: docker_logs
    # Optional: only collect logs from containers carrying specific object labels.
    # include_labels:
    #   - "io.victoriametrics.logs.enabled=true"
    # Exclude the Vector container itself to prevent logging loops.
    exclude_containers:
      - 'vector'
      # Add other container names to exclude if needed (e.g., monitoring components)
      # - 'prometheus'
      # - 'grafana'
  # Source 2: Collect logs from specific host files
  # This source tails log files on the host system.
  host_files:
    type: file
    include:
      # Specify the absolute paths to the log files you want to collect.
      # Remember that these paths are *inside the Vector container*,
      # so they must correspond to the host directories mounted into the container.
      - "/var/log/syslog"        # Common path for system logs on Debian/Ubuntu
      - "/var/log/auth.log"      # Common path for authentication logs
      - "/var/log/daemon.log"    # Common path for daemon logs
      - "/var/log/kern.log"      # Common path for kernel logs
      - "/var/log/messages"      # Common path for system logs on CentOS/RHEL
      - "/var/log/secure"        # Common path for authentication logs on CentOS/RHEL
      # Add paths to your specific application log files here:
      # - "/var/log/myapp/access.log"
      # - "/opt/myotherapp/logs/*.log" # Supports glob patterns
    exclude:
      # Optional: List of patterns to exclude files from the 'include' list.
      # Example: Exclude compressed or old log files.
      - "*.gz"
      - "*.old"
    read_from: beginning # Start reading from the beginning of the file on first start
    # If you need to add custom fields to logs from this source (e.g., to identify the host)
    # You can use transforms or add fields at the sink level.
    # Example: Add a 'source_type' field
    # transforms:
    #   add_host_fields:
    #     type: add_fields
    #     inputs: [host_files]
    #     fields:
    #       source_type: "host_file"
    #       hostname: "{{hostname}}" # Requires hostname to be available in Vector's environment

# Transform: (Optional) Process logs before sending
# You can add transforms here to parse logs, add/remove fields, etc.
# If you want to apply a transform to logs from *both* sources, list both sources under 'inputs'.
# transforms:
#   parse_json_messages:
#     type: json_parser
#     inputs: [docker_logs, host_files] # Apply this transform to logs from both sources
#     field: message # Assuming the log message is often in a 'message' field
#     target: . # Parse the JSON found in the 'message' field into the top level

# Sink: Send logs to VictoriaLogs via Elasticsearch sink
# This sink is configured to receive logs from multiple sources.
sinks:
  victorialogs_sink:
    type: elasticsearch
    # List the sources (or transforms) that should send data to this sink.
    inputs:
      - docker_logs  # Receive logs from the docker_logs source
      - host_files   # Receive logs from the host_files source
      # If you used a transform, list the transform here instead:
      # - parse_json_messages
      # - add_host_fields # Example if you added a transform for host files

    endpoints:
      - http://victorialogs:9428/insert/elasticsearch/ # VictoriaLogs Elasticsearch-compatible ingestion endpoint
    api_version: v8 # Use v8 as recommended by VictoriaLogs documentation
    encoding:
      codec: json # Send data as JSON
    healthcheck:
      enabled: false # Disable Vector's healthcheck if needed for this sink

    # Optional: Add query parameters to the ingestion URL
    # These can influence how VictoriaLogs processes the incoming logs.
    # See VictoriaLogs data ingestion documentation for available parameters.
    # query:
      # _msg_field: message # Explicitly tell VL which field contains the log message
      # _time_field: timestamp # Explicitly tell VL which field contains the timestamp
      # _stream_fields: container_name,image_name,job,host # Example: Use these fields for streams (VictoriaLogs handles high cardinality well)
      # debug: "1" # Uncomment to enable debug logging in VictoriaLogs for incoming events

    # Optional: Add headers to the ingestion request (e.g., for multi-tenancy)
    # request:
    #   headers:
    #     AccountID: "12"
    #     ProjectID: "34"

# Ensure data_dir is set for the file source to store positions
data_dir: /vector-data # Must match the volume mount in docker-compose.yml
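Once the stack is up, ingestion can be smoke-tested from the host by querying VictoriaLogs' LogsQL HTTP API (port 9428 is published in the compose file above):

```shell
# Return up to 10 recent log entries matching anything
curl http://localhost:9428/select/logsql/query -d 'query=*' -d 'limit=10'
```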

