...
You can find information about enabling this feature in our November 2024 release notes - November 2024 - Dashboard Release Notes
Example:
The configuration below will do the following:
Launch the Dashboard with the Java agent enabled: the application is started with the OpenTelemetry Java agent, configured to use the OTLP exporter. This allows the application to send telemetry data (traces and metrics) to the OpenTelemetry Collector for further processing.
Collect and process telemetry data: the OpenTelemetry Collector is set up to receive telemetry data from the application via the OTLP protocols (HTTP on port 4318 and gRPC on port 4317). It processes the incoming data using a batch processor, which ensures efficient handling of metrics.
Export metrics for monitoring: the processed metrics are then exposed for Prometheus to scrape at the endpoint 0.0.0.0:8889. This integration allows real-time monitoring and visualisation of the dashboard's performance.
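For context, the PI_TOMCAT_OBSERVABILITY_* environment variables used later in this guide drive this agent setup for you. The sketch below shows roughly what an equivalent manual configuration of the standard OpenTelemetry Java agent would look like; the jar path, the app.jar name and the exact mapping to the dashboard variables are assumptions for illustration only:

export OTEL_SERVICE_NAME="pi-dashboard"                          # Service name reported with the telemetry
export OTEL_EXPORTER_OTLP_ENDPOINT="http://otel-collector:4318"  # The collector's OTLP/HTTP endpoint
export OTEL_EXPORTER_OTLP_PROTOCOL="http/protobuf"               # Send OTLP over HTTP rather than gRPC
java -javaagent:/path/to/opentelemetry-javaagent.jar -jar app.jar  # Attach the agent to the JVM at startup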
First, we define the configuration file for the OpenTelemetry Collector. Example:
config.yml
receivers:
  otlp: # Defines the OTLP receiver to accept telemetry data (traces/metrics)
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

processors:
  batch/traces:
    timeout: 1s # The maximum time to wait before sending a batch of telemetry data
    send_batch_size: 50 # The maximum number of telemetry data items to send per batch

exporters:
  prometheus: # Defines an exporter for Prometheus to expose telemetry data
    endpoint: 0.0.0.0:8889 # The network address and port where Prometheus can scrape metrics

service:
  pipelines:
    metrics:
      receivers: [otlp] # The OTLP receiver is responsible for receiving the metrics data
      processors: [batch/traces] # The batch processor is used to process the received metrics
      exporters: [prometheus] # The processed metrics are exported to Prometheus
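Optionally, you can sanity-check the collector configuration before starting the stack. Recent OpenTelemetry Collector releases include a validate subcommand; a minimal sketch using the same image as in the Docker Compose file below (adjust the local path, and note that availability of the subcommand depends on your collector version):

docker run --rm \
  -v "$(pwd)/config.yml:/etc/config.yml" \
  otel/opentelemetry-collector \
  validate --config=/etc/config.yml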
We then define a configuration file for Prometheus. Example:
prometheus.yml
global:
  scrape_interval: 15s # Default time interval between scrapes for all jobs unless overridden
  evaluation_interval: 15s # Default time interval to evaluate alerting and recording rules

scrape_configs: # List of scrape configurations defining how Prometheus scrapes metrics from its targets
  - job_name: 'example'
    metrics_path: '/metrics' # Endpoint where metrics are exposed on the target
    scrape_interval: 5s # Overrides the global scrape_interval to scrape this job every 5 seconds
    static_configs:
      - targets: ['otel-collector:8889'] # The address of the target to scrape
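This file can also be checked before use with promtool, which ships inside the prom/prometheus image; a minimal sketch, assuming prometheus.yml is in your current directory:

docker run --rm \
  -v "$(pwd)/prometheus.yml:/etc/prometheus/prometheus.yml" \
  --entrypoint promtool \
  prom/prometheus:latest \
  check config /etc/prometheus/prometheus.yml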
We then put everything together in one Docker Compose file. This will include:
The latest version of the dashboard, with environment variables that enable the Java agent and specify the exporter endpoint and service name
An external database
The OpenTelemetry Collector
Prometheus
docker-compose.yml
services:
  dashboard:
    image: registry.panintelligence.cloud/panintelligence/dashboard/pi:latest
    ports:
      - 8226:8226
    environment:
      PI_DB_HOST: database
      PI_DB_PASSWORD: password
      PI_DB_USERNAME: root
      PI_DB_SCHEMA_NAME: dashboard
      PI_DB_PORT: 3306
      PI_EXTERNAL_DB: "true"
      PI_LICENCE: ae8360ce-d208-4daa-b776-8022f37ff150
      PI_TOMCAT_OBSERVABILITY_ENABLE_JAVA_AGENT: "true"
      PI_TOMCAT_OBSERVABILITY_EXPORTER_ENDPOINT: "http://otel-collector:4318"
      PI_TOMCAT_OBSERVABILITY_SERVICE_NAME: "pi-dashboard"
      PI_TOMCAT_PORT: 8226
    healthcheck:
      test: [ "CMD", "/bin/bash", "/var/panintelligence/tomcat_healthcheck.sh" ]
      interval: 10s
      start_period: 60s
      retries: 3

  database:
    image: mariadb:10.9.4
    environment:
      MARIADB_DATABASE: dashboard
      MARIADB_ROOT_PASSWORD: password
      LANG: C.UTF-8
    command: --lower_case_table_names=1 --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
    restart: always
    ports:
      - "3306:3306"

  otel-collector:
    image: otel/opentelemetry-collector
    container_name: otel-collector
    command: [ "--config=/etc/config.yml" ]
    volumes:
      - /path_to_config_files/config.yml:/etc/config.yml # Mounts the local collector configuration file into the container at the specified path
    ports:
      - "4317:4317" # Maps port 4317 for the OTLP gRPC protocol for receiving telemetry data
      - "4318:4318" # Maps port 4318 for the OTLP HTTP protocol for receiving telemetry data
      - "8889:8889" # Maps port 8889 for Prometheus to scrape metrics from the collector

  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    ports:
      - "9090:9090" # Maps port 9090 for accessing the Prometheus web interface
    volumes:
      - /path_to_prometheus_config/prometheus.yml:/etc/prometheus/prometheus.yml # Mounts the local Prometheus configuration file into the container
Things to note:
All three files (config.yml, prometheus.yml and docker-compose.yml) must exist for this example to work successfully
Amend the volumes for otel-collector and prometheus so that they point to the correct paths in your local directory
Make sure all the ports specified are available
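With the three files in place (the sketch below assumes they all sit in your current working directory and that the volume paths in docker-compose.yml have been updated to match), bring the stack up and confirm everything starts cleanly:

docker compose up -d                 # Start the dashboard, database, collector and Prometheus in the background
docker compose ps                    # Confirm all four containers are up (the dashboard may take a minute to pass its healthcheck)
docker compose logs otel-collector   # Optional: check that the collector started its OTLP receivers and Prometheus exporter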
After running docker compose up, navigate to the dashboard and log in to make sure everything is working as expected.
Navigate to http://localhost:8889/metrics - you will see the metrics that the OpenTelemetry Collector's Prometheus exporter has exposed for scraping. Example: