Telemetry comes out of the box with all components. We use OpenTelemetry to collect metrics and traces across the entire codebase, which lets you troubleshoot and monitor the performance and latency of every plugin and processor used by your agents.
The simplest way to test your tracing setup is to run Jaeger locally with Docker; this makes it easy to verify that telemetry is working as expected. Because the library uses OpenTelemetry internally, all you need to do is install the exporter and set up the instrumentation.
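For example, you can start a local Jaeger all-in-one container with OTLP ingestion enabled. The image name and ports below are Jaeger's standard defaults (16686 for the UI, 4317/4318 for OTLP gRPC/HTTP), not specific to this library:

```bash
# Run Jaeger all-in-one with OTLP ingestion enabled.
# 16686 = Jaeger UI, 4317 = OTLP gRPC, 4318 = OTLP HTTP.
docker run --rm \
  -e COLLECTOR_OTLP_ENABLED=true \
  -p 16686:16686 -p 4317:4317 -p 4318:4318 \
  jaegertracing/all-in-one:latest
```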
Step 1 - Install the OpenTelemetry OTLP exporter
```bash
# with uv:
uv add opentelemetry-sdk opentelemetry-exporter-otlp

# or with pip:
pip install opentelemetry-sdk opentelemetry-exporter-otlp-proto-grpc
```
Step 2 - Set up tracing instrumentation in your code
Make sure to set up the instrumentation before you start the agent/server.
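A minimal sketch of that instrumentation, assuming Jaeger is receiving OTLP over gRPC on localhost:4317 (the Jaeger all-in-one default) and using a placeholder service name:

```python
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# "my-agent" is a placeholder; pick a service name that identifies your program.
provider = TracerProvider(resource=Resource.create({"service.name": "my-agent"}))

# Export spans in batches to the local Jaeger OTLP gRPC endpoint.
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4317", insecure=True))
)

# Register the provider globally so the library's internal tracer picks it up.
trace.set_tracer_provider(provider)
```

With this in place before the agent/server starts, the spans emitted internally should show up in the Jaeger UI at http://localhost:16686.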
Metrics are automatically collected for all plugins and processors. A common approach is to expose the metrics from your Python program to Prometheus.
Step 1 - Install the Prometheus exporter
```bash
# with uv:
uv add opentelemetry-exporter-prometheus prometheus-client

# or with pip:
pip install opentelemetry-exporter-prometheus prometheus-client
```
Step 2 - Set up metrics instrumentation in your code
Make sure to set up the instrumentation before you start the agent/server.
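A minimal sketch, assuming you want the scrape endpoint on port 9464 (matching the URL below):

```python
from prometheus_client import start_http_server
from opentelemetry import metrics
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.exporter.prometheus import PrometheusMetricReader

# Start the HTTP server that Prometheus will scrape, on port 9464.
start_http_server(port=9464)

# Bridge OpenTelemetry metrics into the prometheus_client registry.
reader = PrometheusMetricReader()

# Register the provider globally so the library's internal meters pick it up.
metrics.set_meter_provider(MeterProvider(metric_readers=[reader]))
```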
You can now see the metrics at http://localhost:9464/metrics (make sure your Python program keeps running). After that, configure your Prometheus server to scrape this endpoint.
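For reference, a hypothetical scrape entry for your prometheus.yml (the job name is arbitrary):

```yaml
scrape_configs:
  - job_name: "my-agent"             # placeholder job name
    static_configs:
      - targets: ["localhost:9464"]  # the endpoint exposed above
```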