Why Use Automated Monitoring?
- Effortless Integration: Add comprehensive monitoring to your application in just a few lines of code.
- Complete Data Capture: Automatically track requests, responses, latency, token usage, and costs without manual effort.
- Real-time Compliance: Seamlessly integrate with the Compliance Engine to scan all LLM responses for potential violations.
- Production Ready: Designed for high performance with efficient batching and asynchronous processing, ensuring minimal impact on your application’s latency.
Supported Integrations
We provide dedicated monitors for the most popular LLM providers in the financial services industry.

Anthropic
Automatic monitoring for all Claude models, including claude-3-5-sonnet.

OpenAI
Full support for GPT models, including gpt-4o and gpt-4-turbo.

How It Works
The automated monitoring process is straightforward:

1. Choose the Right Monitor
Instead of the base AgentMonitor, you instantiate a provider-specific monitor, such as AnthropicAgentMonitor or OpenAIAgentMonitor.

2. Register Your Agent
Just as with manual tracking, you register your agent’s profile, including its model and compliance settings.

3. Wrap Your LLM Client
Use the provided wrapper method (e.g., wrapAnthropic or wrapOpenAI) on your existing LLM client instance.

4. Use as Normal
Use the newly created “monitored” client exactly as you would the original. The wrapper intercepts the API calls, captures the data, and then passes the request to the provider.
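The interception in step 4 can be illustrated with a self-contained sketch. Everything below is an assumption for illustration — the stub client, the `CallRecord` shape, and `wrapClient` are stand-ins; only the wrap-then-forward pattern mirrors what the real provider wrappers do:

```typescript
interface LLMResponse {
  content: string;
  usage: { inputTokens: number; outputTokens: number };
}

// Minimal client interface; a real provider SDK exposes a richer one.
type Client = { createMessage(prompt: string): Promise<LLMResponse> };

// Stub standing in for a real provider client (e.g. the Anthropic SDK).
const stubClient: Client = {
  async createMessage(prompt) {
    return { content: `echo: ${prompt}`, usage: { inputTokens: 4, outputTokens: 5 } };
  },
};

interface CallRecord { latencyMs: number; totalTokens: number }

// Simplified "monitor": returns a drop-in client that records each call.
function wrapClient(client: Client, records: CallRecord[]): Client {
  return {
    async createMessage(prompt) {
      const start = Date.now();
      const response = await client.createMessage(prompt); // pass through to the provider
      records.push({
        latencyMs: Date.now() - start,
        totalTokens: response.usage.inputTokens + response.usage.outputTokens,
      });
      return response; // caller sees the unmodified provider response
    },
  };
}
```

Because the wrapped client has the same shape as the original, application code does not change — which is the point of step 4.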
Example: Wrapping the Anthropic Client
This example shows how simple it is to add monitoring to an existing application using the Anthropic SDK.

What’s Tracked Automatically?
By using an integration wrapper, you automatically capture the following for each LLM call:

Conversation Events
- conversation_start: Triggered on the first interaction of a new session.
- user_message: The content sent to the LLM.
- agent_response: The content received from the LLM.
- tool_call/function_call: Any tools or functions the LLM decides to use, including parameters.
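One way to picture the event stream a single wrapped call can emit — the field names below are illustrative assumptions, not the SDK’s actual schema:

```typescript
// Illustrative event shapes; field names are assumptions for this sketch.
type ConversationEvent =
  | { type: "conversation_start"; sessionId: string; timestamp: number }
  | { type: "user_message"; content: string }
  | { type: "tool_call"; name: string; parameters: Record<string, unknown> }
  | { type: "agent_response"; content: string };

// A wrapped call that triggers a tool might produce a sequence like:
const events: ConversationEvent[] = [
  { type: "conversation_start", sessionId: "sess-1", timestamp: Date.now() },
  { type: "user_message", content: "What is my account balance?" },
  { type: "tool_call", name: "get_balance", parameters: { accountId: "a-42" } },
  { type: "agent_response", content: "Your balance is $1,024.00." },
];
```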
Performance Metrics
- Latency: The end-to-end duration of the LLM API call in milliseconds.
- Token Usage: A detailed breakdown of input, output, and total tokens.
- Cost: The calculated cost of the API call based on the specific model’s pricing.
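The cost figure follows directly from token usage and per-model pricing. A sketch of that arithmetic — the prices below are illustrative placeholders, so always check the provider’s current rate card:

```typescript
// Hypothetical per-million-token prices (USD) — check the provider's
// published pricing; these numbers exist only to make the math concrete.
const pricing: Record<string, { inputPerM: number; outputPerM: number }> = {
  "claude-3-5-sonnet": { inputPerM: 3.0, outputPerM: 15.0 },
  "gpt-4o": { inputPerM: 2.5, outputPerM: 10.0 },
};

// Cost of one call: each token class priced at its per-million rate.
function callCostUsd(model: string, inputTokens: number, outputTokens: number): number {
  const p = pricing[model];
  if (!p) throw new Error(`no pricing entry for ${model}`);
  return (inputTokens * p.inputPerM + outputTokens * p.outputPerM) / 1_000_000;
}
```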
Compliance & Risk
- PII Violations: If enableComplianceChecks is true, responses are scanned for PII.
- Fair Lending Violations: Checks for discriminatory language.
- BSA/AML Keywords: Monitors for suspicious activity indicators.
- Risk Score: A calculated score based on any detected violations.
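To make the violation-to-score relationship concrete, here is a greatly simplified sketch — the real Compliance Engine is far more sophisticated; only the three check names mirror this page, and the patterns and weights are invented for illustration:

```typescript
// Toy compliance scan: pattern and weight values are illustrative only.
const checks: { name: string; pattern: RegExp; weight: number }[] = [
  // PII: matches a US Social Security number shape.
  { name: "pii_ssn", pattern: /\b\d{3}-\d{2}-\d{4}\b/, weight: 40 },
  // Fair lending: crude example of prohibited-basis language.
  { name: "fair_lending", pattern: /\bbecause of your (race|religion|gender)\b/i, weight: 50 },
  // BSA/AML: one suspicious-activity keyword.
  { name: "bsa_aml", pattern: /\bstructuring\b/i, weight: 30 },
];

// Score each response from its detected violations, capped at 100.
function riskScore(response: string): { score: number; violations: string[] } {
  const hits = checks.filter((c) => c.pattern.test(response));
  const raw = hits.reduce((sum, c) => sum + c.weight, 0);
  return { score: Math.min(100, raw), violations: hits.map((c) => c.name) };
}
```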
While automated monitoring is powerful, you can still use manual tracking methods like monitor.trackToolCall() or monitor.trackError() alongside the wrappers to add even more context to your workflows.

Next Steps
Anthropic Integration Guide
Dive deeper into monitoring Anthropic’s Claude models.
OpenAI Integration Guide
Learn the specifics of monitoring OpenAI’s GPT models.
Manual Event Tracking
Explore how to track custom events for unsupported providers or complex workflows.
Compliance Engine
Understand how the compliance engine works with automated monitoring.