
Commit edd6ef7

docs: add scorecard integration (#94)
1 parent baad8c8 commit edd6ef7

3 files changed: +108 -3 lines changed

mint.json

Lines changed: 2 additions & 3 deletions
```diff
@@ -128,6 +128,7 @@
         "openllmetry/integrations/newrelic",
         "openllmetry/integrations/otel-collector",
         "openllmetry/integrations/oraclecloud",
+        "openllmetry/integrations/scorecard",
         "openllmetry/integrations/service-now",
         "openllmetry/integrations/signoz",
         "openllmetry/integrations/sentry",
@@ -183,9 +184,7 @@
     },
     {
       "group": "Costs",
-      "pages": [
-        "api-reference/costs/property_costs"
-      ]
+      "pages": ["api-reference/costs/property_costs"]
     }
   ],
   "redirects": [
```

openllmetry/integrations/introduction.mdx

Lines changed: 1 addition & 0 deletions
```diff
@@ -40,6 +40,7 @@ in any observability platform that supports OpenTelemetry.
     title="Oracle Cloud"
     href="/openllmetry/integrations/oraclecloud"
   ></Card>
+  <Card title="Scorecard" href="/openllmetry/integrations/scorecard"></Card>
   <Card
     title="Service Now Cloud Observability"
     href="/openllmetry/integrations/service-now"
```
openllmetry/integrations/scorecard.mdx

Lines changed: 105 additions & 0 deletions

@@ -0,0 +1,105 @@
---
title: "LLM Observability with Scorecard and OpenLLMetry"
sidebarTitle: "Scorecard"
---

Scorecard is an [AI evaluation and optimization platform](https://www.scorecard.io/) that helps teams build reliable AI systems with comprehensive testing, evaluation, and continuous monitoring capabilities.
## Setup

To integrate OpenLLMetry with Scorecard, you'll need to configure your tracing endpoint and authentication:

### 1. Get your Scorecard API Key

1. Visit your [Settings Page](https://app.scorecard.io/settings)
2. Copy your API Key
### 2. Configure Environment Variables

```bash
TRACELOOP_BASE_URL="https://tracing.scorecard.io/otel"
TRACELOOP_HEADERS="Authorization=Bearer <YOUR_SCORECARD_API_KEY>"
```
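If you would rather not manage a shell profile or .env file, a minimal sketch of an alternative is to set the same two variables in-process before initializing the SDK, assuming (as the sample below notes) that OpenLLMetry reads its configuration from the environment at init time:

```python
import os

# Set the Scorecard endpoint and auth header before Traceloop.init() runs;
# the SDK picks these values up from the environment when it initializes.
os.environ["TRACELOOP_BASE_URL"] = "https://tracing.scorecard.io/otel"
os.environ["TRACELOOP_HEADERS"] = "Authorization=Bearer <YOUR_SCORECARD_API_KEY>"
```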
### 3. Instrument your code

First, install OpenLLMetry and your LLM library:

<CodeGroup>
```sh Python
pip install traceloop-sdk openai
```

```sh JavaScript
npm install @traceloop/node-server-sdk openai
```
</CodeGroup>

Then initialize OpenLLMetry and structure your application using workflows and tasks:
<CodeGroup>
```py Python
from traceloop.sdk import Traceloop
from traceloop.sdk.decorators import workflow, task
from traceloop.sdk.instruments import Instruments
from openai import OpenAI

# Initialize OpenAI client
openai_client = OpenAI()

# Initialize OpenLLMetry (reads config from environment variables)
Traceloop.init(disable_batch=True, instruments={Instruments.OPENAI})

@workflow(name="simple_chat")
def simple_workflow():
    completion = openai_client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Tell me a joke"}]
    )
    return completion.choices[0].message.content

# Run the workflow - all LLM calls will be automatically traced
simple_workflow()
print("Check Scorecard for traces!")
```

```js JavaScript
import * as traceloop from "@traceloop/node-server-sdk";
import OpenAI from "openai";

// Initialize OpenAI client
const openai = new OpenAI();

// Initialize OpenLLMetry with automatic instrumentation
traceloop.initialize({
  disableBatch: true, // Ensures immediate trace sending
  instrumentModules: { openAI: OpenAI },
});

async function simpleWorkflow() {
  return await traceloop.withWorkflow({ name: "simple_chat" }, async () => {
    const completion = await openai.chat.completions.create({
      model: "gpt-4o-mini",
      messages: [{ role: "user", content: "Tell me a joke" }],
    });
    return completion.choices[0].message.content;
  });
}

// Run the workflow - all LLM calls will be automatically traced
await simpleWorkflow();
console.log("Check Scorecard for traces!");
```
</CodeGroup>
## Features

Once configured, you'll have access to Scorecard's comprehensive observability features:

- **Automatic LLM instrumentation** for popular libraries (OpenAI, Anthropic, etc.)
- **Structured tracing** with workflows and tasks using the `@workflow` and `@task` decorators (see the sketch after this list)
- **Performance monitoring** including latency, token usage, and cost tracking
- **Real-time evaluation** with continuous monitoring of AI system performance
- **Production debugging** with detailed trace analysis
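The quickstart above only uses `@workflow`. As a minimal sketch of how the two decorators compose (the function names here are hypothetical, and the same environment configuration is assumed), a `@task`-decorated function called from inside a `@workflow` appears as a nested span in the same trace:

```python
from traceloop.sdk import Traceloop
from traceloop.sdk.decorators import workflow, task
from openai import OpenAI

openai_client = OpenAI()
Traceloop.init(disable_batch=True)

@task(name="generate_joke")
def generate_joke(topic: str) -> str:
    # Task span: the LLM call below is attributed to this task
    completion = openai_client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": f"Tell me a joke about {topic}"}],
    )
    return completion.choices[0].message.content

@workflow(name="joke_workflow")
def joke_workflow():
    # Workflow span wrapping the task above
    return generate_joke(topic="OpenTelemetry")

joke_workflow()
```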
For more detailed setup instructions and examples, check out the [Scorecard Tracing Quickstart](https://docs.scorecard.io/intro/tracing-quickstart).
