
Commit 26e7067

Feat/docs add requesty ai provider (vercel#6660)
**Background**

Added documentation for the Requesty AI SDK provider to help developers integrate with Requesty's unified LLM gateway.

**Summary**

Created comprehensive documentation for the Requesty provider (`content/providers/03-community-providers/5-requesty.mdx`), including setup instructions, API key configuration, usage examples, and advanced features. Also added Requesty to the main providers list in the foundations documentation.

**Tasks**

- [x] Documentation has been added (for new provider)
- [x] Formatting issues have been fixed

---------

Co-authored-by: nicoalbanese <gcalbanese96@gmail.com>
1 parent 65e042a

File tree

2 files changed: +272 -0 lines changed

content/docs/02-foundations/02-providers-and-models.mdx

Lines changed: 1 addition & 0 deletions
@@ -63,6 +63,7 @@ The open-source community has created the following providers:
  - [Portkey Provider](/providers/community-providers/portkey) (`@portkey-ai/vercel-provider`)
  - [Cloudflare Workers AI Provider](/providers/community-providers/cloudflare-workers-ai) (`workers-ai-provider`)
  - [OpenRouter Provider](/providers/community-providers/openrouter) (`@openrouter/ai-sdk-provider`)
+ - [Requesty Provider](/providers/community-providers/requesty) (`@requesty/ai-sdk`)
  - [Crosshatch Provider](/providers/community-providers/crosshatch) (`@crosshatch/ai-provider`)
  - [Mixedbread Provider](/providers/community-providers/mixedbread) (`mixedbread-ai-provider`)
  - [Voyage AI Provider](/providers/community-providers/voyage-ai) (`voyage-ai-provider`)
content/providers/03-community-providers/5-requesty.mdx

Lines changed: 271 additions & 0 deletions
@@ -0,0 +1,271 @@
---
title: Requesty
description: Requesty Provider for the AI SDK
---

# Requesty

[Requesty](https://requesty.ai/) is a unified LLM gateway that provides access to over 300 large language models from leading providers like OpenAI, Anthropic, Google, Mistral, AWS, and more. The Requesty provider for the AI SDK enables seamless integration with all these models while offering enterprise-grade advantages:

- **Universal Model Access**: One API key for 300+ models from multiple providers
- **99.99% Uptime SLA**: Enterprise-grade infrastructure with intelligent failover and load balancing
- **Cost Optimization**: Pay-as-you-go pricing with intelligent routing and prompt caching to reduce costs by up to 80%
- **Advanced Security**: Prompt injection detection, end-to-end encryption, and GDPR compliance
- **Real-time Observability**: Built-in monitoring, tracing, and analytics
- **Intelligent Routing**: Automatic failover and performance-based routing
- **Reasoning Support**: Advanced reasoning capabilities with configurable effort levels

Learn more about Requesty's capabilities in the [Requesty Documentation](https://docs.requesty.ai).

## Setup

The Requesty provider is available in the `@requesty/ai-sdk` module. You can install it with:

<Tabs items={['pnpm', 'npm', 'yarn']}>
  <Tab>
    <Snippet text="pnpm add @requesty/ai-sdk" dark />
  </Tab>
  <Tab>
    <Snippet text="npm install @requesty/ai-sdk" dark />
  </Tab>
  <Tab>
    <Snippet text="yarn add @requesty/ai-sdk" dark />
  </Tab>
</Tabs>

## API Key Setup

For security, you should set your API key as an environment variable named exactly `REQUESTY_API_KEY`:

```bash
# Linux/Mac
export REQUESTY_API_KEY=your_api_key_here

# Windows Command Prompt
set REQUESTY_API_KEY=your_api_key_here

# Windows PowerShell
$env:REQUESTY_API_KEY="your_api_key_here"
```

You can obtain your Requesty API key from the [Requesty Dashboard](https://app.requesty.ai/api-keys).

## Provider Instance

You can import the default provider instance `requesty` from `@requesty/ai-sdk`:

```typescript
import { requesty } from '@requesty/ai-sdk';
```

Alternatively, you can create a custom provider instance using `createRequesty`:

```typescript
import { createRequesty } from '@requesty/ai-sdk';

const customRequesty = createRequesty({
  apiKey: 'YOUR_REQUESTY_API_KEY',
});
```

## Language Models

Requesty supports both chat and completion models with a simple, unified interface:

```typescript
// Using the default provider instance
const model = requesty('openai/gpt-4o');

// Using a custom provider instance
const customModel = customRequesty('anthropic/claude-3.5-sonnet');
```

You can find the full list of available models in the [Requesty Models documentation](https://requesty.ai/models).

## Examples

Here are examples of using Requesty with the AI SDK:

### `generateText`

```javascript
import { requesty } from '@requesty/ai-sdk';
import { generateText } from 'ai';

const { text } = await generateText({
  model: requesty('openai/gpt-4o'),
  prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});

console.log(text);
```

### `streamText`

```javascript
import { requesty } from '@requesty/ai-sdk';
import { streamText } from 'ai';

const result = streamText({
  model: requesty('anthropic/claude-3.5-sonnet'),
  prompt: 'Write a short story about AI.',
});

for await (const chunk of result.textStream) {
  console.log(chunk);
}
```

### `generateObject`

```javascript
import { requesty } from '@requesty/ai-sdk';
import { generateObject } from 'ai';
import { z } from 'zod';

const { object } = await generateObject({
  model: requesty('openai/gpt-4o'),
  schema: z.object({
    recipe: z.object({
      name: z.string(),
      ingredients: z.array(z.string()),
      steps: z.array(z.string()),
    }),
  }),
  prompt: 'Generate a recipe for chocolate chip cookies.',
});

console.log(object.recipe);
```

## Advanced Features

### Reasoning Support

Requesty provides advanced reasoning capabilities with configurable effort levels for supported models:

```javascript
import { createRequesty } from '@requesty/ai-sdk';
import { generateText } from 'ai';

const requesty = createRequesty({ apiKey: process.env.REQUESTY_API_KEY });

// Using reasoning effort
const { text, reasoning } = await generateText({
  model: requesty('openai/o3-mini', {
    reasoningEffort: 'medium',
  }),
  prompt: 'Solve this complex problem step by step...',
});

console.log('Response:', text);
console.log('Reasoning:', reasoning);
```

#### Reasoning Effort Values

- `'low'` - Minimal reasoning effort
- `'medium'` - Moderate reasoning effort
- `'high'` - High reasoning effort
- `'max'` - Maximum reasoning effort (Requesty-specific)
- Budget strings (e.g., `"10000"`) - Specific token budget for reasoning (see the sketch after the model list below)

#### Supported Reasoning Models

- **OpenAI**: `openai/o3-mini`, `openai/o3`
- **Anthropic**: `anthropic/claude-sonnet-4-0`, other Claude reasoning models
- **Deepseek**: All Deepseek reasoning models (automatic reasoning)

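To illustrate the budget-string form with one of the Anthropic models listed above, here is a minimal sketch. It mirrors the reasoning example earlier in this section; the specific `"10000"` budget is an illustrative assumption, and the budgets a given model accepts may vary:

```javascript
import { createRequesty } from '@requesty/ai-sdk';
import { generateText } from 'ai';

const requesty = createRequesty({ apiKey: process.env.REQUESTY_API_KEY });

// Budget-string form: request an explicit reasoning-token budget
// (the "10000" value here is illustrative, not a documented limit).
const { text, reasoning } = await generateText({
  model: requesty('anthropic/claude-sonnet-4-0', {
    reasoningEffort: '10000',
  }),
  prompt: 'Outline a migration plan from REST to GraphQL.',
});

console.log('Response:', text);
console.log('Reasoning:', reasoning);
```

As in the earlier example, `reasoning` contains the model's reasoning output when the provider returns it.
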
### Custom Configuration

Configure Requesty with custom settings:

```javascript
import { createRequesty } from '@requesty/ai-sdk';

const requesty = createRequesty({
  apiKey: process.env.REQUESTY_API_KEY,
  baseURL: 'https://router.requesty.ai/v1',
  headers: {
    'Custom-Header': 'custom-value',
  },
  extraBody: {
    custom_field: 'value',
  },
});
```

### Passing Extra Body Parameters

There are three ways to pass extra body parameters to Requesty:

#### 1. Via Provider Options

```javascript
await streamText({
  model: requesty('anthropic/claude-3.5-sonnet'),
  messages: [{ role: 'user', content: 'Hello' }],
  providerOptions: {
    requesty: {
      custom_field: 'value',
      reasoning_effort: 'high',
    },
  },
});
```

#### 2. Via Model Settings

```javascript
const model = requesty('anthropic/claude-3.5-sonnet', {
  extraBody: {
    custom_field: 'value',
  },
});
```

#### 3. Via Provider Factory

```javascript
const requesty = createRequesty({
  apiKey: process.env.REQUESTY_API_KEY,
  extraBody: {
    custom_field: 'value',
  },
});
```

## Enterprise Features

Requesty offers several enterprise-grade features:

1. **99.99% Uptime SLA**: Advanced routing and failover mechanisms keep your AI application online when other services fail.

2. **Intelligent Load Balancing**: Real-time performance-based routing automatically selects the best-performing providers.

3. **Cost Optimization**: Intelligent routing can reduce API costs by up to 40% while maintaining response quality.

4. **Advanced Security**: Built-in prompt injection detection, end-to-end encryption, and GDPR compliance.

5. **Real-time Observability**: Comprehensive monitoring, tracing, and analytics for all requests.

6. **Geographic Restrictions**: Comply with regional regulations through configurable geographic controls.

7. **Model Access Control**: Fine-grained control over which models and providers can be accessed.

## Key Benefits

- **Zero Downtime**: Automatic failover with \<50ms switching time
- **Multi-Provider Redundancy**: Seamless switching between healthy providers
- **Intelligent Queuing**: Retry logic with exponential backoff
- **Developer-Friendly**: Straightforward setup with unified API
- **Flexibility**: Switch between models and providers without code changes
- **Enterprise Support**: Available for high-volume users with custom SLAs

## Additional Resources

- [Requesty Provider Repository](https://github.com/requestyai/ai-sdk-requesty)
- [Requesty Documentation](https://docs.requesty.ai/)
- [Requesty Dashboard](https://app.requesty.ai/analytics)
- [Requesty Discord Community](https://discord.com/invite/Td3rwAHgt4)
- [Requesty Status Page](https://status.requesty.ai)
