* update `max_tokens` position and doc comment to match the OpenAI docs; add `user` field in `CreateChatCompletionRequest`
* add `language` to `CreateTranscriptionRequest`
* sync openapi spec from openai-openapi
Diff excerpts from the Rust request types. In `CreateChatCompletionRequest`, `max_tokens` is removed from its old position before `temperature`:

```diff
-    /// The maximum number of [tokens](/tokenizer) to generate in the completion.
-    ///
-    /// The token count of your prompt plus `max_tokens` cannot exceed the model's context length. Most models have a context length of 2048 tokens (except for the newest models, which support 4096).
-    #[serde(skip_serializing_if = "Option::is_none")]
-    pub max_tokens: Option<u16>,
-
     /// What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
     ///
     /// We generally recommend altering this or `top_p` but not both.
```

and re-added, with the doc comment from the OpenAI docs, before `presence_penalty` (near the `logit_bias` and `user` field docs):

```diff
+    /// The maximum number of tokens allowed for the generated answer. By default, the number of tokens the model can return will be (4096 - prompt tokens).
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub max_tokens: Option<u16>, // default: inf
+
     /// Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
     ///
     /// [See more information about frequency and presence penalties.](https://platform.openai.com/docs/api-reference/parameter-details)

     /// Accepts a json object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.

     /// A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. [Learn more](https://platform.openai.com/docs/guides/safety-best-practices/end-user-ids).
```

In `CreateTranscriptionRequest`, the `language` doc comment is added after `temperature`:

```diff
     /// The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use [log probability](https://en.wikipedia.org/wiki/Log_probability) to automatically increase the temperature until certain thresholds are hit.
     pub temperature: Option<f32>, // default: 0
+
+    /// The language of the input audio. Supplying the input language in [ISO-639-1](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes) format will improve accuracy and latency.
```
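The relocated doc comment encodes a simple budget rule for chat completions: when `max_tokens` is unset, the answer may use whatever remains of the 4096-token context after the prompt. A minimal sketch of that rule; `effective_max_tokens` is a hypothetical helper for illustration, not part of this crate:

```rust
// Hypothetical helper illustrating the documented default:
// if `max_tokens` is None, the model may return up to
// (context_length - prompt_tokens) tokens.
fn effective_max_tokens(max_tokens: Option<u16>, prompt_tokens: u16) -> u16 {
    // Assumed context window for the chat model, per the doc comment.
    const CONTEXT_LENGTH: u16 = 4096;
    max_tokens.unwrap_or(CONTEXT_LENGTH.saturating_sub(prompt_tokens))
}

fn main() {
    // An explicit cap wins; otherwise the remainder of the window is used.
    assert_eq!(effective_max_tokens(Some(256), 100), 256);
    assert_eq!(effective_max_tokens(None, 100), 3996);
}
```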
openapi.yaml: 42 additions & 33 deletions
Most hunks are whitespace-only cleanups (the removed and added lines differ only in trailing whitespace); the substantive additions are `max_tokens` in `CreateChatCompletionRequest` and `language` in `CreateTranscriptionRequest`:

```diff
@@ -2223,7 +2223,7 @@ components:
            A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. [Learn more](/docs/guides/safety-best-practices/end-user-ids).
      required:
        - model
-
+
    CreateCompletionResponse:
      type: object
      properties:
@@ -2275,11 +2275,11 @@ components:
              type: integer
            total_tokens:
              type: integer
-          required:
+          required:
            - prompt_tokens
            - completion_tokens
            - total_tokens
-      required:
+      required:
        - id
        - object
        - created
@@ -2299,7 +2299,7 @@ components:
        name:
          type: string
          description: The name of the user in a multi-user chat
-      required:
+      required:
        - role
        - content

@@ -2313,7 +2313,7 @@ components:
        content:
          type: string
          description: The contents of the message
-      required:
+      required:
        - role
        - content

@@ -2372,6 +2372,11 @@ components:
          maxItems: 4
          items:
            type: string
+        max_tokens:
+          description: |
+            The maximum number of tokens allowed for the generated answer. By default, the number of tokens the model can return will be (4096 - prompt tokens).
+          default: inf
+          type: integer
        presence_penalty:
          type: number
          default: 0
@@ -2431,11 +2436,11 @@ components:
              type: integer
            total_tokens:
              type: integer
-          required:
+          required:
            - prompt_tokens
            - completion_tokens
            - total_tokens
-      required:
+      required:
        - id
        - object
        - created
@@ -2536,11 +2541,11 @@ components:
              type: integer
            total_tokens:
              type: integer
-          required:
+          required:
            - prompt_tokens
            - completion_tokens
            - total_tokens
-      required:
+      required:
        - object
        - created
        - choices
@@ -2690,7 +2695,7 @@ components:
                  type: boolean
                violence/graphic:
                  type: boolean
-              required:
+              required:
                - hate
                - hate/threatening
                - self-harm
@@ -2715,19 +2720,19 @@ components:
                  type: number
                violence/graphic:
                  type: number
-              required:
+              required:
                - hate
                - hate/threatening
                - self-harm
                - sexual
                - sexual/minors
                - violence
                - violence/graphic
-            required:
+            required:
              - flagged
              - categories
              - category_scores
-      required:
+      required:
        - id
        - model
        - results
@@ -2810,7 +2815,7 @@ components:
          type: array
          items:
            $ref: '#/components/schemas/OpenAIFile'
-      required:
+      required:
        - object
        - data

@@ -2845,7 +2850,7 @@ components:
          type: string
        deleted:
          type: boolean
-      required:
+      required:
        - id
        - object
        - deleted
@@ -3249,7 +3254,7 @@ components:
          type: array
          items:
            $ref: '#/components/schemas/FineTune'
-      required:
+      required:
        - object
        - data

@@ -3262,7 +3267,7 @@ components:
          type: array
          items:
            $ref: '#/components/schemas/FineTuneEvent'
-      required:
+      required:
        - object
        - data

@@ -3322,7 +3327,7 @@ components:
          type: array
          items:
            type: number
-      required:
+      required:
        - index
        - object
        - embedding
@@ -3333,10 +3338,10 @@ components:
              type: integer
            total_tokens:
              type: integer
-          required:
+          required:
            - prompt_tokens
            - total_tokens
-      required:
+      required:
        - object
        - model
        - data
@@ -3346,12 +3351,12 @@ components:
      type: object
      additionalProperties: false
      properties:
-        file:
+        file:
          description: |
            The audio file to transcribe, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
          type: string
          format: binary
-        model:
+        model:
          description: |
            ID of the model to use. Only `whisper-1` is currently available.
          type: string
@@ -3369,29 +3374,33 @@ components:
            The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use [log probability](https://en.wikipedia.org/wiki/Log_probability) to automatically increase the temperature until certain thresholds are hit.
          type: number
          default: 0
+        language:
+          description: |
+            The language of the input audio. Supplying the input language in [ISO-639-1](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes) format will improve accuracy and latency.
+          type: string
      required:
        - file
        - model

-    # Note: This does not currently support the non-default response format types.
+    # Note: This does not currently support the non-default response format types.
    CreateTranscriptionResponse:
      type: object
      properties:
        text:
          type: string
-      required:
+      required:
        - text

    CreateTranslationRequest:
      type: object
      additionalProperties: false
      properties:
-        file:
+        file:
          description: |
            The audio file to translate, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
          type: string
          format: binary
-        model:
+        model:
          description: |
            ID of the model to use. Only `whisper-1` is currently available.
          type: string
@@ -3413,13 +3422,13 @@ components:
        - file
        - model

-    # Note: This does not currently support the non-default response format types.
+    # Note: This does not currently support the non-default response format types.
```