Fix inconsistent token configs for gpt-5 models #14942
Merged · +26 −26
Title
Fix inconsistencies in token limits for gpt-5 models
Relevant issues
Fixes #13853
Fixes #14930
Fixes #14931
Pre-Submission checklist
Please complete all items before asking a LiteLLM maintainer to review your PR
- Added testing in the tests/litellm/ directory (adding at least 1 test is a hard requirement - see details)
- My PR passes all unit tests on make test-unit
Type
🐛 Bug Fix
Changes
All of the GPT-5 models (including the newer gpt-5-codex) have 400k-token context windows, i.e. 272k input tokens plus 128k output tokens. For some gpt-5 variants in the config here, though, `max_input_tokens` is set to 400k; it should be 272k.
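For illustration, the fix amounts to changes like the following in the model cost/context map. This is a sketch assuming the limits live in LiteLLM's model_prices_and_context_window.json; the exact entries touched by this PR may differ, and unrelated fields (pricing, provider, mode) are omitted:

```diff
 "gpt-5-codex": {
     "max_tokens": 128000,
-    "max_input_tokens": 400000,
+    "max_input_tokens": 272000,
     "max_output_tokens": 128000,
 },
```

With this, `max_input_tokens` + `max_output_tokens` adds up to the full 400k context window (272000 + 128000 = 400000).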