Conversation

@danielmklein (Contributor) commented Sep 26, 2025

Title

Fix inconsistencies in token limits for gpt-5 models

Relevant issues

Fixes #13853
Fixes #14930
Fixes #14931

Pre-Submission checklist

Please complete all items before asking a LiteLLM maintainer to review your PR

  • I have added testing in the tests/litellm/ directory (adding at least 1 test is a hard requirement)
  • I have added a screenshot of my new test passing locally
  • My PR passes all unit tests on make test-unit
  • My PR's scope is as isolated as possible, it only solves 1 specific problem

Type

🐛 Bug Fix

Changes

All of the GPT-5 models (including the newer gpt-5-codex) have a 400k-token context window, which breaks down as 272k input tokens plus 128k output tokens. Some gpt-5 variants in the config here, however, have max_input_tokens set to 400k; it should be 272k.
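The arithmetic above can be sketched as a quick consistency check. This is a minimal illustration, not the PR's actual test: the field names (max_input_tokens, max_output_tokens) mirror LiteLLM's model cost map, but the config excerpt below is hypothetical, with one entry reproducing the bug and one showing the corrected values.

```python
GPT5_CONTEXT_WINDOW = 400_000  # total context window for gpt-5 models

# Hypothetical excerpt: "gpt-5-buggy" has max_input_tokens set to the full
# window (the bug this PR fixes); "gpt-5-fixed" uses the corrected 272k.
model_config = {
    "gpt-5-buggy": {"max_input_tokens": 400_000, "max_output_tokens": 128_000},
    "gpt-5-fixed": {"max_input_tokens": 272_000, "max_output_tokens": 128_000},
}

def violations(config: dict, window: int = GPT5_CONTEXT_WINDOW) -> list[str]:
    """Return model names whose input + output limits exceed the context window."""
    return [
        name
        for name, limits in config.items()
        if limits["max_input_tokens"] + limits["max_output_tokens"] > window
    ]

print(violations(model_config))  # ['gpt-5-buggy']
```

A check like this over the full model map would catch any other variants where the two limits sum past the window.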

vercel bot commented Sep 26, 2025

The latest updates on your projects.

Project    Deployment    Preview    Comments    Updated (UTC)
litellm    Error         Error                  Sep 26, 2025 3:30pm

@krrishdholakia krrishdholakia merged commit f8a6434 into BerriAI:main Sep 27, 2025
3 of 6 checks passed
@bachya (Contributor) commented Sep 30, 2025

I don't know that this is correct: #15057
