
[Bug]: 'litellm_params' passed to OpenAI API causing 'Unknown parameter' error, works fine with Vertex AI #14901

@oleamm

Description:

I am encountering an issue when using the LiteLLM Python SDK (not the LiteLLM Proxy) with OpenAI models, where the internal litellm_params are erroneously passed to the OpenAI API endpoint, resulting in an OpenAIException - Unknown parameter: 'litellm_params' error. This issue does not occur when using Google Vertex AI models under similar circumstances.

Steps to Reproduce:

  1. Configure LiteLLM to use an OpenAI model (e.g., gpt-5-mini as seen in logs).
  2. Make a completion or acompletion call that includes litellm_params in the request (e.g., trace_metadata for Langfuse traceability); LiteLLM's internal processing then typically encapsulates these parameters within an extra_body dictionary.

Illustrative example of how req_params is constructed before calling acompletion:

import asyncio
from litellm import acompletion

# 'metadata' is a dictionary like {'cid': 277, 'mid': 4504};
# 'model_name' is an OpenAI model identifier, e.g., "openai/gpt-5-mini";
# 'request_messages' is a list of chat messages.
metadata = {"cid": 277, "mid": 4504}
model_name = "openai/gpt-5-mini"
request_messages = [{"role": "user", "content": "Hello"}]

req_params = {
    "model": model_name,
    "messages": request_messages,
    "tools": None,
    "temperature": 0.7,
    "stream": True,
    "litellm_params": {
        "metadata": {
            "trace_metadata": metadata,  # {'cid': 277, 'mid': 4504}
        }
    },
}

async def main():
    # The 'acompletion' call that triggers the issue for OpenAI
    async for chunk in await acompletion(**req_params):
        ...  # process streamed chunks

asyncio.run(main())
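
As a possible interim workaround, and assuming LiteLLM's documented top-level metadata kwarg (which logging integrations such as Langfuse read), the trace metadata can be passed directly instead of being nested under litellm_params. This is a sketch, not verified against this exact LiteLLM version:

import asyncio
from litellm import acompletion

async def run_with_metadata():
    # Workaround sketch: pass metadata as a top-level kwarg, which LiteLLM
    # consumes internally for its callbacks (e.g., Langfuse) rather than
    # forwarding it to the provider.
    stream = await acompletion(
        model="openai/gpt-5-mini",
        messages=[{"role": "user", "content": "Hello"}],
        temperature=0.7,
        stream=True,
        metadata={"trace_metadata": {"cid": 277, "mid": 4504}},
    )
    async for chunk in stream:
        ...  # process streamed chunks

asyncio.run(run_with_metadata())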

Expected Behavior:

LiteLLM should internally consume or strip the litellm_params (and the extra_body containing it) before forwarding the request to the external OpenAI API. The OpenAI API should receive only parameters it explicitly supports. This is the observed behavior when making calls to Google Vertex AI models, where litellm_params are correctly handled internally by LiteLLM and not exposed to the Vertex AI API.

Actual Behavior:

When making a request to an OpenAI model, the extra_body parameter, containing litellm_params, is included in the JSON payload sent to api.openai.com/v1. The OpenAI API responds with a 400 Bad Request error: Unknown parameter: 'litellm_params'.

Log Snippets (Relevant portions from the provided logs):

  • Successful Vertex AI Call (Excerpt):

    POST Request Sent from LiteLLM:
    curl -X POST \
    https://us-central1-aiplatform.googleapis.com/v1/projects/xxx/locations/us-central1/publishers/google/models/gemini-2.5-flash-lite:streamGenerateContent?alt=sse \
    -H 'Content-Type: application/json' -H 'Authorization: Be****u_' \
    -d '{'contents': [...], 'system_instruction': [...], 'tools': [...], 'generationConfig': {'temperature': 1.0, 'thinkingConfig': {'includeThoughts': True, 'thinkingBudget': 24576}}}'
    

    Note: litellm_params (or extra_body) is absent from the actual API payload.

  • Failed OpenAI Call (Excerpt):

    POST Request Sent from LiteLLM:
    curl -X POST \
    https://api.openai.com/v1 \
    -d '{'model': 'gpt-5-mini', 'messages': [...], 'temperature': 1.0, 'tools': [...], 'extra_body': {'litellm_params': {'metadata': {'trace_metadata': {'cid': 277, 'mid': 4504}}}}, 'stream': True, 'stream_options': {'include_usage': True}}'
    
    ERROR    root            - Error: LLM call failed - BadRequestError: litellm.BadRequestError: OpenAIException - Unknown parameter: 'litellm_params'.
    ...
    openai.BadRequestError: Error code: 400 - {'error': {'message': "Unknown parameter: 'litellm_params'.", 'type': 'invalid_request_error', 'param': 'litellm_params', 'code': 'unknown_parameter'}}
    

    Note: extra_body containing litellm_params is explicitly present in the logged request payload.

Root Cause Analysis (from debugging LiteLLM code):

The discrepancy lies in how LiteLLM's internal parameters (litellm_params for tracking, etc.) are handled for different providers:

  1. For OpenAI:

    • The function add_provider_specific_params_to_optional_params (in litellm/utils.py) correctly identifies OpenAI-compatible providers and collects additional non-OpenAI parameters into an extra_body dictionary. This extra_body is then placed into the optional_params dictionary.
    • Later, in litellm/llms/openai/openai.py, during the make_openai_chat_completion_request call (within completion/acompletion methods), the data dictionary (which contains optional_params, and thus extra_body) is unpacked directly into the openai-python client call using **data.
    • The openai-python client treats extra_body as a documented escape hatch: its contents are merged into the top-level JSON request body. litellm_params therefore reaches the OpenAI API as a top-level field, and since the API does not recognize it, the request is rejected with Unknown parameter: 'litellm_params' (consistent with the error's param field above).
  2. For Google Vertex AI (Gemini):

    • In contrast, for Vertex AI models (e.g., litellm/llms/vertex_ai/gemini/transformation.py's _transform_request_body), the request payload is constructed by explicitly mapping LiteLLM parameters to a Pydantic RequestBody model.
    • Any parameters not defined in this Pydantic model (such as litellm_params or a generic extra_body) are filtered out and never included in the final JSON sent to the Vertex AI API, ensuring that only API-supported parameters reach the endpoint (see the sketch after this list).
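
To make the contrast concrete, here is a minimal, self-contained sketch of the filtering behavior described above. The RequestBody below is a stand-in with hypothetical fields, not LiteLLM's actual model; it only demonstrates that Pydantic's default extra-field handling ('ignore') silently drops undeclared keys such as litellm_params at construction time.

from pydantic import BaseModel

class RequestBody(BaseModel):
    # Stand-in for the Vertex AI request model: only API-supported fields
    # are declared. Pydantic ignores undeclared constructor keys by default.
    contents: list
    generationConfig: dict | None = None

body = RequestBody(
    contents=[{"role": "user", "parts": [{"text": "hi"}]}],
    # 'litellm_params' is not a declared field, so it is silently dropped.
    litellm_params={"metadata": {"trace_metadata": {"cid": 277, "mid": 4504}}},
)

print(body.model_dump(exclude_none=True))
# {'contents': [{'role': 'user', 'parts': [{'text': 'hi'}]}]}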

Suggested Fix:

The litellm_params (and the encapsulating extra_body field) should be removed from the final data dictionary before it is passed as **data to the openai_aclient.chat.completions.create method in litellm/llms/openai/openai.py. LiteLLM should handle its internal parameters for OpenAI models with the same filtering strictness it applies to Vertex AI models.
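
A minimal sketch of such filtering, assuming a small helper inserted just before the client call (the helper name and call site below are hypothetical, not LiteLLM's actual code):

def strip_internal_params(data: dict) -> dict:
    """Drop LiteLLM-internal keys from the payload before it is
    unpacked into the openai-python client via **data."""
    internal_keys = {"litellm_params"}
    extra_body = data.get("extra_body")
    if isinstance(extra_body, dict):
        cleaned = {k: v for k, v in extra_body.items() if k not in internal_keys}
        if cleaned:
            data["extra_body"] = cleaned
        else:
            # Drop 'extra_body' entirely once nothing legitimate remains, so the
            # client does not merge internal keys into the wire request.
            data.pop("extra_body")
    return data

# Hypothetical call site in litellm/llms/openai/openai.py:
# data = strip_internal_params(data)
# response = await openai_aclient.chat.completions.create(**data)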

Environment:

  • LiteLLM Version: 1.77.1
  • Python Version: 3.11.x
  • Operating System: Windows 11
