Description
See the exception posted below. I believe I'm running out of tokens, even though the codebase is not big at all. Even if the prompt does exceed the token limit, I'd expect it to automatically truncate the prompt instead of throwing an error.
Interestingly, it works fine when I set model=gpt-4o, which supposedly has a smaller context window.
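The truncation behavior suggested above could look something like the sketch below. This is a hypothetical illustration, not patchwork's actual code: the `truncate_to_budget` helper, the character-per-token heuristic, and the budget constant are all assumptions. Note that a naive truncation must be careful to keep at least one non-empty message, otherwise the API rejects the request with the same "at least one message is required" error seen in the log.

```python
# Hypothetical sketch: trim prompt messages to fit a model's input token
# budget instead of failing. Token counts are approximated at roughly
# 4 characters per token; a real implementation would use the model's
# tokenizer. All names and constants here are illustrative.

MAX_INPUT_TOKENS = 200_000  # assumed budget for claude-3-5-sonnet
CHARS_PER_TOKEN = 4         # rough heuristic, not an exact tokenizer

def truncate_to_budget(messages, max_tokens=MAX_INPUT_TOKENS):
    """Keep messages in order, trimming the first one that overflows
    the budget and dropping the rest, so the request never 400s."""
    budget_chars = max_tokens * CHARS_PER_TOKEN
    used = 0
    kept = []
    for msg in messages:
        content = msg["content"]
        if used + len(content) > budget_chars:
            remaining = budget_chars - used
            if remaining > 0:
                # Partial keep: cut the overflowing message at the budget.
                kept.append({**msg, "content": content[:remaining]})
            break
        kept.append(msg)
        used += len(content)
    return kept
```

With a budget of 10 tokens (40 characters under this heuristic), a single 100-character message would be cut down to 40 characters rather than rejected outright.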
Please provide a link to a minimal reproduction of the bug
No response
Please provide the exception or error you saw
patchwork GenerateREADME folder_path=. model=claude-3-5-sonnet-latest anthropic_api_key=...
Patchflow GenerateREADME loaded from patchwork.patchflows
╭────────────────────────────────────────────────── Patchflow GenerateREADME inputs ──────────────────────────────────────────────────╮
│ Run started CallCode2Prompt │
│ Run completed CallCode2Prompt │
│ Run started LLM │
│ Run started PreparePrompt │
│ Run completed PreparePrompt │
│ Run started CallLLM │
│ Step CallLLM message: Input token limit exceeded. │
│ Step CallLLM failed │
│ Step LLM failed │
│ Error running patchflow GenerateREADME: Error code: 400 - {'type': 'error', 'error': {'type': 'invalid_request_error', 'message': │
│ 'messages: at least one message is required'}} │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
Finished GenerateREADME ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% 0:00:00 0:00:01
Anything else?
No response