feat: add thinking mode support to fact retrieval messages #3626
base: main
Conversation
- Add thinking mode configuration to memory utils
- Create settings module for configuration management
- Update configs `__init__.py` to export settings
- Fix failing test for thinking mode in fact retrieval
```python
        return AGENT_MEMORY_EXTRACTION_PROMPT, f"Input:\n{message}"
    else:
        return USER_MEMORY_EXTRACTION_PROMPT, f"Input:\n{message}"

from mem0.configs import settings
```
This is not how we are planning to introduce the config. There is a set of parameters for every provider (ollama, lmstudio) where a thinking mode toggle can be configured; this is a prompt-based mechanism, not a config-based one.
@Vedant817 please look at the comments here, this seems to be unresolved.
@Vedant817 thank you for the effort, but please rewrite the PR description; the issue number seems to be misleading. Please keep it to the point, as this will be easier for us as maintainers. Thanks!
Yes, I'll make the changes. I noticed I made some mistakes in the description.
Hey @parshvadaftari, I have resolved the requested changes and updated the PR description; please review it.
Description

This PR fixes the JSON parsing error in `new_retrieved_facts` reported in issue #3564.

Root Cause:
The issue occurs with models like Qwen3-8B (as reported in #3564) that, in reasoning/thinking mode, return non-JSON tokens before the actual JSON response. These additional tokens cause the `json.loads()` call to fail with "Expecting value: line 1 column 1 (char 0)".

Solution:
This PR adds a configurable `thinking_mode` parameter that handles reasoning-mode outputs by instructing models to include their thinking in the structured JSON response, preventing JSON parsing failures.

Technical Changes:
- Add an `enable_thinking_mode` configuration parameter
- Update `get_fact_retrieval_messages()` to conditionally prepend the thinking instruction

Fixes #3564
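For context, here is a minimal sketch of the failure mode and the prompt-based workaround described above. The helper and prompt text below are illustrative assumptions, not mem0's actual API:

```python
import json

# Raw output from a reasoning model such as Qwen3-8B: thinking tokens
# precede the JSON payload, so json.loads() fails on the first character.
raw_output = '<think>The user mentioned their name.</think>\n{"facts": ["Name is Alex"]}'

try:
    json.loads(raw_output)
except json.JSONDecodeError as e:
    print(f"Parse failed: {e}")  # Expecting value: line 1 column 1 (char 0)

# Prompt-based workaround: instruct the model to put its reasoning INSIDE
# the JSON object instead of emitting it before the payload.
# (THINKING_INSTRUCTION is a hypothetical prompt, not mem0's actual wording.)
THINKING_INSTRUCTION = (
    "Put your reasoning in a 'thinking' field of the JSON response; "
    "do not emit any text outside the JSON object."
)

def build_fact_retrieval_prompt(base_prompt: str, enable_thinking_mode: bool) -> str:
    """Conditionally prepend the thinking instruction (illustrative helper)."""
    if enable_thinking_mode:
        return f"{THINKING_INSTRUCTION}\n\n{base_prompt}"
    return base_prompt

# A compliant model then returns pure JSON, which parses cleanly:
compliant_output = '{"thinking": "The user mentioned their name.", "facts": ["Name is Alex"]}'
parsed = json.loads(compliant_output)
print(parsed["facts"])  # ['Name is Alex']
```

Whether the prompt instruction is prepended is gated on the flag, matching the conditional behavior of `get_fact_retrieval_messages()` described in the technical changes.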