Replies: 3 comments
-
Any progress on this?
-
Does anybody know any workarounds for this?
-
Here's a workaround that worked for me (only for Pydantic models). Not sure if it applies to the original request, but I'll post it anyway in case anyone finds it useful.

Define a subclass of `FakeListChatModel`:

```python
from langchain_core.language_models.fake_chat_models import FakeListChatModel
from langchain_core.runnables import RunnableLambda


class FakeStructuredListChatModel(FakeListChatModel):
    @staticmethod
    def parse_response(schema, response):
        return schema.model_validate_json(response.content)

    def with_structured_output(self, schema, *, include_raw=False, **kwargs):
        return self | RunnableLambda(
            lambda response: FakeStructuredListChatModel.parse_response(schema, response)
        )
```

Instantiate a fake model, providing a list of Pydantic objects dumped to JSON strings. Here `MyModel` is a dummy model:

```python
model = FakeStructuredListChatModel(responses=[
    MyModel(field1="value1", field2="value2").model_dump_json(),
    MyModel(field1="value2", field2="value3").model_dump_json(),
]).with_structured_output(MyModel)
```

When invoked, the model will return the objects in order:

```python
prompt = ChatPromptTemplate([("system", "You are a helpful AI bot.")])
chain = prompt | model

res1 = chain.invoke({})
print(res1)  # prints field1='value1' field2='value2'
res2 = chain.invoke({})
print(res2)  # prints field1='value2' field2='value3'
```
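The workaround above raises on malformed responses and ignores `include_raw`. For the `include_raw=True` case the original request mentions, LangChain's real `with_structured_output` returns a dict with `raw`, `parsed`, and `parsing_error` keys. Here is a dependency-free sketch of that dict shape, so the pattern can be unit-tested without LangChain installed. All names here (`FakeMessage`, `FakeStructuredModel`) are illustrative stand-ins, not LangChain APIs, and plain `json.loads` stands in for Pydantic validation:

```python
import json
from dataclasses import dataclass


@dataclass
class FakeMessage:
    """Stand-in for an AIMessage: just carries the raw string content."""
    content: str


class FakeStructuredModel:
    """Illustrative fake model whose with_structured_output(include_raw=True)
    mimics LangChain's {"raw", "parsed", "parsing_error"} dict shape."""

    def __init__(self, responses):
        self._responses = list(responses)
        self._i = 0

    def _next_message(self):
        msg = FakeMessage(self._responses[self._i % len(self._responses)])
        self._i += 1
        return msg

    def with_structured_output(self, parse, *, include_raw=False):
        def invoke(_input):
            raw = self._next_message()
            try:
                parsed, error = parse(raw.content), None
            except Exception as e:  # parse failures land in "parsing_error"
                parsed, error = None, e
            if include_raw:
                return {"raw": raw, "parsed": parsed, "parsing_error": error}
            if error is not None:
                raise error
            return parsed
        return invoke


# Usage: one well-formed and one malformed response, returned in order.
model = FakeStructuredModel(['{"field1": "value1"}', "not json"])
runnable = model.with_structured_output(json.loads, include_raw=True)

ok = runnable({})
print(ok["parsed"])          # {'field1': 'value1'}
bad = runnable({})
print(bad["parsing_error"])  # the JSONDecodeError raised for "not json"
```

This makes both the happy path and the parse-failure path testable, which is the scenario coverage the feature request asks for.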
-
Feature request
I'd like to have a fake LLM for testing purposes that supports the `with_structured_output` method, so that instead of testing "raw" output from the LLM (as with `FakeListChatModel`) I could test a returned dict containing "parsed", "parsing_error", or "raw" keys when testing the chain.
Motivation
I am writing complex chains and want to write unit tests for them. If I had a fake LLM that supports `with_structured_output`, I could test the chain with fake outputs from an LLM and thus see what my chain does in various scenarios.
Proposal (If applicable)
No response