test(examples): Add some examples leveraging pydantic-AI and other chatlas alternatives #66
Open
karangattu wants to merge 31 commits into main from add-pydantic-ai-examples
Changes from 7 commits
Commits (31, all by karangattu):

e785f63  Add some examples leveraging pydantic-AI
2f19d5a  linting files
02df719  remove requirements.txt
6205148  add more alternatives to chatlas
027acf5  remove comments in tool calling example
63e93bd  add extra packages
657015b  add missing structured_output for llm
7727633  Update pkg-py/tests/playwright/chat/langchain/structured_output/app.py
c3d5f53  Update pkg-py/tests/playwright/chat/llama-index/structured_output/app.py
d3bdfaa  Update pkg-py/tests/playwright/chat/pydantic-ai/basic/app.py
cf765ed  Update pkg-py/tests/playwright/chat/pydantic-ai/structured_output/app.py
25f82b5  Update pkg-py/tests/playwright/chat/pydantic-ai/tool_calling/app.py
7cad49a  remove data sci adventure example for pydantic
5e13404  update the example to make it into a shiny app
8128c03  use context for maintaining state
dee6dcf  maintain context across the conversation
2ac6b95  make it more langchain specific
b656214  preserve context within the chat conversation
6cf3494  allow streaming now in tool calling example
f9c38c8  Add session-based chat history retrieval function
b00f8a6  remove workout planner app example for pydantic ai
8ff507a  make the basic llama-index have streaming responses
61e0269  Allow maintaining context and streaming for tool calling example
527e985  remove multiple output structures and allow streaming
a3e7261  make the app even streamlined
7093e2b  use a streaming and stateful chatbot for pydantic ai basic app
622c82e  Update pkg-py/tests/playwright/chat/llama-index/basic/app.py
9877d83  Update pkg-py/tests/playwright/chat/llm_package/basic/app.py
c05c0c4  Update pkg-py/tests/playwright/chat/llama-index/structured_output/app.py
539e207  Refactor chat UI initialization and improve assistant prompts
d35d3a2  Reorder imports in app.py for consistency
LangChain basic chat example (34 additions):

```python
import os

from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from shiny.express import ui

_ = load_dotenv()
chat_client = ChatOpenAI(
    api_key=os.environ.get("OPENAI_API_KEY"),
    model="gpt-4o",
)

ui.page_opts(
    title="Hello LangChain Chat Models",
    fillable=True,
    fillable_mobile=True,
)

chat = ui.Chat(
    id="chat",
    messages=["Hello! How can I help you today?"],
)
chat.ui()


@chat.on_user_submit
async def handle_user_input(user_input: str):
    response = chat_client.astream(user_input)

    async def stream_wrapper():
        async for item in response:
            yield item.content

    await chat.append_message_stream(stream_wrapper())
```
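The `stream_wrapper` pattern above (adapting provider-specific chunk objects into a plain-text async generator) can be sketched without any LangChain or Shiny dependency. This is a minimal illustration, not the library's API: `Chunk` and `fake_astream` below are hypothetical stand-ins for the real chunk type and `chat_client.astream`.

```python
import asyncio
from dataclasses import dataclass


@dataclass
class Chunk:
    # Stand-in for a provider chunk object that carries a .content field.
    content: str


async def fake_astream(prompt: str):
    # Stand-in for chat_client.astream(prompt): yields chunk objects.
    for word in ["Hello", ", ", "world", "!"]:
        yield Chunk(content=word)


async def stream_wrapper(response):
    # Adapt chunk objects to plain strings, as in the app above.
    async for item in response:
        yield item.content


async def main() -> str:
    parts = []
    async for text in stream_wrapper(fake_astream("hi")):
        parts.append(text)
    return "".join(parts)


print(asyncio.run(main()))  # → Hello, world!
```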
pkg-py/tests/playwright/chat/langchain/structured_output/app.py (44 additions):

```python
import os
from typing import Optional

from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field
from shiny.express import ui

_ = load_dotenv()


class Joke(BaseModel):
    """Joke to tell user."""

    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline to the joke")
    rating: Optional[int] = Field(description="How funny the joke is, from 1 to 10")


_ = Joke.model_rebuild()

chat_client = ChatOpenAI(
    api_key=os.environ.get("OPENAI_API_KEY"),
    model="gpt-4o",
)

ui.page_opts(
    title="Hello LangChain Chat Model using structured output",
    fillable=True,
    fillable_mobile=True,
)

chat = ui.Chat(
    id="chat",
    messages=["Hello! How can I help you today?"],
)
chat.ui()


@chat.on_user_submit
async def handle_user_input(user_input: str):
    joke = chat_client.with_structured_output(Joke).invoke(user_input)
    joke_text = f"{joke.setup}\n\n{joke.punchline}\n\nRating: {joke.rating if joke.rating is not None else 'N/A'}"
    # joke_text is a complete string, not a stream, so append it directly.
    await chat.append_message(joke_text)
```
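The formatting step of the structured-output app can be exercised on its own. Below is a dependency-free sketch under the assumption that only the string assembly matters here: a plain dataclass stands in for the Pydantic `Joke` model, and `format_joke` is a hypothetical helper mirroring the f-string in `handle_user_input`, including the `'N/A'` fallback for a missing rating.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Joke:
    # Stand-in for the Pydantic model in the app above.
    setup: str
    punchline: str
    rating: Optional[int] = None


def format_joke(joke: Joke) -> str:
    # Same formatting rule as the app: fall back to 'N/A' when no rating.
    rating = joke.rating if joke.rating is not None else "N/A"
    return f"{joke.setup}\n\n{joke.punchline}\n\nRating: {rating}"


print(format_joke(Joke("Why did the scarecrow win an award?", "He was outstanding in his field.", 8)))
```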
pkg-py/tests/playwright/chat/langchain/tool_calling/app.py (87 additions):

```python
import os
from datetime import datetime

from dotenv import load_dotenv
from langchain_core.messages import HumanMessage
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from shiny.express import ui

_ = load_dotenv()


@tool
def get_current_time() -> str:
    """Get the current time in HH:MM:SS format."""
    return datetime.now().strftime("%H:%M:%S")


@tool
def get_current_date() -> str:
    """Get the current date in YYYY-MM-DD format."""
    return datetime.now().strftime("%Y-%m-%d")


@tool
def get_current_datetime() -> str:
    """Get the current date and time in a readable format."""
    return datetime.now().strftime("%Y-%m-%d %H:%M:%S")


tools = [get_current_time, get_current_date, get_current_datetime]

tool_registry = {tool.name: tool for tool in tools}

chat_client = ChatOpenAI(
    api_key=os.environ.get("OPENAI_API_KEY"),
    model="gpt-4o",
).bind_tools(tools)

ui.page_opts(
    title="Hello LangChain Chat Models with Tools",
    fillable=True,
    fillable_mobile=True,
)

chat = ui.Chat(
    id="chat",
    messages=[
        "Hello! How can I help you today? I can tell you the current time, date, or both!"
    ],
)
chat.ui()


@chat.on_user_submit
async def handle_user_input(user_input: str):
    messages = [HumanMessage(content=user_input)]

    async def stream_response():
        accumulated_tool_calls = []

        async for chunk in chat_client.astream(messages):
            tool_calls = getattr(chunk, "tool_calls", None)
            if tool_calls:
                accumulated_tool_calls.extend(tool_calls)

            if chunk.content:
                content = chunk.content
                if isinstance(content, str):
                    yield content
                elif isinstance(content, list):
                    for part in content:
                        if isinstance(part, str):
                            yield part

        for tool_call in accumulated_tool_calls:
            tool_name = tool_call.get("name", "")
            if not tool_name:
                continue

            if tool_name in tool_registry:
                result = tool_registry[tool_name].invoke({})
                yield f"\n\n🔧 {tool_name}: {result}"
            else:
                yield f"\n\n❌ Unknown tool: {tool_name}"

    await chat.append_message_stream(stream_response())
```
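The accumulate-then-dispatch step at the end of `stream_response` can be sketched without LangChain. This is a toy illustration, not the library's tool machinery: plain callables stand in for `@tool`-decorated functions, and `dispatch` is a hypothetical helper mirroring the registry lookup and the unknown-tool fallback above.

```python
def get_current_time() -> str:
    # Stand-in for the real tool; a fixed value keeps the sketch deterministic.
    return "12:00:00"


tool_registry = {"get_current_time": get_current_time}


def dispatch(accumulated_tool_calls):
    # Resolve each accumulated tool call against the name -> callable registry.
    results = []
    for tool_call in accumulated_tool_calls:
        tool_name = tool_call.get("name", "")
        if not tool_name:
            continue
        if tool_name in tool_registry:
            results.append(f"🔧 {tool_name}: {tool_registry[tool_name]()}")
        else:
            results.append(f"❌ Unknown tool: {tool_name}")
    return results


print(dispatch([{"name": "get_current_time"}, {"name": "nope"}]))
```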
LlamaIndex basic chat example (46 additions):

```python
from dotenv import load_dotenv
from llama_index.core.llms import ChatMessage
from llama_index.llms.openai import OpenAI
from shiny.express import ui

# Load environment variables from .env file
_ = load_dotenv()

llm = OpenAI(
    model="gpt-4o-mini",
)

ui.page_opts(
    title="Shiny Chat with LlamaIndex",
    fillable=True,
    fillable_mobile=True,
)


chat = ui.Chat(
    id="chat",
    messages=[
        {"role": "system", "content": "You are a pirate with a colorful personality."},
        {"role": "user", "content": "What is your name, pirate?"},
        {
            "role": "assistant",
            "content": "Arrr, they call me Captain Cog, the chattiest pirate on the seven seas!",
        },
    ],
)
chat.ui()


async def get_response_tokens(conversation: list[ChatMessage]):
    response_stream = await llm.astream_chat(conversation)
    async for r in response_stream:
        yield r.delta


@chat.on_user_submit
async def handle_user_input():
    conversation = [
        ChatMessage(role=msg["role"], content=msg["content"]) for msg in chat.messages()
    ]

    await chat.append_message_stream(get_response_tokens(conversation))
```
pkg-py/tests/playwright/chat/llama-index/rag_with_chatlas/rag_example.py (109 additions):

```python
import os

from chatlas import ChatOpenAI
from llama_index.core import StorageContext, load_index_from_storage

_ = os.environ.get("OPENAI_API_KEY")


# Load the knowledge store (index) from disk
try:
    storage_context = StorageContext.from_defaults(persist_dir="./storage")
    index = load_index_from_storage(storage_context)
    print("LlamaIndex loaded successfully from ./storage")
except Exception as e:
    print(f"Error loading LlamaIndex: {e}")
    print(
        "Please ensure you have run the index creation script first if this is your initial run."
    )
    from llama_index.core import Document, VectorStoreIndex

    print("Creating a dummy index for demonstration purposes...")
    bookstore_documents = [
        "Our shipping policy states that standard shipping takes 3-5 business days. Express shipping takes 1-2 business days. Free shipping is offered on all orders over $50.",
        "Returns are accepted within 30 days of purchase, provided the book is in its original condition. To initiate a return, please visit our 'Returns' page on the website and fill out the form.",
        "The 'BookWorm Rewards' program offers members 10% off all purchases and early access to sales. You earn 1 point for every $1 spent.",
        "We accept Visa, Mastercard, American Express, and PayPal.",
        "Currently, we do not offer international shipping outside of the United States and Canada.",
        "The book 'The Midnight Library' by Matt Haig is a New York Times bestseller. It explores themes of regret and parallel lives.",
        "Orders placed before 2 PM EST are processed on the same day.",
    ]
    documents = [Document(text=d) for d in bookstore_documents]
    index = VectorStoreIndex.from_documents(documents)
    index.storage_context.persist(persist_dir="./storage")
    print("Dummy index created and saved.")


def retrieve_trusted_content(query: str, top_k: int = 3):
    """
    Retrieve relevant content from the bookstore's knowledge base.
    This acts as the "lookup" for our customer service assistant.

    Parameters
    ----------
    query
        The customer's question used to semantically search the knowledge store.
    top_k
        The number of most relevant policy/book excerpts to retrieve.
    """
    retriever = index.as_retriever(similarity_top_k=top_k)
    nodes = retriever.retrieve(query)
    # Format the retrieved content clearly so Chatlas can use it as "trusted" information
    return [f"<excerpt>{x.text}</excerpt>" for x in nodes]


chat = ChatOpenAI(
    system_prompt=(
        "You are 'BookWorm Haven's Customer Service Assistant'. "
        "Your primary goal is to help customers with their queries about shipping, returns, "
        "payment methods, and book information based *only* on the provided trusted content. "
        "If you cannot answer the question using the trusted content, politely state that "
        "you don't have that information and suggest they visit the 'Help' section of the website."
    ),
    model="gpt-4o-mini",
)

# This is where Chatlas learns to "look up" information when needed.
chat.register_tool(retrieve_trusted_content)


# Example Customer Interactions with the Assistant
print("\n--- BookWorm Haven Customer Service ---\n")

# Example 1: Question that can be answered from the knowledge base (shipping policy)
customer_query_1 = "How long does standard shipping take?"
print(f"Customer: {customer_query_1}")
response_1 = chat.chat(customer_query_1)
print(f"Assistant: {response_1}\n")

# Example 2: Another question answerable from the knowledge base (return policy)
customer_query_2 = "What's your return policy?"
print(f"Customer: {customer_query_2}")
response_2 = chat.chat(customer_query_2)
print(f"Assistant: {response_2}\n")

# Example 3: Question about a specific book (information available)
customer_query_3 = "Tell me about 'The Midnight Library'."
print(f"Customer: {customer_query_3}")
response_3 = chat.chat(customer_query_3)
print(f"Assistant: {response_3}\n")

# Example 4: Question with information not in the knowledge base
customer_query_4 = "Do you have any books about quantum physics for beginners?"
print(f"Customer: {customer_query_4}")
response_4 = chat.chat(customer_query_4)
print(f"Assistant: {response_4}\n")

# Example 5: Combining information
customer_query_5 = "Is free shipping available and what payment methods do you accept?"
print(f"Customer: {customer_query_5}")
response_5 = chat.chat(customer_query_5)
print(f"Assistant: {response_5}\n")

# Example 6: More nuanced query requiring retrieval
customer_query_6 = "I want to return a book, what should I do?"
print(f"Customer: {customer_query_6}")
response_6 = chat.chat(customer_query_6)
print(f"Assistant: {response_6}\n")
```
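The retrieval side of the RAG example can be sketched without a vector index. Below, a toy keyword-overlap ranking stands in for the semantic search that `index.as_retriever(...)` performs; `retrieve_excerpts` is a hypothetical helper that mirrors only the ranking-plus-`<excerpt>`-wrapping shape of `retrieve_trusted_content`, not its quality.

```python
def retrieve_excerpts(query: str, documents: list[str], top_k: int = 3) -> list[str]:
    # Rank documents by word overlap with the query (a crude stand-in for
    # the vector index's semantic similarity) and wrap the top_k hits.
    q_words = set(query.lower().split())

    def overlap(doc: str) -> int:
        return len(q_words & set(doc.lower().split()))

    ranked = sorted(documents, key=overlap, reverse=True)
    return [f"<excerpt>{d}</excerpt>" for d in ranked[:top_k]]


docs = [
    "Standard shipping takes 3-5 business days.",
    "Returns are accepted within 30 days of purchase.",
    "We accept Visa, Mastercard, and PayPal.",
]
print(retrieve_excerpts("How long does standard shipping take?", docs, top_k=1))
```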