v29.3.0 #113


Merged (4 commits) on May 31, 2025

38 changes: 38 additions & 0 deletions README.md
@@ -29,6 +29,7 @@ Build your AI agents in three lines of code!
* Extensible Tooling
* Automatic Tool Workflows
* Autonomous Operation
* Structured Outputs
* Knowledge Base
* MCP Support
* Guardrails
@@ -60,6 +61,7 @@ Build your AI agents in three lines of code!
* Input and output guardrails for content filtering, safety, and data sanitization
* Generate custom images based on text prompts with storage on S3 compatible services
* Automatic sequential tool workflows allowing agents to chain multiple tools
* Deterministically return structured outputs
* Combine with event-driven systems to create autonomous agents

## Stack
@@ -306,6 +308,38 @@ async for response in solana_agent.process("user123", "What is in this image? De
print(response, end="")
```

### Structured Outputs

```python
from pydantic import BaseModel
from solana_agent import SolanaAgent

config = {
"openai": {
"api_key": "your-openai-api-key",
},
"agents": [
{
"name": "researcher",
"instructions": "You are a research expert.",
"specialization": "Researcher",
}
],
}

solana_agent = SolanaAgent(config=config)

class ResearchProposal(BaseModel):
title: str
abstract: str
key_points: list[str]

full_response = None
async for response in solana_agent.process("user123", "Research the life of Ben Franklin - the founding Father.", output_model=ResearchProposal):
full_response = response

print(full_response.model_dump())
```

### Command Line Interface (CLI)

Solana Agent includes a command-line interface (CLI) for text-based chat using a configuration file.
@@ -540,6 +574,8 @@ async for response in solana_agent.process("user123", "Summarize the annual repo

Guardrails allow you to process and potentially modify user input before it reaches the agent (Input Guardrails) and agent output before it's sent back to the user (Output Guardrails). This is useful for implementing safety checks, content moderation, data sanitization, or custom transformations.

Guardrails don't work with structured outputs.
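
Because output guardrails are bypassed when `output_model` is set, any sanitization has to be applied to the returned model yourself. The sketch below is not part of this change; it reuses the `solana_agent` instance and `ResearchProposal` model from the Structured Outputs example above, and `scrub_text` is a hypothetical helper.

```python
import re

def scrub_text(text: str) -> str:
    # Hypothetical helper: redact email addresses before storing or displaying.
    return re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[REDACTED EMAIL]", text)

full_response = None
async for response in solana_agent.process(
    "user123",
    "Research the life of Ben Franklin.",
    output_model=ResearchProposal,
):
    full_response = response

# Apply the sanitization manually to the structured result.
clean = full_response.model_copy(
    update={"abstract": scrub_text(full_response.abstract)}
)
print(clean.model_dump())
```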

Solana Agent provides a built-in PII scrubber based on [scrubadub](https://github.com/LeapBeyond/scrubadub).

```python
@@ -580,6 +616,8 @@ config = {

#### Example Custom Guardrails

Guardrails don't work with structured outputs.

```python
from solana_agent import InputGuardrail, OutputGuardrail
import logging
38 changes: 38 additions & 0 deletions docs/index.rst
@@ -234,6 +234,40 @@ Image/Text Streaming
print(response, end="")


Structured Outputs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: python

from pydantic import BaseModel
from solana_agent import SolanaAgent

config = {
"openai": {
"api_key": "your-openai-api-key",
},
"agents": [
{
"name": "researcher",
"instructions": "You are a research expert.",
"specialization": "Researcher",
}
],
}

solana_agent = SolanaAgent(config=config)

class ResearchProposal(BaseModel):
title: str
abstract: str
key_points: list[str]

full_response = None
async for response in solana_agent.process("user123", "Research the life of Ben Franklin - the founding Father.", output_model=ResearchProposal):
full_response = response

print(full_response.model_dump())


Command Line Interface (CLI)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

@@ -474,6 +508,8 @@ Guardrails - Optional

Guardrails allow you to process and potentially modify user input before it reaches the agent (Input Guardrails) and agent output before it's sent back to the user (Output Guardrails). This is useful for implementing safety checks, content moderation, data sanitization, or custom transformations.

Guardrails don't apply to structured outputs.

Solana Agent provides a built-in PII scrubber based on scrubadub.

.. code-block:: python
@@ -515,6 +551,8 @@ Solana Agent provides a built-in PII scrubber based on scrubadub.
Example Custom Guardrails - Optional
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Guardrails don't apply to structured outputs.

.. code-block:: python

from solana_agent import InputGuardrail, OutputGuardrail
304 changes: 161 additions & 143 deletions poetry.lock

Large diffs are not rendered by default.

8 changes: 4 additions & 4 deletions pyproject.toml
@@ -1,6 +1,6 @@
[tool.poetry]
name = "solana-agent"
version = "29.2.3"
version = "29.3.0"
description = "AI Agents for Solana"
authors = ["Bevan Hunt <bevan@bevanhunt.com>"]
license = "MIT"
@@ -24,13 +24,13 @@ python_paths = [".", "tests"]

[tool.poetry.dependencies]
python = ">=3.12,<4.0"
openai = "1.82.0"
openai = "1.82.1"
pydantic = ">=2"
pymongo = "4.13.0"
zep-cloud = "2.12.3"
instructor = "1.8.3"
pinecone = "7.0.1"
llama-index-core = "0.12.37"
pinecone = "7.0.2"
llama-index-core = "0.12.39"
llama-index-embeddings-openai = "0.3.1"
pypdf = "5.5.0"
scrubadub = "2.0.1"
21 changes: 14 additions & 7 deletions solana_agent/adapters/openai_adapter.py
@@ -410,6 +410,8 @@ async def parse_structured_output(
api_key: Optional[str] = None,
base_url: Optional[str] = None,
model: Optional[str] = None,
functions: Optional[List[Dict[str, Any]]] = None,
function_call: Optional[Union[str, Dict[str, Any]]] = None,
) -> T: # pragma: no cover
"""Generate structured output using Pydantic model parsing with Instructor."""

@@ -431,13 +433,18 @@

patched_client = instructor.from_openai(client, mode=Mode.TOOLS_STRICT)

# Use instructor's structured generation with function calling
response = await patched_client.chat.completions.create(
model=current_parse_model, # Use the determined model
messages=messages,
response_model=model_class,
max_retries=2, # Automatically retry on validation errors
)
create_args = {
"model": current_parse_model,
"messages": messages,
"response_model": model_class,
"max_retries": 2, # Automatically retry on validation errors
}
if functions:
create_args["tools"] = functions
if function_call:
create_args["function_call"] = function_call

response = await patched_client.chat.completions.create(**create_args)
return response
except Exception as e:
logger.warning(
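
For readers unfamiliar with Instructor, the pattern the patched call above relies on can be sketched standalone. This is illustrative only: the model name is a placeholder, not the one Solana Agent uses, and `OPENAI_API_KEY` is assumed to be set in the environment.

```python
import asyncio

import instructor
from instructor import Mode
from openai import AsyncOpenAI
from pydantic import BaseModel

class Person(BaseModel):
    name: str
    birth_year: int

async def main():
    # Patch the OpenAI client so create() can parse responses into a Pydantic model.
    client = instructor.from_openai(AsyncOpenAI(), mode=Mode.TOOLS_STRICT)
    person = await client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": "Who was Ben Franklin and when was he born?"}],
        response_model=Person,  # validated into a Person instance
        max_retries=2,          # retry on validation errors, as the adapter does
    )
    print(person.model_dump())

asyncio.run(main())
```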
9 changes: 7 additions & 2 deletions solana_agent/client/solana_agent.py
@@ -7,7 +7,9 @@

import json
import importlib.util
from typing import AsyncGenerator, Dict, Any, List, Literal, Optional, Union
from typing import AsyncGenerator, Dict, Any, List, Literal, Optional, Type, Union

from pydantic import BaseModel

from solana_agent.factories.agent_factory import SolanaAgentFactory
from solana_agent.interfaces.client.client import SolanaAgent as SolanaAgentInterface
@@ -69,7 +71,8 @@ async def process(
] = "mp4",
router: Optional[RoutingInterface] = None,
images: Optional[List[Union[str, bytes]]] = None,
) -> AsyncGenerator[Union[str, bytes], None]: # pragma: no cover
output_model: Optional[Type[BaseModel]] = None,
) -> AsyncGenerator[Union[str, bytes, BaseModel], None]: # pragma: no cover
"""Process a user message (text or audio) and optional images, returning the response stream.

Args:
@@ -83,6 +86,7 @@
audio_input_format: Audio input format
router: Optional routing service for processing
images: Optional list of image URLs (str) or image bytes.
output_model: Optional Pydantic model for structured output

Returns:
Async generator yielding response chunks (text strings or audio bytes)
@@ -98,6 +102,7 @@
audio_input_format=audio_input_format,
prompt=prompt,
router=router,
output_model=output_model,
):
yield chunk

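
Since `process()` can now yield a Pydantic model in addition to text or audio chunks, callers may want to branch on the chunk type. A minimal consumer sketch (not part of this change), assuming an already-configured `solana_agent` instance:

```python
from pydantic import BaseModel

async def collect(solana_agent, user_id: str, message: str, output_model=None):
    structured = None
    text_parts = []
    async for chunk in solana_agent.process(user_id, message, output_model=output_model):
        if isinstance(chunk, BaseModel):
            structured = chunk           # structured output arrives as one object
        elif isinstance(chunk, bytes):
            continue                     # audio bytes; hand off to a player or file
        else:
            text_parts.append(chunk)     # plain text streaming
    return structured if structured is not None else "".join(text_parts)
```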
7 changes: 5 additions & 2 deletions solana_agent/interfaces/client/client.py
@@ -1,5 +1,7 @@
from abc import ABC, abstractmethod
from typing import AsyncGenerator, Dict, Any, List, Literal, Optional, Union
from typing import AsyncGenerator, Dict, Any, List, Literal, Optional, Type, Union

from pydantic import BaseModel
from solana_agent.interfaces.plugins.plugins import Tool
from solana_agent.interfaces.services.routing import RoutingService as RoutingInterface

@@ -35,7 +37,8 @@ async def process(
] = "mp4",
router: Optional[RoutingInterface] = None,
images: Optional[List[Union[str, bytes]]] = None,
) -> AsyncGenerator[Union[str, bytes], None]:
output_model: Optional[Type[BaseModel]] = None,
) -> AsyncGenerator[Union[str, bytes, BaseModel], None]:
"""Process a user message and return the response stream."""
pass

2 changes: 2 additions & 0 deletions solana_agent/interfaces/providers/llm.py
@@ -43,6 +43,8 @@ async def parse_structured_output(
api_key: Optional[str] = None,
base_url: Optional[str] = None,
model: Optional[str] = None,
functions: Optional[List[Dict[str, Any]]] = None,
function_call: Optional[Union[str, Dict[str, Any]]] = None,
) -> T:
"""Generate structured output using a specific model class."""
pass
7 changes: 5 additions & 2 deletions solana_agent/interfaces/services/agent.py
@@ -1,5 +1,7 @@
from abc import ABC, abstractmethod
from typing import Any, AsyncGenerator, Dict, List, Literal, Optional, Union
from typing import Any, AsyncGenerator, Dict, List, Literal, Optional, Type, Union

from pydantic import BaseModel

from solana_agent.domains.agent import AIAgent

@@ -45,7 +47,8 @@ async def generate_response(
] = "aac",
prompt: Optional[str] = None,
images: Optional[List[Union[str, bytes]]] = None,
) -> AsyncGenerator[Union[str, bytes], None]:
output_model: Optional[Type[BaseModel]] = None,
) -> AsyncGenerator[Union[str, bytes, BaseModel], None]:
"""Generate a response from an agent."""
pass

7 changes: 5 additions & 2 deletions solana_agent/interfaces/services/query.py
@@ -1,5 +1,7 @@
from abc import ABC, abstractmethod
from typing import Any, AsyncGenerator, Dict, List, Literal, Optional, Union
from typing import Any, AsyncGenerator, Dict, List, Literal, Optional, Type, Union

from pydantic import BaseModel

from solana_agent.interfaces.services.routing import RoutingService as RoutingInterface

@@ -35,7 +37,8 @@ async def process(
prompt: Optional[str] = None,
router: Optional[RoutingInterface] = None,
images: Optional[List[Union[str, bytes]]] = None,
) -> AsyncGenerator[Union[str, bytes], None]:
output_model: Optional[Type[BaseModel]] = None,
) -> AsyncGenerator[Union[str, bytes, BaseModel], None]:
"""Process the user request and generate a response."""
pass
