FinanceAgent - enable on Xeon, remote endpoint, and refactor tests #2032
Merged: alexsin368 merged 42 commits into opea-project:main from alexsin368:finance-agent-remote-endpoint-new on Aug 15, 2025.
Commits (42):
- a9c1716 initial commit to enable remote endpoints (alexsin368)
- 1f48bcb fix typo on file name (alexsin368)
- f12c122 add common-env (alexsin368)
- 68b824d fix environment (alexsin368)
- 7408871 correct path to top .set_env.sh file (alexsin368)
- c3f4e9d explicitly set env variables for agent microservices (alexsin368)
- f6f123a fix typos, initial commit for xeon test (alexsin368)
- 62e3fbf reorganize env variables (alexsin368)
- de7fe5a refactor tests for Gaudi and Xeon (alexsin368)
- 1c60777 [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot])
- bc7f7c4 address comments, added check for docsum-vllm-xeon in test (alexsin368)
- 234f931 Merge branch 'finance-agent-remote-endpoint-new' of https://github.co… (alexsin368)
- dfe01b0 make test _ because lack of OPENAI_API_KEY means cannot test in CI/CD… (alexsin368)
- eae522a Merge branch 'finance-agent-remote-endpoint-new' of https://github.co… (alexsin368)
- 4170bbe fix typo (alexsin368)
- 6c1c5ab Merge branch 'opea-project:main' into finance-agent-remote-endpoint-new (alexsin368)
- 821bfad update instructions (alexsin368)
- 90d36d5 update link (alexsin368)
- 56f01a4 initial commit to enable remote endpoints (alexsin368)
- 1d753ec fix typo on file name (alexsin368)
- 8f6d917 add common-env (alexsin368)
- eb03afc fix environment (alexsin368)
- e50f1a0 correct path to top .set_env.sh file (alexsin368)
- d62e3aa explicitly set env variables for agent microservices (alexsin368)
- 3ef6eac fix typos, initial commit for xeon test (alexsin368)
- 41753bc reorganize env variables (alexsin368)
- 58c3ecf refactor tests for Gaudi and Xeon (alexsin368)
- 0c5e003 [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot])
- 9598966 address comments, added check for docsum-vllm-xeon in test (alexsin368)
- 54bacdb make test _ because lack of OPENAI_API_KEY means cannot test in CI/CD… (alexsin368)
- d597450 fix typo (alexsin368)
- 9589f08 update instructions (alexsin368)
- f56df7c update link (alexsin368)
- 143a76f Merge branch 'finance-agent-remote-endpoint-new' of https://github.co… (alexsin368)
- ed27bf9 add note on HW requirement (alexsin368)
- 42685ba Merge branch 'main' into finance-agent-remote-endpoint-new (alexsin368)
- f1784a9 remove denvr reference, fix link to finnhub, fix typos (alexsin368)
- 3e8ab04 [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot])
- bd72480 Merge branch 'main' into finance-agent-remote-endpoint-new (chensuyue)
- 6554f0e Merge branch 'main' into finance-agent-remote-endpoint-new (alexsin368)
- 29e7ac7 add agent_port, use common-env (alexsin368)
- 7659360 Merge branch 'main' into finance-agent-remote-endpoint-new (alexsin368)
# Deploy Finance Agent on Intel® Xeon® Scalable processors with Docker Compose
This README provides instructions for deploying the Finance Agent application using Docker Compose on systems equipped with Intel® Xeon® Scalable processors.

## Table of Contents

- [Overview](#overview)
- [Prerequisites](#prerequisites)
- [Start Deployment](#start-deployment)
- [Validate Services](#validate-services)
- [Accessing the User Interface (UI)](#accessing-the-user-interface-ui)

## Overview

This guide focuses on running the pre-configured Finance Agent service using Docker Compose on Intel® Xeon® Scalable processors. It runs with OpenAI LLM models, along with containers for other microservices such as embedding, retrieval, data preparation, and the UI.
## Prerequisites

- Docker and Docker Compose installed.
- Intel® Xeon® Scalable processors, on-premises or in the cloud.
- If running OpenAI models, generate the API key by following these [instructions](https://platform.openai.com/api-keys). If using a remote server, e.g. for LLM text generation, have the base URL and API key ready from the cloud service provider or the owner of the on-prem machine.
- Git installed (for cloning the repository).
- Hugging Face Hub API token (for downloading models).
- Access to the internet (or a private model cache).
- Finnhub API key. Go to https://finnhub.io/ to get your free API key.
- Financial Datasets API key. Go to https://docs.financialdatasets.ai/ to get your free API key.

Clone the GenAIExamples repository:
```bash
mkdir -p /path/to/your/workspace/
export WORKDIR=/path/to/your/workspace/
cd $WORKDIR
git clone https://github.com/opea-project/GenAIExamples.git
cd GenAIExamples/FinanceAgent/docker_compose/intel/cpu/xeon
```
## Start Deployment

By default, the deployment runs models from OpenAI.

### Configure Environment

Set the required environment variables in your shell:
```bash
# Path to your model cache
export HF_CACHE_DIR="./data"
# Some models from Hugging Face require approval beforehand. Ensure you have the necessary permissions to access them.
export HF_TOKEN="your_huggingface_token"
export OPENAI_API_KEY="your-openai-api-key"
export FINNHUB_API_KEY="your-finnhub-api-key"
export FINANCIAL_DATASETS_API_KEY="your-financial-datasets-api-key"

# Configure HOST_IP
# Replace with your host's external IP address (do not use localhost or 127.0.0.1).
export HOST_IP=$(hostname -I | awk '{print $1}')

# Optional: Configure proxy if needed
# export HTTP_PROXY="${http_proxy}"
# export HTTPS_PROXY="${https_proxy}"
# export NO_PROXY="${NO_PROXY},${HOST_IP}"

source set_env.sh
```

Note: the compose file may read additional variables from `set_env.sh`. Ensure all required variables, such as the ports (`LLM_SERVICE_PORT`, `TEI_EMBEDDER_PORT`, etc.), are set if not using the defaults from the compose file. For instance, edit `set_env.sh` or override `LLM_MODEL_ID` to change the LLM model.
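Before sourcing `set_env.sh`, it can help to confirm that none of the required variables were missed. The following is a minimal sketch; the `check_vars` helper is hypothetical and not part of the repository:

```shell
# check_vars NAME... : print each listed variable that is unset or empty.
check_vars() {
  for v in "$@"; do
    eval "val=\${$v:-}"            # indirect lookup of the variable named in $v
    [ -z "$val" ] && echo "missing: $v"
  done
  return 0
}

# The variables this deployment expects, per the block above:
check_vars HF_TOKEN OPENAI_API_KEY FINNHUB_API_KEY FINANCIAL_DATASETS_API_KEY HOST_IP
```

Any name printed as `missing:` should be exported before continuing.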
### [Optional] Running Models on a Remote Server

To run models on a remote server, e.g. one deployed using Intel® AI for Enterprise Inference, a base URL and an API key are required to access them. To run the Agent microservice on Xeon while using models deployed on a remote server, set the additional environment variables shown below.

```bash
# Override this environment variable previously set in set_env.sh with the name of the desired model
export OPENAI_LLM_MODEL_ID=<name-of-model-card>

# The base URL given by the owner of the on-prem machine or the cloud service provider. It follows the format "https://<DNS>", for example: "https://api.inference.example.com".
export REMOTE_ENDPOINT=<http-endpoint-of-remote-server>
```
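As a quick sanity check, the chat-completions URL exposed by an OpenAI-compatible server can be derived from `REMOTE_ENDPOINT`. This is a sketch that assumes the server uses the standard OpenAI `/v1/chat/completions` route; the hostname is the example value from the comment above:

```shell
# Example value from the comment above; replace with your real endpoint.
REMOTE_ENDPOINT="https://api.inference.example.com"

# Strip any trailing slash, then append the standard OpenAI-compatible route.
CHAT_URL="${REMOTE_ENDPOINT%/}/v1/chat/completions"
echo "$CHAT_URL"

# A connectivity test could then be issued with (requires the API key):
# curl -s "$CHAT_URL" -H "Authorization: Bearer $OPENAI_API_KEY" \
#   -H "Content-Type: application/json" \
#   -d "{\"model\": \"$OPENAI_LLM_MODEL_ID\", \"messages\": [{\"role\": \"user\", \"content\": \"ping\"}]}"
```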
### Start Services

The following services will be launched:

- tei-embedding-serving
- redis-vector-db
- redis-kv-store
- dataprep-redis-server-finance
- finqa-agent-endpoint
- research-agent-endpoint
- docsum-vllm-xeon
- supervisor-agent-endpoint
- agent-ui

Follow **ONE** option below to deploy these services.
#### Option 1: Deploy with Docker Compose for OpenAI Models

```bash
docker compose -f compose_openai.yaml up -d
```

#### [Optional] Option 2: Deploy with Docker Compose for Models on a Remote Server

```bash
docker compose -f compose_openai.yaml -f compose_remote.yaml up -d
```

#### [Optional] Build Docker Images

This is only needed if a Docker image is unavailable or the pull operation fails.

```bash
cd $WORKDIR/GenAIExamples/FinanceAgent/docker_image_build
# get GenAIComps repo
git clone https://github.com/opea-project/GenAIComps.git
# build the images
docker compose -f build.yaml build --no-cache
```
## Validate Services

Wait several minutes for models to download and services to initialize. Check container logs with this command:

```bash
docker compose logs -f <service_name>
```
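To scan all services at once, a small loop over the service names listed in the Start Services section can help. This is a convenience sketch, not a script from the repository:

```shell
# The nine services from the "Start Services" list above.
services="tei-embedding-serving redis-vector-db redis-kv-store dataprep-redis-server-finance"
services="$services finqa-agent-endpoint research-agent-endpoint docsum-vllm-xeon"
services="$services supervisor-agent-endpoint agent-ui"

for s in $services; do
  echo "== $s =="
  # Uncomment to actually tail the logs of each service:
  # docker compose logs --tail 20 "$s"
done
```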
### Validate Data Services

Ingest data into and retrieve it from the database:

```bash
python3 $WORKDIR/GenAIExamples/FinanceAgent/tests/test_redis_finance.py --port 6007 --test_option ingest
python3 $WORKDIR/GenAIExamples/FinanceAgent/tests/test_redis_finance.py --port 6007 --test_option get
```
### Validate Agents

FinQA Agent:

```bash
export agent_port="9095"
prompt="What is Gap's revenue in 2024?"
python3 $WORKDIR/GenAIExamples/FinanceAgent/tests/test.py --prompt "$prompt" --agent_role "worker" --ext_port $agent_port
```

Research Agent:

```bash
export agent_port="9096"
prompt="generate NVDA financial research report"
python3 $WORKDIR/GenAIExamples/FinanceAgent/tests/test.py --prompt "$prompt" --agent_role "worker" --ext_port $agent_port --tool_choice "get_current_date" --tool_choice "get_share_performance"
```
Supervisor Agent single turn:

```bash
export agent_port="9090"
python3 $WORKDIR/GenAIExamples/FinanceAgent/tests/test.py --agent_role "supervisor" --ext_port $agent_port --stream
```

Supervisor Agent multi turn:

```bash
python3 $WORKDIR/GenAIExamples/FinanceAgent/tests/test.py --agent_role "supervisor" --ext_port $agent_port --multi-turn --stream
```
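The supervisor agent also speaks the OpenAI-compatible chat API (the UI connects to it at `http://${HOST_IP}:9090/v1`), so a raw request can be composed directly. This is a hedged sketch: the `/v1/chat/completions` route is the standard OpenAI path, and "opea_agent" is the arbitrary model id example used in the UI settings, not a fixed name:

```shell
# Compose an OpenAI-style chat completion request for the supervisor agent.
AGENT_URL="http://${HOST_IP:-localhost}:9090/v1/chat/completions"
BODY='{"model": "opea_agent", "messages": [{"role": "user", "content": "Summarize the latest ingested filing."}], "stream": true}'
echo "$AGENT_URL"

# To send it once the agent is up:
# curl -s "$AGENT_URL" -H "Content-Type: application/json" -d "$BODY"
```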
## Accessing the User Interface (UI)

The UI microservice is launched in the previous step along with the other microservices.
To access the UI, open a web browser to `http://${HOST_IP}:5175`. Note that the `HOST_IP` here is the host IP of the UI microservice.
1. Create an admin account with a random value.

2. Enter the endpoints in the `Connections` settings.

   First, click on the user icon in the upper right corner to open `Settings`. Click on `Admin Settings`, then on `Connections`.

   Then, enter the supervisor agent endpoint in the `OpenAI API` section: `http://${HOST_IP}:9090/v1`. Enter the API key as "empty". Add an arbitrary model id in `Model IDs`, for example, "opea_agent". The `HOST_IP` here should be the host IP of the agent microservice.

   Then, enter the dataprep endpoint in the `iCloud File API` section. You first need to enable `iCloud File API` by clicking the toggle on the right to turn it green, then enter the endpoint URL, for example, `http://${HOST_IP}:6007/v1`. The `HOST_IP` here should be the host IP of the dataprep microservice.

   You should see a screen like the screenshot below when the settings are done.

   
3. Upload documents with the UI.

   Click on the `Workplace` icon in the top left corner, then click `Knowledge`. Click on the "+" sign to the right of `iCloud Knowledge`. You can paste a URL on the left-hand side of the pop-up window, or upload a local file by clicking on the cloud icon on the right-hand side. Then click on the `Upload Confirm` button. Wait until processing is done; the pop-up window will close on its own when the data ingestion is complete. See the screenshot below.

   Note: the data ingestion may take a few minutes depending on the length of the document. Please wait patiently and do not close the pop-up window.

   
4. Test the agent with the UI.

   After the settings are done and documents are ingested, you can start asking the agent questions. Click on the `New Chat` icon in the top left corner and type your questions in the text box in the middle of the UI.

   The UI will stream the agent's response tokens. Expand the `Thinking` tab to see the agent's reasoning process. After the agent makes tool calls, you will also see the tool output once the tool returns it to the agent. Note: it may take a while to get the tool output back if the tool execution takes time.

   