FinanceAgent - enable on Xeon, remote endpoint, and refactor tests #2032

Open · wants to merge 40 commits into main

Conversation

alexsin368 (Collaborator)
Description

  • Enable support for running OpenAI models on Xeon
  • Enable remote endpoints for the LLMs on the agents only (docsum and dataprep will still run the LLM locally)
  • Reorganize set_env.sh environment variables
  • Refactor tests: since the Xeon test steps are similar to Gaudi's, break the tests into reusable step scripts to make it easier to add new test steps or new hardware-specific tests

Issues

#1973

Type of change

Select the type of change from the list below. Please delete options that are not relevant.

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds new functionality)
  • Breaking change (fix or feature that would break existing design and interface)
  • Others (enhancement, documentation, validation, etc.)

Dependencies

None

Tests

Added new test script.
Verified FinanceAgent is running on the UI.

Signed-off-by: alexsin368 <alex.sin@intel.com>

github-actions bot commented Jun 4, 2025

Dependency Review

✅ No vulnerabilities or license issues found.

Scanned Files

None

pre-commit-ci bot and others added 6 commits July 3, 2025 16:40
…, fix python cmd

Signed-off-by: alexsin368 <alex.sin@intel.com>
@alexsin368 alexsin368 force-pushed the finance-agent-remote-endpoint-new branch from 90d36d5 to f56df7c Compare July 3, 2025 23:40
Signed-off-by: alexsin368 <alex.sin@intel.com>
@Copilot Copilot AI review requested due to automatic review settings August 5, 2025 22:32
@Copilot Copilot AI (Contributor) left a comment

Pull Request Overview

Enables FinanceAgent support for Intel® Xeon® processors with OpenAI models and remote endpoints, while reorganizing the test structure for better maintainability across hardware platforms.

  • Adds Xeon support with OpenAI model integration and remote endpoint configuration
  • Refactors test scripts into modular step-based components for reusability
  • Reorganizes environment variables and documentation for clearer configuration

Reviewed Changes

Copilot reviewed 13 out of 13 changed files in this pull request and generated 6 comments.

| File | Description |
| --- | --- |
| test_compose_on_gaudi.sh | Refactored to use modular step scripts instead of inline functions |
| step*.sh | New modular test step scripts extracted from the original Gaudi test |
| _test_compose_openai_on_xeon.sh | New test script for the Xeon platform using OpenAI models |
| set_env.sh (gaudi/xeon) | Environment configuration files with proper variable validation |
| compose_*.yaml (xeon) | Docker compose files for Xeon deployment with OpenAI and remote endpoints |
| README.md files | Updated documentation for Xeon support and corrected typos |

@alexsin368 (Collaborator, Author) left a comment

Addressed all comments.

@alexsin368
Copy link
Collaborator Author

@letonghan @XinyuYe-Intel may I get your review on this PR before the code freeze this Friday?

@joshuayao joshuayao added the feature New feature or request label Aug 13, 2025
@joshuayao joshuayao added this to OPEA Aug 13, 2025
@joshuayao joshuayao removed the feature New feature or request label Aug 13, 2025
@joshuayao joshuayao moved this to In review in OPEA Aug 13, 2025
@letonghan (Collaborator) left a comment

LGTM

@alexsin368 alexsin368 requested a review from amberjain1 August 13, 2025 19:44
@joshuayao joshuayao removed this from the v1.4 milestone Aug 14, 2025

Set the following environment variables.

- `REMOTE_ENDPOINT` is the HTTPS endpoint of the remote server hosting the model of choice (e.g. https://api.example.com). **Note:** If the model API does not use LiteLLM, the second part of the model card needs to be appended to the URL. For example, if the model card is `meta-llama/Llama-3.3-70B-Instruct`, set `REMOTE_ENDPOINT` to https://api.example.com/Llama-3.3-70B-Instruct.
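The appending rule can be sketched in shell; the base URL and model card below are illustrative, not taken from this PR:

```shell
# Illustrative values; substitute your own server and model card.
BASE_URL="https://api.example.com"
MODEL_CARD="meta-llama/Llama-3.3-70B-Instruct"

# Non-LiteLLM API: append the second part of the model card to the URL.
MODEL_SUFFIX="${MODEL_CARD#*/}"   # strips everything up to the first "/"
export REMOTE_ENDPOINT="${BASE_URL}/${MODEL_SUFFIX}"
echo "$REMOTE_ENDPOINT"           # https://api.example.com/Llama-3.3-70B-Instruct
```

For a LiteLLM-style API, `REMOTE_ENDPOINT` would instead be just `$BASE_URL`.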
Collaborator:

Do we have instructions on how users can figure out whether an API uses LiteLLM or not?


Supervisor Agent multi-turn:

Collaborator: might be good to still have the port export here.

Suggested change:

```bash
export agent_port="9090"
```

@@ -0,0 +1,18 @@
# Copyright (C) 2025 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

Collaborator: try to use a common environment and remove the section below?

Suggested change:

```yaml
x-common-environment: &common-env
  llm_endpoint_url: ${REMOTE_ENDPOINT}
  api_key: ${OPENAI_API_KEY}
```


On `services:` → `worker-finqa-agent:` → `environment:`, Collaborator suggested:

```yaml
    environment:
      <<: *common-env
```

On `worker-research-agent:` → `environment:`, Collaborator suggested:

```yaml
    environment:
      <<: *common-env
```

On `supervisor-react-agent:` → `environment:`, Collaborator suggested:

```yaml
    environment:
      <<: *common-env
```

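Taken together, the three suggestions above would hoist the shared settings into a single anchor. A sketch of the resulting compose fragment (service names from this PR; all other keys elided):

```yaml
x-common-environment: &common-env
  llm_endpoint_url: ${REMOTE_ENDPOINT}
  api_key: ${OPENAI_API_KEY}

services:
  worker-finqa-agent:
    environment:
      <<: *common-env
  worker-research-agent:
    environment:
      <<: *common-env
  supervisor-react-agent:
    environment:
      <<: *common-env
```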

```bash
echo "=================== #2 Start services ===================="
start_all_services
echo "=================== #2 Endpoints for services started ===================="
bash step2_start_services.sh gaudi_vllm
```

Collaborator: you will probably need to change the file names when we add a new step. I prefer not having stepX in the file names, but it is OK to keep it this way.
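The step-script pattern under discussion can be sketched as follows. The stub here stands in for the real step2_start_services.sh shipped in this PR, and the mode names are illustrative; the point is that one runner can serve both Gaudi and Xeon by passing a different mode:

```shell
#!/usr/bin/env bash
set -e

# Mode selects the hardware/model backend, e.g. gaudi_vllm vs. an OpenAI-on-Xeon mode.
MODE="${1:-gaudi_vllm}"

# Stub step script for illustration only; the PR ships real step*.sh files.
cat > step2_start_services.sh <<'EOF'
echo "=================== #2 Start services ($1) ===================="
EOF

# Each stage is a separate script, so new hardware targets reuse the same steps.
bash step2_start_services.sh "${MODE}"
```

Running the sketch with the default mode prints the banner with `gaudi_vllm` substituted for `$1`.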

Labels: none yet
Projects: OPEA (Status: In review)
7 participants