
Commit fd7267a

pre-commit-ci[bot] authored and alexsin368 committed

[pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

1 parent 7fdff42, commit fd7267a

File tree

1 file changed: +5 -5 lines changed
  • CodeTrans/docker_compose/intel/cpu/xeon

CodeTrans/docker_compose/intel/cpu/xeon/README.md

Lines changed: 5 additions & 5 deletions
@@ -137,15 +137,15 @@ Key parameters are configured via environment variables set before running `docker compose up`

 In the context of deploying a CodeTrans pipeline on an Intel® Xeon® platform, we can pick and choose different large language model serving frameworks. The table below outlines the various configurations that are available as part of the application. These configurations can be used as templates and can be extended to different components available in [GenAIComps](https://github.com/opea-project/GenAIComps.git).

-| File | Description |
-| -------------------------------------- | ----------------------------------------------------------------------------------------- |
-| [compose.yaml](./compose.yaml) | Default compose file using vllm as serving framework and redis as vector database |
-| [compose_tgi.yaml](./compose_tgi.yaml) | The LLM serving framework is TGI. All other configurations remain the same as the default |
+| File | Description |
+| -------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| [compose.yaml](./compose.yaml) | Default compose file using vllm as serving framework and redis as vector database |
+| [compose_tgi.yaml](./compose_tgi.yaml) | The LLM serving framework is TGI. All other configurations remain the same as the default |
 | [compose_remote.yaml](./compose_remote.yaml) | The LLM used is hosted on a remote server and an endpoint is used to access this model. Additional environment variables need to be set before running. See [instructions](#running-llm-models-deployed-on-remote-servers-with-compose_remoteyaml) below. |
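
The table quoted above lists the alternative compose files; purely as an illustration that is not part of this commit, launching the stack with one of them would typically look like the sketch below. The `-f` flag, detached mode, and the working directory are assumptions based on standard Docker Compose usage and the file path shown in this diff, not something the commit specifies.

```bash
# Hypothetical example: deploy the TGI variant instead of the default vLLM one.
# Assumes the environment variables referenced around line 137 of the README
# have already been exported in this shell.
cd CodeTrans/docker_compose/intel/cpu/xeon
docker compose -f compose_tgi.yaml up -d
```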

 ### Running LLM models deployed on remote servers with `compose_remote.yaml`

-To run the LLM model on a remote server, the environment variable `LLM_MODEL_ID` may need to be overwritten, and two new environment variables `REMOTE_ENDPOINT` and `OPENAI_API_KEY` need to be set. An example endpoint is https://api.inference.example.com, but the actual value will depend on how it is set up on the remote server. The key is used to access the remote server.
+To run the LLM model on a remote server, the environment variable `LLM_MODEL_ID` may need to be overwritten, and two new environment variables `REMOTE_ENDPOINT` and `OPENAI_API_KEY` need to be set. An example endpoint is https://api.inference.example.com, but the actual value will depend on how it is set up on the remote server. The key is used to access the remote server.

 ```bash
 export LLM_MODEL_ID=<name-of-llm-model-card>
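# --- Illustrative continuation, not part of the hunk quoted above ---
# The README paragraph above also names REMOTE_ENDPOINT and OPENAI_API_KEY as
# variables that must be set for compose_remote.yaml; the values below are
# placeholders (the endpoint reuses the example given in that paragraph).
export REMOTE_ENDPOINT=https://api.inference.example.com
export OPENAI_API_KEY=<your-api-key>
```

With those variables exported, the remote-endpoint variant would presumably be brought up with `docker compose -f compose_remote.yaml up -d`, mirroring the other compose files listed in the table.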
