
Commit ef9290f

DocSum - refactoring README.md for deploy application on ROCm (#1881)
Signed-off-by: Chingis Yundunov <YundunovCN@sibedge.com>
1 parent 3b0bcb8 commit ef9290f

1 file changed: DocSum/docker_compose/amd/gpu/rocm/README.md (+110, −30 lines)
@@ -23,17 +23,17 @@ This section describes how to quickly deploy and test the DocSum service manually

### Access the Code

Clone the GenAIExamples repository and access the DocSum AMD GPU platform Docker Compose files and supporting scripts:

```bash
git clone https://github.com/opea-project/GenAIExamples.git
cd GenAIExamples/DocSum/docker_compose/amd/gpu/rocm
```

Check out a released version, such as v1.3:

```bash
git checkout v1.3
```

### Generate a HuggingFace Access Token
@@ -42,33 +42,96 @@ Some HuggingFace resources, such as some models, are only accessible if you have an access token

### Configure the Deployment Environment

To set up environment variables for deploying the DocSum services, set the parameters specific to your deployment environment and source the appropriate `set_env_*.sh` script in this directory:

- for vLLM: `set_env_vllm.sh`
- for TGI: `set_env.sh`

Set the values of the variables:

- **HOST_IP, HOST_IP_EXTERNAL** - These variables configure the name/address under which the application services reach each other and the outside world.

  If your server uses only an internal address and is not accessible from the Internet, the two variables take the same value: the server's internal name/address.

  If your server uses only an external, Internet-accessible address, the two variables again take the same value: the server's external name/address.

  If your server sits on an internal network with an internal address but is reachable from the Internet via a proxy/firewall/load balancer, set HOST_IP to the internal name/address of the server and HOST_IP_EXTERNAL to the external name/address of the proxy/firewall/load balancer in front of it.

  Set these values in the `set_env_*.sh` file.

- **Variables with names like `*_PORT`** - These variables set the IP port numbers for establishing network connections to the application services. The values shown in `set_env.sh` and `set_env_vllm.sh` were used during development and testing of the application and are tuned to that environment. Configure them according to the network access rules of your server, and make sure they do not overlap with IP ports already in use by other applications; a quick check is sketched below.
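As a quick sanity check before editing the script, you can print the host's internal address and confirm that a candidate port is free. This is a minimal sketch assuming a typical Linux host with `iproute2` installed; the port number is only an example:

```bash
# Print the host's internal IP address (first address reported)
hostname -I | awk '{print $1}'

# Confirm that an example port (8008) is not already in use
ss -ltn | grep -q ':8008 ' && echo "port 8008 is busy" || echo "port 8008 is free"
```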
Setting the variables in the operating system environment:

```bash
export HUGGINGFACEHUB_API_TOKEN="Your_HuggingFace_API_Token"
source ./set_env_*.sh # replace the script name with the appropriate one
```

Consult the section on [DocSum Service configuration](#docsum-configuration) for information on how service-specific configuration parameters affect deployments.
### Deploy the Services Using Docker Compose

To deploy the DocSum services, execute the `docker compose up` command with the appropriate arguments. For a default deployment with TGI, execute the command below; it uses the `compose.yaml` file.

```bash
cd docker_compose/amd/gpu/rocm
# if using TGI
docker compose -f compose.yaml up -d
# if using vLLM
# docker compose -f compose_vllm.yaml up -d
```
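While the containers start, progress can be followed with standard Docker Compose commands; this example assumes the TGI deployment (substitute `compose_vllm.yaml` for vLLM):

```bash
docker compose -f compose.yaml ps                   # service status and health
docker compose -f compose.yaml logs -f --tail=100   # stream recent logs
```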
To enable GPU support for AMD GPUs, the following configuration is added to the Docker Compose file:

- `compose_vllm.yaml` - for the vLLM-based application
- `compose.yaml` - for the TGI-based application

```yaml
shm_size: 1g
devices:
  - /dev/kfd:/dev/kfd
  - /dev/dri:/dev/dri
cap_add:
  - SYS_PTRACE
group_add:
  - video
security_opt:
  - seccomp:unconfined
```
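As background (general ROCm container practice rather than something this README spells out): `/dev/kfd` is the AMD KFD compute interface, the `/dev/dri` nodes expose the GPUs, membership in the `video` group grants access to those device files on most distributions, and the enlarged `shm_size` gives the serving engine extra shared memory for exchanging tensors between processes.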
This configuration forwards all available GPUs to the container. To use a specific GPU, specify its `cardN` and `renderDN` device IDs. For example:

```yaml
shm_size: 1g
devices:
  - /dev/kfd:/dev/kfd
  - /dev/dri/card0:/dev/dri/card0
  - /dev/dri/renderD128:/dev/dri/renderD128
cap_add:
  - SYS_PTRACE
group_add:
  - video
security_opt:
  - seccomp:unconfined
```

**How to Identify GPU Device IDs:**

Use AMD GPU driver utilities to determine the correct `cardN` and `renderDN` IDs for your GPU.
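For example, on a host with the ROCm driver installed, the available device nodes and GPUs can be listed as follows (a sketch; exact output depends on driver version and hardware):

```bash
ls -l /dev/dri     # lists the cardN and renderDN device nodes
rocm-smi           # summarizes the detected GPUs
rocminfo           # prints detailed per-agent information for matching GPUs to IDs
```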
> **Note**: Developers should build the Docker images from source when:
>
> - Developing off the git main branch (as the container's ports in the repo may be different from the published Docker image).
> - The published Docker image cannot be downloaded.
> - A specific version of the Docker image is required.
Please refer to the table below to build different microservices from source:

| Microservice | Deployment Guide |
| ------------ | ------------------------------------------------------------------------------------------------------------------------------------- |
| whisper      | [whisper build guide](https://github.com/opea-project/GenAIComps/tree/main/comps/third_parties/whisper/src) |
| TGI          | [TGI project](https://github.com/huggingface/text-generation-inference.git) |
| vLLM         | [vLLM build guide](https://github.com/opea-project/GenAIComps/tree/main/comps/third_parties/vllm#build-docker) |
| llm-docsum   | [LLM-DocSum build guide](https://github.com/opea-project/GenAIComps/tree/main/comps/llms/src/doc-summarization#12-build-docker-image) |
| MegaService  | [MegaService build guide](../../../../README_miscellaneous.md#build-megaservice-docker-image) |
@@ -84,6 +147,8 @@ docker ps -a

For the default deployment, the following 5 containers should have started.

If TGI is used:

```
CONTAINER ID   IMAGE                                                       COMMAND                  CREATED         STATUS                   PORTS                                       NAMES
748f577b3c78   opea/whisper:latest                                         "python whisper_s…"      5 minutes ago   Up About a minute        0.0.0.0:7066->7066/tcp, :::7066->7066/tcp   whisper-service
4eq8b7034fd9   opea/docsum-gradio-ui:latest                                "docker-entrypoint.s…"   5 minutes ago   Up About a minute        0.0.0.0:5173->5173/tcp, :::5173->5173/tcp   docsum-ui-server
fds3dd5b9fd8   opea/docsum:latest                                          "python docsum.py"       5 minutes ago   Up About a minute        0.0.0.0:8888->8888/tcp, :::8888->8888/tcp   docsum-backend-server
78fsd6fabfs7   opea/llm-docsum:latest                                      "bash entrypoint.sh"     5 minutes ago   Up About a minute        0.0.0.0:9000->9000/tcp, :::9000->9000/tcp   docsum-llm-server
78964d0c1hg5   ghcr.io/huggingface/text-generation-inference:2.4.1-rocm   "/tgi-entrypoint.sh"     5 minutes ago   Up 5 minutes (healthy)   0.0.0.0:8008->80/tcp, [::]:8008->80/tcp     docsum-tgi-service
```

If vLLM is used:

```
CONTAINER ID   IMAGE                          COMMAND                  CREATED         STATUS                   PORTS                                       NAMES
748f577b3c78   opea/whisper:latest            "python whisper_s…"      5 minutes ago   Up About a minute        0.0.0.0:7066->7066/tcp, :::7066->7066/tcp   whisper-service
4eq8b7034fd9   opea/docsum-gradio-ui:latest   "docker-entrypoint.s…"   5 minutes ago   Up About a minute        0.0.0.0:5173->5173/tcp, :::5173->5173/tcp   docsum-ui-server
fds3dd5b9fd8   opea/docsum:latest             "python docsum.py"       5 minutes ago   Up About a minute        0.0.0.0:8888->8888/tcp, :::8888->8888/tcp   docsum-backend-server
78fsd6fabfs7   opea/llm-docsum:latest         "bash entrypoint.sh"     5 minutes ago   Up About a minute        0.0.0.0:9000->9000/tcp, :::9000->9000/tcp   docsum-llm-server
78964d0c1hg5   opea/vllm-rocm:latest          "python3 /workspace/…"   5 minutes ago   Up 5 minutes (healthy)   0.0.0.0:8008->80/tcp, [::]:8008->80/tcp     docsum-vllm-service
```
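As an additional check, both serving backends (TGI and vLLM) expose a `/health` endpoint, so readiness can be probed directly; this assumes the default `8008` port mapping shown in the listings above:

```bash
curl -s http://${HOST_IP}:8008/health && echo "LLM serving endpoint is up"
```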
### Test the Pipeline

Once the DocSum services are running, test the pipeline using the following command:

```bash
curl -X POST http://${HOST_IP}:${DOCSUM_BACKEND_SERVER_PORT}/v1/docsum \
  -H "Content-Type: application/json" \
  -d '{"type": "text", "messages": "Text Embeddings Inference (TEI) is a toolkit for deploying and serving open source text embeddings and sequence classification models. TEI enables high-performance extraction for the most popular models, including FlagEmbedding, Ember, GTE and E5."}'
```
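For readability, the response can be piped through `jq` if it is installed (assuming a JSON response, which is what a non-streaming request returns):

```bash
curl -s -X POST http://${HOST_IP}:${DOCSUM_BACKEND_SERVER_PORT}/v1/docsum \
  -H "Content-Type: application/json" \
  -d '{"type": "text", "messages": "Text Embeddings Inference (TEI) is a toolkit for serving open source text embedding models."}' | jq .
```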
**Note**: The value of _HOST_IP_ was set using the _set_env\*.sh_ script and can be found in the _.env_ file.
107183

108184
### Cleanup the Deployment
109185

110186
To stop the containers associated with the deployment, execute the following command:
111187

112-
```
188+
```bash
189+
# if used TGI
113190
docker compose -f compose.yaml down
191+
# if used vLLM
192+
# docker compose -f compose_vllm.yaml down
193+
114194
```
115195

116196
All the DocSum containers will be stopped and then removed on completion of the "down" command.
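If you also want to remove any named volumes declared by the compose file (for example, cached model data), Compose provides a `--volumes` flag; use it with care, since removed caches will have to be downloaded again:

```bash
docker compose -f compose.yaml down --volumes
```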
@@ -132,7 +212,7 @@ There are also some customized usage options.

```bash
# form input. Use English mode (default).
curl http://${HOST_IP}:${DOCSUM_BACKEND_SERVER_PORT}/v1/docsum \
  -H "Content-Type: multipart/form-data" \
  -F "type=text" \
  -F "messages=Text Embeddings Inference (TEI) is a toolkit for deploying and serving open source text embeddings and sequence classification models. TEI enables high-performance extraction for the most popular models, including FlagEmbedding, Ember, GTE and E5." \
```
@@ -141,7 +221,7 @@

```bash
  -F "stream=True"

# Use Chinese mode.
curl http://${HOST_IP}:${DOCSUM_BACKEND_SERVER_PORT}/v1/docsum \
  -H "Content-Type: multipart/form-data" \
  -F "type=text" \
  -F "messages=2024年9月26日,北京——今日,英特尔正式发布英特尔® 至强® 6性能核处理器(代号Granite Rapids),为AI、数据分析、科学计算等计算密集型业务提供卓越性能。" \
```
@@ -150,7 +230,7 @@

```bash
  -F "stream=True"

# Upload file
curl http://${HOST_IP}:${DOCSUM_BACKEND_SERVER_PORT}/v1/docsum \
  -H "Content-Type: multipart/form-data" \
  -F "type=text" \
  -F "messages=" \
```
@@ -166,11 +246,11 @@

Audio:

```bash
curl -X POST http://${HOST_IP}:${DOCSUM_BACKEND_SERVER_PORT}/v1/docsum \
  -H "Content-Type: application/json" \
  -d '{"type": "audio", "messages": "UklGRigAAABXQVZFZm10IBIAAAABAAEARKwAAIhYAQACABAAAABkYXRhAgAAAAEA"}'

curl http://${HOST_IP}:${DOCSUM_BACKEND_SERVER_PORT}/v1/docsum \
  -H "Content-Type: multipart/form-data" \
  -F "type=audio" \
  -F "messages=UklGRigAAABXQVZFZm10IBIAAAABAAEARKwAAIhYAQACABAAAABkYXRhAgAAAAEA" \
```
@@ -182,11 +262,11 @@

Video:

```bash
curl -X POST http://${HOST_IP}:${DOCSUM_BACKEND_SERVER_PORT}/v1/docsum \
  -H "Content-Type: application/json" \
  -d '{"type": "video", "messages": "convert your video to base64 data type"}'

curl http://${HOST_IP}:${DOCSUM_BACKEND_SERVER_PORT}/v1/docsum \
  -H "Content-Type: multipart/form-data" \
  -F "type=video" \
  -F "messages=convert your video to base64 data type" \
```
@@ -208,7 +288,7 @@ If you want to deal with long context, you can set the following parameters and select a suitable summary_type

"summary_type" is set to "auto" by default. In this mode the input token length is checked: if it exceeds `MAX_INPUT_TOKENS`, `summary_type` is automatically set to `refine`; otherwise it is set to `stuff`.

```bash
curl http://${HOST_IP}:${DOCSUM_BACKEND_SERVER_PORT}/v1/docsum \
  -H "Content-Type: multipart/form-data" \
  -F "type=text" \
  -F "messages=" \
```
@@ -223,7 +303,7 @@

In this mode the LLM generates the summary from the complete input text, so set `MAX_INPUT_TOKENS` and `MAX_TOTAL_TOKENS` carefully according to your model and device memory; otherwise long inputs may exceed the LLM context limit and raise an error.

```bash
curl http://${HOST_IP}:${DOCSUM_BACKEND_SERVER_PORT}/v1/docsum \
  -H "Content-Type: multipart/form-data" \
  -F "type=text" \
  -F "messages=" \
```
@@ -238,7 +318,7 @@

Truncate mode will truncate the input text and keep only the first chunk, whose length is equal to `min(MAX_TOTAL_TOKENS - input.max_tokens - 50, MAX_INPUT_TOKENS)` (a worked example follows the command below).

```bash
curl http://${HOST_IP}:${DOCSUM_BACKEND_SERVER_PORT}/v1/docsum \
  -H "Content-Type: multipart/form-data" \
  -F "type=text" \
  -F "messages=" \
```
@@ -255,7 +335,7 @@ Map_reduce mode will split the inputs into multiple chunks, map each document to an individual summary, and then consolidate those summaries into a final summary

In this mode, the default `chunk_size` is set to `min(MAX_TOTAL_TOKENS - input.max_tokens - 50, MAX_INPUT_TOKENS)`.

```bash
curl http://${HOST_IP}:${DOCSUM_BACKEND_SERVER_PORT}/v1/docsum \
  -H "Content-Type: multipart/form-data" \
  -F "type=text" \
  -F "messages=" \
```
@@ -272,7 +352,7 @@ Refine mode will split the inputs into multiple chunks, generate a summary for the first chunk, and then refine the summary with each following chunk

In this mode, the default `chunk_size` is set to `min(MAX_TOTAL_TOKENS - 2 * input.max_tokens - 128, MAX_INPUT_TOKENS)`.

```bash
curl http://${HOST_IP}:${DOCSUM_BACKEND_SERVER_PORT}/v1/docsum \
  -H "Content-Type: multipart/form-data" \
  -F "type=text" \
  -F "messages=" \
```
@@ -288,7 +368,7 @@ Several UI options are provided. If you need to work with multimedia documents, use the Gradio UI

### Gradio UI

To access the UI, use the URL: http://${HOST_IP}:${DOCSUM_FRONTEND_PORT}

A page should open when you navigate to this address:

![UI start page](../../../../assets/img/ui-starting-page.png)
