cd GenAIExamples/DocSum/docker_compose/amd/gpu/rocm
```

Check out a released version, such as v1.3:

```
git checkout v1.3
```
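
If you are unsure which release tags exist, you can list them first (standard git, no assumptions beyond the repository cloned above):

```bash
# List release tags, highest version first
git tag --sort=-v:refname
```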
### Generate a HuggingFace Access Token

Some HuggingFace resources, such as some models, are only accessible if you have an access token.
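
After generating the token, it is typically exported in the shell before sourcing the environment script. The variable name below is an assumption; check the `set_env_*.sh` scripts for the exact name they expect:

```bash
# Assumed variable name; verify against the set_env_*.sh script you use
export HUGGINGFACEHUB_API_TOKEN="<your HuggingFace access token>"
```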
### Configure the Deployment Environment

To set up the environment variables for deploying the DocSum services, set the parameters specific to your deployment environment and then source the appropriate `set_env_*.sh` script in this directory:

- for vLLM: `set_env_vllm.sh`
- for TGI: `set_env.sh`

Set the values of the following variables:

- **HOST_IP, HOST_IP_EXTERNAL** - these variables configure the name/address of the service in the operating system environment, so the application services can interact with each other and with the outside world.

  If your server uses only an internal address and is not accessible from the Internet, both variables take the same value: the server's internal name/address.

  If your server uses only an external, Internet-accessible address, both variables also take the same value: the server's external name/address.

  If your server is on an internal network with an internal address but is reachable from the Internet through a proxy/firewall/load balancer, set `HOST_IP` to the server's internal name/address and `HOST_IP_EXTERNAL` to the external name/address of the proxy/firewall/load balancer in front of the server.

  Set these values in the appropriate `set_env_*.sh` file.

- **Variables with names like `*_PORT`** - these variables set the IP port numbers for establishing network connections to the application services.

  The values shipped in `set_env.sh` and `set_env_vllm.sh` are the ones used for development and testing of the application, configured for the environment in which that development was done. Adjust them according to the network access rules of your server, and make sure they do not overlap with IP ports already used by other applications (see the check sketched below).
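
For example, to check that a port you plan to assign (here 8008, the LLM serving port used by these deployments) is not already taken on the host (`ss` ships with iproute2 on most Linux distributions):

```bash
# Prints nothing if the port is free
ss -ltn | grep ':8008'
```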

Set the variables in the operating system environment:

```bash
source ./set_env_*.sh # replace the script name with the appropriate one
```
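
As a minimal sketch of the simplest case (an internal-only server, so both host variables share one value; the interface-detection command is illustrative, not part of the project scripts):

```bash
# Illustrative: take the first address reported for this host
export HOST_IP=$(hostname -I | awk '{print $1}')
# No proxy/firewall/load balancer in front, so the external value matches
export HOST_IP_EXTERNAL=${HOST_IP}

source ./set_env.sh # TGI variant; use set_env_vllm.sh for vLLM
```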

Consult the section on [DocSum Service configuration](#docsum-configuration) for information on how service-specific configuration parameters affect deployments.
### Deploy the Services Using Docker Compose

To deploy the DocSum services, execute the `docker compose up` command with the appropriate arguments. For a default deployment with TGI, execute the command below; it uses the `compose.yaml` file.
```bash
cd docker_compose/amd/gpu/rocm
# if using TGI
docker compose -f compose.yaml up -d
# if using vLLM
# docker compose -f compose_vllm.yaml up -d
```
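
The first start can take a while because model weights are downloaded. One way to follow progress is to watch the LLM serving container's logs (container names match the `docker ps` output shown later in this guide):

```bash
# Follow the serving container logs until the model finishes loading
docker logs -f docsum-tgi-service    # TGI deployment
# docker logs -f docsum-vllm-service # vLLM deployment
```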

To enable GPU support for AMD GPUs, the following configuration is added to the Docker Compose files:

- `compose_vllm.yaml` - for the vLLM-based application
- `compose.yaml` - for the TGI-based application

```yaml
shm_size: 1g
devices:
  - /dev/kfd:/dev/kfd
  - /dev/dri:/dev/dri
cap_add:
  - SYS_PTRACE
group_add:
  - video
security_opt:
  - seccomp:unconfined
```

This configuration forwards all available GPUs to the container. To use a specific GPU, specify its `cardN` and `renderDN` device nodes. For example:

```yaml
shm_size: 1g
devices:
  - /dev/kfd:/dev/kfd
  - /dev/dri/card0:/dev/dri/card0
  - /dev/dri/renderD128:/dev/dri/renderD128
cap_add:
  - SYS_PTRACE
group_add:
  - video
security_opt:
  - seccomp:unconfined
```

**How to Identify GPU Device IDs:**

Use AMD GPU driver utilities to determine the correct `cardN` and `renderDN` IDs for your GPU, for example as sketched below.
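
A minimal sketch of that lookup, assuming the ROCm utilities are installed (device numbering varies between systems):

```bash
# DRM device nodes; each GPU exposes a cardN / renderDN pair
ls -l /dev/dri

# Map the nodes to PCI bus addresses to tell multiple GPUs apart
ls -l /dev/dri/by-path

# Cross-check against the PCI bus reported by the ROCm driver
rocm-smi --showbus
```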

> **Note**: Developers should build the Docker images from source when:
>
> - Developing off the git main branch (as the container's ports in the repo may differ from the published Docker image).
> - The Docker image cannot be downloaded.
> - A specific version of the Docker image is needed.
Please refer to the table below to build different microservices from source:
78964d0c1hg5 ghcr.io/huggingface/text-generation-inference:2.4.1-rocm "/tgi-entrypoint.sh" 5 minutes ago Up 5 minutes (healthy) 0.0.0.0:8008->80/tcp, [::]:8008->80/tcp docsum-tgi-service
```
If vLLM is used:
```
CONTAINER ID   IMAGE                          COMMAND                  CREATED         STATUS                   PORTS                                       NAMES
748f577b3c78   opea/whisper:latest            "python whisper_s…"      5 minutes ago   Up About a minute        0.0.0.0:7066->7066/tcp, :::7066->7066/tcp   whisper-service
4eq8b7034fd9   opea/docsum-gradio-ui:latest   "docker-entrypoint.s…"   5 minutes ago   Up About a minute        0.0.0.0:5173->5173/tcp, :::5173->5173/tcp   docsum-ui-server
fds3dd5b9fd8   opea/docsum:latest             "python docsum.py"       5 minutes ago   Up About a minute        0.0.0.0:8888->8888/tcp, :::8888->8888/tcp   docsum-backend-server
78fsd6fabfs7   opea/llm-docsum:latest         "bash entrypoint.sh"     5 minutes ago   Up About a minute        0.0.0.0:9000->9000/tcp, :::9000->9000/tcp   docsum-llm-server
78964d0c1hg5   opea/vllm-rocm:latest          "python3 /workspace/…"   5 minutes ago   Up 5 minutes (healthy)   0.0.0.0:8008->80/tcp, [::]:8008->80/tcp     docsum-vllm-service
```
### Test the Pipeline
Once the DocSum services are running, test the pipeline using the following command:
```bash
curl -X POST http://${HOST_IP}:${DOCSUM_BACKEND_SERVER_PORT}/v1/docsum \
  -H "Content-Type: application/json" \
  -d '{"type": "text", "messages": "Text Embeddings Inference (TEI) is a toolkit for deploying and serving open source text embeddings and sequence classification models. TEI enables high-performance extraction for the most popular models, including FlagEmbedding, Ember, GTE and E5."}'
```

**Note**: The value of _HOST_IP_ was set using the `set_env_*.sh` script and can be found in the _.env_ file.
### Cleanup the Deployment
To stop the containers associated with the deployment, execute the following command:
```bash
# if using TGI
docker compose -f compose.yaml down
# if using vLLM
# docker compose -f compose_vllm.yaml down
```
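
To confirm that the deployment's containers are gone afterwards (plain Docker CLI; the name filters match the container names used above):

```bash
# Should print only the header once cleanup has finished
docker ps -a --filter "name=docsum" --filter "name=whisper"
```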
All the DocSum containers will be stopped and removed when the `down` command completes.
There are also some customized usages, for example passing the input as multipart form data:
-F "messages=Text Embeddings Inference (TEI) is a toolkit for deploying and serving open source text embeddings and sequence classification models. TEI enables high-performance extraction for the most popular models, including FlagEmbedding, Ember, GTE and E5." \
-F "messages=convert your video to base64 data type" \
If you want to deal with long context, you can set the following parameters and select a suitable summary type.
"summary_type"is set to be "auto" by default, in this mode we will check input token length, if it exceed `MAX_INPUT_TOKENS`, `summary_type` will automatically be set to `refine` mode, otherwise will be set to `stuff` mode.
In `stuff` mode, the LLM generates the summary from the complete input text. Carefully set `MAX_INPUT_TOKENS` and `MAX_TOTAL_TOKENS` according to your model and device memory; otherwise long inputs may exceed the LLM context limit and raise an error.
`truncate` mode truncates the input text and keeps only the first chunk, whose length equals `min(MAX_TOTAL_TOKENS - input.max_tokens - 50, MAX_INPUT_TOKENS)`.
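
For illustration, a request that forces `refine` mode could look like the sketch below. It reuses the endpoint and variables from the test above; whether your build accepts `summary_type` in the JSON payload should be verified, so treat that field as an assumption:

```bash
# Assumed payload field: summary_type selects the summarization strategy
curl -X POST http://${HOST_IP}:${DOCSUM_BACKEND_SERVER_PORT}/v1/docsum \
  -H "Content-Type: application/json" \
  -d '{"type": "text", "messages": "<long document text>", "summary_type": "refine"}'
```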