Commit d894428

Authored by letonghan, committed with pre-commit-ci[bot]

Refine VideoQnA READMEs (opea-project#2179)

Signed-off-by: letonghan <letong.han@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: alexsin368 <alex.sin@intel.com>
1 parent 20c91b2 commit d894428

File tree

2 files changed (+39, -100 lines)


VideoQnA/README.md

Lines changed: 8 additions & 0 deletions
````diff
@@ -1,5 +1,13 @@
 # VideoQnA Application
 
+## Table of Contents
+
+- [Overview](#overview)
+- [Deploy VideoQnA Service](#deploy-videoqna-service)
+- [Validated Configurations](#validated-configurations)
+
+## Overview
+
 VideoQnA is a framework that retrieves videos based on a user prompt. It uses only the video embeddings to perform vector similarity search in Intel's VDMS vector database and performs all operations on Intel Xeon CPUs. The pipeline supports long-form videos and time-based search.
 
 VideoQnA is implemented on top of [GenAIComps](https://github.com/opea-project/GenAIComps), with the architecture flow chart shown below:
````

VideoQnA/docker_compose/intel/cpu/xeon/README.md

Lines changed: 31 additions & 100 deletions
````diff
@@ -4,47 +4,30 @@ This document outlines the deployment process for a videoqna application utilizi
 
 VideoQnA is a pipeline that retrieves videos based on a user prompt. It uses only the video embeddings to perform vector similarity search in Intel's VDMS vector database and performs all operations on Intel Xeon CPUs. The pipeline supports long-form videos and time-based search.
 
-## 🚀 Port used for the microservices
-
-```
-dataprep
-========
-Port 6007 - Open to 0.0.0.0/0
-
-vdms-vector-db
-===============
-Port 8001 - Open to 0.0.0.0/0
-
-embedding
-=========
-Port 6990 - Open to 0.0.0.0/0
-
-retriever
-=========
-Port 7000 - Open to 0.0.0.0/0
-
-reranking
-=========
-Port 8000 - Open to 0.0.0.0/0
-
-lvm video-llama
-===============
-Port 9009 - Open to 0.0.0.0/0
-
-lvm
-===
-Port 9399 - Open to 0.0.0.0/0
-
-videoqna-xeon-backend-server
-==========================
-Port 8888 - Open to 0.0.0.0/0
-
-videoqna-xeon-ui-server
-=====================
-Port 5173 - Open to 0.0.0.0/0
-```
-
-## 🚀 Build Docker Images
+## Table of Contents
+
+- [Port used for the microservices](#port-used-for-the-microservices)
+- [Build Docker Images](#build-docker-images)
+- [Start Microservices](#start-microservices)
+- [Validate Microservices](#validate-microservices)
+- [Launch the UI](#launch-the-ui)
+- [Clean Microservices](#clean-microservices)
+
+## Port used for the microservices
+
+| Service                      | Port |
+| ---------------------------- | ---- |
+| dataprep                     | 6007 |
+| vdms-vector-db               | 8001 |
+| embedding                    | 6990 |
+| retriever                    | 7000 |
+| reranking                    | 8000 |
+| lvm video-llama              | 9009 |
+| lvm                          | 9399 |
+| videoqna-xeon-backend-server | 8888 |
+| videoqna-xeon-ui-server      | 5173 |
+
+## Build Docker Images
 
 First of all, you need to build the Docker images locally and install the Python package.
 
````
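The port table introduced above maps one-to-one onto service base URLs. As an illustration only (not part of the repo), the mapping can be held in a shell associative array and printed as endpoints; the service names and ports come from the table, while `host_ip` is a placeholder:

```shell
#!/usr/bin/env bash
# Hypothetical helper: print the base URL of each VideoQnA microservice.
# Ports are taken from the table above; host_ip is a placeholder value.
host_ip="192.168.0.1"

declare -A service_ports=(
  [dataprep]=6007
  [vdms-vector-db]=8001
  [embedding]=6990
  [retriever]=7000
  [reranking]=8000
  ["lvm video-llama"]=9009
  [lvm]=9399
  [videoqna-xeon-backend-server]=8888
  [videoqna-xeon-ui-server]=5173
)

# Iteration order over associative-array keys is unspecified in bash.
for svc in "${!service_ports[@]}"; do
  echo "${svc}: http://${host_ip}:${service_ports[$svc]}"
done
```

This is purely a readability aid; the real deployment wires these ports through `compose.yaml`.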
````diff
@@ -115,7 +98,7 @@ Then run the command `docker images`, you will have the following 8 Docker Image
 1. `opea/videoqna:latest`
 1. `opea/videoqna-ui:latest`
 
-## 🚀 Start Microservices
+## Start Microservices
 
 ### Setup Environment Variables
 
````
````diff
@@ -125,77 +108,25 @@ Since the `compose.yaml` will consume some environment variables, you need to se
 
 > Change the `External_Public_IP` below with the actual IPv4 value
 
-```
+```bash
 export host_ip="External_Public_IP"
 ```
 
 **Export the value of your Huggingface API token to the `HF_TOKEN` environment variable**
 
 > Change the `Your_Huggingface_API_Token` below with your actual Huggingface API Token value
 
-```
+```bash
 export HF_TOKEN="Your_Huggingface_API_Token"
 ```
 
 **Append the value of the public IP address to the no_proxy list**
 
-```
+```bash
 export no_proxy="${your_no_proxy},${host_ip}"
 ```
 
-Then you can run below commands or `source set_env.sh` to set all the variables
-
-```bash
-export no_proxy=${your_no_proxy}
-export http_proxy=${your_http_proxy}
-export https_proxy=${your_http_proxy}
-
-export HF_TOKEN=${HF_TOKEN}
-export HF_TOKEN=${HF_TOKEN}
-
-export INDEX_NAME="mega-videoqna"
-export LLM_DOWNLOAD="True" # Set to "False" before redeploy LVM server to avoid model download
-export RERANK_COMPONENT_NAME="OPEA_VIDEO_RERANKING"
-export LVM_COMPONENT_NAME="OPEA_VIDEO_LLAMA_LVM"
-export EMBEDDING_COMPONENT_NAME="OPEA_CLIP_EMBEDDING"
-export USECLIP=1
-export LOGFLAG=True
-
-export EMBEDDING_SERVICE_HOST_IP=${host_ip}
-export LVM_SERVICE_HOST_IP=${host_ip}
-export MEGA_SERVICE_HOST_IP=${host_ip}
-export RERANK_SERVICE_HOST_IP=${host_ip}
-export RETRIEVER_SERVICE_HOST_IP=${host_ip}
-export VDMS_HOST=${host_ip}
-
-export BACKEND_PORT=8888
-export DATAPREP_PORT=6007
-export EMBEDDER_PORT=6990
-export MULTIMODAL_CLIP_EMBEDDER_PORT=6991
-export LVM_PORT=9399
-export RERANKING_PORT=8000
-export RETRIEVER_PORT=7000
-export UI_PORT=5173
-export VDMS_PORT=8001
-export VIDEO_LLAMA_PORT=9009
-
-export BACKEND_HEALTH_CHECK_ENDPOINT="http://${host_ip}:${BACKEND_PORT}/v1/health_check"
-export BACKEND_SERVICE_ENDPOINT="http://${host_ip}:${BACKEND_PORT}/v1/videoqna"
-export CLIP_EMBEDDING_ENDPOINT="http://${host_ip}:${MULTIMODAL_CLIP_EMBEDDER_PORT}"
-export DATAPREP_GET_FILE_ENDPOINT="http://${host_ip}:${DATAPREP_PORT}/v1/dataprep/get"
-export DATAPREP_GET_VIDEO_LIST_ENDPOINT="http://${host_ip}:${DATAPREP_PORT}/v1/dataprep/get_videos"
-export DATAPREP_INGEST_SERVICE_ENDPOINT="http://${host_ip}:${DATAPREP_PORT}/v1/dataprep/ingest"
-export EMBEDDING_ENDPOINT="http://${host_ip}:${EMBEDDER_PORT}/v1/embeddings"
-export FRONTEND_ENDPOINT="http://${host_ip}:${UI_PORT}/_stcore/health"
-export LVM_ENDPOINT="http://${host_ip}:${VIDEO_LLAMA_PORT}"
-export LVM_VIDEO_ENDPOINT="http://${host_ip}:${VIDEO_LLAMA_PORT}/generate"
-export RERANKING_ENDPOINT="http://${host_ip}:${RERANKING_PORT}/v1/reranking"
-export RETRIEVER_ENDPOINT="http://${host_ip}:${RETRIEVER_PORT}/v1/retrieval"
-export TEI_RERANKING_ENDPOINT="http://${host_ip}:${TEI_RERANKING_PORT}"
-export UI_ENDPOINT="http://${host_ip}:${UI_PORT}/_stcore/health"
-
-export no_proxy="${NO_PROXY},${host_ip},vdms-vector-db,dataprep-vdms-server,clip-embedding-server,reranking-tei-server,retriever-vdms-server,lvm-video-llama,lvm,videoqna-xeon-backend-server,videoqna-xeon-ui-server"
-```
+Then you can run `source set_env.sh` to set all the variables
 
 Note: Replace `host_ip` with your external IP address; do not use localhost.
 
````
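The inline export list removed above follows one pattern that `set_env.sh` now encapsulates: every endpoint is derived from `host_ip` plus a service port. A minimal sketch of that derivation, using the backend variable names from the removed block (the actual script may differ; the IP is a placeholder):

```shell
# Sketch of how endpoint variables are derived from host_ip and a port
# (variable names taken from the removed inline export list above).
host_ip="192.168.0.1"  # placeholder; the real value comes from set_env.sh
BACKEND_PORT=8888

BACKEND_HEALTH_CHECK_ENDPOINT="http://${host_ip}:${BACKEND_PORT}/v1/health_check"
BACKEND_SERVICE_ENDPOINT="http://${host_ip}:${BACKEND_PORT}/v1/videoqna"

echo "$BACKEND_SERVICE_ENDPOINT"  # prints: http://192.168.0.1:8888/v1/videoqna
```

Keeping the derivation in one script avoids drift between the port table and the endpoint URLs.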
````diff
@@ -228,7 +159,7 @@ docker compose up -d
 # wait until all the services are up. The LVM server will download models, so it takes ~1.5 hr to get ready.
 ```
 
-### Validate Microservices
+## Validate Microservices
 
 1. Dataprep Microservice
 
````
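Because the LVM server can take on the order of hours to download models, polling for readiness beats a fixed sleep. A generic polling helper of the kind one might use here (a sketch; `wait_for` is hypothetical and not part of the repo — in practice the probed command would be a `curl` against the backend health-check endpoint):

```shell
# Hypothetical polling helper: retry a command until it succeeds or the
# attempt limit is reached.
wait_for() {
  local attempts="$1"; shift
  local delay="$1"; shift
  local i
  for ((i = 1; i <= attempts; i++)); do
    if "$@"; then
      echo "ready after ${i} attempt(s)"
      return 0
    fi
    sleep "$delay"
  done
  echo "gave up after ${attempts} attempts" >&2
  return 1
}

# Demo with a command that always succeeds:
wait_for 3 0 true  # prints: ready after 1 attempt(s)
```

With a real deployment the demo line would instead probe the service, e.g. a `curl --fail` against the backend's `/v1/health_check` URL.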
````diff
@@ -339,7 +270,7 @@ docker compose up -d
 
 > Note that the megaservice supports only streaming output.
 
-## 🚀 Launch the UI
+## Launch the UI
 
 To access the frontend, open the following URL in your browser: http://{host_ip}:5173. By default, the UI runs on port 5173 internally. If you prefer to use a different host port to access the frontend, you can modify the port mapping in the `compose.yaml` file as shown below:
 
````
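The diff is cut off before the `compose.yaml` snippet it refers to. For orientation only, a hypothetical fragment of that kind of mapping (service name taken from the port table; `8080` is an arbitrary example host port — only the left-hand, host side changes, the container side stays `5173`):

```yaml
# Hypothetical compose.yaml fragment: expose the UI on host port 8080
# while the container keeps listening on 5173.
services:
  videoqna-xeon-ui-server:
    ports:
      - "8080:5173"
```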
0 commit comments