We've implemented several key updates to our repository. We've resolved network issues with Hugging Face and improved inference speed by introducing local embedding capabilities. Due to limitations in SQLAlchemy, we've redesigned our relational database interaction module for more flexible operations. We've added multi-tenancy support to ModelCache, recognizing the need for multiple users and models in LLM products. Lastly, we've made initial adjustments for better compatibility with system commands and multi-turn dialogues.
## Features

This topic describes ModelCache features. In ModelCache, we incorporated the core principles of GPTCache. ModelCache has four modules: adapter, embedding, similarity, and data_manager.

- The adapter module orchestrates the business logic for various tasks and integrates the embedding, similarity, and data_manager modules.
- The embedding module converts text into semantic vector representations, transforming user queries into vectors.
- The rank module ranks and evaluates the similarity of recalled vectors.
- The data_manager module manages the databases.
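The flow between these modules can be sketched as follows. This is a conceptual outline only; the class and function names (`Embedding`, `Rank`, `DataManager`, `adapt_query`) and the toy vectorizer are illustrative assumptions, not the real ModelCache API.

```python
# Conceptual sketch of the adapter flow: embed -> recall -> rank -> hit/miss.
# All names and the toy vectorizer below are illustrative, not ModelCache's API.

class Embedding:
    """Converts text into a normalized semantic vector. Toy implementation."""
    def to_vector(self, text):
        vec = [0.0] * 8
        for i, ch in enumerate(text):
            vec[i % 8] += ord(ch)
        norm = sum(v * v for v in vec) ** 0.5 or 1.0
        return [v / norm for v in vec]

class DataManager:
    """Stores (vector, answer) pairs; stands in for Milvus/MySQL or FAISS/SQLite."""
    def __init__(self):
        self.rows = []
    def save(self, vector, answer):
        self.rows.append((vector, answer))
    def search(self, vector):
        # A vector database would return only the nearest neighbours here.
        return self.rows

class Rank:
    """Scores recalled vectors against the query vector (cosine similarity)."""
    def best(self, query_vec, rows, threshold=0.9):
        scored = [(sum(a * b for a, b in zip(query_vec, vec)), ans)
                  for vec, ans in rows]
        hits = [(score, ans) for score, ans in scored if score >= threshold]
        return max(hits)[1] if hits else None  # best hit's answer, or a miss

def adapt_query(query, emb, rank, dm):
    """Adapter: wires embedding, data_manager, and rank together for a query."""
    vec = emb.to_vector(query)
    return rank.best(vec, dm.search(vec))

emb, rank, dm = Embedding(), Rank(), DataManager()
dm.save(emb.to_vector("who are you"), "I am an intelligent assistant.")
print(adapt_query("who are you", emb, rank, dm))  # exact repeat -> cache hit
```

In the real service, the embedding module calls a model such as text2vec-base-chinese, and data_manager talks to FAISS/SQLite (demo) or Milvus/MySQL (standard service).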
To make ModelCache more suitable for industrial use, we made several improvements to its architecture and functionality:

- [x] Embedded into LLM products using a Redis-like caching mode.
  - Provided semantic caching without interfering with LLM calls, security audits, and other functions.
  - Compatible with all LLM services.
- [x] Multiple model loading:
  - Supported local embedding model loading, and resolved Hugging Face network connectivity issues.
  - Supported loading embedding layers from various pre-trained models.
- [x] Data isolation:
  - Environment isolation: Read different database configurations based on the environment. Isolate development, staging, and production environments.
  - Multi-tenant data isolation: Dynamically create collections based on models for data isolation, addressing data separation issues in multi-model/service scenarios within large language model products.
- [x] Supported system instruction: Adopted a concatenation approach to resolve issues with system instructions in the prompt paradigm.
- [x] Long and short text differentiation: Long texts bring more challenges for similarity assessment. Added differentiation between long and short texts, allowing for separate threshold configurations.
- [x] Milvus performance optimization: Adjusted the Milvus consistency level to "Session" for better performance.
- [x] Data management:
  - One-click cache clearing to enable easy data management after model upgrades.
  - Recall of hit queries for subsequent data analysis and model iteration reference.
  - Asynchronous log write-back for data analysis and statistics.
  - Added model field and data statistics field to enhance features.
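The long/short text differentiation above amounts to choosing a similarity threshold based on query length. The sketch below illustrates the idea; the length cutoff and threshold values are assumptions for illustration, not ModelCache's actual configuration.

```python
# Length-dependent similarity thresholds (illustrative values only).
SHORT_TEXT_THRESHOLD = 0.95  # short queries: demand near-exact similarity
LONG_TEXT_THRESHOLD = 0.85   # long queries: tolerate more variation
LENGTH_CUTOFF = 64           # characters; hypothetical boundary

def is_cache_hit(query: str, similarity: float) -> bool:
    """Decide whether a recalled answer is similar enough to reuse."""
    threshold = (SHORT_TEXT_THRESHOLD if len(query) < LENGTH_CUTOFF
                 else LONG_TEXT_THRESHOLD)
    return similarity >= threshold

print(is_cache_hit("who are you", 0.90))  # short query, 0.90 < 0.95 -> False
print(is_cache_hit("a" * 200, 0.90))      # long query, 0.90 >= 0.85 -> True
```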
This topic describes how to set up and use ModelCache.

You can find the start scripts in `flask4modelcache.py` and `flask4modelcache_demo.py`.

- `flask4modelcache_demo.py`: A quick test service that embeds SQLite and FAISS. No database configuration required.
- `flask4modelcache.py`: The standard service that requires MySQL and Milvus configuration.

## Dependencies

- Python: V3.8 or above
- Package installation

```shell
pip install -r requirements.txt
```
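Since the service requires Python 3.8 or above, a one-line check before installing packages can save a confusing failure later:

```python
import sys

# The Dependencies section requires Python V3.8 or above.
assert sys.version_info >= (3, 8), "ModelCache requires Python 3.8+"
print("Python version OK:", ".".join(map(str, sys.version_info[:3])))
```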
## Start service

### Start demo

1. Download the embedding model bin file from [Hugging Face](https://huggingface.co/shibing624/text2vec-base-chinese/tree/main). Place it in the `model/text2vec-base-chinese` folder.
2. Start the backend service:

   ```shell
   cd CodeFuse-ModelCache
   python flask4modelcache_demo.py
   ```
### Start standard service

Before you start the standard service, complete these steps:

1. Install MySQL and import the SQL file from `reference_doc/create_table.sql`.
2. Install the vector database Milvus.
3. Configure database access in:
   - `modelcache/config/milvus_config.ini`
   - `modelcache/config/mysql_config.ini`
4. Download the embedding model bin file from [Hugging Face](https://huggingface.co/shibing624/text2vec-base-chinese/tree/main). Put it in `model/text2vec-base-chinese`.
5. Start the backend service:

   ```bash
   python flask4modelcache.py
   ```
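The configuration files in step 3 are standard INI files, so you can sanity-check your edits with Python's built-in `configparser`. The helper below is illustrative; the actual section and key names depend on your MySQL and Milvus deployment and are not listed here.

```python
import configparser

def show_config(path):
    """Parse an INI file and print its sections and keys, or report failure.
    Useful for verifying modelcache/config/*.ini after editing."""
    cfg = configparser.ConfigParser()
    if not cfg.read(path):
        print(f"could not read {path}")
        return
    for section in cfg.sections():
        print(f"[{section}]")
        for key, value in cfg[section].items():
            print(f"  {key} = {value}")

show_config('modelcache/config/mysql_config.ini')
```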
## Access the service

The service provides three core RESTful API functionalities: Cache-Writing, Cache-Querying, and Cache-Clearing.

### Write cache

```python
import json
import requests

url = 'http://127.0.0.1:5000/modelcache'
type = 'insert'
scope = {"model": "CODEGPT-1008"}
chat_info = [{"query": [{"role": "system", "content": "You are an AI code assistant and you must provide neutral and harmless answers to help users solve code-related problems."}, {"role": "user", "content": "Who are you?"}],
              "answer": "Hello, I am an intelligent assistant. How can I assist you?"}]
data = {'type': type, 'scope': scope, 'chat_info': chat_info}
headers = {"Content-Type": "application/json"}
res = requests.post(url, headers=headers, json=json.dumps(data))
```
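Note that the examples in this topic serialize the payload with `json.dumps` before passing it to the `json=` parameter of `requests.post`, so the HTTP body carries a JSON-encoded string. A quick local check of the insert payload (no running service needed; the values mirror the write-cache example):

```python
import json

# Build the same insert payload shape as the write-cache example.
scope = {"model": "CODEGPT-1008"}
chat_info = [{"query": [{"role": "user", "content": "Who are you?"}],
              "answer": "Hello, I am an intelligent assistant."}]
data = {'type': 'insert', 'scope': scope, 'chat_info': chat_info}

body = json.dumps(data)            # this string is what the client sends
print(type(body).__name__)         # str
print(json.loads(body)['type'])    # insert
```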
### Query cache

```python
import json
import requests

url = 'http://127.0.0.1:5000/modelcache'
type = 'query'
scope = {"model": "CODEGPT-1008"}
query = [{"role": "system", "content": "You are an AI code assistant and you must provide neutral and harmless answers to help users solve code-related problems."}, {"role": "user", "content": "Who are you?"}]
data = {'type': type, 'scope': scope, 'query': query}

headers = {"Content-Type": "application/json"}
res = requests.post(url, headers=headers, json=json.dumps(data))
```
### Clear cache

```python
import json
import requests

url = 'http://127.0.0.1:5000/modelcache'
type = 'remove'
scope = {"model": "CODEGPT-1008"}
remove_type = 'truncate_by_model'
data = {'type': type, 'scope': scope, 'remove_type': remove_type}

headers = {"Content-Type": "application/json"}
res = requests.post(url, headers=headers, json=json.dumps(data))
```