
Commit 406bba6

Remove old config files, update README
1 parent d317a25 commit 406bba6

17 files changed: +18 -158 lines

README.md

Lines changed: 14 additions & 17 deletions
@@ -1,5 +1,5 @@
# Vector Inference: Easy inference on Slurm clusters
-This repository provides an easy-to-use solution to run inference servers on [Slurm](https://slurm.schedmd.com/overview.html)-managed computing clusters using [vLLM](https://docs.vllm.ai/en/latest/). **All scripts in this repository run natively on the Vector Institute cluster environment.** To adapt to other environments, update the config files in the `vec_inf/models` folder and the environment variables in the model launching scripts in `vec_inf` accordingly.
+This repository provides an easy-to-use solution to run inference servers on [Slurm](https://slurm.schedmd.com/overview.html)-managed computing clusters using [vLLM](https://docs.vllm.ai/en/latest/). **All scripts in this repository run natively on the Vector Institute cluster environment.** To adapt to other environments, update [`launch_server.sh`](vec-inf/launch_server.sh), [`vllm.slurm`](vec-inf/vllm.slurm), [`multinode_vllm.slurm`](vec-inf/multinode_vllm.slurm) and [`models.csv`](vec-inf/models/models.csv) accordingly.

## Installation
If you are using the Vector cluster environment and you don't need any customization to the inference server environment, run the following to install the package:
@@ -9,15 +9,15 @@ pip install vec-inf
Otherwise, we recommend using the provided [`Dockerfile`](Dockerfile) to set up your own environment with the package.
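For example, a minimal sketch of building that environment, assuming a standard Docker setup (the image tag here is arbitrary):
```bash
# Build an image from the repository's Dockerfile
docker build -t vec-inf .
```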

## Launch an inference server
-We will use the Llama 3 model as an example; to launch an inference server for Llama 3 8B, run:
+We will use the Llama 3.1 model as an example; to launch an OpenAI-compatible inference server for Meta-Llama-3.1-8B-Instruct, run:
```bash
-vec-inf launch llama-3
+vec-inf launch Meta-Llama-3.1-8B-Instruct
```
You should see an output like the following:

-<img src="https://github.com/user-attachments/assets/c50646df-0991-4164-ad8f-6eb7e86b67e0" width="350">
+<img width="450" alt="launch_img" src="https://github.com/user-attachments/assets/557eb421-47db-4810-bccd-c49c526b1b43">

-There is a default variant for every model family, which is specified in `vec_inf/models/{MODEL_FAMILY_NAME}/README.md`; you can switch to other variants with the `--model-variant` option, and make sure to change the requested resources accordingly. More information about the available options can be found in the [`vec_inf/models`](vec_inf/models) folder. The inference server is compatible with the OpenAI `Completion` and `ChatCompletion` API.
+The model will be launched using the [default parameters](vec-inf/models/models.csv); you can override these values by providing additional options. Use `--help` to see the full list.
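For instance, a hypothetical multi-node launch that overrides the default resources (the `--num-nodes` and `--num-gpus` options below are taken from the pre-change README's Mixtral example and may have changed, so verify with `--help`):
```bash
# Request 2 nodes with 4 GPUs each instead of the defaults in models.csv
vec-inf launch Meta-Llama-3.1-8B-Instruct --num-nodes 2 --num-gpus 4
```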

You can check the inference server status by providing the Slurm job ID to the `status` command:
```bash
@@ -26,18 +26,17 @@ vec-inf status 13014393

You should see an output like the following:

-<img src="https://github.com/user-attachments/assets/310086fd-82ea-4bfc-8062-5c8e71c5650c" width="400">
+<img width="450" alt="status_img" src="https://github.com/user-attachments/assets/7385b9ca-9159-4ca9-bae2-7e26d80d9747">

There are 5 possible states:

-* **PENDING**: Job submitted to Slurm, but not executed yet.
+* **PENDING**: Job submitted to Slurm, but not executed yet. The job's pending reason will be shown.
* **LAUNCHING**: Job is running but the server is not ready yet.
-* **READY**: Inference server running and ready to take requests.
-* **FAILED**: Inference server in an unhealthy state.
+* **READY**: Inference server running and ready to take requests.
+* **FAILED**: Inference server in an unhealthy state. The job's failure reason will be shown.
* **SHUTDOWN**: Inference server is shut down/cancelled.

Note that the base URL is only available when the model is in the `READY` state.
-Both the `launch` and `status` commands support `--json-mode`, where the output information is structured as a JSON string.
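As an illustration, one could poll the `status` command until the server reports `READY`. A rough sketch, assuming the state name appears verbatim in the command output (which may not match the actual format):
```bash
# Hypothetical readiness wait loop; 13014393 is the Slurm job ID from above
while ! vec-inf status 13014393 | grep -q "READY"; do
  sleep 30
done
echo "Inference server is ready"
```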

Finally, when you're finished using a model, you can shut it down by providing the Slurm job ID:
```bash
@@ -46,15 +45,13 @@ vec-inf shutdown 13014393
> Shutting down model with Slurm Job ID: 13014393
```

-Here is a more complicated example that launches a model variant across multiple nodes; say we want to launch Mixtral 8x22B, run:
+You can view the full list of available models by running the `list` command:
```bash
-vec-inf launch mixtral --model-variant 8x22B-v0.1 --num-nodes 2 --num-gpus 4
+vec-inf list
```
+<img width="1200" alt="list_img" src="https://github.com/user-attachments/assets/a4f0d896-989d-43bf-82a2-6a6e5d0d288f">

-And for launching a multimodal model, here is an example for launching LLaVa-NEXT Mistral 7B (default variant):
-```bash
-vec-inf launch llava-v1.6 --is-vlm
-```
+The `launch`, `list`, and `status` commands support `--json-mode`, where the command output is structured as a JSON string.
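Since the output is a JSON string, it can be piped into standard tooling; a sketch (the exact fields of the output are not specified here):
```bash
# Pretty-print the JSON output of the `list` command
vec-inf list --json-mode | python3 -m json.tool
```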

## Send inference requests
Once the inference server is ready, you can start sending inference requests. We provide example scripts for sending inference requests in the [`examples`](examples) folder. Make sure to update the model server URL and the model weights location in the scripts. For example, you can run `python examples/inference/llm/completions.py`, and you should expect to see an output like the following:
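Because the server is OpenAI-compatible, a completion request can also be sent directly with `curl`. A minimal sketch; the host, port, and model identifier below are placeholders, so substitute the values reported by `vec-inf status` and your model weights location:
```bash
# Hypothetical server address and model name; adjust to your deployment
curl http://gpu029:8081/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "Meta-Llama-3.1-8B-Instruct", "prompt": "The capital of Canada is", "max_tokens": 20}'
```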
@@ -67,4 +64,4 @@ If you want to run inference from your local device, you can open a SSH tunnel t
```bash
ssh -L 8081:172.17.8.29:8081 username@v.vectorinstitute.ai -N
```
-The example provided above is for the Vector cluster; change the variables accordingly for your environment.
+The last number in the server address corresponds to the GPU node (gpu029 in this case). The example provided above is for the Vector cluster; change the variables accordingly for your environment.
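With the tunnel open, the server is reachable on `localhost`; for example, a quick check (a sketch assuming the tunnel above is active):
```bash
# Query the models endpoint of the OpenAI-compatible server through the tunnel
curl http://localhost:8081/v1/models
```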

vec_inf/README.md

Lines changed: 4 additions & 1 deletion
@@ -1,5 +1,8 @@
# `vec-inf` Commands

* `launch`: Specify a model family and other optional parameters to launch an OpenAI-compatible inference server, `--json-mode` supported. Check [`here`](./models/README.md) for a complete list of available options.
+* `list`: List all available model names, `--json-mode` supported.
* `status`: Check the model status by providing its Slurm job ID, `--json-mode` supported.
-* `shutdown`: Shutdown a model by providing its Slurm job ID.
+* `shutdown`: Shutdown a model by providing its Slurm job ID.
+
+Use `--help` to see all available options.
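For example (standard CLI behavior; the exact option list depends on the installed version):
```bash
# Top-level help and per-command help
vec-inf --help
vec-inf launch --help
```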

Deleted files

* `vec_inf/models/CodeLlama/config.sh` (5 lines deleted)
* `vec_inf/models/Llama-2/config.sh` (5 lines deleted)
* `vec_inf/models/Meta-Llama-3.1/config.sh` (6 lines deleted)
* `vec_inf/models/Meta-Llama-3/config.sh` (5 lines deleted)
* `vec_inf/models/Mistral/config.sh` (5 lines deleted)
* `vec_inf/models/Mixtral/config.sh` (5 lines deleted)
* `vec_inf/models/Phi-3/config.sh` (6 lines deleted)
* `vec_inf/models/README.md` (31 lines deleted)
