# Vector Inference: Easy inference on Slurm clusters
This repository provides an easy-to-use solution to run inference servers on [Slurm](https://slurm.schedmd.com/overview.html)-managed computing clusters using [vLLM](https://docs.vllm.ai/en/latest/). **All scripts in this repository run natively on the Vector Institute cluster environment**. To adapt to other environments, update [`launch_server.sh`](vec-inf/launch_server.sh), [`vllm.slurm`](vec-inf/vllm.slurm), [`multinode_vllm.slurm`](vec-inf/multinode_vllm.slurm) and [`models.csv`](vec-inf/models/models.csv) accordingly.
## Installation
If you are using the Vector cluster environment and you don't need any customization to the inference server environment, run the following to install the package:
```bash
pip install vec-inf
```
Otherwise, we recommend using the provided [`Dockerfile`](Dockerfile) to set up your own environment with the package.
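A minimal sketch of building that image (the tag name is arbitrary):

```bash
# Build an image from the repository's Dockerfile; "vec-inf" is just an example tag.
docker build -t vec-inf .
```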
## Launch an inference server
We will use the Llama 3.1 model as an example and launch an OpenAI-compatible inference server for Meta-Llama-3.1-8B-Instruct.
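A minimal launch invocation, assuming the model family name `Meta-Llama-3.1` and the `vec-inf launch MODEL_FAMILY` pattern used in the multimodal example below:

```bash
# "Meta-Llama-3.1" is an assumed family name; with no --model-variant flag,
# the family's default variant (Meta-Llama-3.1-8B-Instruct) is launched.
vec-inf launch Meta-Llama-3.1
```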
There is a default variant for every model family, specified in `vec_inf/models/{MODEL_FAMILY_NAME}/README.md`. You can switch to other variants with the `--model-variant` option; make sure to change the requested resources accordingly. More information about the available options can be found in the [`vec_inf/models`](vec_inf/models) folder. The inference server is compatible with the OpenAI `Completion` and `ChatCompletion` APIs.
The model will be launched using the [default parameters](vec-inf/models/models.csv); you can override these values by providing additional options. Use `--help` to see the full list.
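For example, a launch that overrides the default variant (the variant name below is hypothetical; check the model family's README for real ones):

```bash
# --model-variant is documented above; "70B-Instruct" is a hypothetical variant name.
vec-inf launch Meta-Llama-3.1 --model-variant 70B-Instruct
```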
You can check the inference server status by providing the Slurm job ID to the `status` command:
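For example (the job ID below is a placeholder; use the one reported when you launched the model):

```bash
# 13014393 is a placeholder Slurm job ID.
vec-inf status 13014393
```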
For launching a multimodal model, here is an example for launching LLaVa-NEXT Mistral 7B (default variant):
```bash
vec-inf launch llava-v1.6 --is-vlm
```
The `launch`, `list`, and `status` commands support `--json-mode`, where the command output is structured as a JSON string.
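For example:

```bash
vec-inf list --json-mode
```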
## Send inference requests
Once the inference server is ready, you can start sending inference requests. We provide example scripts for sending inference requests in the [`examples`](examples) folder. Make sure to update the model server URL and the model weights location in the scripts. For example, you can run `python examples/inference/llm/completions.py`, and you should see a standard OpenAI `Completion` response.
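Since the server exposes an OpenAI-compatible API, you can also query it directly; the host, port, and model path below are placeholders for your deployment:

```bash
# Host, port, and model path are example values; use the server URL
# reported by `vec-inf status` and your model weights location.
curl http://gpu029:8081/v1/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "/model-weights/Meta-Llama-3.1-8B-Instruct",
        "prompt": "What is the capital of Canada?",
        "max_tokens": 20
      }'
# The response is a standard OpenAI Completions JSON object,
# with the generated text under choices[0].text.
```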
If you want to run inference from your local device, you can open an SSH tunnel to your cluster environment.
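A sketch of such a tunnel (the IP address, port, and login host below are example values; substitute your own):

```bash
# Forward local port 8081 to port 8081 on the compute node (IP 172.17.8.29 here).
# The username and login host are placeholders for your cluster credentials.
ssh -L 8081:172.17.8.29:8081 username@v.vectorinstitute.ai -N
```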
The last number in the URL is the GPU number (gpu029 in this case). The example provided above is for the Vector cluster; change the variables accordingly for your environment.
## CLI commands

* `launch`: Specify a model family and other optional parameters to launch an OpenAI-compatible inference server, `--json-mode` supported. Check [`here`](./models/README.md) for the complete list of available options.
* `list`: List all available model names, `--json-mode` supported.
* `status`: Check the model status by providing its Slurm job ID, `--json-mode` supported.
* `shutdown`: Shutdown a model by providing its Slurm job ID.
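A typical session stringing these commands together might look like the following (the model family name and job ID are placeholders):

```bash
vec-inf list                    # discover available model families
vec-inf launch Meta-Llama-3.1   # start a server; returns a Slurm job ID
vec-inf status 13014393         # check whether the server is ready
vec-inf shutdown 13014393       # release the resources when done
```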