
Commit 3aa92e9

Merge pull request #185 from VectorInstitute/f/update-dep

Update dependencies to support audio models and latest inference engine versions, fix caching issue with torch

2 parents: 855ded1 + 001f6ba

12 files changed: 4165 additions & 1844 deletions

MODEL_TRACKING.md

Lines changed: 4 additions & 2 deletions

@@ -175,8 +175,9 @@ This document tracks all model weights available in the `/model-weights` directo
 ### Qwen: Qwen3
 | Model | Configuration |
 |:------|:-------------|
-| `Qwen3-14B` ||
+| `Qwen3-0.6B` ||
 | `Qwen3-8B` ||
+| `Qwen3-14B` ||
 | `Qwen3-32B` ||
 | `Qwen3-235B-A22B` ||
 | `Qwen3-Embedding-8B` ||
@@ -233,7 +234,8 @@ This document tracks all model weights available in the `/model-weights` directo
 #### Moonshot AI: Kimi
 | Model | Configuration |
 |:------|:-------------|
-| `Kimi-K2-Instruct` ||
+| `Kimi-K2-Instruct` ||
+| `Kimi-K2.5` ||
 
 #### Mistral AI: Ministral
 | Model | Configuration |
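
Either of the newly tracked models can be launched by its tracked name once its weights are present in `/model-weights`. A minimal sketch, assuming the CLI usage documented in the README diff below; the `status` invocation and its arguments are illustrative, not taken from this commit:

```sh
# Launch one of the models added in this commit by its tracked name
vec-inf launch Qwen3-0.6B --account <slurm-account>

# Once the Slurm job is queued, check on the server
# (illustrative; see the README's "Other commands" for the exact interface)
vec-inf status <slurm-job-id>
```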

README.md

Lines changed: 4 additions & 4 deletions

@@ -7,11 +7,11 @@
 [![code checks](https://github.com/VectorInstitute/vector-inference/actions/workflows/code_checks.yml/badge.svg)](https://github.com/VectorInstitute/vector-inference/actions/workflows/code_checks.yml)
 [![docs](https://github.com/VectorInstitute/vector-inference/actions/workflows/docs.yml/badge.svg)](https://github.com/VectorInstitute/vector-inference/actions/workflows/docs.yml)
 [![codecov](https://codecov.io/github/VectorInstitute/vector-inference/branch/main/graph/badge.svg?token=NI88QSIGAC)](https://app.codecov.io/github/VectorInstitute/vector-inference/tree/main)
-[![vLLM](https://img.shields.io/badge/vLLM-0.12.0-blue)](https://docs.vllm.ai/en/v0.12.0/)
-[![SGLang](https://img.shields.io/badge/SGLang-0.5.5.post3-blue)](https://docs.sglang.io/index.html)
+[![vLLM](https://img.shields.io/badge/vLLM-0.15.0-blue)](https://docs.vllm.ai/en/v0.15.0/)
+[![SGLang](https://img.shields.io/badge/SGLang-0.5.8-blue)](https://docs.sglang.io/index.html)
 ![GitHub License](https://img.shields.io/github/license/VectorInstitute/vector-inference)
 
-This repository provides an easy-to-use solution to run inference servers on [Slurm](https://slurm.schedmd.com/overview.html)-managed computing clusters using open-source inference engines ([vLLM](https://docs.vllm.ai/en/v0.12.0/), [SGLang](https://docs.sglang.io/index.html)). **This package runs natively on the Vector Institute cluster environments**. To adapt to other environments, follow the instructions in [Installation](#installation).
+This repository provides an easy-to-use solution to run inference servers on [Slurm](https://slurm.schedmd.com/overview.html)-managed computing clusters using open-source inference engines ([vLLM](https://docs.vllm.ai/en/v0.15.0/), [SGLang](https://docs.sglang.io/index.html)). **This package runs natively on the Vector Institute cluster environments**. To adapt to other environments, follow the instructions in [Installation](#installation).
 
 **NOTE**: Supported models on Killarney are tracked [here](./MODEL_TRACKING.md)
 
@@ -49,7 +49,7 @@ You should see an output like the following:
 * `--account`, `-A`: The Slurm account, this argument can be set to default by setting environment variable `VEC_INF_ACCOUNT`.
 * `--work-dir`, `-D`: A working directory other than your home directory, this argument can be set to default by seeting environment variable `VEC_INF_WORK_DIR`.
 
-Models that are already supported by `vec-inf` would be launched using the cached configuration (set in [slurm_vars.py](vec_inf/client/slurm_vars.py)) or [default configuration](vec_inf/config/models.yaml). You can override these values by providing additional parameters. Use `vec-inf launch --help` to see the full list of parameters that can be overriden. You can also launch your own custom model as long as the model architecture is supported by the underlying inference engine. For detailed instructions on how to customize your model launch, check out the [`launch` command section in User Guide](https://vectorinstitute.github.io/vector-inference/latest/user_guide/#launch-command)
+Models that are already supported by `vec-inf` would be launched using the cached configuration (set in [slurm_vars.py](vec_inf/client/slurm_vars.py)) or [default configuration](vec_inf/config/models.yaml). You can override these values by providing additional parameters. Use `vec-inf launch --help` to see the full list of parameters that can be overriden. You can also launch your own custom model as long as the model architecture is supported by the underlying inference engine. For detailed instructions on how to customize your model launch, check out the [`launch` command section in User Guide](https://vectorinstitute.github.io/vector-inference/latest/user_guide/#launch-command). During the launch process, relevant log files and scripts will be written to a log directory (default to `.vec-inf-logs` in your home directory), and a cache directory (`.vec-inf-cache`) will be created in your working directory (defaults to your home directory if not specified or required) for torch compile cache.
 
 #### Other commands
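
The sentence added above implies a directory layout like the following after a launch; a hedged sketch, where the directory contents and the exact cache path are illustrative rather than quoted from the docs:

```sh
# Per the note above, after `vec-inf launch <model>`:
ls ~/.vec-inf-logs    # launch scripts and server log files
ls ~/.vec-inf-cache   # torch compile cache, created in the working directory

# The working directory (and hence the compile cache location) can be moved
# off $HOME via --work-dir / VEC_INF_WORK_DIR, as documented above:
export VEC_INF_WORK_DIR=/scratch/$USER/vec-inf
vec-inf launch <model>   # cache lands in /scratch/$USER/vec-inf/.vec-inf-cache
```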

docs/index.md

Lines changed: 1 addition & 1 deletion

@@ -1,6 +1,6 @@
 # Vector Inference: Easy inference on Slurm clusters
 
-This repository provides an easy-to-use solution to run inference servers on [Slurm](https://slurm.schedmd.com/overview.html)-managed computing clusters using open-source inference engines ([vLLM](https://docs.vllm.ai/en/v0.12.0/), [SGLang](https://docs.sglang.io/index.html)). **This package runs natively on the Vector Institute cluster environments**. To adapt to other environments, follow the instructions in [Installation](#installation).
+This repository provides an easy-to-use solution to run inference servers on [Slurm](https://slurm.schedmd.com/overview.html)-managed computing clusters using open-source inference engines ([vLLM](https://docs.vllm.ai/en/v0.15.0/), [SGLang](https://docs.sglang.io/index.html)). **This package runs natively on the Vector Institute cluster environments**. To adapt to other environments, follow the instructions in [Installation](#installation).
 
 
 **NOTE**: Supported models on Killarney are tracked [here](https://github.com/VectorInstitute/vector-inference/blob/main/MODEL_TRACKING.md)

docs/user_guide.md

Lines changed: 2 additions & 0 deletions

@@ -110,6 +110,8 @@ export VEC_INF_MODEL_CONFIG=/h/<username>/my-model-config.yaml
 
 **NOTE**: There are other parameters that can also be added to the config but not shown in this example, check the [`ModelConfig`](https://github.com/VectorInstitute/vector-inference/blob/main/vec_inf/client/config.py) for details.
 
+During the launch process, relevant log files and scripts will be written to a log directory (default to `.vec-inf-logs` in your home directory), and a cache directory (`.vec-inf-cache`) will be created in your working directory (defaults to your home directory if not specified or required) for torch compile cache.
+
 ### `batch-launch` command
 
 The `batch-launch` command allows users to launch multiple inference servers at once, here is an example of launching 2 models:
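
Since the commit message cites a torch caching fix, a stale compile cache can simply be deleted; a hedged sketch, where the assumption that the cache is rebuilt on the next launch follows from it being a cache, not from anything stated in this diff:

```sh
# Clear the torch compile cache; it should be repopulated on the next launch
rm -rf ~/.vec-inf-cache
# Or, when a custom working directory is in use:
rm -rf "$VEC_INF_WORK_DIR/.vec-inf-cache"
```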

pyproject.toml

Lines changed: 7 additions & 1 deletion

@@ -1,6 +1,6 @@
 [project]
 name = "vec-inf"
-version = "0.8.0"
+version = "0.8.1"
 description = "Efficient LLM inference on Slurm clusters using vLLM."
 readme = "README.md"
 authors = [{name = "Marshall Wang", email = "marshall.wang@vectorinstitute.ai"}]
@@ -42,6 +42,9 @@ inference = [
     "torch>=2.7.0",
     "cupy-cuda12x>=12.3.0",
     "flashinfer-python>=0.4.0",
+    "ax-platform>=1.1.0",
+    "py3nvml",
+    "wandb>=0.17.0",
 ]
 
 [project.optional-dependencies]
@@ -50,6 +53,9 @@ inference = [
 vllm = [
     "vllm>=0.11.2",
     "ray[default]>=2.51.0",
+    "vllm[audio]",
+    "vllm[bench]",
+    "torchcodec>=0.9.0,<0.10.0",
 ]
 # SGLang inference backend (conflicts with vllm due to dependency version conflicts)
 # Install with: uv sync --extra sglang --group inference
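
The comment in the diff documents the SGLang install line; by symmetry, a vLLM environment with the new audio extras would presumably be synced as follows (the `--extra vllm` spelling is an assumption inferred from that comment, not stated in the diff):

```sh
# Documented in the diff, for the SGLang backend:
uv sync --extra sglang --group inference

# Assumed analogue for the vLLM backend, pulling in vllm[audio], vllm[bench],
# and the pinned torchcodec from the updated extra:
uv sync --extra vllm --group inference
```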

sglang.Dockerfile

Lines changed: 1 addition & 1 deletion

@@ -17,7 +17,7 @@ RUN apt-get update && apt-get install -y \
     wget build-essential libssl-dev zlib1g-dev libbz2-dev \
     libreadline-dev libsqlite3-dev libffi-dev libncursesw5-dev \
     xz-utils tk-dev libxml2-dev libxmlsec1-dev liblzma-dev libnuma1 \
-    git vim \
+    git vim ffmpeg libavcodec-dev libavformat-dev libavutil-dev libswscale-dev libswresample-dev \
     && rm -rf /var/lib/apt/lists/*
 
 # Install Python
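
The added `ffmpeg` and libav packages line up with the new `torchcodec` dependency, which relies on FFmpeg for audio decoding. A quick sanity check against the built image might look like this (the image tag is hypothetical, and the check assumes the image's default entrypoint permits running a command directly):

```sh
# Build the SGLang image and confirm FFmpeg is available for audio decoding
docker build -f sglang.Dockerfile -t vec-inf-sglang .
docker run --rm vec-inf-sglang ffmpeg -version
```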
