# vLLM Recipes
This repo hosts community-maintained recipes for running vLLM, answering the question: how do I run model X on hardware Y for task Z?
## Guides

- DeepSeek
- Ernie
- GLM
- inclusionAI
- InternVL
- InternLM
- Jina AI
- Llama
- MiniMax
- Mistral AI
- Moonshotai
- OpenAI
- PaddlePaddle
- Qwen
- Seed
- Tencent-Hunyuan
## Contributing
Please feel free to contribute by adding a new recipe or improving an existing one: just send us a PR!
While the repo is designed to be directly viewable on GitHub (Markdown files are first-class citizens), you can also build the docs as web pages locally:
```shell
uv venv
source .venv/bin/activate
uv pip install -r requirements.txt
uv run mkdocs serve
```
## License
This project is licensed under the Apache License 2.0; see the LICENSE file for details.