Serving with LangChain

vLLM is also available via LangChain.

To install LangChain, run:

  $ pip install langchain langchain_community -q

To run inference on a single GPU or multiple GPUs, use the VLLM class from LangChain.

  from langchain_community.llms import VLLM

  llm = VLLM(
      model="mosaicml/mpt-7b",
      trust_remote_code=True,  # mandatory for Hugging Face models
      max_new_tokens=128,
      top_k=10,
      top_p=0.95,
      temperature=0.8,
      # tensor_parallel_size=... # for distributed inference
  )

  print(llm.invoke("What is the capital of France?"))
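For distributed inference across multiple GPUs, pass tensor_parallel_size, which is forwarded to vLLM's tensor parallelism. A minimal sketch, assuming a machine with 4 GPUs (the model name and GPU count here are illustrative):

  from langchain_community.llms import VLLM

  # Shard the model across 4 GPUs with tensor parallelism
  # (assumes 4 GPUs are available on this machine).
  llm = VLLM(
      model="mosaicml/mpt-30b",
      tensor_parallel_size=4,
      trust_remote_code=True,  # mandatory for Hugging Face models
  )

  print(llm.invoke("What is the future of AI?"))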
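Because the VLLM instance is a standard LangChain LLM, it composes with prompt templates like any other model. A brief sketch using a prompt pipeline (the prompt text is illustrative; llm is the instance created above):

  from langchain_core.prompts import PromptTemplate

  prompt = PromptTemplate.from_template("Who was the {position} president of the USA?")

  # Pipe the prompt into the vLLM-backed model and invoke the chain.
  chain = prompt | llm
  print(chain.invoke({"position": "first"}))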

Please refer to this tutorial for more details.