
Breathing Life into Language

aphrodite

Aphrodite is the official backend engine for PygmalionAI. It is designed to serve as the inference endpoint for the PygmalionAI website and to allow serving Hugging Face-compatible models to a large number of users at blazing-fast speeds (thanks to vLLM's PagedAttention).

Aphrodite builds upon and integrates the exceptional work from various projects.

The compute necessary for Aphrodite's development is provided by Arc Compute.

🔥 News

(09/2024) v0.6.0 has been released, with huge throughput improvements, many new quantization formats (including FP8 and llm-compressor), asymmetric tensor parallelism, pipeline parallelism, and more! Please check out the exhaustive documentation for the User and Developer guides.

Features

  • Continuous Batching
  • Efficient K/V management with PagedAttention from vLLM
  • Optimized CUDA kernels for improved inference
  • Quantization support via AQLM, AWQ, Bitsandbytes, GGUF, GPTQ, QuIP#, Smoothquant+, SqueezeLLM, Marlin, FP4, FP6, FP8, FP12
  • Distributed inference
  • 8-bit KV Cache for higher context lengths and throughput, in both FP8 E5M2 and E4M3 formats

Quickstart

Install the engine:

pip install -U aphrodite-engine==0.6.0

Then launch a model:

aphrodite run meta-llama/Meta-Llama-3.1-8B-Instruct

This will create an OpenAI-compatible API server that can be accessed at port 2242 on localhost. You can plug the API into a UI that supports OpenAI-compatible endpoints, such as SillyTavern.
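
If you'd prefer to query the API directly, the server follows the OpenAI Chat Completions format. A minimal sketch with curl (assuming the default host and port; the model field should match the model you launched above):

curl http://localhost:2242/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
        "messages": [{"role": "user", "content": "Hello, how are you?"}]
    }'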

Please refer to the documentation for the full list of arguments and flags you can pass to the engine.

You can play around with the engine in the demo here:

Open In Colab

Docker

Additionally, we provide a Docker image for easy deployment. Here's a basic command to get you started:

# To restrict which GPUs are visible, add: --env "CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7"
docker run --runtime nvidia --gpus all \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    -p 2242:2242 \
    --ipc=host \
    alpindale/aphrodite-openai:latest \
    --model NousResearch/Meta-Llama-3.1-8B-Instruct \
    --tensor-parallel-size 8 \
    --api-keys "sk-empty"

This will pull the Aphrodite Engine image (~8GiB download), and launch the engine with the Llama-3.1-8B-Instruct model at port 2242.
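
Since the command above sets --api-keys "sk-empty", requests must pass that key as a bearer token. A quick sanity check once the container is up (a sketch, assuming the standard OpenAI-compatible model-listing endpoint):

curl http://localhost:2242/v1/models \
    -H "Authorization: Bearer sk-empty"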

Requirements

  • Operating System: Linux (or WSL for Windows)
  • Python: 3.8 to 3.12

For Windows users, it's recommended to use tabbyAPI instead if you do not need batching support.

Build Requirements:

  • CUDA >= 11

For supported devices, see here. Generally speaking, all semi-modern GPUs are supported, down to Pascal (GTX 10xx, P40, etc.). We also support AMD GPUs, Intel CPUs and GPUs, Google TPU, and AWS Inferentia.

Notes

  1. By design, Aphrodite takes up 90% of your GPU's VRAM. If you're not serving an LLM at scale, you may want to limit how much memory it takes up. You can do this by launching the server with --gpu-memory-utilization 0.6 (0.6 means 60%); see the example command after this list.

  2. You can view the full list of available arguments by running aphrodite run --help.
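
For example, to combine the Quickstart command with a lower memory cap (the model name is just the one used in the Quickstart above):

aphrodite run meta-llama/Meta-Llama-3.1-8B-Instruct --gpu-memory-utilization 0.6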

Acknowledgements

Aphrodite Engine would not have been possible without the phenomenal work of other open-source projects. Credits go to:

Contributing

Everyone is welcome to contribute. You can support the project by opening Pull Requests for new features, fixes, or general UX improvements.