

Aphrodite

Breathing Life into Language

Aphrodite is the official backend engine for PygmalionAI. It is designed to serve as the inference endpoint for the PygmalionAI website, and to allow serving the Pygmalion models to a large number of users with blazing fast speeds (thanks to FasterTransformer and vLLM).

Aphrodite builds upon and integrates the exceptional work from various projects.

The compute necessary for Aphrodite's development is provided by Arc Compute.

Features

  • Continuous Batching
  • Efficient K/V management with PagedAttention
  • Optimized CUDA kernels for improved inference
  • Quantization support via GPTQ, GGUF, AWQ, QuIP#, and SqueezeLLM (see the example launch command after this list)
  • Distributed inference
  • Variety of sampling methods (Mirostat, Locally Typical Sampling, Tail-Free Sampling, etc.)
  • 8-bit KV Cache for higher context lengths and throughput
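
For example, serving a pre-quantized checkpoint only requires pointing the server at it. A minimal sketch, assuming a GPTQ checkpoint and a vLLM-style --quantization flag (the model name here is illustrative):

python -m aphrodite.endpoints.openai.api_server --model TheBloke/Pygmalion-2-7B-GPTQ --quantization gptq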

Quickstart

pip install aphrodite-engine

python -m aphrodite.endpoints.openai.api_server --model PygmalionAI/pygmalion-2-7b

[!CAUTION] If the installation reports CUDA kernel errors, run pip install aphrodite-engine==0.4.5 instead.

This will create an OpenAI-compatible API server accessible at port 2242 on localhost. You can plug the API into a UI that supports Kobold, such as SillyTavern.
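
Since the server speaks the OpenAI API, any OpenAI-compatible client can talk to it. A minimal sketch using the official openai Python package (the placeholder API key and prompt are assumptions; adjust to your setup):

from openai import OpenAI

# Point the client at the local Aphrodite server instead of api.openai.com.
client = OpenAI(base_url="http://localhost:2242/v1", api_key="EMPTY")

completion = client.completions.create(
    model="PygmalionAI/pygmalion-2-7b",
    prompt="Once upon a time,",
    max_tokens=128,
)
print(completion.choices[0].text)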

Docker

Additionally, we provide a docker image for easy deployment. Here's a base command to get you started:

sudo docker run --gpus '"all"' --shm-size 10g -p 2242:2242 -it alpindale/aphrodite-engine

This will pull the Aphrodite Engine image (~9 GiB download) and drop you into a bash shell. From there, follow the instructions here to create an OpenAI-compatible API.
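
To avoid re-downloading model weights on every run, you can mount your host's Hugging Face cache into the container. The cache paths below are assumptions based on the default locations:

sudo docker run --gpus '"all"' --shm-size 10g -p 2242:2242 -v ~/.cache/huggingface:/root/.cache/huggingface -it alpindale/aphrodite-engine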

Performance

Speeds vary with different GPUs, model sizes, quantization schemes, batch sizes, and so on. Here are some baseline benchmarks, conducted by requesting as many completions as possible from the API server. Keep in mind that these are theoretical peak throughputs with parallel decoding, at as high a batch size as possible. Per-request generation speed is a fraction of this, at 30-40 t/s.
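
A measurement of this kind boils down to firing many completion requests at the server concurrently and dividing the total output tokens by wall-clock time. The sketch below illustrates the idea; the endpoint, prompt, and request count are assumptions, and this is not the exact harness used for the numbers here:

import asyncio
import time

import aiohttp

API_URL = "http://localhost:2242/v1/completions"  # assumed local server

async def one_request(session: aiohttp.ClientSession) -> int:
    # Request output tokens only; a short prompt keeps prefill cost negligible.
    payload = {
        "model": "PygmalionAI/pygmalion-2-7b",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
    }
    async with session.post(API_URL, json=payload) as resp:
        data = await resp.json()
        return data["usage"]["completion_tokens"]

async def main(n_requests: int = 256) -> None:
    async with aiohttp.ClientSession() as session:
        start = time.perf_counter()
        counts = await asyncio.gather(*(one_request(session) for _ in range(n_requests)))
        elapsed = time.perf_counter() - start
    print(f"{sum(counts) / elapsed:.1f} output tokens/s over {n_requests} concurrent requests")

asyncio.run(main())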

High Batch Size Performance

[!NOTE]
The numbers below are the theoretical peak achieved by requesting output tokens only, at very high batch sizes. At lower batch sizes with much larger prompts, the results will be vastly different. Throughput refers to output tokens per second.

Model       Quantization  Bits  GPU       Throughput (T/s)
Mistral 7B  None          16    RTX 4090  5489.3
Mistral 7B  AWQ           4     RTX 4090  4078.8
Mistral 7B  GPTQ          4     RTX 4090  7850.4
Mistral 7B  GPTQ          8     RTX 4090  7658.0
Mistral 7B  GGUF          Q8    RTX 4090  5141.2
Mistral 7B  GGUF          Q6KM  RTX 4090  5791.7
Mistral 7B  GGUF          Q5KM  RTX 4090  5786.2
Mistral 7B  GGUF          Q4KM  RTX 4090  5815.8
Mistral 7B  SqueezeLLM    4     RTX 4090  549.5
Llama-2 7B  None          16    RTX 4090  2576.2
Llama-2 7B  AWQ           4     RTX 4090  3551.3
Llama-2 7B  GPTQ          4     RTX 4090  2919.1
Llama-2 7B  GGUF          Q4KM  RTX 4090  2726.6
Llama-2 7B  GGUF          Q5KM  RTX 4090  2763.4
Llama-2 7B  GGUF          Q6KM  RTX 4090  2694.7
Llama-2 7B  GGUF          Q8    RTX 4090  2647.0
Llama-2 7B  SqueezeLLM    4     RTX 4090  580.3

Batch Size 1

These are the speeds a user would typically get when requesting a single output with a sizable prompt and output length; essentially, a normal chatting experience.

The following results were gathered by sending a request with 2000 prompt tokens and requesting 1024 tokens with ignore_eos=True.
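
In API terms, that is a single request along these lines; ignore_eos is an engine-specific extension that forces the full 1024 tokens to be generated, and the prompt placeholder stands in for the 2000 prompt tokens:

import requests

resp = requests.post(
    "http://localhost:2242/v1/completions",
    json={
        "model": "PygmalionAI/pygmalion-2-7b",
        "prompt": "<~2000 tokens of prompt text>",
        "max_tokens": 1024,
        "ignore_eos": True,  # engine extension: keep generating past EOS
    },
)
print(resp.json()["choices"][0]["text"])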

Model       Quantization  Bits  GPU       Throughput (T/s)
Mistral 7B  None          16    RTX 4090  54.0
Mistral 7B  AWQ           4     RTX 4090  128.2
Mistral 7B  GPTQ          8     RTX 4090  92.8
Mistral 7B  GPTQ          4     RTX 4090  146.8
Mistral 7B  GGUF          Q8    RTX 4090  91.0
Mistral 7B  GGUF          Q6KM  RTX 4090  105.4
Mistral 7B  GGUF          Q5KM  RTX 4090  117.8
Mistral 7B  GGUF          Q4KM  RTX 4090  128.9
Llama-2 7B  None          16    RTX 4090  55.2
Llama-2 7B  GPTQ          8     RTX 4090  90.2
Llama-2 7B  GPTQ          4     RTX 4090  128.0
Llama-2 7B  AWQ           4     RTX 4090  116.3
Llama-2 7B  GGUF          Q8    RTX 4090  88.1
Llama-2 7B  GGUF          Q6KM  RTX 4090  99.4
Llama-2 7B  GGUF          Q5KM  RTX 4090  109.9
Llama-2 7B  GGUF          Q4KM  RTX 4090  118.9

Requirements

  • Operating System: Linux (or WSL for Windows)
  • Python: at least 3.8

Build Requirements:

  • CUDA >=12

For supported GPUs, see here.

Installation

For installation via pip, see the Quickstart above. Building from source additionally requires the Build Requirements listed above.

Usage

For detailed usage instructions, please refer to the wiki page. Aphrodite provides many different options for LLM inference, so read through the list of options here.

Notes

  1. By design, Aphrodite takes up 90% of your GPU's VRAM. If you're not serving an LLM at scale, you may want to limit the amount of memory it takes up. You can do this by launching the server with the --gpu-memory-utilization 0.6 flag (0.6 means 60%); see the example command after this list.

  2. You can view the full list of commands by running python -m aphrodite.endpoints.openai.api_server --help.

  3. Context length extension via the RoPE method is supported for most models. Use the command-line flag --max-model-len to specify a desired context length, and the engine will adjust the RoPE scaling accordingly; see the example command after this list.

  4. Please refer to the FAQ & Issues if you run into problems. If you don't find an answer there, please make an issue.
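
Putting notes 1 and 3 together, a single-user launch might look like the following; the 60% utilization and 8192-token context are illustrative values, not recommendations:

python -m aphrodite.endpoints.openai.api_server --model PygmalionAI/pygmalion-2-7b --gpu-memory-utilization 0.6 --max-model-len 8192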

Acknowledgements

Aphrodite Engine would not have been possible without the phenomenal work of other open-source projects. Credit goes to the projects it builds upon, including vLLM and FasterTransformer.

Contributing

Everyone is welcome to contribute. You can support the project by opening Pull Requests for new features, fixes, or general UX improvements.