PygmalionAI's large-scale inference engine
pygmalion.chat

It is designed to serve as the inference endpoint for the PygmalionAI website, and to allow serving the Pygmalion models to a large number of users with blazing fast speeds (thanks to vLLM's Paged Attention).

AlpinDale 8442b36171 spec decoding: set the draft model ctxlen to target model 1 month ago
.github 55261b09d6 ci: fix docs deployment (#750) 3 months ago
aphrodite 8442b36171 spec decoding: set the draft model ctxlen to target model 1 month ago
assets b3df2351c8 readme: update with bsz1 graph 10 months ago
cmake 0256ed236b feat: windows support (#790) 2 months ago
docker f1d0b77c92 [0.6.0] Release Candidate (#481) 4 months ago
docs d486d7ac01 docs: add linux arm64/aarch64/GH200 installation tips (#851) 1 month ago
examples 8a71788372 Add OLMoE (#772) 2 months ago
kernels 93bc863591 feat: Machete Kernels for Hopper GPUs (#842) 1 month ago
patches eee3cf5dab fix: make AMD usable (#775) 2 months ago
tests abfd4465ca feat: add support for chunked prefill + prefix caching (#871) 1 month ago
.clang-format f1d0b77c92 [0.6.0] Release Candidate (#481) 4 months ago
.dockerignore f1d0b77c92 [0.6.0] Release Candidate (#481) 4 months ago
.gitignore 93bc863591 feat: Machete Kernels for Hopper GPUs (#842) 1 month ago
CMakeLists.txt 4f9fea4c4d fix: ROCm build (#817) 1 month ago
CODE_OF_CONDUCT.md e7ea38f243 chore: add contribution guidelines + Code of Conduct (#507) 6 months ago
CONTRIBUTING.md e7ea38f243 chore: add contribution guidelines + Code of Conduct (#507) 6 months ago
Dockerfile 1405051912 attention: add `AttentionState` abstraction (#863) 1 month ago
Dockerfile.cpu d289c3855b fix: install protobuf for cpu (#716) 3 months ago
Dockerfile.neuron 31483a7d3b fix: manually install triton for other devices to prevent outlines errors (#697) 3 months ago
Dockerfile.openvino 31483a7d3b fix: manually install triton for other devices to prevent outlines errors (#697) 3 months ago
Dockerfile.ppc64le 31483a7d3b fix: manually install triton for other devices to prevent outlines errors (#697) 3 months ago
Dockerfile.rocm 4d781b22d3 docker: apply AMD patch in the dockerfile (#777) 2 months ago
Dockerfile.tpu 8cfbe62a7c chore: bump lmfe to v0.10.6 and include triton for tpu and xpu dockerfiles (#682) 3 months ago
Dockerfile.xpu 8cfbe62a7c chore: bump lmfe to v0.10.6 and include triton for tpu and xpu dockerfiles (#682) 3 months ago
LICENSE 5adcb33e14 Revert license back to AGPLv3 (#38) 1 year ago
MANIFEST.in f1d0b77c92 [0.6.0] Release Candidate (#481) 4 months ago
README.md 5878e887f2 docs: update readme and docs (#757) 3 months ago
amdpatch.sh 4f9fea4c4d fix: ROCm build (#817) 1 month ago
build_and_upload_docker.sh 6e25b03f25 ci: docker build and upload script 2 months ago
build_wheel.sh f1d0b77c92 [0.6.0] Release Candidate (#481) 4 months ago
config.yaml f1d0b77c92 [0.6.0] Release Candidate (#481) 4 months ago
env.py 5dd0145414 chore: update the env.py script and the bug report template (#662) 4 months ago
environment.yaml f1d0b77c92 [0.6.0] Release Candidate (#481) 4 months ago
formatting.ps1 f98e7b2f8c feat: add HQQ quantization support (#795) 2 months ago
formatting.sh f1d0b77c92 [0.6.0] Release Candidate (#481) 4 months ago
install_windows.ps1 f0e00f1b43 ci: bump to 0.6.3.post1 (#801) 2 months ago
mypy.ini 9d81716bfd [v0.5.3] Release Candidate (#388) 8 months ago
pyproject.toml c6c91edab7 ci: update & overhaul test units (#769) 1 month ago
pytest.ini 22427602eb feat: add top-nsigma sampling method 1 month ago
requirements-adag.txt f1d0b77c92 [0.6.0] Release Candidate (#481) 4 months ago
requirements-build.txt 82eabb6aa7 build: add jinja2 to requirements file (#862) 1 month ago
requirements-common.txt 538471f76e chore: bump mistral_common to 1.5.0 (#844) 1 month ago
requirements-cpu.txt f1d0b77c92 [0.6.0] Release Candidate (#481) 4 months ago
requirements-cuda.txt 0256ed236b feat: windows support (#790) 2 months ago
requirements-dev.txt f1d0b77c92 [0.6.0] Release Candidate (#481) 4 months ago
requirements-lint.txt 62111fab17 feat: allow serving encoder-decoder models in the API server (#664) 4 months ago
requirements-neuron.txt 9d81716bfd [v0.5.3] Release Candidate (#388) 8 months ago
requirements-openvino.txt f1d0b77c92 [0.6.0] Release Candidate (#481) 4 months ago
requirements-rocm.txt eee3cf5dab fix: make AMD usable (#775) 2 months ago
requirements-test.txt 04da8c33bd Revert "chore: use the `compressed-tensors` library to avoid code reuse (#704)" (#706) 3 months ago
requirements-tpu.txt f1d0b77c92 [0.6.0] Release Candidate (#481) 4 months ago
requirements-xpu.txt f1d0b77c92 [0.6.0] Release Candidate (#481) 4 months ago
runtime.sh cbe37e8b18 fix: speed up cuda home detection (#288) 10 months ago
setup.py 22425b689d fix: XPU build 1 month ago
update-runtime.sh f1d0b77c92 [0.6.0] Release Candidate (#481) 4 months ago

README.md

Breathing Life into Language


Aphrodite is the official backend engine for PygmalionAI. It is designed to serve as the inference endpoint for the PygmalionAI website, and to allow serving Hugging Face-compatible models to a large number of users with blazing fast speeds (thanks to vLLM's Paged Attention).

Aphrodite builds upon and integrates the exceptional work from various projects.

The compute necessary for Aphrodite's development is provided by Arc Compute.

🔥 News

(09/2024) v0.6.1 is here. You can now load FP16 models as FP2-FP7 quant formats to achieve extremely high throughput and save on memory.

(09/2024) v0.6.0 is released, with huge throughput improvements, many new quant formats (including fp8 and llm-compressor), asymmetric tensor parallel, pipeline parallel and more! Please check out the exhaustive documentation for the User and Developer guides.

Features

  • Continuous Batching
  • Efficient K/V management with PagedAttention from vLLM
  • Optimized CUDA kernels for improved inference
  • Quantization support via AQLM, AWQ, Bitsandbytes, GGUF, GPTQ, QuIP#, Smoothquant+, SqueezeLLM, Marlin, FP2-FP12
  • Distributed inference
  • 8-bit KV Cache for higher context lengths and throughput, in both FP8 E5M2 and E4M3 formats (see the sketch below)
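
As a sketch, enabling the FP8 KV cache at launch might look like this (assuming Aphrodite exposes the same --kv-cache-dtype flag as upstream vLLM, and using the launch command from the Quickstart below; check aphrodite run --help for the exact name):

aphrodite run meta-llama/Meta-Llama-3.1-8B-Instruct --kv-cache-dtype fp8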

Quickstart

Install the engine:

pip install -U aphrodite-engine

Then launch a model:

aphrodite run meta-llama/Meta-Llama-3.1-8B-Instruct

This will create an OpenAI-compatible API server that can be accessed at port 2242 on localhost. You can plug the API into any UI that supports the OpenAI API, such as SillyTavern.
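
Once the server is up, you can query it with any OpenAI-style client. A minimal sketch with curl, assuming the default port of 2242 and the standard /v1/chat/completions route:

curl http://localhost:2242/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
        "messages": [{"role": "user", "content": "Hello!"}]
    }'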

Please refer to the documentation for the full list of arguments and flags you can pass to the engine.

You can play around with the engine in the demo on Google Colab.

Docker

Additionally, we provide a Docker image for easy deployment. Here's a basic command to get you started:

docker run --runtime nvidia --gpus all \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    -p 2242:2242 \
    --ipc=host \
    alpindale/aphrodite-openai:latest \
    --model NousResearch/Meta-Llama-3.1-8B-Instruct \
    --tensor-parallel-size 8 \
    --api-keys "sk-empty"

This will pull the Aphrodite Engine image (~8GiB download) and launch the engine with the Llama-3.1-8B-Instruct model at port 2242. To restrict the engine to specific GPUs, pass --env "CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7" (with your own device IDs) as an additional docker run flag.

Requirements

  • Operating System: Linux (or WSL for Windows)
  • Python: 3.8 to 3.12

For Windows users, it's recommended to use tabbyAPI instead if you do not need batching support.

Build Requirements:

  • CUDA >= 11

For supported devices, see here. Generally speaking, all semi-modern GPUs are supported, down to Pascal (GTX 10xx, P40, etc.). We also support AMD GPUs, Intel CPUs and GPUs, Google TPUs, and AWS Inferentia.

Notes

  1. By design, Aphrodite takes up 90% of your GPU's VRAM. If you're not serving an LLM at scale, you may want to limit the amount of memory it takes up. You can do this by passing the --gpu-memory-utilization 0.6 flag (0.6 means 60%) when launching the server; see the example after this list.

  2. You can view the full list of commands by running aphrodite run --help.
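
For instance, a launch limited to 60% of VRAM would look like this (reusing the Quickstart model):

aphrodite run meta-llama/Meta-Llama-3.1-8B-Instruct --gpu-memory-utilization 0.6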

Acknowledgements

Aphrodite Engine would not have been possible without the phenomenal work of other open-source projects. Credits go to the many projects it builds upon, most notably vLLM.

Contributing

Everyone is welcome to contribute. You can support the project by opening Pull Requests for new features, fixes, or general UX improvements.