PygmalionAI's large-scale inference engine
pygmalion.chat

It is designed to be the inference endpoint for the PygmalionAI website and to serve the Pygmalion models to a large number of users at blazing-fast speeds (thanks to vLLM's PagedAttention).


Breathing Life into Language


Aphrodite is an inference engine that optimizes the serving of HuggingFace-compatible models at scale. Built on vLLM's PagedAttention technology, it delivers high-performance model inference for multiple concurrent users. Developed through a collaboration between PygmalionAI and Ruliad, Aphrodite serves as the backend engine powering both organizations' chat platforms and API infrastructure.

Aphrodite builds upon and integrates the exceptional work from various projects, primarily vLLM.

🔥 News

(09/2024) v0.6.1 is here. You can now load FP16 models in FP2-FP7 quant formats to achieve extremely high throughput and save on memory; see the sketch below.

(09/2024) v0.6.0 is released, with huge throughput improvements, many new quant formats (including fp8 and llm-compressor), asymmetric tensor parallel, pipeline parallel and more! Please check out the exhaustive documentation for the User and Developer guides.
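
To try the FP2-FP7 formats from the v0.6.1 note, quantization is selected at launch time. The value "fp6" below is an assumption for illustration; check the quantization documentation or aphrodite run --help for the exact names your release accepts.

# Hypothetical invocation: load an FP16 checkpoint in an FP6 quant format.
# The "fp6" value is assumed; consult the docs for the exact option names.
aphrodite run meta-llama/Meta-Llama-3.1-8B-Instruct --quantization fp6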

Features

  • Continuous Batching
  • Efficient K/V management with PagedAttention from vLLM
  • Optimized CUDA kernels for improved inference
  • Quantization support via AQLM, AWQ, Bitsandbytes, GGUF, GPTQ, QuIP#, Smoothquant+, SqueezeLLM, Marlin, FP2-FP12, and more
  • Distributed inference
  • 8-bit KV Cache for longer contexts and higher throughput, in both FP8 E5M2 and E4M3 formats (see the sketch after this list)
  • Support for modern samplers such as DRY, XTC, and more
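
As a concrete illustration, here is a sketch combining two of the features above: the 8-bit KV cache and distributed inference. The flag names are assumed from Aphrodite's vLLM lineage and may differ between releases; confirm them with aphrodite run --help.

# Sketch (assumed flags): enable the FP8 KV cache and shard the model
# across two GPUs. Verify flag names with `aphrodite run --help`.
aphrodite run meta-llama/Meta-Llama-3.1-8B-Instruct \
    --kv-cache-dtype fp8 \
    --tensor-parallel-size 2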

Quickstart

Install the engine:

pip install -U aphrodite-engine

Then launch a model:

aphrodite run meta-llama/Meta-Llama-3.1-8B-Instruct

This will create an OpenAI-compatible API server accessible at port 2242 on localhost. You can plug the API into any UI that supports the OpenAI API, such as SillyTavern.
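
For a quick smoke test without a UI, you can query the server directly. The request below assumes the standard OpenAI-style /v1/completions route; adjust the model name to whatever you launched:

curl http://localhost:2242/v1/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
        "prompt": "Once upon a time,",
        "max_tokens": 64
      }'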

Please refer to the documentation for the full list of arguments and flags you can pass to the engine.

You can play around with the engine in the Colab demo:

Open In Colab

Docker

Additionally, we provide a Docker image for easy deployment. Here's a basic command to get you started:

# Optionally restrict the GPUs visible to the container, e.g.:
# --env "CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7"
docker run --runtime nvidia --gpus all \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    -p 2242:2242 \
    --ipc=host \
    alpindale/aphrodite-openai:latest \
    --model NousResearch/Meta-Llama-3.1-8B-Instruct \
    --tensor-parallel-size 8 \
    --api-keys "sk-empty"

This will pull the Aphrodite Engine image (~8GiB download) and launch the engine with the Llama-3.1-8B-Instruct model, sharded across 8 GPUs, serving on port 2242.
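
Once the container is up, you can confirm the server is responding before pointing a UI at it. This assumes the standard OpenAI-style /v1/models route; the Bearer token matches the --api-keys value passed above:

curl http://localhost:2242/v1/models \
    -H "Authorization: Bearer sk-empty"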

Requirements

  • Operating System: Linux, Windows (requires building from source)
  • Python: 3.8 to 3.12

Build Requirements:

  • CUDA >= 11

For supported devices, see here. Generally speaking, all semi-modern GPUs are supported, down to Pascal (GTX 10xx, P40, etc.). We also support AMD GPUs, Intel CPUs and GPUs, Google TPUs, and AWS Inferentia.

Notes

  1. By design, Aphrodite pre-allocates 90% of your GPU's VRAM. If you're not serving an LLM at scale, you may want to limit how much memory it takes up. You can do this by launching the server with --gpu-memory-utilization 0.6 (0.6 means 60%), or with --single-user-mode to allocate only as much memory as a single sequence needs (see the example after these notes).

  2. You can view the full list of commands by running aphrodite run --help.
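
For instance, to apply the memory cap from note 1 on a single-GPU machine (both flags come straight from that note):

# Pre-allocate only 60% of the GPU's VRAM instead of the default 90%.
aphrodite run meta-llama/Meta-Llama-3.1-8B-Instruct --gpu-memory-utilization 0.6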

Acknowledgements

Aphrodite Engine would not have been possible without the phenomenal work of other open-source projects. Credits go above all to vLLM, whose work Aphrodite builds upon and integrates.

Contributing

Everyone is welcome to contribute. You can support the project by opening Pull Requests for new features, fixes, or general UX improvements.