PygmalionAI's large-scale inference engine
pygmalion.chat

Breathing Life into Language

aphrodite

Aphrodite is the official backend engine for PygmalionAI. It is designed to serve as the inference endpoint for the PygmalionAI website, and to allow serving the Pygmalion models to a large number of users with blazing fast speeds (thanks to FasterTransformer and vLLM).

Aphrodite builds upon and integrates the exceptional work from various projects.

Features

  • Continuous Batching
  • Efficient K/V management with PagedAttention
  • Optimized CUDA kernels for improved inference
  • Quantization support via AWQ and GPTQ
  • Distributed inference
  • A variety of sampling methods (Top-A, Tail-Free Sampling, repetition penalty)
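To illustrate one of the samplers above: Top-A keeps only the tokens whose probability clears a threshold derived from the most likely token. This is a minimal sketch of the filtering rule, not the engine's actual implementation (see SamplingParams in the source for the real parameters):

```python
# Top-A filtering sketch: keep tokens whose probability is at least
# a * (max probability)^2. A high-confidence top token therefore prunes
# the tail aggressively; a flat distribution keeps more candidates.

def top_a_filter(probs, a=0.2):
    """Return the indices of tokens that survive Top-A filtering."""
    p_max = max(probs)
    threshold = a * p_max ** 2
    return [i for i, p in enumerate(probs) if p >= threshold]

probs = [0.5, 0.3, 0.15, 0.05]
print(top_a_filter(probs, a=0.5))  # threshold = 0.5 * 0.25 = 0.125
```

With a = 0.5 and a top probability of 0.5, the threshold is 0.125, so the last token (p = 0.05) is dropped and the first three survive.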

Quickstart

pip install aphrodite-engine

python -m aphrodite.endpoints.api_server_kobold --model PygmalionAI/pygmalion-2-7b

This will create a KoboldAI-compatible API server accessible at port 2242 on localhost. You can plug the API into a UI that supports the Kobold API, such as SillyTavern.
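You can also query the server from a script. The helper below builds a KoboldAI-style request to the /api/v1/generate route using only the standard library; the field names follow the KoboldAI API that the server mimics, so treat them as assumptions and adjust if your Aphrodite version differs:

```python
import json

# Hypothetical helper: build the URL and JSON body for a KoboldAI-style
# generate request against the server started above (localhost:2242).

def build_generate_request(prompt, max_length=80, temperature=0.7,
                           host="localhost", port=2242):
    url = f"http://{host}:{port}/api/v1/generate"
    payload = {
        "prompt": prompt,
        "max_length": max_length,       # number of tokens to generate
        "temperature": temperature,     # sampling temperature
    }
    return url, json.dumps(payload)

url, body = build_generate_request("Hello, my name is")
print(url)
# With the server running, POST `body` to `url` (e.g. with requests or
# urllib) and read the generated text from the JSON response.
```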

Requirements

  • Operating System: Linux (or WSL for Windows)
  • Python: at least 3.8
  • CUDA 11.8 (recommended, supports 11.0-11.8)

Supported GPUs

Any NVIDIA GPU with a compute capability of 6.0 or higher. Refer to this page for a full list of CUDA GPUs:

https://developer.nvidia.com/cuda-gpus.

Or, you can manually find out your GPU's Compute Capability by opening a Python interpreter and running:

>>> import torch    # if you don't have `torch` installed, run `pip install torch` first
>>> print(torch.cuda.get_device_capability())

This should print something like (7, 5), which indicates a compute capability of 7.5.

If you do not meet the minimum CC, you will not be able to run Aphrodite. At the moment, a compute capability of 7.5 or higher is required for the AWQ quantization scheme; you can use GPTQ if your GPU does not support it.
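The two thresholds above can be checked in one short script. This is a convenience sketch (not part of Aphrodite); the import is guarded so it degrades gracefully on machines without a CUDA build of PyTorch:

```python
# Check whether the local GPU meets Aphrodite's requirements:
# CC >= 6.0 to run at all, CC >= 7.5 for AWQ (use GPTQ below that).

try:
    import torch
except ImportError:
    torch = None  # `pip install torch` to run the live check below

def check_capability(major, minor):
    cc = major + minor / 10
    if cc < 6.0:
        return "unsupported"
    if cc < 7.5:
        return "supported (GPTQ only, no AWQ)"
    return "supported (AWQ and GPTQ)"

if torch is not None and torch.cuda.is_available():
    print(check_capability(*torch.cuda.get_device_capability()))
```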

Setting up the environment

If you run into any problems, please refer to the Common Issues section, or open an Issue if you can't find the answer there.

Aphrodite requires a slightly specialized environment to run, as the latest CUDA versions are not yet supported. You can use Conda to easily configure your environment. If you're on Windows, make sure you have WSL2 installed. You can do this by opening Windows PowerShell and running:

wsl --install

Aphrodite provides an easy-to-use install script that sets up a suitable environment, whether you install via the pip package or build from source.

The requirements are git, wget, bzip2, and tar, all of which are available on the majority of Linux distributions, including WSL.

git clone https://github.com/PygmalionAI/aphrodite-engine && cd aphrodite-engine

Then you can simply run:

./runtime.sh python -m aphrodite.endpoints.api_server_kobold --help

The ./runtime.sh prefix needs to be prepended to every command you run that involves Aphrodite, as it launches your commands within the created environment. If you prefer not to do that, you can run ./runtime.sh by itself to enter the environment and then execute commands as normal.

To update the engine, run git pull and then ./update-runtime.sh to rebuild the environment.

Usage

Aphrodite Engine provides three API endpoint types:

  1. KoboldAI:

    python -m aphrodite.endpoints.api_server_kobold --model PygmalionAI/pygmalion-2-7b
    
  2. Text Generation WebUI

    python -m aphrodite.endpoints.api_server_ooba --model PygmalionAI/pygmalion-2-7b
    
  3. OpenAI

    python -m aphrodite.endpoints.openai.api_server --model PygmalionAI/pygmalion-2-7b
    

Please refer to each endpoint's documentation on how to query them. Generally, they all work with SillyTavern.
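For example, the OpenAI-compatible endpoint accepts the standard completions request shape. The sketch below builds such a request with the standard library; the route and fields follow the OpenAI completions API that the server mimics, and the port assumes the same 2242 default as the other endpoints:

```python
import json
from urllib import request

# Build (but don't yet send) a completions request for the
# OpenAI-compatible server started above.
payload = {
    "model": "PygmalionAI/pygmalion-2-7b",
    "prompt": "Once upon a time",
    "max_tokens": 64,
}
req = request.Request(
    "http://localhost:2242/v1/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# resp = request.urlopen(req)  # uncomment once the server is running
# print(json.load(resp)["choices"][0]["text"])
print(req.full_url)
```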

To run a quantized model, use the --quantization flag with either gptq or awq and the --dtype float16 flag. Make sure your model is in AWQ/GPTQ format and not GGUF. Run with only the --help flag for a full list of arguments.

For the full list of Sampling parameters, please refer to SamplingParams:

https://github.com/PygmalionAI/aphrodite-engine/blob/56161a9674f1f9e8927aaa77e5d339498bb6eeee/aphrodite/common/sampling_params.py#L24-L87
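For offline (non-server) use, the engine exposes a Python API. The class names below assume Aphrodite mirrors vLLM's LLM/SamplingParams interface; check sampling_params.py linked above for the authoritative parameter list, and treat this as a sketch:

```python
# Illustrative offline use; the import is guarded so the snippet runs
# even where aphrodite-engine is not installed.
try:
    from aphrodite import LLM, SamplingParams
except ImportError:
    LLM = SamplingParams = None

params_kwargs = dict(
    temperature=0.8,   # softens the token distribution
    top_p=0.95,        # nucleus sampling cutoff
    max_tokens=128,    # generation length cap
)

if SamplingParams is not None:
    params = SamplingParams(**params_kwargs)
    # Loading the model downloads weights, so it is commented out here:
    # llm = LLM(model="PygmalionAI/pygmalion-2-7b")
    # print(llm.generate("Hello!", params)[0].outputs[0].text)
```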

Common Issues

  • The detected CUDA version (12.1) mismatches the version that was used to compile PyTorch (11.8). Please make sure to use the same CUDA versions.

This is normally due to your environment referring to the global installation of CUDA and not the one in your current env. Run which nvcc and note down the output. For example, if your output is /home/anon/miniconda3/envs/aphrodite/bin/nvcc, run this command:

export CUDA_HOME=/home/anon/miniconda3/envs/aphrodite

Then run the installation command again.

  • Aborted due to the lack of CPU swap space. Please increase the swap space to avoid this error.

You've run out of swap space! Please pass the --swap-space flag followed by the amount of swap (in GB) to allocate. Make sure to leave enough RAM for the model loading process.

Notes

  1. By design, Aphrodite takes up 90% of your GPU's VRAM. If you're not serving an LLM at scale, you may want to limit the amount of memory it takes up. You can do this in the API example by launching the server with the --gpu-memory-utilization 0.6 flag (0.6 means 60%).

  2. You can view the full list of commands by running python -m aphrodite.endpoints.api_server_ooba --help.

  3. Context Length extension via the RoPE method is supported for Llama models. Edit the config.json with the following values:

    "rope_scaling": {
    "factor": 2.0,
    "type": "dynamic"
    },
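If you prefer not to hand-edit the file, the edit from Note 3 can be applied programmatically. This is a convenience helper (not part of Aphrodite); the temporary file below stands in for your model directory's config.json:

```python
import json
import os
import tempfile

# Add the RoPE scaling block to a model's config.json in place.
def enable_rope_scaling(config_path, factor=2.0, scaling_type="dynamic"):
    with open(config_path) as f:
        config = json.load(f)
    config["rope_scaling"] = {"factor": factor, "type": scaling_type}
    with open(config_path, "w") as f:
        json.dump(config, f, indent=2)
    return config

# Demo on a throwaway config; point config_path at your model instead.
config_path = os.path.join(tempfile.mkdtemp(), "config.json")
with open(config_path, "w") as f:
    json.dump({"model_type": "llama"}, f)

print(enable_rope_scaling(config_path)["rope_scaling"])
```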
    

Acknowledgements

Aphrodite Engine would not have been possible without the phenomenal work of other open-source projects, notably vLLM (whose PagedAttention powers the engine) and FasterTransformer.

Contributing

We accept PRs! There will likely be a few typos or other errors we've failed to catch, so please let us know either via an issue or by making a Pull Request.