Commit History

Author SHA1 Message Date
AlpinDale feb5840f2a feat: async tokenization (#374) 9 months ago
AlpinDale 29c241c115 fix: explicitly disallow installation on non-linux platforms (#373) 9 months ago
AlpinDale 97a2b26c97 fix: assertion error when use_sliding_window is present 9 months ago
AlpinDale 0f6d56b07f feat: model executor refactor (#367) 9 months ago
AlpinDale f8dfac6372 chore: attention refactor and upstream sync apr01 (#365) 9 months ago
AlpinDale c41462cfcd feat: exllamav2 quantization (#305) 10 months ago
AlpinDale c2d77b1822 chore: logging refactor (#302) 10 months ago
AlpinDale a98babfb74 fix: bnb on Turing GPUs (#299) 10 months ago
AlpinDale 9810daa699 feat: INT8 KV Cache (#298) 10 months ago
AlpinDale e0c35bb353 feat: bitsandbytes and `--load-in{4,8}bit` support (#294) 10 months ago
AlpinDale 705821a7fe feat: AQLM quantization support (#293) 10 months ago
AlpinDale a1d8ab9f3e fix: lora on quantized models (barred gguf) (#292) 10 months ago
AlpinDale ac82b67f75 feat: naive context shift and various QoL changes (#289) 10 months ago
AlpinDale 72229a94da feat: better marlin kernels (#285) 10 months ago
AlpinDale 657aec0cbd refactor: OpenAI endpoint (#261) 10 months ago
AlpinDale 842912d022 feat: on-the-fly gguf conversion (#250) 11 months ago
AlpinDale ea0f57b233 feat: allow further support for non-cuda devices (#247) 11 months ago
AlpinDale c3a221eb02 feat: GGUF, QuIP#, and Marlin support (#228) 11 months ago
AlpinDale 31c95011a6 feat: FP8 E5M2 KV Cache (#226) 11 months ago
AlpinDale 641bb0f6e9 feat: add custom allreduce kernels (#224) 11 months ago
AlpinDale 26a717b49f fix: use head_dim if available 11 months ago
AlpinDale c0aac15421 feat: S-LoRA support (#222) 11 months ago
AlpinDale 8fa608aeb7 feat: replace Ray with NCCL for control plane comms (#221) 11 months ago
AlpinDale 15a0454172 feat: FP8 KV Cache (#185) 1 year ago
AlpinDale b9b295d74e chore: backlogs 1 (#191) 1 year ago
AlpinDale 7d91e9e0f2 feat: CUDA graphs (#172) 1 year ago
AlpinDale 725be3e0de feat: mixtral HF with expert parallelism (#167) 1 year ago
AlpinDale 35e9cf707c chore: force pt for mixtral (#164) 1 year ago
AlpinDale 653da510d1 chore: rewrite InputMetadata (#143) 1 year ago
AlpinDale 1334a833a4 feat: AMD ROCm support (#95) 1 year ago