Commit History

Author SHA1 Message Date
AlpinDale 89a2c6dee1 chore: refactor `MultiModalConfig` initialization and profiling (#745) 4 months ago
AlpinDale 81c28d2a7f fix: use nvml to get consistent device names (#739) 4 months ago
AlpinDale b03fa02397 refactor: base worker input refactor for multi-step (#683) 4 months ago
AlpinDale bf88c8567e feat: mamba model support (#674) 4 months ago
AlpinDale 62111fab17 feat: allow serving encoder-decoder models in the API server (#664) 4 months ago
AlpinDale c147670c13 fix: clean up incorrect log in worker (#636) 4 months ago
AlpinDale a0e446a17d feat: initial encoder-decoder support with BART model (#633) 4 months ago
AlpinDale f1d0b77c92 [0.6.0] Release Candidate (#481) 4 months ago
AlpinDale 9d81716bfd [v0.5.3] Release Candidate (#388) 8 months ago
AlpinDale e3252edd07 fix: remove event and stream, add typing (#382) 9 months ago
AlpinDale f8dfac6372 chore: attention refactor and upstream sync apr01 (#365) 9 months ago
AlpinDale e42a78381a feat: switch from pylint to ruff (#322) 10 months ago
AlpinDale 9810daa699 feat: INT8 KV Cache (#298) 10 months ago
AlpinDale 4b80b42362 fix: memory leaks due to nccl cuda graphs (#275) 11 months ago
Thomas Xin 43cf0e98a0 fix: worker initialization on WSL (#260) 11 months ago
AlpinDale ea0f57b233 feat: allow further support for non-cuda devices (#247) 11 months ago
AlpinDale 31c95011a6 feat: FP8 E5M2 KV Cache (#226) 1 year ago
AlpinDale 641bb0f6e9 feat: add custom allreduce kernels (#224) 1 year ago
AlpinDale c0aac15421 feat: S-LoRA support (#222) 1 year ago
AlpinDale 8fa608aeb7 feat: replace Ray with NCCL for control plane comms (#221) 1 year ago
AlpinDale 15a0454172 feat: FP8 KV Cache (#185) 1 year ago
AlpinDale 7d91e9e0f2 feat: CUDA graphs (#172) 1 year ago
AlpinDale f5f9bc6a7c fix: memory profiling (#166) 1 year ago
AlpinDale 653da510d1 chore: rewrite InputMetadata (#143) 1 year ago
AlpinDale 1aab8a7d6f feat: speedup compilation times by 3x (#130) 1 year ago
AlpinDale 237d2ec28d fix: CPU OOM for large models (#128) 1 year ago
AlpinDale e7b6a2d5a0 chore: tensor parallel refactors part 2 (#116) 1 year ago
AlpinDale f384f3ae60 fix: force v2 for ctxlen larger than 8192 (#100) 1 year ago
AlpinDale ae7d8df224 fix lint issues (again) 1 year ago
50h100a fa0ae5a2c9 feat: new mirostatv2 implementation (#96) 1 year ago