Commit history

Author SHA1 Message Date
  AlpinDale 89a2c6dee1 chore: refactor `MultiModalConfig` initialization and profiling (#745) 3 months ago
  AlpinDale 81c28d2a7f fix: use nvml to get consistent device names (#739) 3 months ago
  AlpinDale b03fa02397 refactor: base worker input refactor for multi-step (#683) 4 months ago
  AlpinDale bf88c8567e feat: mamba model support (#674) 4 months ago
  AlpinDale 62111fab17 feat: allow serving encoder-decoder models in the API server (#664) 4 months ago
  AlpinDale c147670c13 fix: clean up incorrect log in worker (#636) 4 months ago
  AlpinDale a0e446a17d feat: initial encoder-decoder support with BART model (#633) 4 months ago
  AlpinDale f1d0b77c92 [0.6.0] Release Candidate (#481) 4 months ago
  AlpinDale 9d81716bfd [v0.5.3] Release Candidate (#388) 8 months ago
  AlpinDale e3252edd07 fix: remove event and stream, add typing (#382) 9 months ago
  AlpinDale f8dfac6372 chore: attention refactor and upstream sync apr01 (#365) 9 months ago
  AlpinDale e42a78381a feat: switch from pylint to ruff (#322) 10 months ago
  AlpinDale 9810daa699 feat: INT8 KV Cache (#298) 10 months ago
  AlpinDale 4b80b42362 fix: memory leaks due to nccl cuda graphs (#275) 10 months ago
  Thomas Xin 43cf0e98a0 fix: worker initialization on WSL (#260) 10 months ago
  AlpinDale ea0f57b233 feat: allow further support for non-cuda devices (#247) 11 months ago
  AlpinDale 31c95011a6 feat: FP8 E5M2 KV Cache (#226) 11 months ago
  AlpinDale 641bb0f6e9 feat: add custom allreduce kernels (#224) 11 months ago
  AlpinDale c0aac15421 feat: S-LoRA support (#222) 11 months ago
  AlpinDale 8fa608aeb7 feat: replace Ray with NCCL for control plane comms (#221) 11 months ago
  AlpinDale 15a0454172 feat: FP8 KV Cache (#185) 1 year ago
  AlpinDale 7d91e9e0f2 feat: CUDA graphs (#172) 1 year ago
  AlpinDale f5f9bc6a7c fix: memory profiling (#166) 1 year ago
  AlpinDale 653da510d1 chore: rewrite InputMetadata (#143) 1 year ago
  AlpinDale 1aab8a7d6f feat: speedup compilation times by 3x (#130) 1 year ago
  AlpinDale 237d2ec28d fix: CPU OOM for large models (#128) 1 year ago
  AlpinDale e7b6a2d5a0 chore: tensor parallel refactors part 2 (#116) 1 year ago
  AlpinDale f384f3ae60 fix: force v2 for ctxlen larger than 8192 (#100) 1 year ago
  AlpinDale ae7d8df224 fix lint issues (again) 1 year ago
  50h100a fa0ae5a2c9 feat: new mirostatv2 implementation (#96) 1 year ago