Commit history

Author      SHA1        Message                                                                 Date
AlpinDale   eb2c5c77df  feat: enforce the max possible seqlen                                   6 months ago
AlpinDale   de62ceb18c  refactor: eliminate parallel worker per-step task scheduling overhead   6 months ago
AlpinDale   236be273e5  feat: tensor parallel speculative decoding (#554)                       6 months ago
AlpinDale   7bcff4ac03  implement sharded state dict                                            6 months ago
AlpinDale   b984fe4a91  refactor custom allreduce to support multiple tp groups                 6 months ago
AlpinDale   be8154a8a0  feat: proper embeddings API with e5-mistral-7b support                  6 months ago
AlpinDale   8ae2cce237  refactor pynccl                                                         6 months ago
AlpinDale   0e062e66d3  set block size at init                                                  6 months ago
AlpinDale   8b56dc4347  dict -> torch.Tensor for blocks_to_swap                                 6 months ago
AlpinDale   21ce19b3ea  blocks_to_copy dict -> torch.Tensor                                     6 months ago
AlpinDale   ef733aee43  implement ExecuteModelData to reduce executor complexity                6 months ago
AlpinDale   1879e32510  enable all-reduce for multiple tp groups                                6 months ago
AlpinDale   46159b107a  formatting: pt1                                                         7 months ago
AlpinDale   4c746d8baa  chore: init nccl using the gloo backend                                 7 months ago
AlpinDale   fca911ee0a  vLLM Upstream Sync (#526)                                               7 months ago
AlpinDale   f894f7b176  Revert "reduce dedupe by wrapping in general worker class"              8 months ago
AlpinDale   9fff6fb507  reduce dedupe by wrapping in general worker class                       8 months ago
AlpinDale   9d81716bfd  [v0.5.3] Release Candidate (#388)                                       9 months ago
AlpinDale   e3252edd07  fix: remove event and stream, add typing (#382)                         10 months ago
AlpinDale   f8dfac6372  chore: attention refactor and upstream sync apr01 (#365)                10 months ago
AlpinDale   e42a78381a  feat: switch from pylint to ruff (#322)                                 11 months ago
AlpinDale   9810daa699  feat: INT8 KV Cache (#298)                                              11 months ago
AlpinDale   4b80b42362  fix: memory leaks due to nccl cuda graphs (#275)                        11 months ago
Thomas Xin  43cf0e98a0  fix: worker initialization on WSL (#260)                                11 months ago
AlpinDale   ea0f57b233  feat: allow further support for non-cuda devices (#247)                 1 year ago
AlpinDale   31c95011a6  feat: FP8 E5M2 KV Cache (#226)                                          1 year ago
AlpinDale   641bb0f6e9  feat: add custom allreduce kernels (#224)                               1 year ago
AlpinDale   c0aac15421  feat: S-LoRA support (#222)                                             1 year ago
AlpinDale   8fa608aeb7  feat: replace Ray with NCCL for control plane comms (#221)              1 year ago
AlpinDale   15a0454172  feat: FP8 KV Cache (#185)                                               1 year ago