Commit History

Author SHA1 Message Date
  AlpinDale eb2c5c77df feat: enforce the max possible seqlen 6 months ago
  AlpinDale de62ceb18c refactor: eliminate parallel worker per-step task scheduling overhead 6 months ago
  AlpinDale 236be273e5 feat: tensor parallel speculative decoding (#554) 6 months ago
  AlpinDale 7bcff4ac03 implement sharded state dict 6 months ago
  AlpinDale b984fe4a91 refactor custom allreduce to support multiple tp groups 6 months ago
  AlpinDale be8154a8a0 feat: proper embeddings API with e5-mistral-7b support 6 months ago
  AlpinDale 8ae2cce237 refactor pynccl 6 months ago
  AlpinDale 0e062e66d3 set block size at init 6 months ago
  AlpinDale 8b56dc4347 dict -> torch.Tensor for blocks_to_swap 6 months ago
  AlpinDale 21ce19b3ea blocks_to_copy dict -> torch.Tensor 6 months ago
  AlpinDale ef733aee43 implement ExecuteModelData to reduce executor complexity 6 months ago
  AlpinDale 1879e32510 enable all-reduce for multiple tp groups 6 months ago
  AlpinDale 46159b107a formatting: pt1 7 months ago
  AlpinDale 4c746d8baa chore: init nccl using the gloo backend 7 months ago
  AlpinDale fca911ee0a vLLM Upstream Sync (#526) 7 months ago
  AlpinDale f894f7b176 Revert "reduce dedupe by wrapping in general worker class" 8 months ago
  AlpinDale 9fff6fb507 reduce dedupe by wrapping in general worker class 8 months ago
  AlpinDale 9d81716bfd [v0.5.3] Release Candidate (#388) 9 months ago
  AlpinDale e3252edd07 fix: remove event and stream, add typing (#382) 10 months ago
  AlpinDale f8dfac6372 chore: attention refactor and upstream sync apr01 (#365) 10 months ago
  AlpinDale e42a78381a feat: switch from pylint to ruff (#322) 11 months ago
  AlpinDale 9810daa699 feat: INT8 KV Cache (#298) 11 months ago
  AlpinDale 4b80b42362 fix: memory leaks due to nccl cuda graphs (#275) 11 months ago
  Thomas Xin 43cf0e98a0 fix: worker initialization on WSL (#260) 11 months ago
  AlpinDale ea0f57b233 feat: allow further support for non-cuda devices (#247) 1 year ago
  AlpinDale 31c95011a6 feat: FP8 E5M2 KV Cache (#226) 1 year ago
  AlpinDale 641bb0f6e9 feat: add custom allreduce kernels (#224) 1 year ago
  AlpinDale c0aac15421 feat: S-LoRA support (#222) 1 year ago
  AlpinDale 8fa608aeb7 feat: replace Ray with NCCL for control plane comms (#221) 1 year ago
  AlpinDale 15a0454172 feat: FP8 KV Cache (#185) 1 year ago