Commit History

| Author | SHA1 | Message | Date |
|--------|------|---------|------|
| AlpinDale | 6a57861fca | feat: initial XPU support via intel_extension_for_pytorch (#571) | 7 months ago |
| AlpinDale | d0cca80b8b | feat: support sharded tensorizer models | 7 months ago |
| AlpinDale | eb2c5c77df | feat: enforce the max possible seqlen | 8 months ago |
| AlpinDale | de62ceb18c | refactor: eliminate parallel worker per-step task scheduling overhead | 8 months ago |
| AlpinDale | 236be273e5 | feat: tensor parallel speculative decoding (#554) | 8 months ago |
| AlpinDale | 7bcff4ac03 | implement sharded state dict | 8 months ago |
| AlpinDale | b984fe4a91 | refactor custom allreduce to support multiple tp groups | 8 months ago |
| AlpinDale | be8154a8a0 | feat: proper embeddings API with e5-mistral-7b support | 8 months ago |
| AlpinDale | 8ae2cce237 | refactor pynccl | 8 months ago |
| AlpinDale | 0e062e66d3 | set block size at init | 8 months ago |
| AlpinDale | 8b56dc4347 | dict -> torch.Tensor for blocks_to_swap | 8 months ago |
| AlpinDale | 21ce19b3ea | blocks_to_copy dict -> torch.Tensor | 8 months ago |
| AlpinDale | ef733aee43 | implement ExecuteModelData to reduce executor complexity | 8 months ago |
| AlpinDale | 1879e32510 | enable all-reduce for multiple tp groups | 8 months ago |
| AlpinDale | 46159b107a | formatting: pt1 | 8 months ago |
| AlpinDale | 4c746d8baa | chore: init nccl using the gloo backend | 8 months ago |
| AlpinDale | fca911ee0a | vLLM Upstream Sync (#526) | 8 months ago |
| AlpinDale | f894f7b176 | Revert "reduce dedupe by wrapping in general worker class" | 10 months ago |
| AlpinDale | 9fff6fb507 | reduce dedupe by wrapping in general worker class | 10 months ago |
| AlpinDale | 9d81716bfd | [v0.5.3] Release Candidate (#388) | 10 months ago |
| AlpinDale | e3252edd07 | fix: remove event and stream, add typing (#382) | 11 months ago |
| AlpinDale | f8dfac6372 | chore: attention refactor and upstream sync apr01 (#365) | 11 months ago |
| AlpinDale | e42a78381a | feat: switch from pylint to ruff (#322) | 1 year ago |
| AlpinDale | 9810daa699 | feat: INT8 KV Cache (#298) | 1 year ago |
| AlpinDale | 4b80b42362 | fix: memory leaks due to nccl cuda graphs (#275) | 1 year ago |
| Thomas Xin | 43cf0e98a0 | fix: worker initialization on WSL (#260) | 1 year ago |
| AlpinDale | ea0f57b233 | feat: allow further support for non-cuda devices (#247) | 1 year ago |
| AlpinDale | 31c95011a6 | feat: FP8 E5M2 KV Cache (#226) | 1 year ago |
| AlpinDale | 641bb0f6e9 | feat: add custom allreduce kernels (#224) | 1 year ago |
| AlpinDale | c0aac15421 | feat: S-LoRA support (#222) | 1 year ago |