Commit history

Author     SHA1        Message                                                          Date
AlpinDale  5289c14b24  feat: Asymmetric Tensor Parallel (#594)                          4 months ago
AlpinDale  5be90c3859  Mamba infrastructure support (#586)                              4 months ago
AlpinDale  ae04f57ec1  feat: Pipeline Parallel support (#581)                           4 months ago
AlpinDale  6a57861fca  feat: initial XPU support via intel_extension_for_pytorch (#571) 5 months ago
AlpinDale  fe21123a1c  feat: TPU support (#570)                                         5 months ago
AlpinDale  f40b809d3b  allow using v2 block manager with sliding window                 5 months ago
AlpinDale  50b7c13db0  refactor: attention selector (#552)                              5 months ago
AlpinDale  8b56dc4347  dict -> torch.Tensor for blocks_to_swap                          5 months ago
AlpinDale  21ce19b3ea  blocks_to_copy dict -> torch.Tensor                              5 months ago
AlpinDale  9d81716bfd  [v0.5.3] Release Candidate (#388)                                8 months ago
AlpinDale  e3252edd07  fix: remove event and stream, add typing (#382)                  9 months ago
AlpinDale  f8dfac6372  chore: attention refactor and upstream sync apr01 (#365)         9 months ago
AlpinDale  c2d77b1822  chore: logging refactor (#302)                                   10 months ago
AlpinDale  ea0f57b233  feat: allow further support for non-cuda devices (#247)          11 months ago
AlpinDale  31c95011a6  feat: FP8 E5M2 KV Cache (#226)                                   11 months ago
AlpinDale  8fa608aeb7  feat: replace Ray with NCCL for control plane comms (#221)       11 months ago
AlpinDale  15a0454172  feat: FP8 KV Cache (#185)                                        1 year ago
AlpinDale  1aab8a7d6f  feat: speedup compilation times by 3x (#130)                     1 year ago
AlpinDale  74604eb252  fix: pylint complaints (#91)                                     1 year ago
AlpinDale  75c27d3e65  massive overhaul                                                 1 year ago
AlpinDale  525edab7cc  fix: logger in cache engine                                      1 year ago
AlpinDale  b8f4337c5b  chore: various fixes                                             1 year ago
AlpinDale  a409431c40  feat: draft for cuda kernels                                     1 year ago