Commit History

Author      SHA1        Message                                                                      Date
  AlpinDale  4d4e767838  ci: take one of fixing lint issues                                           5 months ago
  AlpinDale  cd31f8efbb  chore: optimize PP comm by replacing send with partial send + allgather      5 months ago
  AlpinDale  705e50f4bd  fix: broadcasting logic for multi_modal_kwargs                               5 months ago
  AlpinDale  d907f20908  feat: support collective comms in XLA devices, e.g. TPUs                     5 months ago
  AlpinDale  42c66d5b00  feat: tensor parallelism for CPU backend                                     5 months ago
  AlpinDale  8ade64c0cc  fix: prevent possible data race by adding sync                               5 months ago
  AlpinDale  f91991f584  fix: f-string fixes                                                          5 months ago
  AlpinDale  5289c14b24  feat: Asymmetric Tensor Parallel (#594)                                      5 months ago
  AlpinDale  dba22e4f83  fix: add zeromq fallback for broadcasting large objects (e.g. vlm images)    5 months ago
  AlpinDale  bdf1cc1aec  fix: allow using custom all reduce when pp_size > 1                          5 months ago
  AlpinDale  ae04f57ec1  feat: Pipeline Parallel support (#581)                                       5 months ago
  AlpinDale  4cdc810b1c  fix: minor TP issues with vision models                                      6 months ago
  AlpinDale  9868bb2290  chore: make it clear that '%' should NOT be in tensor dict keys              6 months ago
  AlpinDale  bb4da84623  fix: make sure multi modal kwargs can broadcast properly with ring buffer    6 months ago
  AlpinDale  bc5ac9584a  fix: make tensor_dict flattening/unflattening more generic                   6 months ago
  AlpinDale  abbb730607  feat: support draft model on different tensor parallel size                  6 months ago
  AlpinDale  e238abf0cc  chore: send and recv helper functions                                        6 months ago
  AlpinDale  1b340083b1  feat: add shm broadcast                                                      6 months ago
  AlpinDale  6a57861fca  feat: initial XPU support via intel_extension_for_pytorch (#571)             6 months ago
  AlpinDale  cc3486477e  fix: benign multiprocessing error                                            6 months ago
  AlpinDale  1d00b61622  feat: w4a16 support for compressed-tensors                                   6 months ago
  AlpinDale  34b41e0a87  chore: add coordinator to reduce code duplication in tp and pp               6 months ago
  AlpinDale  270bd333af  chore: check if process is on the same node                                  6 months ago
  AlpinDale  5b0c11d190  support pipeline parallel pynccl groups                                      6 months ago
  AlpinDale  b984fe4a91  refactor custom allreduce to support multiple tp groups                      6 months ago
  AlpinDale  8ae2cce237  refactor pynccl                                                              6 months ago
  AlpinDale  1879e32510  enable all-reduce for multiple tp groups                                     6 months ago
  AlpinDale  4c746d8baa  chore: init nccl using the gloo backend                                      7 months ago
  AlpinDale  9d81716bfd  [v0.5.3] Release Candidate (#388)                                            9 months ago