Commit history

Author SHA1 Message Date
  AlpinDale 4d4e767838 ci: take one of fixing lint issues 5 months ago
  AlpinDale cd31f8efbb chore: optimize PP comm by replacing send with partial send + allgather 5 months ago
  AlpinDale 705e50f4bd fix: broadcasting logic for multi_modal_kwargs 5 months ago
  AlpinDale d907f20908 feat: support collective comms in XLA devices, e.g. TPUs 5 months ago
  AlpinDale 42c66d5b00 feat: tensor parallelism for CPU backend 5 months ago
  AlpinDale 8ade64c0cc fix: prevent possible data race by adding sync 5 months ago
  AlpinDale f91991f584 fix: f-string fixes 5 months ago
  AlpinDale 5289c14b24 feat: Asymmetric Tensor Parallel (#594) 5 months ago
  AlpinDale dba22e4f83 fix: add zeromq fallback for broadcasting large objects (e.g. vlm images) 5 months ago
  AlpinDale bdf1cc1aec fix: allow using custom all reduce when pp_size > 1 5 months ago
  AlpinDale ae04f57ec1 feat: Pipeline Parallel support (#581) 6 months ago
  AlpinDale 4cdc810b1c fix: minor TP issues with vision models 6 months ago
  AlpinDale 9868bb2290 chore: make it clear that '%' should NOT be in tensor dict keys 6 months ago
  AlpinDale bb4da84623 fix: make sure multi modal kwargs can broadcast properly with ring buffer 6 months ago
  AlpinDale bc5ac9584a fix: make tensor_dict flattening/unflattening more generic 6 months ago
  AlpinDale abbb730607 feat: support draft model on different tensor parallel size 6 months ago
  AlpinDale e238abf0cc chore: send and recv helper functions 6 months ago
  AlpinDale 1b340083b1 feat: add shm broadcast 6 months ago
  AlpinDale 6a57861fca feat: initial XPU support via intel_extension_for_pytorch (#571) 6 months ago
  AlpinDale cc3486477e fix: benign multiprocessing error 6 months ago
  AlpinDale 1d00b61622 feat: w4a16 support for compressed-tensors 6 months ago
  AlpinDale 34b41e0a87 chore: add coordinator to reduce code duplication in tp and pp 6 months ago
  AlpinDale 270bd333af chore: check if process is on the same node 6 months ago
  AlpinDale 5b0c11d190 support pipeline parallel pynccl groups 6 months ago
  AlpinDale b984fe4a91 refactor custom allreduce to support multiple tp groups 6 months ago
  AlpinDale 8ae2cce237 refactor pynccl 6 months ago
  AlpinDale 1879e32510 enable all-reduce for multiple tp groups 6 months ago
  AlpinDale 4c746d8baa chore: init nccl using the gloo backend 7 months ago
  AlpinDale 9d81716bfd [v0.5.3] Release Candidate (#388) 9 months ago