Commit history

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| AlpinDale | f91991f584 | fix: f-string fixes | 5 months ago |
| AlpinDale | 5289c14b24 | feat: Asymmetric Tensor Parallel (#594) | 5 months ago |
| AlpinDale | dba22e4f83 | fix: add zeromq fallback for broadcasting large objects (e.g. vlm images) | 5 months ago |
| AlpinDale | bdf1cc1aec | fix: allow using custom all reduce when pp_size > 1 | 5 months ago |
| AlpinDale | ae04f57ec1 | feat: Pipeline Parallel support (#581) | 5 months ago |
| AlpinDale | 4cdc810b1c | fix: minor TP issues with vision models | 5 months ago |
| AlpinDale | 9868bb2290 | chore: make it clear that '%' should NOT be in tensor dict keys | 5 months ago |
| AlpinDale | bb4da84623 | fix: make sure multi modal kwargs can broadcast properly with ring buffer | 5 months ago |
| AlpinDale | bc5ac9584a | fix: make tensor_dict flattening/unflattening more generic | 5 months ago |
| AlpinDale | abbb730607 | feat: support draft model on different tensor parallel size | 5 months ago |
| AlpinDale | e238abf0cc | chore: send and recv helper functions | 5 months ago |
| AlpinDale | 1b340083b1 | feat: add shm broadcast | 5 months ago |
| AlpinDale | 6a57861fca | feat: initial XPU support via intel_extension_for_pytorch (#571) | 5 months ago |
| AlpinDale | cc3486477e | fix: benign multiprocessing error | 5 months ago |
| AlpinDale | 1d00b61622 | feat: w4a16 support for compressed-tensors | 5 months ago |
| AlpinDale | 34b41e0a87 | chore: add coordinator to reduce code duplication in tp and pp | 5 months ago |
| AlpinDale | 270bd333af | chore: check if process is on the same node | 5 months ago |
| AlpinDale | 5b0c11d190 | support pipeline parallel pynccl groups | 6 months ago |
| AlpinDale | b984fe4a91 | refactor custom allreduce to support multiple tp groups | 6 months ago |
| AlpinDale | 8ae2cce237 | refactor pynccl | 6 months ago |
| AlpinDale | 1879e32510 | enable all-reduce for multiple tp groups | 6 months ago |
| AlpinDale | 4c746d8baa | chore: init nccl using the gloo backend | 7 months ago |
| AlpinDale | 9d81716bfd | [v0.5.3] Release Candidate (#388) | 8 months ago |