Commit History

Author SHA1 Message Date
  AlpinDale 61aed092a5 rocm: add support for FP8 KV cache in the custom paged attention kernels (#1066) 6 days ago
  AlpinDale 9bdf8d5bfa mamba: enable continuous batching for mamba kernels (#1055) 1 week ago
  AlpinDale 239a8cae25 torch.compile: register all-reduce operations as custom ops (#1050) 1 week ago
  AlpinDale 8976805f90 kernel: asymmetric AQ AZP quantization kernels (#1048) 1 week ago
  AlpinDale 4a7cb8f232 rocm: add custom paged attention kernels for ROCm (#1043) 1 week ago
  AlpinDale 1390915778 multi-step: add support for flashinfer attention backend (#1033) 1 week ago
  AlpinDale a113309876 kernel: add meta functions for ops to prevent graph breaks (#1019) 1 week ago
  AlpinDale fcfcfc65e1 quants: add triton kernels for AWQ (#946) 2 weeks ago
  AlpinDale 9f3e7c86e2 feat: add fused Marlin MoE kernel (#934) 2 weeks ago
  AlpinDale 93bc863591 feat: Machete Kernels for Hopper GPUs (#842) 1 month ago
  AlpinDale bfc8988116 feat: add cuda sampling kernels for top_k and top_p (#828) 1 month ago
  AlpinDale f98e7b2f8c feat: add HQQ quantization support (#795) 2 months ago
  AlpinDale 73177656ed feat: quant_llm support (#755) 3 months ago
  AlpinDale ccbda97416 fix: types in AQLM and GGUF for dynamo support (#736) 3 months ago
  AlpinDale b0f262eec1 feat: FP8 quantization support for AMD ROCm (#729) 3 months ago
  AlpinDale 5d37ec1016 suppress tpu import warning (#696) 3 months ago
  AlpinDale a401f8e05d feat: per-tensor token epilogue kernels (#630) 4 months ago
  AlpinDale f1d0b77c92 [0.6.0] Release Candidate (#481) 4 months ago