AlpinDale ad24e74a99 feat: FP8 weight-only quantization support for Ampere GPUs 6 months ago
all_reduce 156f577f79 feat: switch from `PYBIND11_MODULE` to `TORCH_LIBRARY` (#569) 7 months ago
attention 156f577f79 feat: switch from `PYBIND11_MODULE` to `TORCH_LIBRARY` (#569) 7 months ago
backup f8dfac6372 chore: attention refactor and upstream sync apr01 (#365) 11 months ago
cpu 271a680026 feat: inference support for PowerPC ISA 7 months ago
hadamard 5d288aa76c feat: add fast hadamard transformation kernels (#232) 1 year ago
mamba 5be90c3859 Mamba infrastructure support (#586) 6 months ago
moe 156f577f79 feat: switch from `PYBIND11_MODULE` to `TORCH_LIBRARY` (#569) 7 months ago
punica 4f42985b5c feat: qwen2 lora shapes 7 months ago
quantization ad24e74a99 feat: FP8 weight-only quantization support for Ampere GPUs 6 months ago
activation_kernels.cu c0c336aaa3 refactor: registry for processing model inputs; quick_gelu; clip model support 7 months ago
cache.h 156f577f79 feat: switch from `PYBIND11_MODULE` to `TORCH_LIBRARY` (#569) 7 months ago
cache_kernels.cu 156f577f79 feat: switch from `PYBIND11_MODULE` to `TORCH_LIBRARY` (#569) 7 months ago
cuda_compat.h 00acf371f9 rocm: fused topk softmax 7 months ago
cuda_utils.h 156f577f79 feat: switch from `PYBIND11_MODULE` to `TORCH_LIBRARY` (#569) 7 months ago
cuda_utils_kernels.cu 156f577f79 feat: switch from `PYBIND11_MODULE` to `TORCH_LIBRARY` (#569) 7 months ago
dispatch_utils.h 156f577f79 feat: switch from `PYBIND11_MODULE` to `TORCH_LIBRARY` (#569) 7 months ago
layernorm_kernels.cu 156f577f79 feat: switch from `PYBIND11_MODULE` to `TORCH_LIBRARY` (#569) 7 months ago
ops.h 5be90c3859 Mamba infrastructure support (#586) 6 months ago
pos_encoding_kernels.cu 156f577f79 feat: switch from `PYBIND11_MODULE` to `TORCH_LIBRARY` (#569) 7 months ago
reduction.cuh aba03b4756 feat: dynamic per-token activation quantization 7 months ago
registration.h 156f577f79 feat: switch from `PYBIND11_MODULE` to `TORCH_LIBRARY` (#569) 7 months ago
torch_bindings.cpp ad24e74a99 feat: FP8 weight-only quantization support for Ampere GPUs 6 months ago
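Many of the entries above point at commit 156f577f79, which moved the kernel bindings from `PYBIND11_MODULE` to `TORCH_LIBRARY`. The sketch below shows what a minimal `TORCH_LIBRARY` registration typically looks like; the `demo_ops` namespace, the op name, and the toy implementation are illustrative assumptions, not the repository's actual bindings in torch_bindings.cpp.

```cpp
// Minimal sketch (assumed example, not the repo's code): exposing a custom op
// through TORCH_LIBRARY rather than PYBIND11_MODULE, so it is visible to
// torch.ops and TorchScript instead of only to the Python binding layer.
#include <ATen/ATen.h>
#include <torch/library.h>

// Toy op: SiLU on the first half of the last dim times the second half,
// a common gated-activation pattern in LLM kernels.
static at::Tensor silu_and_mul(const at::Tensor& input) {
  auto halves = input.chunk(2, /*dim=*/-1);
  return at::silu(halves[0]) * halves[1];
}

// Declare the op schema under a namespace of our own choosing.
TORCH_LIBRARY(demo_ops, m) {
  m.def("silu_and_mul(Tensor input) -> Tensor");
}

// Register an implementation; a CUDA extension would instead register its
// kernel with TORCH_LIBRARY_IMPL(demo_ops, CUDA, m).
TORCH_LIBRARY_IMPL(demo_ops, CompositeExplicitAutograd, m) {
  m.impl("silu_and_mul", &silu_and_mul);
}
```

Once the extension library is loaded, the op is reachable from Python as `torch.ops.demo_ops.silu_and_mul(x)`, which is the main practical difference from a plain pybind11-exported function.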