AlpinDale ad24e74a99 feat: FP8 weight-only quantization support for Ampere GPUs 6 months ago
all_reduce 156f577f79 feat: switch from `PYBIND11_MODULE` to `TORCH_LIBRARY` (#569) 7 months ago
attention 156f577f79 feat: switch from `PYBIND11_MODULE` to `TORCH_LIBRARY` (#569) 7 months ago
backup f8dfac6372 chore: attention refactor and upstream sync apr01 (#365) 11 months ago
cpu 271a680026 feat: inference support for PowerPC ISA 7 months ago
hadamard 5d288aa76c feat: add fast hadamard transformation kernels (#232) 1 year ago
mamba 5be90c3859 Mamba infrastructure support (#586) 6 months ago
moe 156f577f79 feat: switch from `PYBIND11_MODULE` to `TORCH_LIBRARY` (#569) 7 months ago
punica 4f42985b5c feat: qwen2 lora shapes 7 months ago
quantization ad24e74a99 feat: FP8 weight-only quantization support for Ampere GPUs 6 months ago
activation_kernels.cu c0c336aaa3 refactor: registry for processing model inputs; quick_gelu; clip model support 7 months ago
cache.h 156f577f79 feat: switch from `PYBIND11_MODULE` to `TORCH_LIBRARY` (#569) 7 months ago
cache_kernels.cu 156f577f79 feat: switch from `PYBIND11_MODULE` to `TORCH_LIBRARY` (#569) 7 months ago
cuda_compat.h 00acf371f9 rocm: fused topk softmax 7 months ago
cuda_utils.h 156f577f79 feat: switch from `PYBIND11_MODULE` to `TORCH_LIBRARY` (#569) 7 months ago
cuda_utils_kernels.cu 156f577f79 feat: switch from `PYBIND11_MODULE` to `TORCH_LIBRARY` (#569) 7 months ago
dispatch_utils.h 156f577f79 feat: switch from `PYBIND11_MODULE` to `TORCH_LIBRARY` (#569) 7 months ago
layernorm_kernels.cu 156f577f79 feat: switch from `PYBIND11_MODULE` to `TORCH_LIBRARY` (#569) 7 months ago
ops.h 5be90c3859 Mamba infrastructure support (#586) 6 months ago
pos_encoding_kernels.cu 156f577f79 feat: switch from `PYBIND11_MODULE` to `TORCH_LIBRARY` (#569) 7 months ago
reduction.cuh aba03b4756 feat: dynamic per-token activation quantization 7 months ago
registration.h 156f577f79 feat: switch from `PYBIND11_MODULE` to `TORCH_LIBRARY` (#569) 7 months ago
torch_bindings.cpp ad24e74a99 feat: FP8 weight-only quantization support for Ampere GPUs 6 months ago
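
Most entries above record the migration from `PYBIND11_MODULE` to `TORCH_LIBRARY` (#569), the op-registration path that `registration.h` and `torch_bindings.cpp` now use. A minimal sketch of what that pattern looks like in general follows; the library name `aphrodite_sketch`, the op `add_one`, and its toy implementation are illustrative, not taken from this repo's bindings.

    // Registering a custom op via TORCH_LIBRARY instead of PYBIND11_MODULE.
    #include <torch/library.h>
    #include <ATen/ATen.h>

    // Toy implementation; real kernels would launch CUDA code instead.
    at::Tensor add_one(const at::Tensor& x) {
      return x + 1;
    }

    // Schema registration: declares the op signature to the dispatcher.
    TORCH_LIBRARY(aphrodite_sketch, m) {
      m.def("add_one(Tensor x) -> Tensor");
    }

    // Kernel registration: binds an implementation for a dispatch key.
    // CompositeExplicitAutograd lets this one impl serve CPU and CUDA.
    TORCH_LIBRARY_IMPL(aphrodite_sketch, CompositeExplicitAutograd, m) {
      m.impl("add_one", &add_one);
    }

Unlike a pybind11 module, the op is exposed through the dispatcher, so after the extension is loaded it is callable from Python as `torch.ops.aphrodite_sketch.add_one(t)` and is visible to `torch.compile` and TorchScript.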