AlpinDale a788ca33bf hack in custom bias for attention kernels 9 months ago
all_reduce 6305e6f3f2 fix: no repeated IPC registration (#227) 11 months ago
attention a788ca33bf hack in custom bias for attention kernels 9 months ago
hadamard 5d288aa76c feat: add fast hadamard transformation kernels (#232) 11 months ago
moe 7d6ba53602 feat: fused top-k kernels for MoE (#273) 10 months ago
punica c0aac15421 feat: S-LoRA support (#222) 11 months ago
quantization 89c32b40ec chore: add new imatrix quants (#320) 10 months ago
activation_kernels.cu e31c6f0b45 feat: refactor modeling logic and support more models (#274) 10 months ago
cache.h 9810daa699 feat: INT8 KV Cache (#298) 10 months ago
cache_kernels.cu 9810daa699 feat: INT8 KV Cache (#298) 10 months ago
cuda_compat.h 8fa608aeb7 feat: replace Ray with NCCL for control plane comms (#221) 11 months ago
cuda_utils.h 31c95011a6 feat: FP8 E5M2 KV Cache (#226) 11 months ago
cuda_utils_kernels.cu 31c95011a6 feat: FP8 E5M2 KV Cache (#226) 11 months ago
dispatch_utils.h 9810daa699 feat: INT8 KV Cache (#298) 10 months ago
layernorm_kernels.cu 8fa608aeb7 feat: replace Ray with NCCL for control plane comms (#221) 11 months ago
ops.h a788ca33bf hack in custom bias for attention kernels 9 months ago
pos_encoding_kernels.cu 8fa608aeb7 feat: replace Ray with NCCL for control plane comms (#221) 11 months ago
pybind.cpp c41462cfcd feat: exllamav2 quantization (#305) 10 months ago
reduction.cuh 2755a48d51 merge dev branch into main (#153) 1 year ago