AlpinDale a788ca33bf hack in custom bias for attention kernels 9 months ago
all_reduce 6305e6f3f2 fix: no repeated IPC registration (#227) 11 months ago
attention a788ca33bf hack in custom bias for attention kernels 9 months ago
hadamard 5d288aa76c feat: add fast hadamard transformation kernels (#232) 11 months ago
moe 7d6ba53602 feat: fused top-k kernels for MoE (#273) 10 months ago
punica c0aac15421 feat: S-LoRA support (#222) 11 months ago
quantization 89c32b40ec chore: add new imatrix quants (#320) 10 months ago
activation_kernels.cu e31c6f0b45 feat: refactor modeling logic and support more models (#274) 10 months ago
cache.h 9810daa699 feat: INT8 KV Cache (#298) 10 months ago
cache_kernels.cu 9810daa699 feat: INT8 KV Cache (#298) 10 months ago
cuda_compat.h 8fa608aeb7 feat: replace Ray with NCCL for control plane comms (#221) 11 months ago
cuda_utils.h 31c95011a6 feat: FP8 E5M2 KV Cache (#226) 11 months ago
cuda_utils_kernels.cu 31c95011a6 feat: FP8 E5M2 KV Cache (#226) 11 months ago
dispatch_utils.h 9810daa699 feat: INT8 KV Cache (#298) 10 months ago
layernorm_kernels.cu 8fa608aeb7 feat: replace Ray with NCCL for control plane comms (#221) 11 months ago
pos_encoding_kernels.cu 8fa608aeb7 feat: replace Ray with NCCL for control plane comms (#221) 11 months ago
pybind.cpp c41462cfcd feat: exllamav2 quantization (#305) 10 months ago
reduction.cuh 2755a48d51 merge dev branch into main (#153) 1 year ago