AlpinDale e32f506e17 chore: gpu arch guard for cutlass w8a8 kernels 7 months ago
all_reduce 9d81716bfd [v0.5.3] Release Candidate (#388) 10 months ago
attention ced1b36b8b feat: support head size of 192 7 months ago
backup f8dfac6372 chore: attention refactor and upstream sync apr01 (#365) 11 months ago
cpu ba02fb65c9 fix: pos encodings for CPU 7 months ago
hadamard 5d288aa76c feat: add fast hadamard transformation kernels (#232) 1 year ago
moe 00acf371f9 rocm: fused topk softmax 7 months ago
punica 3bdeb3e116 fix: clang formatting for all kernels (#558) 7 months ago
quantization e32f506e17 chore: gpu arch guard for cutlass w8a8 kernels 7 months ago
activation_kernels.cu 3d6695cfbb feat: add approximate gelu activation kernels (#370) 11 months ago
cache.h 3bdeb3e116 fix: clang formatting for all kernels (#558) 7 months ago
cache_kernels.cu 3bdeb3e116 fix: clang formatting for all kernels (#558) 7 months ago
cuda_compat.h 00acf371f9 rocm: fused topk softmax 7 months ago
cuda_utils.h 31c95011a6 feat: FP8 E5M2 KV Cache (#226) 1 year ago
cuda_utils_kernels.cu 31c95011a6 feat: FP8 E5M2 KV Cache (#226) 1 year ago
dispatch_utils.h f8dfac6372 chore: attention refactor and upstream sync apr01 (#365) 11 months ago
layernorm_kernels.cu 9d81716bfd [v0.5.3] Release Candidate (#388) 10 months ago
ops.h 696f2cd59c add phi3_small support with blocksparse attention 7 months ago
pos_encoding_kernels.cu e702f587cf feat: add batched RoPE kernels (#371) 11 months ago
pybind.cpp 3bdeb3e116 fix: clang formatting for all kernels (#558) 7 months ago
reduction.cuh 9d81716bfd [v0.5.3] Release Candidate (#388) 10 months ago