AlpinDale 40f63268ee disable new layernorm kernels for CUDA < 12.0 9 months ago
all_reduce 29eaded422 fix and re-enable custom all-reduce 9 months ago
attention 071269e406 feat: FP8 E4M3 KV Cache (#405) 9 months ago
backup f8dfac6372 chore: attention refactor and upstream sync apr01 (#365) 9 months ago
cpu fb23720c72 fix CPU build 9 months ago
hadamard 5d288aa76c feat: add fast hadamard transformation kernels (#232) 11 months ago
moe 071269e406 feat: FP8 E4M3 KV Cache (#405) 9 months ago
punica c0aac15421 feat: S-LoRA support (#222) 1 year ago
quantization 071269e406 feat: FP8 E4M3 KV Cache (#405) 9 months ago
activation_kernels.cu 3d6695cfbb feat: add approximate gelu activation kernels (#370) 9 months ago
cache.h 071269e406 feat: FP8 E4M3 KV Cache (#405) 9 months ago
cache_kernels.cu 071269e406 feat: FP8 E4M3 KV Cache (#405) 9 months ago
cuda_compat.h 071269e406 feat: FP8 E4M3 KV Cache (#405) 9 months ago
cuda_utils.h 31c95011a6 feat: FP8 E5M2 KV Cache (#226) 1 year ago
cuda_utils_kernels.cu 31c95011a6 feat: FP8 E5M2 KV Cache (#226) 1 year ago
dispatch_utils.h f8dfac6372 chore: attention refactor and upstream sync apr01 (#365) 9 months ago
layernorm_kernels.cu 40f63268ee disable new layernorm kernels for CUDA < 12.0 9 months ago
ops.h 071269e406 feat: FP8 E4M3 KV Cache (#405) 9 months ago
pos_encoding_kernels.cu e702f587cf feat: add batched RoPE kernels (#371) 9 months ago
pybind.cpp 071269e406 feat: FP8 E4M3 KV Cache (#405) 9 months ago
reduction.cuh 071269e406 feat: FP8 E4M3 KV Cache (#405) 9 months ago