| Name | Commit | Last commit message | Last updated |
| --- | --- | --- | --- |
| all_reduce | 29eaded422 | fix and re-enable custom all-reduce | 9 months ago |
| attention | 071269e406 | feat: FP8 E4M3 KV Cache (#405) | 9 months ago |
| backup | f8dfac6372 | chore: attention refactor and upstream sync apr01 (#365) | 9 months ago |
| cpu | fb23720c72 | fix CPU build | 9 months ago |
| hadamard | 5d288aa76c | feat: add fast hadamard transformation kernels (#232) | 11 months ago |
| moe | 071269e406 | feat: FP8 E4M3 KV Cache (#405) | 9 months ago |
| punica | c0aac15421 | feat: S-LoRA support (#222) | 11 months ago |
| quantization | 071269e406 | feat: FP8 E4M3 KV Cache (#405) | 9 months ago |
| activation_kernels.cu | 3d6695cfbb | feat: add approximate gelu activation kernels (#370) | 9 months ago |
| cache.h | 071269e406 | feat: FP8 E4M3 KV Cache (#405) | 9 months ago |
| cache_kernels.cu | 071269e406 | feat: FP8 E4M3 KV Cache (#405) | 9 months ago |
| cuda_compat.h | 071269e406 | feat: FP8 E4M3 KV Cache (#405) | 9 months ago |
| cuda_utils.h | 31c95011a6 | feat: FP8 E5M2 KV Cache (#226) | 11 months ago |
| cuda_utils_kernels.cu | 31c95011a6 | feat: FP8 E5M2 KV Cache (#226) | 11 months ago |
| dispatch_utils.h | f8dfac6372 | chore: attention refactor and upstream sync apr01 (#365) | 9 months ago |
| layernorm_kernels.cu | 40f63268ee | disable new layernorm kernels for CUDA < 12.0 | 8 months ago |
| ops.h | 071269e406 | feat: FP8 E4M3 KV Cache (#405) | 9 months ago |
| pos_encoding_kernels.cu | e702f587cf | feat: add batched RoPE kernels (#371) | 9 months ago |
| pybind.cpp | 071269e406 | feat: FP8 E4M3 KV Cache (#405) | 9 months ago |
| reduction.cuh | 071269e406 | feat: FP8 E4M3 KV Cache (#405) | 9 months ago |