| Name | Commit | Commit message | Last updated |
|---|---|---|---|
| all_reduce | 9d81716bfd | [v0.5.3] Release Candidate (#388) | 8 months ago |
| attention | 696f2cd59c | add phi3_small support with blocksparse attention | 5 months ago |
| backup | f8dfac6372 | chore: attention refactor and upstream sync apr01 (#365) | 9 months ago |
| cpu | 696f2cd59c | add phi3_small support with blocksparse attention | 5 months ago |
| hadamard | 5d288aa76c | feat: add fast hadamard transformation kernels (#232) | 11 months ago |
| moe | 9d81716bfd | [v0.5.3] Release Candidate (#388) | 8 months ago |
| punica | 3bdeb3e116 | fix: clang formatting for all kernels (#558) | 5 months ago |
| quantization | f4ea11b982 | feat: initial support for activation quantization | 5 months ago |
| activation_kernels.cu | 3d6695cfbb | feat: add approximate gelu activation kernels (#370) | 9 months ago |
| cache.h | 3bdeb3e116 | fix: clang formatting for all kernels (#558) | 5 months ago |
| cache_kernels.cu | 3bdeb3e116 | fix: clang formatting for all kernels (#558) | 5 months ago |
| cuda_compat.h | 3bdeb3e116 | fix: clang formatting for all kernels (#558) | 5 months ago |
| cuda_utils.h | 31c95011a6 | feat: FP8 E5M2 KV Cache (#226) | 11 months ago |
| cuda_utils_kernels.cu | 31c95011a6 | feat: FP8 E5M2 KV Cache (#226) | 11 months ago |
| dispatch_utils.h | f8dfac6372 | chore: attention refactor and upstream sync apr01 (#365) | 9 months ago |
| layernorm_kernels.cu | 9d81716bfd | [v0.5.3] Release Candidate (#388) | 8 months ago |
| ops.h | 696f2cd59c | add phi3_small support with blocksparse attention | 5 months ago |
| pos_encoding_kernels.cu | e702f587cf | feat: add batched RoPE kernels (#371) | 9 months ago |
| pybind.cpp | 3bdeb3e116 | fix: clang formatting for all kernels (#558) | 5 months ago |
| reduction.cuh | 9d81716bfd | [v0.5.3] Release Candidate (#388) | 8 months ago |