| Name | Commit | Message | Last updated |
| --- | --- | --- | --- |
| all_reduce | 6305e6f3f2 | fix: no repeated IPC registration (#227) | 1 year ago |
| attention | f8dfac6372 | chore: attention refactor and upstream sync apr01 (#365) | 11 months ago |
| backup | f8dfac6372 | chore: attention refactor and upstream sync apr01 (#365) | 11 months ago |
| hadamard | 5d288aa76c | feat: add fast hadamard transformation kernels (#232) | 1 year ago |
| moe | 935027bdcc | feat: dynamic shared memory allocation for moe align block size (#372) | 11 months ago |
| punica | c0aac15421 | feat: S-LoRA support (#222) | 1 year ago |
| quantization | 89c32b40ec | chore: add new imatrix quants (#320) | 1 year ago |
| activation_kernels.cu | 3d6695cfbb | feat: add approximate gelu activation kernels (#370) | 11 months ago |
| cache.h | f8dfac6372 | chore: attention refactor and upstream sync apr01 (#365) | 11 months ago |
| cache_kernels.cu | f8dfac6372 | chore: attention refactor and upstream sync apr01 (#365) | 11 months ago |
| cuda_compat.h | 8fa608aeb7 | feat: replace Ray with NCCL for control plane comms (#221) | 1 year ago |
| cuda_utils.h | 31c95011a6 | feat: FP8 E5M2 KV Cache (#226) | 1 year ago |
| cuda_utils_kernels.cu | 31c95011a6 | feat: FP8 E5M2 KV Cache (#226) | 1 year ago |
| dispatch_utils.h | f8dfac6372 | chore: attention refactor and upstream sync apr01 (#365) | 11 months ago |
| layernorm_kernels.cu | 8fa608aeb7 | feat: replace Ray with NCCL for control plane comms (#221) | 1 year ago |
| ops.h | e702f587cf | feat: add batched RoPE kernels (#371) | 11 months ago |
| pos_encoding_kernels.cu | e702f587cf | feat: add batched RoPE kernels (#371) | 11 months ago |
| pybind.cpp | e702f587cf | feat: add batched RoPE kernels (#371) | 11 months ago |
| reduction.cuh | 2755a48d51 | merge dev branch into main (#153) | 1 year ago |