| Name | Latest commit | Commit message | Age |
| --- | --- | --- | --- |
| all_reduce | f1d0b77c92 | [0.6.0] Release Candidate (#481) | 4 months ago |
| attention | 3bb0f07461 | chore: rename `task_handler` to `worker` (#985) | 2 weeks ago |
| backup | 3bb0f07461 | chore: rename `task_handler` to `worker` (#985) | 2 weeks ago |
| core | f1ea7711bd | core: do not compile ScalarType for torch < 2.4.0 (#938) | 2 weeks ago |
| cpu | a113309876 | kernel: add meta functions for ops to prevent graph breaks (#1019) | 1 week ago |
| cutlass_extensions | 93bc863591 | feat: Machete Kernels for Hopper GPUs (#842) | 1 month ago |
| hadamard | 5d288aa76c | feat: add fast hadamard transformation kernels (#232) | 11 months ago |
| mamba | 0256ed236b | feat: windows support (#790) | 2 months ago |
| moe | 7ca63930c8 | support deepseek_v3 model | 1 week ago |
| prepare_inputs | 1390915778 | multi-step: add support for flashinfer attention backend (#1033) | 1 week ago |
| punica | f1d0b77c92 | [0.6.0] Release Candidate (#481) | 4 months ago |
| quantization | a113309876 | kernel: add meta functions for ops to prevent graph breaks (#1019) | 1 week ago |
| sampling | bfc8988116 | feat: add cuda sampling kernels for top_k and top_p (#828) | 1 month ago |
| activation_kernels.cu | f1d0b77c92 | [0.6.0] Release Candidate (#481) | 4 months ago |
| cache.h | f1d0b77c92 | [0.6.0] Release Candidate (#481) | 4 months ago |
| cache_kernels.cu | 0256ed236b | feat: windows support (#790) | 2 months ago |
| cuda_compat.h | f1d0b77c92 | [0.6.0] Release Candidate (#481) | 4 months ago |
| cuda_utils.h | 93bc863591 | feat: Machete Kernels for Hopper GPUs (#842) | 1 month ago |
| cuda_utils_kernels.cu | f1d0b77c92 | [0.6.0] Release Candidate (#481) | 4 months ago |
| dispatch_utils.h | f1d0b77c92 | [0.6.0] Release Candidate (#481) | 4 months ago |
| layernorm_kernels.cu | e14223dce5 | kernel: use `cub::BlockReduce` instead of custom impl (#895) | 3 weeks ago |
| ops.h | 1390915778 | multi-step: add support for flashinfer attention backend (#1033) | 1 week ago |
| permute_cols.cu | 93bc863591 | feat: Machete Kernels for Hopper GPUs (#842) | 1 month ago |
| pos_encoding_kernels.cu | f1d0b77c92 | [0.6.0] Release Candidate (#481) | 4 months ago |
| torch_bindings.cpp | 1390915778 | multi-step: add support for flashinfer attention backend (#1033) | 1 week ago |