Name                     Commit      Message                                         Last updated
attention/               4e71bd1d12  feat: add PagedAttention V2 kernels (#76)       1 year ago
quantization/            887e03669a  feat: add exllamav2 for GPTQ (#99)              1 year ago
activation.cpp           32844c1522  add GELU kernels and remove compile bloat       1 year ago
activation_kernels.cu    5175605f8d  fix: yarn (#112)                                1 year ago
attention.cpp            4e71bd1d12  feat: add PagedAttention V2 kernels (#76)       1 year ago
cache.cpp                081545bde6  fix: various CUDA kernel tweaks                 1 year ago
cache_kernels.cu         3d72f05c7b  feat: flattened 1D tensor -> 2D tensor (#85)    1 year ago
cuda_utils.cpp           75c27d3e65  massive overhaul                                1 year ago
cuda_utils_kernels.cu    75c27d3e65  massive overhaul                                1 year ago
dispatch_utils.h         32844c1522  add GELU kernels and remove compile bloat       1 year ago
layernorm.cpp            081545bde6  fix: various CUDA kernel tweaks                 1 year ago
layernorm_kernels.cu     3d72f05c7b  feat: flattened 1D tensor -> 2D tensor (#85)    1 year ago
pos_encoding.cpp         45f6d9f923  initial refactor commit                         1 year ago
pos_encoding_kernels.cu  5175605f8d  fix: yarn (#112)                                1 year ago
quantization.cpp         887e03669a  feat: add exllamav2 for GPTQ (#99)              1 year ago
reduction.cuh            081545bde6  fix: various CUDA kernel tweaks                 1 year ago