This CUDA extension wraps the single-query attention kernel from FasterTransformer v5.2.1 for benchmarking purposes.
```sh
cd csrc/ft_attention && pip install .
```
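To sanity-check the build before benchmarking, a minimal import check is enough. This is a sketch, not part of the extension itself: the module name `ft_attention` comes from `setup.py`, and the actual kernel binding lives in `ft_attention.cpp` (check that file for the exact entry point and argument list).

```python
# Minimal sanity check that the extension built and is importable.
# Importing torch first ensures libtorch symbols are loaded before the
# compiled extension (a common requirement for standalone torch extensions).
import torch
import ft_attention

print(ft_attention.__file__)  # path of the compiled extension module
# List the bound functions (e.g. the single-query attention entry point
# defined in ft_attention.cpp).
print([name for name in dir(ft_attention) if not name.startswith("_")])
```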
As of 2023-09-17, this extension is no longer used in the FlashAttention repo. FlashAttention has now implemented `flash_attn_with_kvcache` with all the features of this `ft_attention` kernel (and more).
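For reference, below is a minimal sketch of the replacement path using `flash_attn_with_kvcache`. It assumes flash-attn >= 2.2 installed on a CUDA machine; the tensor shapes and keyword names follow that function's public interface, and the sizes are arbitrary placeholders.

```python
import torch
from flash_attn import flash_attn_with_kvcache

batch, nheads, headdim = 2, 16, 64
cache_seqlen_max = 1024  # allocated KV-cache length (placeholder)

# One new query token per sequence, plus a pre-allocated KV cache.
q = torch.randn(batch, 1, nheads, headdim, device="cuda", dtype=torch.float16)
k_cache = torch.zeros(batch, cache_seqlen_max, nheads, headdim, device="cuda", dtype=torch.float16)
v_cache = torch.zeros(batch, cache_seqlen_max, nheads, headdim, device="cuda", dtype=torch.float16)
k_new = torch.randn(batch, 1, nheads, headdim, device="cuda", dtype=torch.float16)
v_new = torch.randn(batch, 1, nheads, headdim, device="cuda", dtype=torch.float16)

# Current length of each sequence already in the cache (int32, per batch element).
cache_seqlens = torch.full((batch,), 128, dtype=torch.int32, device="cuda")

# Appends k_new/v_new into the cache at position cache_seqlens and attends
# over the updated cache, which is what the ft_attention kernel was used for.
out = flash_attn_with_kvcache(
    q, k_cache, v_cache, k=k_new, v=v_new, cache_seqlens=cache_seqlens, causal=True
)
print(out.shape)  # (batch, 1, nheads, headdim)
```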