# Attention kernel from FasterTransformer

This CUDA extension wraps the single-query attention kernel from FasterTransformer v5.2.1 for benchmarking purposes.

```sh
cd csrc/ft_attention && pip install .
```
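For reference, the kernel performs one decoding step of attention: a single new query token attends to all keys/values accumulated in the KV cache. Below is a minimal pure-PyTorch sketch of that computation; the function name and argument layout are illustrative only, not the extension's actual API.

```python
import torch

def single_query_attention_ref(q, k_cache, v_cache, cache_seqlen):
    """Illustrative reference for one decoding step of attention (not the ft_attention API).

    q:        (batch, nheads, headdim)            -- the single new query token
    k_cache:  (batch, seqlen_max, nheads, headdim)
    v_cache:  (batch, seqlen_max, nheads, headdim)
    cache_seqlen: number of valid cached positions (same for all batch
                  elements in this simplified sketch)
    """
    headdim = q.shape[-1]
    scale = headdim ** -0.5
    # Only attend to the positions that are actually populated in the cache.
    k = k_cache[:, :cache_seqlen]  # (batch, seqlen, nheads, headdim)
    v = v_cache[:, :cache_seqlen]
    # scores[b, h, s] = sum_d q[b, h, d] * k[b, s, h, d]
    scores = torch.einsum("bhd,bshd->bhs", q * scale, k)
    probs = torch.softmax(scores, dim=-1)
    # out[b, h, d] = sum_s probs[b, h, s] * v[b, s, h, d]
    return torch.einsum("bhs,bshd->bhd", probs, v)
```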

As of 2023-09-17, this extension is no longer used in the FlashAttention repo. FlashAttention now implements `flash_attn_with_kvcache`, which covers all the features of this `ft_attention` kernel (and more).
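A sketch of how the replacement might be called is shown below. Argument names follow the `flash_attn` Python interface as I understand it; the tensor shapes and values here are made up for illustration, so check the docstring of the installed version before relying on it.

```python
import torch
from flash_attn import flash_attn_with_kvcache

batch, nheads, headdim, seqlen_cache = 2, 8, 64, 128
device, dtype = "cuda", torch.float16

# One new query token per sequence, plus a preallocated KV cache.
q = torch.randn(batch, 1, nheads, headdim, device=device, dtype=dtype)
k_cache = torch.randn(batch, seqlen_cache, nheads, headdim, device=device, dtype=dtype)
v_cache = torch.randn(batch, seqlen_cache, nheads, headdim, device=device, dtype=dtype)
k_new = torch.randn(batch, 1, nheads, headdim, device=device, dtype=dtype)
v_new = torch.randn(batch, 1, nheads, headdim, device=device, dtype=dtype)
# Number of valid entries already in the cache for each sequence.
cache_seqlens = torch.full((batch,), 64, dtype=torch.int32, device=device)

# Appends k_new/v_new into the cache at position cache_seqlens and
# computes attention of q over the updated cache.
out = flash_attn_with_kvcache(
    q, k_cache, v_cache, k=k_new, v=v_new,
    cache_seqlens=cache_seqlens, causal=True,
)
```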