| File | Commit | Commit message | Last updated |
| --- | --- | --- | --- |
| README.md | dfe29f5e2b | [Gen] Don't use ft_attention, use flash_attn_with_kvcache instead | 1 year ago |
| cuda_bf16_fallbacks.cuh | a01d1213d7 | [Gen] Add kernel from FasterTransformer for benchmarking | 1 year ago |
| cuda_bf16_wrapper.h | a01d1213d7 | [Gen] Add kernel from FasterTransformer for benchmarking | 1 year ago |
| decoder_masked_multihead_attention.cu | c3f2a632aa | [ft_attention] Fix for seqlen=8136 (#488) | 1 year ago |
| decoder_masked_multihead_attention.h | a157cc8c9b | [FT] Implement MQA/GQA | 1 year ago |
| decoder_masked_multihead_attention_template.hpp | a157cc8c9b | [FT] Implement MQA/GQA | 1 year ago |
| decoder_masked_multihead_attention_utils.h | 3a9bfd076f | [FT] rotary_cos/sin should have shape (dim) instead of (seqlen, dim) | 1 year ago |
| ft_attention.cpp | ccbb14f38e | Implement rotary embedding in flash_attn_with_kvcache | 1 year ago |
| setup.py | 50896ec574 | Make nvcc threads configurable via environment variable (#885) | 9 months ago |


# Attention kernel from FasterTransformer

This CUDA extension wraps the single-query attention kernel from FasterTransformer v5.2.1 for benchmarking purposes.

```sh
cd csrc/ft_attention && pip install .
```
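Per the commit history above, the number of `nvcc` threads used during compilation can be set via an environment variable; the variable name `NVCC_THREADS` is an assumption here, so verify it against `setup.py` (e.g. `NVCC_THREADS=4 pip install .`).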

As of 2023-09-17, this extension is no longer used in the FlashAttention repo. FlashAttention now implements `flash_attn_with_kvcache`, which provides all the features of this `ft_attention` kernel (and more).
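For reference, here is a minimal sketch of the recommended replacement, `flash_attn_with_kvcache`, for a single-token decoding step against a pre-filled KV cache. It assumes a recent flash-attn release and a CUDA GPU; the tensor shapes follow the flash-attn docstring, but check the exact keyword arguments (`cache_seqlens`, `causal`) against your installed version.

```python
# Minimal sketch (not part of the original README): single-query ("decoding")
# attention over a pre-filled KV cache using flash_attn_with_kvcache.
import torch
from flash_attn import flash_attn_with_kvcache

batch, cache_len, nheads, headdim = 2, 512, 16, 64
device, dtype = "cuda", torch.float16

# One new query token per sequence, attending to the cached keys/values.
q = torch.randn(batch, 1, nheads, headdim, device=device, dtype=dtype)
k_cache = torch.randn(batch, cache_len, nheads, headdim, device=device, dtype=dtype)
v_cache = torch.randn(batch, cache_len, nheads, headdim, device=device, dtype=dtype)
# Number of valid entries in the cache for each sequence (may be < cache_len).
cache_seqlens = torch.full((batch,), 500, dtype=torch.int32, device=device)

out = flash_attn_with_kvcache(q, k_cache, v_cache,
                              cache_seqlens=cache_seqlens, causal=True)
print(out.shape)  # (batch, 1, nheads, headdim)
```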