Commit History

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| XiaobingZhang | 0dfb281743 | don't save inputs buffer of FlashAttenFunc to reduce memory usage for inference mode (#1383) | 2 days ago |
| Michael Melesse | b518517cb8 | [AMD] Triton Backend for ROCm (#1203) | 1 week ago |
| Antoni Viros | 83e41b3ca4 | Add custom ops for compatibility with PT Compile (#1139) | 2 months ago |
| youkaichao | ef3e358a25 | remove lambda (#1056) | 4 months ago |
| Tri Dao | 898dd4bbf2 | Pass seqused_k to _flash_attn_varlen_forward | 5 months ago |
| Tri Dao | 40e534a7f6 | Implement cache_leftpad | 5 months ago |
| Tri Dao | 81e01efd4b | More typo fixes | 5 months ago |
| Tri Dao | 72e27c6320 | Fix typo with softcapping | 5 months ago |
| Phil Wang | f4628b43ec | missing commas and backwards return arguments (#1032) | 5 months ago |
| Nicolas Patry | 8f873cc6ac | Implement softcapping. (#1025) | 5 months ago |
| Jianwei Dong | 4e8d60069f | Add the return_softmax_lse parameter to the flash_attn_with_kvcache function to allow returning the logsumexp of the attention scores. (#989) | 5 months ago |
| Grigory Sizov | f816dee63c | Support unpadded LSE layout (#970) | 5 months ago |
| Grigory Sizov | 2a15840f09 | Enable paged attention in varlen forward (#831) | 9 months ago |
| Tao He | 204c3c6d1b | Fixes an error in comment (#785) | 10 months ago |
| Tri Dao | 54e80a3829 | Implement page KV cache | 10 months ago |
| Tri Dao | a7b66ae25a | Simplify writing softmax to gmem | 11 months ago |
| Tri Dao | 732654583c | Implement deterministic backward (thanks to Meituan) | 11 months ago |
| Tri Dao | 5ab9b3667b | Clean up alibi, implement non-causal alibi | 11 months ago |
| Tri Dao | bc28eacc60 | Format flash_attn_interface.py | 1 year ago |
| Sanghun Cho | e4f726fc44 | Support alibi, by Sanghun Cho from Kakao Brain | 1 year ago |
| Tri Dao | d4a7c8ffbb | [CI] Only compile for CUDA 11.8 & 12.2, MAX_JOBS=2, add torch-nightly | 1 year ago |
| Jeremy Reizenstein | ce3e7280f8 | Allow varlen_fwd to take optional seqused_k (#647) | 1 year ago |
| Tri Dao | e279bf8ed9 | [Gen] Accept cache_batch_idx to index into the KV cache | 1 year ago |
| Tri Dao | 083e8f525f | Implement local attention | 1 year ago |
| Tri Dao | ccbb14f38e | Implement rotary embedding in flash_attn_with_kvcache | 1 year ago |
| Tri Dao | ee77b931b9 | Swap seqlen_q and nheads for MQA to speed it up (h/t Daniel Haziza) | 1 year ago |
| Tri Dao | fd20f16a4e | Support cache_seqlens being integer | 1 year ago |
| Tri Dao | 37c6e05406 | Implement flash_attn_with_kvcache | 1 year ago |
| Tri Dao | 9e5e8bc91e | Change causal mask to be aligned to bottom-right instead of top-left | 1 year ago |
| Tri Dao | d431f16751 | Import torch before flash_attn_2_cuda | 1 year ago |