Commit history

Author              SHA1        Message  Date
XiaobingZhang       0dfb281743  don't save inputs buffer of FlashAttenFunc to reduce memory usage for inference mode (#1383)  3 days ago
Michael Melesse     b518517cb8  [AMD] Triton Backend for ROCm (#1203)  1 week ago
Antoni Viros        83e41b3ca4  Add custom ops for compatibility with PT Compile (#1139)  2 months ago
youkaichao          ef3e358a25  remove lambda (#1056)  4 months ago
Tri Dao             898dd4bbf2  Pass seqused_k to _flash_attn_varlen_forward  5 months ago
Tri Dao             40e534a7f6  Implement cache_leftpad  5 months ago
Tri Dao             81e01efd4b  More typo fixes  5 months ago
Tri Dao             72e27c6320  Fix typo with softcapping  5 months ago
Phil Wang           f4628b43ec  missing commas and backwards return arguments (#1032)  5 months ago
Nicolas Patry       8f873cc6ac  Implement softcapping. (#1025)  5 months ago
Jianwei Dong        4e8d60069f  Add the return_softmax_lse parameter to the flash_attn_with_kvcache function to allow returning the logsumexp of the attention scores. (#989)  5 months ago
Grigory Sizov       f816dee63c  Support unpadded LSE layout (#970)  5 months ago
Grigory Sizov       2a15840f09  Enable paged attention in varlen forward (#831)  9 months ago
Tao He              204c3c6d1b  Fixes an error in comment (#785)  10 months ago
Tri Dao             54e80a3829  Implement page KV cache  10 months ago
Tri Dao             a7b66ae25a  Simplify writing softmax to gmem  11 months ago
Tri Dao             732654583c  Implement deterministic backward (thanks to Meituan)  11 months ago
Tri Dao             5ab9b3667b  Clean up alibi, implement non-causal alibi  11 months ago
Tri Dao             bc28eacc60  Format flash_attn_interface.py  1 year ago
Sanghun Cho         e4f726fc44  Support alibi, by Sanghun Cho from Kakao Brain  1 year ago
Tri Dao             d4a7c8ffbb  [CI] Only compile for CUDA 11.8 & 12.2, MAX_JOBS=2, add torch-nightly  1 year ago
Jeremy Reizenstein  ce3e7280f8  Allow varlen_fwd to take optional seqused_k (#647)  1 year ago
Tri Dao             e279bf8ed9  [Gen] Accept cache_batch_idx to index into the KV cache  1 year ago
Tri Dao             083e8f525f  Implement local attention  1 year ago
Tri Dao             ccbb14f38e  Implement rotary embedding in flash_attn_with_kvcache  1 year ago
Tri Dao             ee77b931b9  Swap seqlen_q and nheads for MQA to speed it up (h/t Daniel Haziza)  1 year ago
Tri Dao             fd20f16a4e  Support cache_seqlens being integer  1 year ago
Tri Dao             37c6e05406  Implement flash_attn_with_kvcache  1 year ago
Tri Dao             9e5e8bc91e  Change causal mask to be aligned to bottom-right instead of top-left  1 year ago
Tri Dao             d431f16751  Import torch before flash_attn_2_cuda  1 year ago