Latest commit: `0dfb281743` by XiaobingZhang — don't save inputs buffer of FlashAttenFunc to reduce memory usage for inference mode (#1383), 2 days ago
| Name | Last commit | Commit message | Last updated |
| --- | --- | --- | --- |
| `flash_attn_triton_amd` | `b518517cb8` | [AMD] Triton Backend for ROCm (#1203) | 1 week ago |
| `layers` | `8c20cfef49` | [Rotary] Support qkv block layout from GQA | 3 months ago |
| `losses` | `c7f32a8409` | [CrossEntropy] Support precomputed LSE | 3 months ago |
| `models` | `30e1ef0f79` | minify torch.torch.int32 to torch.int32 (#1237) | 2 months ago |
| `modules` | `3f1b4d38e7` | Fix: check the type of max_seqlen_k instead of checking max_seqlen twice (#1127) | 4 months ago |
| `ops` | `7153673c1a` | Fix swiglu backwards return type (#1337) | 4 weeks ago |
| `utils` | `320fb59487` | Update citation | 6 months ago |
| `__init__.py` | `f86e3dd919` | [CI] Use MAX_JOBS=1 with nvcc 12.3, don't need OLD_GENERATOR_PATH | 1 week ago |
| `bert_padding.py` | `30e1ef0f79` | minify torch.torch.int32 to torch.int32 (#1237) | 2 months ago |
| `flash_attn_interface.py` | `0dfb281743` | don't save inputs buffer of FlashAttenFunc to reduce memory usage for inference mode (#1383) | 2 days ago |
| `flash_attn_triton.py` | `f1a73d0740` | Run isort and black on python files | 1 year ago |
| `flash_attn_triton_og.py` | `f1a73d0740` | Run isort and black on python files | 1 year ago |
| `flash_blocksparse_attention.py` | `cdbbe844b1` | minor changes to unpad_input test util func | 2 months ago |
| `flash_blocksparse_attn_interface.py` | `f1a73d0740` | Run isort and black on python files | 1 year ago |
| `fused_softmax.py` | `f1a73d0740` | Run isort and black on python files | 1 year ago |
| `pyproject.toml` | `73bd3f3bbb` | Move pyproject.toml to flash-attn and tests dir to avoid PEP 517 | 1 year ago |
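
The most recently touched file, `flash_attn_interface.py`, implements the attention entry points the package re-exports, such as `flash_attn_func`. A minimal usage sketch, assuming flash-attn is installed on a machine with a CUDA GPU and half-precision tensors; the `torch.inference_mode()` wrapper matches the memory-saving inference path referenced in the latest commit (tensor shapes below are illustrative):

```python
import torch
from flash_attn import flash_attn_func

# FlashAttention kernels run on CUDA with fp16/bf16 tensors,
# laid out as (batch, seqlen, nheads, headdim).
batch, seqlen, nheads, headdim = 2, 1024, 8, 64
q = torch.randn(batch, seqlen, nheads, headdim, dtype=torch.float16, device="cuda")
k = torch.randn_like(q)
v = torch.randn_like(q)

# Under inference_mode, no inputs need to be saved for backward,
# which is the memory reduction the #1383 commit message describes.
with torch.inference_mode():
    # causal=True applies a lower-triangular mask (decoder-style attention).
    out = flash_attn_func(q, k, v, causal=True)

print(out.shape)  # torch.Size([2, 1024, 8, 64])
```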