Latest commit: 02541ac9e8 "[CE] Assert logit_scale > 0" by Tri Dao, 1 month ago
| Name | Commit | Message | Last updated |
|------|--------|---------|--------------|
| `flash_attn_triton_amd` | b518517cb8 | [AMD] Triton Backend for ROCm (#1203) | 3 months ago |
| `layers` | 8c20cfef49 | [Rotary] Support qkv block layout from GQA | 5 months ago |
| `losses` | c7f32a8409 | [CrossEntropy] Support precomputed LSE | 5 months ago |
| `models` | 30e1ef0f79 | minify torch.torch.int32 to torch.int32 (#1237) | 5 months ago |
| `modules` | 3f1b4d38e7 | Fix: check the type of max_seqlen_k instead of checking max_seqlen twice (#1127) | 7 months ago |
| `ops` | 02541ac9e8 | [CE] Assert logit_scale > 0 | 1 month ago |
| `utils` | 320fb59487 | Update citation | 9 months ago |
| `__init__.py` | 5231d95fe1 | Drop Pytorch 2.1 | 1 month ago |
| `bert_padding.py` | 30e1ef0f79 | minify torch.torch.int32 to torch.int32 (#1237) | 5 months ago |
| `flash_attn_interface.py` | d57f826835 | Expose `zero_tensors` arg in varlen functions (#1433) | 1 month ago |
| `flash_attn_triton.py` | f1a73d0740 | Run isort and black on python files | 1 year ago |
| `flash_attn_triton_og.py` | f1a73d0740 | Run isort and black on python files | 1 year ago |
| `flash_blocksparse_attention.py` | cdbbe844b1 | minor changes to unpad_input test util func | 5 months ago |
| `flash_blocksparse_attn_interface.py` | f1a73d0740 | Run isort and black on python files | 1 year ago |
| `fused_softmax.py` | f1a73d0740 | Run isort and black on python files | 1 year ago |
| `pyproject.toml` | 73bd3f3bbb | Move pyproject.toml to flash-attn and tests dir to avoid PEP 517 | 1 year ago |
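For orientation, here is a minimal sketch of how the public API exposed through `flash_attn_interface.py` (and re-exported from the package `__init__.py`) is typically called. It assumes a CUDA device and fp16 inputs; the chosen tensor sizes are arbitrary illustration values.

```python
# Illustrative sketch only: calling flash_attn_func, the main entry point
# provided by flash_attn_interface.py. Assumes CUDA is available and fp16 tensors.
import torch
from flash_attn import flash_attn_func

# Shapes follow the (batch, seqlen, nheads, headdim) convention.
batch, seqlen, nheads, headdim = 2, 1024, 8, 64
q = torch.randn(batch, seqlen, nheads, headdim, device="cuda", dtype=torch.float16)
k = torch.randn_like(q)
v = torch.randn_like(q)

# Causal self-attention; softmax_scale defaults to 1/sqrt(headdim).
out = flash_attn_func(q, k, v, causal=True)
print(out.shape)  # torch.Size([2, 1024, 8, 64])
```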