Latest commit: Tri Dao 02541ac9e8 [CE] Assert logit_scale > 0 (1 month ago)
flash_attn_triton_amd/ b518517cb8 [AMD] Triton Backend for ROCm (#1203) 3 months ago
layers/ 8c20cfef49 [Rotary] Support qkv block layout from GQA 5 months ago
losses/ c7f32a8409 [CrossEntropy] Support precomputed LSE 5 months ago
models/ 30e1ef0f79 minify torch.torch.int32 to torch.int32 (#1237) 5 months ago
modules/ 3f1b4d38e7 Fix: check the type of max_seqlen_k instead of checking max_seqlen twice (#1127) 7 months ago
ops/ 02541ac9e8 [CE] Assert logit_scale > 0 1 month ago
utils/ 320fb59487 Update citation 9 months ago
__init__.py 5231d95fe1 Drop Pytorch 2.1 1 month ago
bert_padding.py 30e1ef0f79 minify torch.torch.int32 to torch.int32 (#1237) 5 months ago
flash_attn_interface.py d57f826835 Expose `zero_tensors` arg in varlen functions (#1433) 1 month ago
flash_attn_triton.py f1a73d0740 Run isort and black on python files 1 year ago
flash_attn_triton_og.py f1a73d0740 Run isort and black on python files 1 year ago
flash_blocksparse_attention.py cdbbe844b1 minor changes to unpad_input test util func 5 months ago
flash_blocksparse_attn_interface.py f1a73d0740 Run isort and black on python files 1 year ago
fused_softmax.py f1a73d0740 Run isort and black on python files 1 year ago
pyproject.toml 73bd3f3bbb Move pyproject.toml to flash-attn and tests dir to avoid PEP 517 1 year ago
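For orientation, a minimal usage sketch of the attention entry point defined in flash_attn_interface.py and re-exported from __init__.py as flash_attn_func. The tensor shapes, dtypes, and the CUDA requirement below are assumptions based on the library's documented interface, not on anything in this listing; dimensions are illustrative.

```python
# Minimal sketch: calling flash_attn_func on random half-precision tensors.
# Assumes the flash_attn package is installed and a CUDA GPU is available,
# since the kernels only support fp16/bf16 inputs on GPU.
import torch
from flash_attn import flash_attn_func

batch, seqlen, nheads, headdim = 2, 1024, 8, 64  # illustrative sizes
q = torch.randn(batch, seqlen, nheads, headdim, dtype=torch.float16, device="cuda")
k = torch.randn(batch, seqlen, nheads, headdim, dtype=torch.float16, device="cuda")
v = torch.randn(batch, seqlen, nheads, headdim, dtype=torch.float16, device="cuda")

# Causal self-attention; the output has the same shape as q.
out = flash_attn_func(q, k, v, causal=True)
print(out.shape)  # torch.Size([2, 1024, 8, 64])
```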