Zhihao Shen 30e1ef0f79 minify torch.torch.int32 to torch.int32 (#1237) 2 months ago
layers 8c20cfef49 [Rotary] Support qkv block layout from GQA 3 months ago
losses c7f32a8409 [CrossEntropy] Support precomputed LSE 3 months ago
models 30e1ef0f79 minify torch.torch.int32 to torch.int32 (#1237) 2 months ago
modules 3f1b4d38e7 Fix: check the type of max_seqlen_k instead of checking max_seqlen twice (#1127) 4 months ago
ops 8c20cfef49 [Rotary] Support qkv block layout from GQA 3 months ago
utils 320fb59487 Update citation 6 months ago
__init__.py 418d677192 Bump to v2.6.3 4 months ago
bert_padding.py 30e1ef0f79 minify torch.torch.int32 to torch.int32 (#1237) 2 months ago
flash_attn_interface.py 83e41b3ca4 Add custom ops for compatibility with PT Compile (#1139) 2 months ago
flash_attn_triton.py f1a73d0740 Run isort and black on python files 1 year ago
flash_attn_triton_og.py f1a73d0740 Run isort and black on python files 1 year ago
flash_blocksparse_attention.py cdbbe844b1 minor changes to unpad_input test util func 3 months ago
flash_blocksparse_attn_interface.py f1a73d0740 Run isort and black on python files 1 year ago
fused_softmax.py f1a73d0740 Run isort and black on python files 1 year ago
pyproject.toml 73bd3f3bbb Move pyproject.toml to flash-attn and tests dir to avoid PEP 517 1 year ago
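
For orientation, flash_attn_interface.py holds the main public entry points (e.g. flash_attn_func, re-exported from the package root). A minimal usage sketch, assuming a CUDA device and fp16 inputs; the shapes and the causal flag below are illustrative, not prescriptive:

    import torch
    from flash_attn import flash_attn_func  # defined in flash_attn_interface.py

    # FlashAttention expects (batch, seqlen, nheads, headdim) tensors
    # in fp16 or bf16 on a CUDA device.
    batch, seqlen, nheads, headdim = 2, 1024, 8, 64
    q = torch.randn(batch, seqlen, nheads, headdim, device="cuda", dtype=torch.float16)
    k = torch.randn(batch, seqlen, nheads, headdim, device="cuda", dtype=torch.float16)
    v = torch.randn(batch, seqlen, nheads, headdim, device="cuda", dtype=torch.float16)

    # Fused attention with causal masking; output has the same shape as q.
    out = flash_attn_func(q, k, v, causal=True)

For GQA/MQA (as referenced by the [Rotary] commit above), k and v may carry fewer heads than q, as long as the query head count is a multiple of the key/value head count.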