Zhihao Shen 30e1ef0f79 minify torch.torch.int32 to torch.int32 (#1237) 2 months ago
layers 8c20cfef49 [Rotary] Support qkv block layout from GQA 3 months ago
losses c7f32a8409 [CrossEntropy] Support precomputed LSE 3 months ago
models 30e1ef0f79 minify torch.torch.int32 to torch.int32 (#1237) 2 months ago
modules 3f1b4d38e7 Fix: check the type of max_seqlen_k instead of checking max_seqlen twice (#1127) 4 months ago
ops 8c20cfef49 [Rotary] Support qkv block layout from GQA 3 months ago
utils 320fb59487 Update citation 6 months ago
__init__.py 418d677192 Bump to v2.6.3 4 months ago
bert_padding.py 30e1ef0f79 minify torch.torch.int32 to torch.int32 (#1237) 2 months ago
flash_attn_interface.py 83e41b3ca4 Add custom ops for compatibility with PT Compile (#1139) 2 months ago
flash_attn_triton.py f1a73d0740 Run isort and black on python files 1 year ago
flash_attn_triton_og.py f1a73d0740 Run isort and black on python files 1 year ago
flash_blocksparse_attention.py cdbbe844b1 minor changes to unpad_input test util func 2 months ago
flash_blocksparse_attn_interface.py f1a73d0740 Run isort and black on python files 1 year ago
fused_softmax.py f1a73d0740 Run isort and black on python files 1 year ago
pyproject.toml 73bd3f3bbb Move pyproject.toml to flash-attn and tests dir to avoid PEP 517 1 year ago