| Name | Last commit | Commit message | Last updated |
|---|---|---|---|
| `flash_attn_triton_amd` | b518517cb8 | [AMD] Triton Backend for ROCm (#1203) | 1 week ago |
| `layers` | 8c20cfef49 | [Rotary] Support qkv block layout from GQA | 3 months ago |
| `losses` | c7f32a8409 | [CrossEntropy] Support precomputed LSE | 3 months ago |
| `models` | 30e1ef0f79 | minify torch.torch.int32 to torch.int32 (#1237) | 2 months ago |
| `modules` | 3f1b4d38e7 | Fix: check the type of max_seqlen_k instead of checking max_seqlen twice (#1127) | 4 months ago |
| `ops` | 7153673c1a | Fix swiglu backwards return type (#1337) | 4 weeks ago |
| `utils` | 320fb59487 | Update citation | 6 months ago |
| `__init__.py` | f86e3dd919 | [CI] Use MAX_JOBS=1 with nvcc 12.3, don't need OLD_GENERATOR_PATH | 1 week ago |
| `bert_padding.py` | 30e1ef0f79 | minify torch.torch.int32 to torch.int32 (#1237) | 2 months ago |
| `flash_attn_interface.py` | 0dfb281743 | don't save inputs buffer of FlashAttenFunc to reduce memory usage for inference mode (#1383) | 2 days ago |
| `flash_attn_triton.py` | f1a73d0740 | Run isort and black on python files | 1 year ago |
| `flash_attn_triton_og.py` | f1a73d0740 | Run isort and black on python files | 1 year ago |
| `flash_blocksparse_attention.py` | cdbbe844b1 | minor changes to unpad_input test util func | 2 months ago |
| `flash_blocksparse_attn_interface.py` | f1a73d0740 | Run isort and black on python files | 1 year ago |
| `fused_softmax.py` | f1a73d0740 | Run isort and black on python files | 1 year ago |
| `pyproject.toml` | 73bd3f3bbb | Move pyproject.toml to flash-attn and tests dir to avoid PEP 517 | 1 year ago |
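The listing above covers the `flash_attn` package directory; the user-facing attention functions live in `flash_attn_interface.py`. A minimal usage sketch, assuming the flash-attn package is installed and a CUDA GPU with fp16/bf16 support is available (tensor shapes follow the library's documented `(batch, seqlen, nheads, headdim)` layout):

```python
# Minimal sketch: call FlashAttention via the interface module listed above.
# Assumes flash-attn is installed and a CUDA device is present.
import torch
from flash_attn.flash_attn_interface import flash_attn_func

batch, seqlen, nheads, headdim = 2, 1024, 8, 64

# FlashAttention expects half-precision (fp16/bf16) tensors on the GPU,
# shaped (batch, seqlen, nheads, headdim).
q = torch.randn(batch, seqlen, nheads, headdim, dtype=torch.float16, device="cuda")
k = torch.randn_like(q)
v = torch.randn_like(q)

# Causal self-attention; output has the same shape as q.
out = flash_attn_func(q, k, v, dropout_p=0.0, causal=True)
print(out.shape)  # torch.Size([2, 1024, 8, 64])
```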