david / flash-attention (mirror of https://github.com/Dao-AILab/flash-attention)
Tree: 4ead9bd7cc
Branches:
decode
doc_masking
fa3-fp8-varlen
fa3-kvcache-gqa
ipiszy/local_attn
ipiszy/used_q
main
tdd
Tags:
v2.7.3
v2.7.2.post1
v2.7.2
v2.7.1.post4
v2.7.1.post3
v2.7.1.post2
v2.7.1.post1
v2.7.1
v2.7.0.post2
v2.7.0.post1
v2.7.0
v2.6.3
v2.6.2
v2.6.1
v2.6.0.post1
v2.6.0
v2.5.9.post1
v2.5.9
v2.5.8
v2.5.7
v2.5.6
v2.5.5
v2.5.4
v2.5.3
v2.5.2
v2.5.1.post1
v2.5.1
v2.5.0
v2.4.3.post1
v2.4.3
v2.4.2
v2.4.1
v2.4.0.post1
v2.4.0
v2.3.6
v2.3.5
v2.3.4
v2.3.3
v2.3.2
v2.3.1.post1
v2.3.1
v2.3.0
v2.2.5
v2.2.4.post1
v2.2.4
v2.2.3.post2
v2.2.3.post1
v2.2.3
v2.2.2
v2.2.1
v2.2.0
v2.1.2.post3
v2.1.2.post2
v2.1.2.post1
v2.1.2
v2.1.1
v2.1.0
v2.0.9
v2.0.8
v2.0.7
v2.0.6.post2
v2.0.6.post1
v2.0.6
v2.0.5
v2.0.4
v2.0.3
v2.0.2
v2.0.1
v2.0.0
v1.0.9
v1.0.8
v1.0.7
v1.0.6
v1.0.5
v1.0.4
v1.0.3.post0
v1.0.3
v1.0.2
v1.0.1
v1.0.0
v0.2.8
v0.2.7
v0.2.6
v0.2.5
v0.2.4
v0.2.3
v0.2.2
v0.2.1
flash-attention / flash_attn / modules
Markus Krimmel · 6bbc532388 · fix: cast the alibi slopes to torch.float32 (#846) · 10 months ago
__init__.py    ece539abd6   Add __init__.py files to subdirectories for installation   2 years ago
block.py       abbc131173   [LayerNorm] Switch from CUDA to Triton implementation      1 year ago
embedding.py   f1a73d0740   Run isort and black on python files                        1 year ago
mha.py         6bbc532388   fix: cast the alibi slopes to torch.float32 (#846)         10 months ago
mlp.py         c3b2196652   Add Alibi to MHA, test with Baichuan-13B                   1 year ago
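
Note: the latest commit in this directory (6bbc532388, #846) casts the ALiBi slopes to torch.float32. As a hedged illustration only, not the repository's implementation, the sketch below computes the standard ALiBi slope sequence for a power-of-two head count and returns it as a float32 tensor; the helper name alibi_slopes is hypothetical.

    import torch

    def alibi_slopes(num_heads: int) -> torch.Tensor:
        # Geometric sequence from the ALiBi paper: slope_i = 2^(-8*i/num_heads),
        # valid when num_heads is a power of two.
        start = 2.0 ** (-8.0 / num_heads)
        slopes = [start ** (i + 1) for i in range(num_heads)]
        # Cast to float32 so the attention kernel receives a consistent dtype,
        # even when the rest of the model runs in fp16 or bf16.
        return torch.tensor(slopes, dtype=torch.float32)

Keeping the slope tensor in float32 preserves precision of the positional bias even when model weights and activations are half precision.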