david / flash-attention
Mirror of https://github.com/Dao-AILab/flash-attention
Branch: decode

Branches:
changes_for_fp8
decode
doc_masking
fa3-fp8-varlen
fa3-kvcache-gqa
ipiszy/local_attn
ipiszy/used_q
main
tdd
varlen

Tags:
v2.7.2.post1
v2.7.2
v2.7.1.post4
v2.7.1.post3
v2.7.1.post2
v2.7.1.post1
v2.7.1
v2.7.0.post2
v2.7.0.post1
v2.7.0
v2.6.3
v2.6.2
v2.6.1
v2.6.0.post1
v2.6.0
v2.5.9.post1
v2.5.9
v2.5.8
v2.5.7
v2.5.6
v2.5.5
v2.5.4
v2.5.3
v2.5.2
v2.5.1.post1
v2.5.1
v2.5.0
v2.4.3.post1
v2.4.3
v2.4.2
v2.4.1
v2.4.0.post1
v2.4.0
v2.3.6
v2.3.5
v2.3.4
v2.3.3
v2.3.2
v2.3.1.post1
v2.3.1
v2.3.0
v2.2.5
v2.2.4.post1
v2.2.4
v2.2.3.post2
v2.2.3.post1
v2.2.3
v2.2.2
v2.2.1
v2.2.0
v2.1.2.post3
v2.1.2.post2
v2.1.2.post1
v2.1.2
v2.1.1
v2.1.0
v2.0.9
v2.0.8
v2.0.7
v2.0.6.post2
v2.0.6.post1
v2.0.6
v2.0.5
v2.0.4
v2.0.3
v2.0.2
v2.0.1
v2.0.0
v1.0.9
v1.0.8
v1.0.7
v1.0.6
v1.0.5
v1.0.4
v1.0.3.post0
v1.0.3
v1.0.2
v1.0.1
v1.0.0
v0.2.8
v0.2.7
v0.2.6
v0.2.5
v0.2.4
v0.2.3
v0.2.2
v0.2.1
Commit history
Author   SHA1        Message                                        Date
Tri Dao  9fd6b977bb  Precompute the pointers in mha_combine kernel  3 weeks ago
Tri Dao  01de45b8f8  Add combine kernel                             1 month ago