Commit history

Author          SHA1        Message                                                             Date
Tri Dao         abbc131173  [LayerNorm] Switch from CUDA to Triton implementation               1 year ago
Tri Dao         f1a73d0740  Run isort and black on python files                                 1 year ago
Tri Dao         75e334d407  [MLP] Add ParallelMLP                                               1 year ago
Tri Dao         b3177dfaf6  [GPT] Enable FlashAttention for GPT-J                               1 year ago
Tri Dao         6fc1e07da2  [Block] Re-enable DropPath                                          1 year ago
Tri Dao         4f285b3547  FlashAttention-2 release                                            1 year ago
ljss            8e44c0eefb  Fix a bug                                                           1 year ago
Federico Berto  3889ba168b  [BugFix] cannot unpack non-iterable NoneType object                 1 year ago
Tri Dao         ba2fe7f378  [Gen] Move allocate_inference_cache to within the model             1 year ago
Tri Dao         96d10f6545  Implement LLaMa                                                     1 year ago
Tri Dao         393882bc08  [LayerNorm] Implement LN with parallel residual, support dim 8k     1 year ago
Tri Dao         4d87e4d875  Implement GPT-J                                                     1 year ago
Tri Dao         88173a1aaf  [FusedDense] Support relu, rename FusedDenseGeluDense -> FusedMLP   2 years ago
Tri Dao         780e8eeabb  [ViT] Support timm checkpoint, add tests                            2 years ago
Tri Dao         ef085cfcda  [ViT] Fix extra norm_0, use new LN order in Block                   2 years ago
Tri Dao         ff34123bd4  Reorder LN in Block, support OPT                                    2 years ago
Tri Dao         93383bd55b  [TP] Implement TensorParallel without sequence parallel             2 years ago
Tri Dao         a8cfe51551  Implement Tensor Parallel for transformer Block                     2 years ago
Tri Dao         5fb6df0e04  Implement BERT                                                      2 years ago
Tri Dao         d4b320b31f  Add MLP, MHA, Block, Embedding modules                              2 years ago