Commit history

| Author     | SHA1       | Message                                                                      | Date        |
|------------|------------|------------------------------------------------------------------------------|-------------|
| GAOXinyu   | 0cb595ad94 | [bugfix] handle_x not define when using checkpoint_lvl = 2 (#502)            | 1 year ago  |
| Tri Dao    | f1a73d0740 | Run isort and black on python files                                          | 1 year ago  |
| Xuechen Li | bb4cded17b | support when num_heads is not divisible by world_size; resolves #459 (#461)  | 1 year ago  |
| Tri Dao    | cb0daccc41 | [FusedDense] Allow Row/ColumnParallelLinear to have uneven split             | 1 year ago  |
| Tri Dao    | bcfa7c9751 | [FusedDense] Run black on fused_dense.py                                     | 1 year ago  |
| Tri Dao    | b630aef53f | Implement GatedMlp                                                           | 1 year ago  |
| Tri Dao    | 6f6e9a9aaf | [FusedDense] Enable sqrelu activation in FusedMLP                            | 1 year ago  |
| Tri Dao    | dc08ea1c33 | Support H100 for other CUDA extensions                                       | 1 year ago  |
| Tri Dao    | 88173a1aaf | [FusedDense] Support relu, rename FusedDenseGeluDense -> FusedMLP            | 1 year ago  |
| Tri Dao    | 93383bd55b | [TP] Implement TensorParallel without sequence parallel                      | 1 year ago  |
| Tri Dao    | 1ec09ebd90 | [FusedDense] Limit matrix dims to 2M (instead of 64k)                        | 1 year ago  |
| Tri Dao    | 65b4064b2a | [FusedDense] Kick off input all_gather before weight dtype conversion        | 1 year ago  |
| Tri Dao    | a8cfe51551 | Implement Tensor Parallel for transformer Block                              | 2 years ago |
| Tri Dao    | 226a1b721d | Implement TensorParallel for FusedDense and FusedDenseGeluDense              | 2 years ago |
| Tri Dao    | e68ebbe89a | Simplify FusedDense                                                          | 2 years ago |
| Tri Dao    | d4b320b31f | Add MLP, MHA, Block, Embedding modules                                       | 2 years ago |
| Tri Dao    | fa6d1ce44f | Add fused_dense and dropout_add_layernorm CUDA extensions                    | 2 years ago |