Commit history

Author SHA1 Message Date
JDKWangGuan 0d810cfb73 Fix KeyError handling for non-existing key in state_dict.pop() (#898) 5 months ago
Tri Dao ef0ed10622 Add window_size option to MHA and GPT 10 months ago
Tri Dao abbc131173 [LayerNorm] Switch from CUDA to Triton implementation 11 months ago
Tri Dao 73df3be7d5 Add test for BTLM init 11 months ago
Tri Dao 2e29dacf0c Implement muParam 11 months ago
Tri Dao 2c7d7b7396 Implement norm head for Baichuan2 1 year ago
Tri Dao c3b2196652 Add Alibi to MHA, test with Baichuan-13B 1 year ago
Yuchao Dai 187c2a0635 Fix E1136 (#563) 1 year ago
Tri Dao d0032700d1 Add tests for Pythia, GPT-JT, and RedPajama models 1 year ago
Kevin Hu 07005806ff Add BigCode converters (#532) 1 year ago
Tri Dao 798858f9f1 Fix test_baichuan 1 year ago
Tri Dao 7b33743a72 [Gen] Add back num_last_tokens in gpt.py 1 year ago
dan_the_3rd 011ec323d6 Support MQA + MP for decoding (#490) 1 year ago
Tri Dao f8aea6ead0 [GPT] Generalize last_token_only arg to num_last_tokens 1 year ago
Aman Gupta Karmani e0b09891c6 add llama support to GPTPreTrainedModel.from_pretrained (#479) 1 year ago
Xuechen Li 25d6b1dbcb handle uneven heads across ranks when combining state_dicts; resolves #467 (#468) 1 year ago
Xuechen Li 7fcd3e6a04 map custom model state_dict back to huggingface format (#465) 1 year ago
Tri Dao f1a73d0740 Run isort and black on python files 1 year ago
Xuechen Li bb4cded17b support when num_heads is not divisible by world_size; resolves #459 (#461) 1 year ago
Tri Dao 4b661a569d [GPT] Run black on gpt.py 1 year ago
Tri Dao 184b992dcb [GPT] Implement parallel LLaMa 1 year ago
Haodong Lyu 8ee62efca3 Implement ParallelGatedMlp (#251) 1 year ago
Tri Dao d38357dd2f [GPT] Implement Falcon 1 year ago
Tri Dao 425dbcb6c6 [MHA] Implement MQA/GQA 1 year ago
Tri Dao ec9f74ab9a [Rotary] Don't store inv_freq in state_dict 1 year ago
Tri Dao 75e334d407 [MLP] Add ParallelMLP 1 year ago
Tri Dao 48bc6eacd6 [Gen] Add rotary base as an argument to FT attention kernel 1 year ago
Federico Berto 69f5f7d0a2 [BugFix] cannot unpack non-iterable NoneType object 1 year ago
Tri Dao a9a4b4e4f2 [LLaMa] Fix last norm layer to use RMSNorm instead of LayerNorm 1 year ago
Tri Dao ba2fe7f378 [Gen] Move allocate_inference_cache to within the model 1 year ago