Commit History

Author SHA1 Message Date
JDKWangGuan 0d810cfb73 Fix KeyError handling for non-existing key in state_dict.pop() (#898) 5 months ago
Tri Dao ef0ed10622 Add window_size option to MHA and GPT 10 months ago
Tri Dao abbc131173 [LayerNorm] Switch from CUDA to Triton implementation 11 months ago
Tri Dao 73df3be7d5 Add test for BTLM init 11 months ago
Tri Dao 2e29dacf0c Implement muParam 11 months ago
Tri Dao 2c7d7b7396 Implement norm head for Baichuan2 1 year ago
Tri Dao c3b2196652 Add Alibi to MHA, test with Baichuan-13B 1 year ago
Yuchao Dai 187c2a0635 Fix E1136 (#563) 1 year ago
Tri Dao d0032700d1 Add tests for Pythia, GPT-JT, and RedPajama models 1 year ago
Kevin Hu 07005806ff Add BigCode converters (#532) 1 year ago
Tri Dao 798858f9f1 Fix test_baichuan 1 year ago
Tri Dao 7b33743a72 [Gen] Add back num_last_tokens in gpt.py 1 year ago
dan_the_3rd 011ec323d6 Support MQA + MP for decoding (#490) 1 year ago
Tri Dao f8aea6ead0 [GPT] Generalize last_token_only arg to num_last_tokens 1 year ago
Aman Gupta Karmani e0b09891c6 add llama support to GPTPreTrainedModel.from_pretrained (#479) 1 year ago
Xuechen Li 25d6b1dbcb handle uneven heads across ranks when combining state_dicts; resolves #467 (#468) 1 year ago
Xuechen Li 7fcd3e6a04 map custom model state_dict back to huggingface format (#465) 1 year ago
Tri Dao f1a73d0740 Run isort and black on python files 1 year ago
Xuechen Li bb4cded17b support when num_heads is not divisible by world_size; resolves #459 (#461) 1 year ago
Tri Dao 4b661a569d [GPT] Run black on gpt.py 1 year ago
Tri Dao 184b992dcb [GPT] Implement parallel LLaMa 1 year ago
Haodong Lyu 8ee62efca3 Implement ParallelGatedMlp (#251) 1 year ago
Tri Dao d38357dd2f [GPT] Implement Falcon 1 year ago
Tri Dao 425dbcb6c6 [MHA] Implement MQA/GQA 1 year ago
Tri Dao ec9f74ab9a [Rotary] Don't store inv_freq in state_dict 1 year ago
Tri Dao 75e334d407 [MLP] Add ParallelMLP 1 year ago
Tri Dao 48bc6eacd6 [Gen] Add rotary base as an argument to FT attention kernel 1 year ago
Federico Berto 69f5f7d0a2 [BugFix] cannot unpack non-iterable NoneType object 1 year ago
Tri Dao a9a4b4e4f2 [LLaMa] Fix last norm layer to use RMSNorm instead of LayerNorm 1 year ago
Tri Dao ba2fe7f378 [Gen] Move allocate_inference_cache to within the model 1 year ago