Commit History

Author  SHA1  Message  Date
  JDKWangGuan 0d810cfb73 Fix KeyError handling for non-existing key in state_dict.pop() (#898) 5 months ago
  Tri Dao ef0ed10622 Add window_size option to MHA and GPT 10 months ago
  Tri Dao abbc131173 [LayerNorm] Switch from CUDA to Triton implementation 11 months ago
  Tri Dao 73df3be7d5 Add test for BTLM init 11 months ago
  Tri Dao 2e29dacf0c Implement muParam 11 months ago
  Tri Dao 2c7d7b7396 Implement norm head for Baichuan2 1 year ago
  Tri Dao c3b2196652 Add Alibi to MHA, test with Baichuan-13B 1 year ago
  Yuchao Dai 187c2a0635 Fix E1136 (#563) 1 year ago
  Tri Dao d0032700d1 Add tests for Pythia, GPT-JT, and RedPajama models 1 year ago
  Kevin Hu 07005806ff Add BigCode converters (#532) 1 year ago
  Tri Dao 798858f9f1 Fix test_baichuan 1 year ago
  Tri Dao 7b33743a72 [Gen] Add back num_last_tokens in gpt.py 1 year ago
  dan_the_3rd 011ec323d6 Support MQA + MP for decoding (#490) 1 year ago
  Tri Dao f8aea6ead0 [GPT] Generalize last_token_only arg to num_last_tokens 1 year ago
  Aman Gupta Karmani e0b09891c6 add llama support to GPTPreTrainedModel.from_pretrained (#479) 1 year ago
  Xuechen Li 25d6b1dbcb handle uneven heads across ranks when combining state_dicts; resolves #467 (#468) 1 year ago
  Xuechen Li 7fcd3e6a04 map custom model state_dict back to huggingface format (#465) 1 year ago
  Tri Dao f1a73d0740 Run isort and black on python files 1 year ago
  Xuechen Li bb4cded17b support when num_heads is not divisible by world_size; resolves #459 (#461) 1 year ago
  Tri Dao 4b661a569d [GPT] Run black on gpt.py 1 year ago
  Tri Dao 184b992dcb [GPT] Implement parallel LLaMa 1 year ago
  Haodong Lyu 8ee62efca3 Implement ParallelGatedMlp (#251) 1 year ago
  Tri Dao d38357dd2f [GPT] Implement Falcon 1 year ago
  Tri Dao 425dbcb6c6 [MHA] Implement MQA/GQA 1 year ago
  Tri Dao ec9f74ab9a [Rotary] Don't store inv_freq in state_dict 1 year ago
  Tri Dao 75e334d407 [MLP] Add ParallelMLP 1 year ago
  Tri Dao 48bc6eacd6 [Gen] Add rotary base as an argument to FT attention kernel 1 year ago
  Federico Berto 69f5f7d0a2 [BugFix] cannot unpack non-iterable NoneType object 1 year ago
  Tri Dao a9a4b4e4f2 [LLaMa] Fix last norm layer to use RMSNorm instead of LayerNorm 1 year ago
  Tri Dao ba2fe7f378 [Gen] Move allocate_inference_cache to within the model 1 year ago