Commit history

Author SHA1 Message Date
JDKWangGuan 0d810cfb73 Fix KeyError handling for non-existing key in state_dict.pop() (#898) 5 months ago
Tri Dao ef0ed10622 Add window_size option to MHA and GPT 10 months ago
Tri Dao abbc131173 [LayerNorm] Switch from CUDA to Triton implementation 11 months ago
Tri Dao 73df3be7d5 Add test for BTLM init 11 months ago
Tri Dao 2e29dacf0c Implement muParam 11 months ago
Tri Dao 2c7d7b7396 Implement norm head for Baichuan2 1 year ago
Tri Dao c3b2196652 Add Alibi to MHA, test with Baichuan-13B 1 year ago
Yuchao Dai 187c2a0635 Fix E1136 (#563) 1 year ago
Tri Dao d0032700d1 Add tests for Pythia, GPT-JT, and RedPajama models 1 year ago
Kevin Hu 07005806ff Add BigCode converters (#532) 1 year ago
Tri Dao 798858f9f1 Fix test_baichuan 1 year ago
Tri Dao 7b33743a72 [Gen] Add back num_last_tokens in gpt.py 1 year ago
dan_the_3rd 011ec323d6 Support MQA + MP for decoding (#490) 1 year ago
Tri Dao f8aea6ead0 [GPT] Generalize last_token_only arg to num_last_tokens 1 year ago
Aman Gupta Karmani e0b09891c6 add llama support to GPTPreTrainedModel.from_pretrained (#479) 1 year ago
Xuechen Li 25d6b1dbcb handle uneven heads across ranks when combining state_dicts; resolves #467 (#468) 1 year ago
Xuechen Li 7fcd3e6a04 map custom model state_dict back to huggingface format (#465) 1 year ago
Tri Dao f1a73d0740 Run isort and black on python files 1 year ago
Xuechen Li bb4cded17b support when num_heads is not divisible by world_size; resolves #459 (#461) 1 year ago
Tri Dao 4b661a569d [GPT] Run black on gpt.py 1 year ago
Tri Dao 184b992dcb [GPT] Implement parallel LLaMa 1 year ago
Haodong Lyu 8ee62efca3 Implement ParallelGatedMlp (#251) 1 year ago
Tri Dao d38357dd2f [GPT] Implement Falcon 1 year ago
Tri Dao 425dbcb6c6 [MHA] Implement MQA/GQA 1 year ago
Tri Dao ec9f74ab9a [Rotary] Don't store inv_freq in state_dict 1 year ago
Tri Dao 75e334d407 [MLP] Add ParallelMLP 1 year ago
Tri Dao 48bc6eacd6 [Gen] Add rotary base as an argument to FT attention kernel 1 year ago
Federico Berto 69f5f7d0a2 [BugFix] cannot unpack non-iterable NoneType object 1 year ago
Tri Dao a9a4b4e4f2 [LLaMa] Fix last norm layer to use RMSNorm instead of LayerNorm 1 year ago
Tri Dao ba2fe7f378 [Gen] Move allocate_inference_cache to within the model 1 year ago