sgsdxzy b28011e86e fix: shard exl2 weights more evenly between ranks (#437) 8 months ago
fused_moe 41beab5dc1 add exllamav2 tensor parallel, fused MoE for GPTQ/AWQ 9 months ago
ops 9181fa0396 feat: Triton kernels for sampling (#383) 9 months ago
quantization 8d26cf3876 simplify model_executor logic 8 months ago
__init__.py 07aa2a492f upstream: add option to specify tokenizer 1 year ago
activation.py 50c2434267 move megatron to a top-level directory 9 months ago
layernorm.py e31c6f0b45 feat: refactor modeling logic and support more models (#274) 10 months ago
linear.py b28011e86e fix: shard exl2 weights more evenly between ranks (#437) 8 months ago
logits_processor.py 50c2434267 move megatron to a top-level directory 9 months ago
rejection.py d8c4193704 feat: Speculative Decoding using a draft model (#432) 8 months ago
rotary_embedding.py c8a91b0b96 rope: get_device() -> device 9 months ago
sampler.py d8c4193704 feat: Speculative Decoding using a draft model (#432) 8 months ago
vocab_parallel_embedding.py f3b546e33a feat: support twe lm_head for quantized weights (#409) 8 months ago