fused_moe 41beab5dc1 add exllamav2 tensor parallel, fused MoE for GPTQ/AWQ 9 months ago
ops 9181fa0396 feat: Triton kernels for sampling (#383) 9 months ago
quantization 8d26cf3876 simplify model_executor logic 9 months ago
__init__.py 07aa2a492f upstream: add option to specify tokenizer 1 year ago
activation.py 50c2434267 move megatron to a top-level directory 9 months ago
layernorm.py e31c6f0b45 feat: refactor modeling logic and support more models (#274) 11 months ago
linear.py b28011e86e fix: shard exl2 weights more evenly between ranks (#437) 9 months ago
logits_processor.py 50c2434267 move megatron to a top-level directory 9 months ago
rejection.py d8c4193704 feat: Speculative Decoding using a draft model (#432) 9 months ago
rotary_embedding.py c8a91b0b96 rope: get_device() -> device 9 months ago
sampler.py d8c4193704 feat: Speculative Decoding using a draft model (#432) 9 months ago
vocab_parallel_embedding.py f3b546e33a feat: support twe lm_head for quantized weights (#409) 9 months ago
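
Sketches of the techniques behind several of these entries follow. The fused_moe directory adds exllamav2 tensor parallelism and a fused MoE path for GPTQ/AWQ. As a point of reference for what a fused kernel replaces, here is a minimal unfused top-k gating loop; the names (moe_forward, gate_w, experts) are hypothetical and this is not the repository's kernel.

```python
import torch

def moe_forward(x, gate_w, experts, top_k=2):
    """Unfused reference MoE: route each token to its top_k experts.

    x: [tokens, hidden]; gate_w: [hidden, n_experts];
    experts: list of callables mapping [m, hidden] -> [m, hidden].
    """
    probs = torch.softmax(x @ gate_w, dim=-1)              # [tokens, n_experts]
    weights, idx = probs.topk(top_k, dim=-1)               # [tokens, top_k]
    weights = weights / weights.sum(dim=-1, keepdim=True)  # renormalize over top_k
    out = torch.zeros_like(x)
    for e, expert in enumerate(experts):
        tok, slot = (idx == e).nonzero(as_tuple=True)      # tokens routed to expert e
        if tok.numel():
            out[tok] += weights[tok, slot].unsqueeze(-1) * expert(x[tok])
    return out

# Toy usage: 5 tokens, hidden size 16, 4 linear "experts".
torch.manual_seed(0)
experts = [torch.nn.Linear(16, 16) for _ in range(4)]
y = moe_forward(torch.randn(5, 16), torch.randn(16, 4), experts)
```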
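The ops directory holds Triton kernels for sampling (#383). A minimal sketch of what one such kernel can look like, assuming Gumbel-max sampling (argmax of logits plus Gumbel noise is a draw from the softmax distribution); the kernel and launcher names are made up, this is not the repository's code, and it needs a CUDA device to run.

```python
import torch
import triton
import triton.language as tl

@triton.jit
def gumbel_sample_kernel(logits_ptr, out_ptr, vocab, seed, BLOCK: tl.constexpr):
    # One program instance samples one row of the [rows, vocab] logits.
    row = tl.program_id(0)
    offs = tl.arange(0, BLOCK)
    mask = offs < vocab
    x = tl.load(logits_ptr + row * vocab + offs, mask=mask, other=float("-inf"))
    # Gumbel-max trick: argmax(logits + Gumbel noise) samples from softmax(logits).
    u = tl.rand(seed, row * BLOCK + offs)
    gumbel = -tl.log(-tl.log(u))
    tl.store(out_ptr + row, tl.argmax(x + gumbel, axis=0))

def gumbel_sample(logits: torch.Tensor, seed: int = 0) -> torch.Tensor:
    rows, vocab = logits.shape
    out = torch.empty(rows, dtype=torch.int32, device=logits.device)
    gumbel_sample_kernel[(rows,)](logits, out, vocab, seed,
                                  BLOCK=triton.next_power_of_2(vocab))
    return out
```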
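linear.py carries the fix from #437, sharding exl2 weights more evenly between ranks. The balancing idea, as a hypothetical helper rather than the file's actual code: handing every rank the floored share and dumping the whole remainder on one rank leaves it oversized, while spreading the remainder keeps shard sizes within one row of each other.

```python
def shard_sizes(n_rows: int, world_size: int) -> list[int]:
    """Split n_rows over world_size ranks so sizes differ by at most 1."""
    base, rem = divmod(n_rows, world_size)
    # The first `rem` ranks each take one extra row.
    return [base + (1 if r < rem else 0) for r in range(world_size)]

print(shard_sizes(10, 4))  # [3, 3, 2, 2] rather than [2, 2, 2, 4]
```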
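rejection.py and sampler.py implement speculative decoding with a draft model (#432). The standard accept/reject rule keeps a draft token with probability min(1, p_target/p_draft) and otherwise resamples from the residual distribution max(0, p_target - p_draft), which makes the output distributed exactly as the target model. A single-token sketch, not the file's batched implementation:

```python
import torch

def rejection_sample(draft_probs, target_probs, draft_token):
    """Accept or reject one token proposed by the draft model.

    draft_probs, target_probs: [vocab] distributions from the draft and
    target models at the same position.
    """
    ratio = target_probs[draft_token] / draft_probs[draft_token]
    if torch.rand(()) < torch.clamp(ratio, max=1.0):
        return draft_token, True
    # Rejected: resample from the normalized residual distribution.
    residual = torch.clamp(target_probs - draft_probs, min=0.0)
    return torch.multinomial(residual / residual.sum(), 1).item(), False

draft = torch.tensor([0.6, 0.3, 0.1])    # draft model's distribution
target = torch.tensor([0.3, 0.5, 0.2])   # target model's distribution
token, accepted = rejection_sample(draft, target, draft_token=0)
```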
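rotary_embedding.py implements rotary position embeddings (RoPE), which encode absolute position as a rotation of channel pairs so that attention scores depend only on relative distance. A minimal rotate-half sketch, assuming channel pairing (i, i + dim/2); the file's actual conventions, caching, and dtype handling may differ.

```python
import torch

def apply_rope(x, positions, base=10000.0):
    """Rotate-half RoPE: channel pair (i, i + dim/2) is rotated by
    theta_i = pos * base**(-2i/dim).
    x: [seq, dim] with even dim; positions: [seq] integer positions."""
    dim = x.size(-1)
    inv_freq = base ** (-torch.arange(0, dim, 2, dtype=torch.float32) / dim)
    angles = positions[:, None].float() * inv_freq[None, :]   # [seq, dim/2]
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., : dim // 2], x[..., dim // 2 :]
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

# Rotating queries and keys the same way makes q.k depend only on
# the distance between their positions.
q = apply_rope(torch.randn(6, 8), torch.arange(6))
```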
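vocab_parallel_embedding.py shards the embedding table row-wise across tensor-parallel ranks in the Megatron style: each rank looks up only the ids that fall in its rows, zeroes the rest, and an all-reduce sums the partial results into the full lookup. A single-process sketch with the all-reduce replaced by a plain sum over simulated ranks; the names are illustrative, not the file's API.

```python
import torch

def vocab_parallel_embed(token_ids, weight_shard, rank, shard_size):
    """One rank's share of a row-sharded embedding lookup.

    weight_shard holds rows [rank*shard_size, (rank+1)*shard_size) of the
    full [vocab, hidden] table.  Out-of-range ids produce zero rows, so
    summing the per-rank outputs reconstructs the full lookup.
    """
    local = token_ids - rank * shard_size
    oob = (local < 0) | (local >= shard_size)
    out = weight_shard[local.clamp(0, shard_size - 1)]
    out[oob] = 0.0
    return out

# Two simulated ranks; a plain sum stands in for dist.all_reduce.
vocab, world = 8, 2
table = torch.randn(vocab, 4)
ids = torch.tensor([0, 5, 7])
shards = table.chunk(world, dim=0)
full = sum(vocab_parallel_embed(ids, shards[r], r, vocab // world)
           for r in range(world))
assert torch.equal(full, table[ids])
```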