AlpinDale 201db10f02 models: add support for Phi3 MoE 1 month ago
fused_moe 201db10f02 models: add support for Phi3 MoE 1 month ago
mamba f1d0b77c92 [0.6.0] Release Candidate (#481) 5 months ago
ops 8a71788372 Add OLMoE (#772) 3 months ago
__init__.py 07aa2a492f upstream: add option to specify tokenizer 1 year ago
activation.py f1d0b77c92 [0.6.0] Release Candidate (#481) 5 months ago
layernorm.py bfc3da41ae feat: add torch.compile for GemmaRMSNorm (#898) 1 month ago
linear.py 6bdff60aab quant: support pre-quanted bitsandbytes checkpoints (#961) 1 month ago
logits_processor.py 0e558e9b2f fix: loading chameleon model with TP>1 (#695) 4 months ago
pooler.py f1d0b77c92 [0.6.0] Release Candidate (#481) 5 months ago
rejection_sampler.py e3a53712f2 fix: mlpspeculator with padded vocab (#669) 4 months ago
rotary_embedding.py 201db10f02 models: add support for Phi3 MoE 1 month ago
sampler.py 0dfa6b60ec core: support logprobs with multi-step scheduling (#963) 1 month ago
spec_decode_base_sampler.py 09b82f9963 feat: Add support for GPU device selection in SpecDecodeBaseSampler (#629) 5 months ago
typical_acceptance_sampler.py f1d0b77c92 [0.6.0] Release Candidate (#481) 5 months ago
vocab_parallel_embedding.py 9ff3239ce2 fix: gguf vocab embddings in TP (#958) 1 month ago