AlpinDale 201db10f02 models: add support for Phi3 MoE 1 month ago
fused_moe 201db10f02 models: add support for Phi3 MoE 1 month ago
mamba f1d0b77c92 [0.6.0] Release Candidate (#481) 5 months ago
ops 8a71788372 Add OLMoE (#772) 3 months ago
__init__.py 07aa2a492f upstream: add option to specify tokenizer 1 year ago
activation.py f1d0b77c92 [0.6.0] Release Candidate (#481) 5 months ago
layernorm.py bfc3da41ae feat: add torch.compile for GemmaRMSNorm (#898) 1 month ago
linear.py 6bdff60aab quant: support pre-quanted bitsandbytes checkpoints (#961) 1 month ago
logits_processor.py 0e558e9b2f fix: loading chameleon model with TP>1 (#695) 5 months ago
pooler.py f1d0b77c92 [0.6.0] Release Candidate (#481) 5 months ago
rejection_sampler.py e3a53712f2 fix: mlpspeculator with padded vocab (#669) 5 months ago
rotary_embedding.py 201db10f02 models: add support for Phi3 MoE 1 month ago
sampler.py 0dfa6b60ec core: support logprobs with multi-step scheduling (#963) 1 month ago
spec_decode_base_sampler.py 09b82f9963 feat: Add support for GPU device selection in SpecDecodeBaseSampler (#629) 5 months ago
typical_acceptance_sampler.py f1d0b77c92 [0.6.0] Release Candidate (#481) 5 months ago
vocab_parallel_embedding.py 9ff3239ce2 fix: gguf vocab embddings in TP (#958) 1 month ago