| File | Commit | Message | Last updated |
| --- | --- | --- | --- |
| `fused_moe` | 201db10f02 | models: add support for Phi3 MoE | 1 month ago |
| `mamba` | f1d0b77c92 | [0.6.0] Release Candidate (#481) | 4 months ago |
| `ops` | 8a71788372 | Add OLMoE (#772) | 3 months ago |
| `__init__.py` | 07aa2a492f | upstream: add option to specify tokenizer | 1 year ago |
| `activation.py` | f1d0b77c92 | [0.6.0] Release Candidate (#481) | 4 months ago |
| `layernorm.py` | bfc3da41ae | feat: add torch.compile for GemmaRMSNorm (#898) | 1 month ago |
| `linear.py` | 6bdff60aab | quant: support pre-quanted bitsandbytes checkpoints (#961) | 1 month ago |
| `logits_processor.py` | 0e558e9b2f | fix: loading chameleon model with TP>1 (#695) | 4 months ago |
| `pooler.py` | f1d0b77c92 | [0.6.0] Release Candidate (#481) | 4 months ago |
| `rejection_sampler.py` | e3a53712f2 | fix: mlpspeculator with padded vocab (#669) | 4 months ago |
| `rotary_embedding.py` | 201db10f02 | models: add support for Phi3 MoE | 1 month ago |
| `sampler.py` | 0dfa6b60ec | core: support logprobs with multi-step scheduling (#963) | 1 month ago |
| `spec_decode_base_sampler.py` | 09b82f9963 | feat: Add support for GPU device selection in SpecDecodeBaseSampler (#629) | 4 months ago |
| `typical_acceptance_sampler.py` | f1d0b77c92 | [0.6.0] Release Candidate (#481) | 4 months ago |
| `vocab_parallel_embedding.py` | 9ff3239ce2 | fix: gguf vocab embddings in TP (#958) | 1 month ago |