| Name | Commit | Message | Last updated |
| --- | --- | --- | --- |
| fused_moe | 7ca63930c8 | support deepseek_v3 model | 1 week ago |
| mamba | f1d0b77c92 | [0.6.0] Release Candidate (#481) | 4 months ago |
| ops | 8a71788372 | Add OLMoE (#772) | 2 months ago |
| __init__.py | 07aa2a492f | upstream: add option to specify tokenizer | 1 year ago |
| activation.py | f1d0b77c92 | [0.6.0] Release Candidate (#481) | 4 months ago |
| layernorm.py | bfc3da41ae | feat: add torch.compile for GemmaRMSNorm (#898) | 3 weeks ago |
| linear.py | cec4da1dab | quants: support w8a8 fp8 block-wise quantization from DS3 | 1 week ago |
| logits_processor.py | 0e558e9b2f | fix: loading chameleon model with TP>1 (#695) | 4 months ago |
| pooler.py | f1d0b77c92 | [0.6.0] Release Candidate (#481) | 4 months ago |
| rejection_sampler.py | e3a53712f2 | fix: mlpspeculator with padded vocab (#669) | 4 months ago |
| resampler.py | 548e864404 | models: add support for QwenVL (#995) | 2 weeks ago |
| rotary_embedding.py | d51720114b | chore: use RoPE cache for MRoPE method (#1028) | 1 week ago |
| sampler.py | 2261a0e8dd | cpu: fix issue with sampling kernels (#1016) | 2 weeks ago |
| spec_decode_base_sampler.py | 09b82f9963 | feat: Add support for GPU device selection in SpecDecodeBaseSampler (#629) | 4 months ago |
| typical_acceptance_sampler.py | f1d0b77c92 | [0.6.0] Release Candidate (#481) | 4 months ago |
| vocab_parallel_embedding.py | 83af2524f3 | quants: add GPTQ and FBGEMM to AphroditeParameters (#987) | 2 weeks ago |