| Name | Latest commit | Commit message | Last updated |
|---|---|---|---|
| `fused_moe` | fd07406a19 | fix: grouped_topk return type (#1038) | 1 week ago |
| `mamba` | 9bdf8d5bfa | mamba: enable continuous batching for mamba kernels (#1055) | 1 week ago |
| `__init__.py` | 07aa2a492f | upstream: add option to specify tokenizer | 1 year ago |
| `activation.py` | 6951928522 | xpu: bump IPEX to 2.3, support GQA (#1042) | 1 week ago |
| `layernorm.py` | 6951928522 | xpu: bump IPEX to 2.3, support GQA (#1042) | 1 week ago |
| `linear.py` | b3f9ab3b72 | quant: add tensor parallel support for bitsandbytes (#1052) | 1 week ago |
| `logits_processor.py` | 0e558e9b2f | fix: loading chameleon model with TP>1 (#695) | 3 months ago |
| `pooler.py` | f1d0b77c92 | [0.6.0] Release Candidate (#481) | 4 months ago |
| `rejection_sampler.py` | e3a53712f2 | fix: mlpspeculator with padded vocab (#669) | 4 months ago |
| `resampler.py` | 548e864404 | models: add support for QwenVL (#995) | 2 weeks ago |
| `rotary_embedding.py` | d51720114b | chore: use RoPE cache for MRoPE method (#1028) | 1 week ago |
| `sampler.py` | ca7028d5ca | sampler: simplify logits resort in _apply_top_k_top_p (#1067) | 5 days ago |
| `spec_decode_base_sampler.py` | 09b82f9963 | feat: Add support for GPU device selection in SpecDecodeBaseSampler (#629) | 4 months ago |
| `typical_acceptance_sampler.py` | f1d0b77c92 | [0.6.0] Release Candidate (#481) | 4 months ago |
| `vocab_parallel_embedding.py` | 83af2524f3 | quants: add GPTQ and FBGEMM to AphroditeParameters (#987) | 2 weeks ago |