| File | Commit | Message | Last updated |
| --- | --- | --- | --- |
| fused_moe | fd07406a19 | fix: grouped_topk return type (#1038) | 1 month ago |
| mamba | 9bdf8d5bfa | mamba: enable continuous batching for mamba kernels (#1055) | 1 month ago |
| __init__.py | 07aa2a492f | upstream: add option to specify tokenizer | 1 year ago |
| activation.py | 6951928522 | xpu: bump IPEX to 2.3, support GQA (#1042) | 1 month ago |
| layernorm.py | 6951928522 | xpu: bump IPEX to 2.3, support GQA (#1042) | 1 month ago |
| linear.py | b3f9ab3b72 | quant: add tensor parallel support for bitsandbytes (#1052) | 1 month ago |
| logits_processor.py | 0e558e9b2f | fix: loading chameleon model with TP>1 (#695) | 5 months ago |
| pooler.py | f1d0b77c92 | [0.6.0] Release Candidate (#481) | 5 months ago |
| rejection_sampler.py | a0f0160b79 | spec decode: remove dead code from draft bonus tokens (#1101) | 2 weeks ago |
| resampler.py | 548e864404 | models: add support for QwenVL (#995) | 1 month ago |
| rotary_embedding.py | d51720114b | chore: use RoPE cache for MRoPE method (#1028) | 1 month ago |
| sampler.py | f20f5c3491 | samplers: improved DRY performance (#1108) | 1 week ago |
| spec_decode_base_sampler.py | a0f0160b79 | spec decode: remove dead code from draft bonus tokens (#1101) | 2 weeks ago |
| typical_acceptance_sampler.py | eb1ffacf74 | Spec Decoding: fix typical acceptance sampler with correct recovered tok IDs (#1106) | 1 week ago |
| vocab_parallel_embedding.py | 83af2524f3 | quants: add GPTQ and FBGEMM to AphroditeParameters (#987) | 1 month ago |