AlpinDale | 1721bea53a | vlm: add support for Pixtral model (#1022) | 1 week ago
AlpinDale | 0859dc3bc0 | tests: refactor speculative decoding tests to remove the async engine (#1021) | 1 week ago
AlpinDale | fe01e2ded8 | chore: move `device` keys to a constant (#1020) | 1 week ago
AlpinDale | a113309876 | kernel: add meta functions for ops to prevent graph breaks (#1019) | 1 week ago
AlpinDale | f2b6dc3872 | cpu: add support for W8A8 quantization via compressed-tensor (#1017) | 1 week ago
AlpinDale | 2261a0e8dd | cpu: fix issue with sampling kernels (#1016) | 1 week ago
AlpinDale | 411ac4f405 | vlm: add support for Qwen2-VL model (#1015) | 1 week ago
AlpinDale | be59e30139 | vlm: add support for video modality + llava next video (#1014) | 1 week ago
AlpinDale | dcb36de9c4 | quants: add support for NVIDIA's ModelOpt checkpoints (#1013) | 1 week ago
AlpinDale | a59a5f64d2 | fix: internvl pipeline parallel (#1012) | 2 weeks ago
AlpinDale | 5224389dae | chore: skip loading extra bias for qwen2 moe GPTQ (#1011) | 2 weeks ago
AlpinDale | 51d24fc7c0 | build: shallow clone cutlass 3.5.1 tag (#1010) | 2 weeks ago
AlpinDale | 4737c22ab3 | fix: pass `APHRODITE_ATTENTION_BACKEND` to ray workers (#1009) | 2 weeks ago
AlpinDale | de341ffb00 | fix: ensure multistep lookahead allocation is compatible with cuda graph max capture (#1008) | 2 weeks ago
AlpinDale | 9a42869055 | chore: keep chunked prefill enabled with prefix caching (#1007) | 2 weeks ago
AlpinDale | 30d02d0747 | chore: remove peft as a requirement (#1006) | 2 weeks ago
AlpinDale | 5c3b94de45 | spec decode: move ops.advance_step to flash attention backend (#1005) | 2 weeks ago
AlpinDale | 135dfd648b | fix: LoRA support for Cohere and Jamba models (#1004) | 2 weeks ago
AlpinDale | 0191c5efd1 | tools: fix tool calls to more strictly follow OpenAI format (#1003) | 2 weeks ago
AlpinDale | 4d14bd1fe5 | vlm: add multi-input support for LLaVA and InternVL models (#1002) | 2 weeks ago
AlpinDale | f561a54a43 | core: fix async postprocessor in case of preemption (#1000) | 2 weeks ago
AlpinDale | 485d1de42e | fix: hermes tool call chat template (#999) | 2 weeks ago
AlpinDale | cbde3c66a5 | quants: improve awq_triton throughput (#998) | 2 weeks ago
AlpinDale | a8ff25679f | chore: use `ray[adag]` dep instead of cuda (#997) | 2 weeks ago
AlpinDale | 94a13ad036 | fix: gptq_marlin exception on older GPUs (#996) | 2 weeks ago
AlpinDale | 548e864404 | models: add support for QwenVL (#995) | 2 weeks ago
AlpinDale | 145e554a4d | neuron: add 8bit quantization for Neuron (#994) | 2 weeks ago
AlpinDale | 313e198557 | api: implement OpenAI-compatible tools API for Hermes/Mistral models (#993) | 2 weeks ago
AlpinDale | f644e10449 | vlm: enable multimodal inputs for the LLM class (#992) | 2 weeks ago
AlpinDale | 46d577f019 | vlm: fix siglip layernorm and paligemma weight loading (#991) | 2 weeks ago