| Commit | Author | Message | Date |
|---|---|---|---|
| 427f6b32fb | AlpinDale | kernels for rejection sampling | 2 weeks ago |
| adb6982090 | AlpinDale | models: add support for IBM Granite (PowerLM) models (#978) | 2 weeks ago |
| 0f72141ca2 | AlpinDale | api: support multiple input images | 2 weeks ago |
| d4e78a428b | AlpinDale | fix: crash when cancelling a request with multi-step (#977) | 2 weeks ago |
| 3e4e7665a7 | AlpinDale | fix: modelscope for VLMs (#976) | 2 weeks ago |
| 201db10f02 | AlpinDale | models: add support for Phi3 MoE | 2 weeks ago |
| 032974a28a | AlpinDale | tpu: fix TPU type api (#975) | 2 weeks ago |
| 510ae5b949 | AlpinDale | core: fix chunked prefill not being enabled by default for long contexts (#974) | 2 weeks ago |
| b3f6eeb1d2 | AlpinDale | vlm: increase the default `max_num_batched_tokens` for multimodal models (#973) | 2 weeks ago |
| 7eeee771f2 | AlpinDale | tests: update internvl test for #971 (#972) | 2 weeks ago |
| b4a1e2fd02 | AlpinDale | vlm: add tensor parallel support for vision transformer models (#971) | 2 weeks ago |
| 61103b92d4 | AlpinDale | tpu: support single and multi-host TPUs on GKE and RayServe (#970) | 2 weeks ago |
| b26a014b12 | AlpinDale | fix: prometheus.yaml path in monitoring example (#969) | 2 weeks ago |
| 5bec8fbb1b | AlpinDale | tpu: add support for async postprocessing (#968) | 2 weeks ago |
| a8bdd488b9 | AlpinDale | distributed: support pipeline parallelism for internvl and internlm2 (#965) | 2 weeks ago |
| cbd51a208a | AlpinDale | ci: bump to 0.6.5 (#964) | 2 weeks ago |
| 0dfa6b60ec | AlpinDale | core: support logprobs with multi-step scheduling (#963) | 2 weeks ago |
| 34e8606e81 | AlpinDale | vlm: do not allow max_model_len overflow (#962) | 2 weeks ago |
| 6bdff60aab | AlpinDale | quant: support pre-quanted bitsandbytes checkpoints (#961) | 2 weeks ago |
| ba6d798784 | AlpinDale | neuron: support for context length and token bucketing (#960) | 2 weeks ago |
| f4b62bf803 | AlpinDale | quant: update tpu_int8 to use AphroditeParameters (#959) | 2 weeks ago |
| 9ff3239ce2 | AlpinDale | fix: gguf vocab embddings in TP (#958) | 2 weeks ago |
| 22b8096006 | AlpinDale | misc: extend cuda graph capture size for H200 (#957) | 2 weeks ago |
| d6cbbba95f | AlpinDale | Revert "fix: issues with flashinfer fp8 kv (#950)" (#956) | 2 weeks ago |
| 5be6225f38 | AlpinDale | core: support multi-step scheduling w/ async post-processor (#955) | 2 weeks ago |
| 564d197687 | AlpinDale | spec decode: match the original rank computation impl for spec decoding (#954) | 2 weeks ago |
| 2aabf8fcf7 | AlpinDale | vlm: fix errors on ragged NestedTensors (#953) | 2 weeks ago |
| ea59784f59 | AlpinDale | tpu: remove torch._dynamo.reset() (#952) | 2 weeks ago |
| 39b2e83ac3 | AlpinDale | api: optimize zeromq frontend performance (#951) | 2 weeks ago |
| cef6da8863 | AlpinDale | fix: issues with flashinfer fp8 kv (#950) | 2 weeks ago |