| Name | Commit | Message | Last updated |
| --- | --- | --- | --- |
| adapter_commons | 2f61644f6e | SPMD optimizations (#824) | 1 month ago |
| assets | 411ac4f405 | vlm: add support for Qwen2-VL model (#1015) | 1 week ago |
| attention | 1390915778 | multi-step: add support for flashinfer attention backend (#1033) | 1 week ago |
| common | 7ca63930c8 | support deepseek_v3 model | 1 week ago |
| compilation | 0e5cf7f840 | tpu: avoid dynamo guard eval overhead (#949) | 2 weeks ago |
| distributed | 61103b92d4 | tpu: support single and multi-host TPUs on GKE and RayServe (#970) | 2 weeks ago |
| endpoints | a56bce4c94 | fix: remove duplicate assignment in Hermes2ProToolParser | 1 week ago |
| engine | ddaefd8d38 | chore: remove engine_use_ray (#1024) | 1 week ago |
| executor | f2b6dc3872 | cpu: add support for W8A8 quantization via compressed-tensor (#1017) | 1 week ago |
| inputs | 908ff753a1 | fix: phi_3.5_v loading (#896) | 3 weeks ago |
| kv_quant | 8a71788372 | Add OLMoE (#772) | 2 months ago |
| lora | bf4a4d8516 | fix: do not register punica with torch if using older torch (#948) | 2 weeks ago |
| modeling | 7ca63930c8 | support deepseek_v3 model | 1 week ago |
| multimodal | 411ac4f405 | vlm: add support for Qwen2-VL model (#1015) | 1 week ago |
| platforms | f2b6dc3872 | cpu: add support for W8A8 quantization via compressed-tensor (#1017) | 1 week ago |
| plugins | 22a4cd4595 | core: fix spec decode metrics and envs circular import (#889) | 3 weeks ago |
| processing | f561a54a43 | core: fix async postprocessor in case of preemption (#1000) | 2 weeks ago |
| prompt_adapter | 30d02d0747 | chore: remove peft as a requirement (#1006) | 2 weeks ago |
| quantization | 7ca63930c8 | support deepseek_v3 model | 1 week ago |
| server | 22a4cd4595 | core: fix spec decode metrics and envs circular import (#889) | 3 weeks ago |
| spec_decode | 5c3b94de45 | spec decode: move ops.advane_step to flash attention backend (#1005) | 2 weeks ago |
| transformers_utils | 411ac4f405 | vlm: add support for Qwen2-VL model (#1015) | 1 week ago |
| triton_utils | f1d0b77c92 | [0.6.0] Release Candidate (#481) | 4 months ago |
| worker | 1390915778 | multi-step: add support for flashinfer attention backend (#1033) | 1 week ago |
| __init__.py | f1d0b77c92 | [0.6.0] Release Candidate (#481) | 4 months ago |
| _core_ext.py | f1ea7711bd | core: do not compile ScalarType for torch < 2.4.0 (#938) | 2 weeks ago |
| _custom_ops.py | 1390915778 | multi-step: add support for flashinfer attention backend (#1033) | 1 week ago |
| _ipex_ops.py | f1d0b77c92 | [0.6.0] Release Candidate (#481) | 4 months ago |
| connections.py | c6c91edab7 | ci: update & overhaul test units (#769) | 1 month ago |
| constants.py | 2f61644f6e | SPMD optimizations (#824) | 1 month ago |
| py.typed | 1c988a48b2 | fix logging and add py.typed | 1 year ago |
| scalar_type.py | f1d0b77c92 | [0.6.0] Release Candidate (#481) | 4 months ago |
| version.py | cbd51a208a | ci: bump to 0.6.5 (#964) | 2 weeks ago |