| Name | Commit | Message | Last updated |
|------|--------|---------|--------------|
| adapter_commons | 2f61644f6e | SPMD optimizations (#824) | 1 month ago |
| assets | 411ac4f405 | vlm: add support for Qwen2-VL model (#1015) | 1 week ago |
| attention | d9d287a288 | rocm: enable multi-step scheduling for rocm (#1071) | 3 days ago |
| common | 814c850d89 | fix: validate `n` in the sampling params (#1075) | 14 hours ago |
| compilation | 960dee2f97 | torch.compile: fix functionalization (#1045) | 1 week ago |
| distributed | f81e7d7010 | distributed: bind only to 127.0.0.1 for local-only usage (#1061) | 5 days ago |
| endpoints | f6df92bde0 | fix: unexpected kwarg for the legacy API server (#1076) | 14 hours ago |
| engine | 12b0059b47 | api: enable MQAphroditeEngine for embedding models (#1065) | 3 days ago |
| executor | 6212072245 | api: support LoRA lineage and base model metadata management (#1072) | 3 days ago |
| inputs | 05be6085ec | core: factor out input preprocessing into a separate class (#1039) | 1 week ago |
| kv_quant | 8a71788372 | Add OLMoE (#772) | 2 months ago |
| lora | 6212072245 | api: support LoRA lineage and base model metadata management (#1072) | 3 days ago |
| modeling | 766ea79b89 | vlm: fix feature size calculation for llava-next models (#1079) | 6 hours ago |
| multimodal | 6212072245 | api: support LoRA lineage and base model metadata management (#1072) | 3 days ago |
| platforms | f2b6dc3872 | cpu: add support for W8A8 quantization via compressed-tensor (#1017) | 1 week ago |
| plugins | 9797d38b24 | torch.compile: allow adding custom compile backends via plugins (#1041) | 1 week ago |
| processing | f561a54a43 | core: fix async postprocessor in case of preemption (#1000) | 1 week ago |
| prompt_adapter | 30d02d0747 | chore: remove peft as a requirement (#1006) | 1 week ago |
| quantization | 92cee435e2 | rocm: add more quants, fix _scaled_mm call (#1062) | 4 days ago |
| server | 9a7d5514c4 | feat: introduce MQAphroditeEngine (#1056) | 6 days ago |
| spec_decode | 5c3b94de45 | spec decode: move ops.advance_step to flash attention backend (#1005) | 1 week ago |
| transformers_utils | 7b6501bd05 | tests: refactor model tests (#1078) | 6 hours ago |
| triton_utils | 4593a3b306 | chore: remove dead code from triton sampling kernels (#1049) | 1 week ago |
| worker | 6212072245 | api: support LoRA lineage and base model metadata management (#1072) | 3 days ago |
| __init__.py | f1d0b77c92 | [0.6.0] Release Candidate (#481) | 4 months ago |
| _core_ext.py | f1ea7711bd | core: do not compile ScalarType for torch < 2.4.0 (#938) | 2 weeks ago |
| _custom_ops.py | 61aed092a5 | rocm: add support for FP8 KV cache in the custom paged attention kernels (#1066) | 3 days ago |
| _ipex_ops.py | 6951928522 | xpu: bump IPEX to 2.3, support GQA (#1042) | 1 week ago |
| connections.py | c6c91edab7 | ci: update & overhaul test units (#769) | 1 month ago |
| constants.py | 2f61644f6e | SPMD optimizations (#824) | 1 month ago |
| py.typed | 1c988a48b2 | fix logging and add py.typed | 1 year ago |
| scalar_type.py | f1d0b77c92 | [0.6.0] Release Candidate (#481) | 4 months ago |
| version.py | cbd51a208a | ci: bump to 0.6.5 (#964) | 2 weeks ago |