AlpinDale 766ea79b89 vlm: fix feature size calculation for llava-next models (#1079) 3 days ago
adapter_commons 2f61644f6e SPMD optimizations (#824) 1 month ago
assets 411ac4f405 vlm: add support for Qwen2-VL model (#1015) 2 weeks ago
attention d9d287a288 rocm: enable multi-step scheduling for rocm (#1071) 6 days ago
common 814c850d89 fix: validate `n` in the sampling params (#1075) 3 days ago
compilation 960dee2f97 torch.compile: fix functionalization (#1045) 1 week ago
distributed f81e7d7010 distributed: bind only to 127.0.0.1 for local-only usage (#1061) 1 week ago
endpoints f6df92bde0 fix: unexpected kwarg for the legacy API server (#1076) 3 days ago
engine 12b0059b47 api: enable MQAphroditeEngine for embedding models (#1065) 6 days ago
executor 6212072245 api: support LoRA lineage and base model metadata management (#1072) 6 days ago
inputs 05be6085ec core: factor out input preprocessing into a separate class (#1039) 1 week ago
kv_quant 8a71788372 Add OLMoE (#772) 2 months ago
lora 6212072245 api: support LoRA lineage and base model metadata management (#1072) 6 days ago
modeling 766ea79b89 vlm: fix feature size calculation for llava-next models (#1079) 3 days ago
multimodal 6212072245 api: support LoRA lineage and base model metadata management (#1072) 6 days ago
platforms f2b6dc3872 cpu: add support for W8A8 quantization via compressed-tensor (#1017) 2 weeks ago
plugins 9797d38b24 torch.compile: allow adding custom compile backends via plugins (#1041) 1 week ago
processing f561a54a43 core: fix async postprocessor in case of preemption (#1000) 2 weeks ago
prompt_adapter 30d02d0747 chore: remove peft as a requirement (#1006) 2 weeks ago
quantization 92cee435e2 rocm: add more quants, fix _scaled_mm call (#1062) 1 week ago
server 9a7d5514c4 feat: introduce MQAphroditeEngine (#1056) 1 week ago
spec_decode 5c3b94de45 spec decode: move ops.advance_step to flash attention backend (#1005) 2 weeks ago
transformers_utils 7b6501bd05 tests: refactor model tests (#1078) 3 days ago
triton_utils 4593a3b306 chore: remove dead code from triton sampling kernels (#1049) 1 week ago
worker 6212072245 api: support LoRA lineage and base model metadata management (#1072) 6 days ago
__init__.py f1d0b77c92 [0.6.0] Release Candidate (#481) 4 months ago
_core_ext.py f1ea7711bd core: do not compile ScalarType for torch < 2.4.0 (#938) 2 weeks ago
_custom_ops.py 61aed092a5 rocm: add support for FP8 KV cache in the custom paged attention kernels (#1066) 6 days ago
_ipex_ops.py 6951928522 xpu: bump IPEX to 2.3, support GQA (#1042) 1 week ago
connections.py c6c91edab7 ci: update & overhaul test units (#769) 1 month ago
constants.py 2f61644f6e SPMD optimizations (#824) 1 month ago
py.typed 1c988a48b2 fix logging and add py.typed 1 year ago
scalar_type.py f1d0b77c92 [0.6.0] Release Candidate (#481) 4 months ago
version.py cbd51a208a ci: bump to 0.6.5 (#964) 2 weeks ago