| Name | Last commit | Last commit message | Last updated |
|---|---|---|---|
| adapter_commons | 2f61644f6e | SPMD optimizations (#824) | 1 month ago |
| assets | 411ac4f405 | vlm: add support for Qwen2-VL model (#1015) | 1 week ago |
| attention | a985143768 | core: add cuda graph support for encoder-decoder models (#1051) | 1 week ago |
| common | 9bdf8d5bfa | mamba: enable continuous batching for mamba kernels (#1055) | 1 week ago |
| compilation | 960dee2f97 | torch.compile: fix functionalization (#1045) | 1 week ago |
| distributed | b3f9ab3b72 | quant: add tensor parallel support for bitsandbytes (#1052) | 1 week ago |
| endpoints | 1264e0b5d8 | api: add mistral function calling format to all models loaded with "mistral" format (#1053) | 1 week ago |
| engine | a985143768 | core: add cuda graph support for encoder-decoder models (#1051) | 1 week ago |
| executor | 638c08d9dc | fix: clean shutdown issues (#1047) | 1 week ago |
| inputs | 05be6085ec | core: factor out input preprocessing into a separate class (#1039) | 1 week ago |
| kv_quant | 8a71788372 | Add OLMoE (#772) | 2 months ago |
| lora | bf4a4d8516 | fix: do not register punica with torch if using older torch (#948) | 2 weeks ago |
| modeling | 9bdf8d5bfa | mamba: enable continuous batching for mamba kernels (#1055) | 1 week ago |
| multimodal | 411ac4f405 | vlm: add support for Qwen2-VL model (#1015) | 1 week ago |
| platforms | f2b6dc3872 | cpu: add support for W8A8 quantization via compressed-tensor (#1017) | 1 week ago |
| plugins | 9797d38b24 | torch.compile: allow adding custom compile backends via plugins (#1041) | 1 week ago |
| processing | f561a54a43 | core: fix async postprocessor in case of preemption (#1000) | 2 weeks ago |
| prompt_adapter | 30d02d0747 | chore: remove peft as a requirement (#1006) | 2 weeks ago |
| quantization | 8976805f90 | kernel: asymmetric AQ AZP quantization kernels (#1048) | 1 week ago |
| server | 638c08d9dc | fix: clean shutdown issues (#1047) | 1 week ago |
| spec_decode | 5c3b94de45 | spec decode: move ops.advane_step to flash attention backend (#1005) | 2 weeks ago |
| transformers_utils | 84163654f4 | tokenizer: allow skip_special_tokens=False for mistral tokenizer | 1 week ago |
| triton_utils | 4593a3b306 | chore: remove dead code from triton sampling kernels (#1049) | 1 week ago |
| worker | a985143768 | core: add cuda graph support for encoder-decoder models (#1051) | 1 week ago |
| __init__.py | f1d0b77c92 | [0.6.0] Release Candidate (#481) | 4 months ago |
| _core_ext.py | f1ea7711bd | core: do not compile ScalarType for torch < 2.4.0 (#938) | 2 weeks ago |
| _custom_ops.py | 9bdf8d5bfa | mamba: enable continuous batching for mamba kernels (#1055) | 1 week ago |
| _ipex_ops.py | 6951928522 | xpu: bump IPEX to 2.3, support GQA (#1042) | 1 week ago |
| connections.py | c6c91edab7 | ci: update & overhaul test units (#769) | 1 month ago |
| constants.py | 2f61644f6e | SPMD optimizations (#824) | 1 month ago |
| py.typed | 1c988a48b2 | fix logging and add py.typed | 1 year ago |
| scalar_type.py | f1d0b77c92 | [0.6.0] Release Candidate (#481) | 4 months ago |
| version.py | cbd51a208a | ci: bump to 0.6.5 (#964) | 2 weeks ago |