AlpinDale 84163654f4 tokenizer: allow skip_special_tokens=False for mistral tokenizer 1 week ago
adapter_commons 2f61644f6e SPMD optimizations (#824) 1 month ago
assets 411ac4f405 vlm: add support for Qwen2-VL model (#1015) 2 weeks ago
attention a985143768 core: add cuda graph support for encoder-decoder models (#1051) 1 week ago
common 9bdf8d5bfa mamba: enable continuous batching for mamba kernels (#1055) 1 week ago
compilation 960dee2f97 torch.compile: fix functionalization (#1045) 1 week ago
distributed b3f9ab3b72 quant: add tensor parallel support for bitsandbytes (#1052) 1 week ago
endpoints 1264e0b5d8 api: add mistral function calling format to all models loaded with "mistral" format (#1053) 1 week ago
engine a985143768 core: add cuda graph support for encoder-decoder models (#1051) 1 week ago
executor 638c08d9dc fix: clean shutdown issues (#1047) 1 week ago
inputs 05be6085ec core: factor out input preprocessing into a separate class (#1039) 1 week ago
kv_quant 8a71788372 Add OLMoE (#772) 2 months ago
lora bf4a4d8516 fix: do not register punica with torch if using older torch (#948) 2 weeks ago
modeling 9bdf8d5bfa mamba: enable continuous batching for mamba kernels (#1055) 1 week ago
multimodal 411ac4f405 vlm: add support for Qwen2-VL model (#1015) 2 weeks ago
platforms f2b6dc3872 cpu: add support for W8A8 quantization via compressed-tensor (#1017) 2 weeks ago
plugins 9797d38b24 torch.compile: allow adding custom compile backends via plugins (#1041) 1 week ago
processing f561a54a43 core: fix async postprocessor in case of preemption (#1000) 2 weeks ago
prompt_adapter 30d02d0747 chore: remove peft as a requirement (#1006) 2 weeks ago
quantization 8976805f90 kernel: asymmetric AQ AZP quantization kernels (#1048) 1 week ago
server 638c08d9dc fix: clean shutdown issues (#1047) 1 week ago
spec_decode 5c3b94de45 spec decode: move ops.advance_step to flash attention backend (#1005) 2 weeks ago
transformers_utils 84163654f4 tokenizer: allow skip_special_tokens=False for mistral tokenizer 1 week ago
triton_utils 4593a3b306 chore: remove dead code from triton sampling kernels (#1049) 1 week ago
worker a985143768 core: add cuda graph support for encoder-decoder models (#1051) 1 week ago
__init__.py f1d0b77c92 [0.6.0] Release Candidate (#481) 4 months ago
_core_ext.py f1ea7711bd core: do not compile ScalarType for torch < 2.4.0 (#938) 2 weeks ago
_custom_ops.py 9bdf8d5bfa mamba: enable continuous batching for mamba kernels (#1055) 1 week ago
_ipex_ops.py 6951928522 xpu: bump IPEX to 2.3, support GQA (#1042) 1 week ago
connections.py c6c91edab7 ci: update & overhaul test units (#769) 1 month ago
constants.py 2f61644f6e SPMD optimizations (#824) 1 month ago
py.typed 1c988a48b2 fix logging and add py.typed 1 year ago
scalar_type.py f1d0b77c92 [0.6.0] Release Candidate (#481) 4 months ago
version.py cbd51a208a ci: bump to 0.6.5 (#964) 2 weeks ago