AlpinDale e13a66925c feat: add fuyu vision model and persimmon language model support 6 months ago
adapter_commons 99680b2d23 feat: soft prompts (#589) 6 months ago
attention d8f9f0ec16 fix: prefix prefill kernels for fp32 data type 6 months ago
common bf15e1b4e8 chore: deprecation warning for beam search 6 months ago
distributed cc6399792f fix: keep consistent with how pytorch finds libcudart.so 6 months ago
endpoints a3b56353fa fix: another one missed 6 months ago
engine 0c17c2a8a7 chore: add commit hash, clean up engine logs 6 months ago
executor 23408b9b2b chore: skip the driver worker 6 months ago
inputs 4f7d212b70 feat: remove vision language config 6 months ago
kv_quant e42a78381a feat: switch from pylint to ruff (#322) 1 year ago
lora 99680b2d23 feat: soft prompts (#589) 6 months ago
modeling e13a66925c feat: add fuyu vision model and persimmon language model support 6 months ago
multimodal c11a8bdaad fix: calculate max number of multi-modal tokens automatically 6 months ago
platforms 1a40bf438b fix: incorrect gpu capability when used mixed gpus 6 months ago
processing 99680b2d23 feat: soft prompts (#589) 6 months ago
prompt_adapter 99680b2d23 feat: soft prompts (#589) 6 months ago
quantization 1efd0f89b7 feat: support FP8 for DeepSeekV2 MoE 6 months ago
spec_decode 16dff9babc chore: enable bonus token in spec decoding for KV cache based models 6 months ago
task_handler ddb28a80a3 fix: bump torch for rocm, unify CUDA_VISIBLE_DEVICES for cuda and rocm 6 months ago
transformers_utils 63becc67c0 fix: prompt logprob detokenization 6 months ago
__init__.py 0c17c2a8a7 chore: add commit hash, clean up engine logs 6 months ago
_custom_ops.py ad24e74a99 feat: FP8 weight-only quantization support for Ampere GPUs 6 months ago
_ipex_ops.py 6a57861fca feat: initial XPU support via intel_extension_for_pytorch (#571) 7 months ago
py.typed 1c988a48b2 fix logging and add py.typed 1 year ago
version.py 0c17c2a8a7 chore: add commit hash, clean up engine logs 6 months ago