AlpinDale 05e45aeb53 fix: dtype mismatch for paligemma 6 months ago
adapter_commons 99680b2d23 feat: soft prompts (#589) 6 months ago
attention a2d476183f fix: remove scipy and re-implement CSR matrix 6 months ago
common ddb28a80a3 fix: bump torch for rocm, unify CUDA_VISIBLE_DEVICES for cuda and rocm 6 months ago
distributed cc6399792f fix: keep consistent with how pytorch finds libcudart.so 6 months ago
endpoints a3b56353fa fix: another one missed 6 months ago
engine 63becc67c0 fix: prompt logprob detokenization 6 months ago
executor 4501ae5f15 fix: neuron executor for adapters 6 months ago
inputs 4f7d212b70 feat: remove vision language config 6 months ago
kv_quant e42a78381a feat: switch from pylint to ruff (#322) 1 year ago
lora 99680b2d23 feat: soft prompts (#589) 6 months ago
modeling 05e45aeb53 fix: dtype mismatch for paligemma 6 months ago
multimodal c11a8bdaad fix: calculate max number of multi-modal tokens automatically 6 months ago
platforms 1a40bf438b fix: incorrect gpu capability when used mixed gpus 6 months ago
processing 99680b2d23 feat: soft prompts (#589) 6 months ago
prompt_adapter 99680b2d23 feat: soft prompts (#589) 6 months ago
quantization 500f3b654f fix: support bias term in compressed-tensors quant 6 months ago
spec_decode 16dff9babc chore: enable bonus token in spec decoding for KV cache based models 6 months ago
task_handler ddb28a80a3 fix: bump torch for rocm, unify CUDA_VISIBLE_DEVICES for cuda and rocm 6 months ago
transformers_utils 63becc67c0 fix: prompt logprob detokenization 6 months ago
__init__.py a07fc83bc8 chore: proper util for aphrodite version 7 months ago
_custom_ops.py ad24e74a99 feat: FP8 weight-only quantization support for Ampere GPUs 6 months ago
_ipex_ops.py 6a57861fca feat: initial XPU support via intel_extension_for_pytorch (#571) 7 months ago
py.typed 1c988a48b2 fix logging and add py.typed 1 year ago
version.py 7e54c3916d chore: factor out epilogues from cutlass kernels 7 months ago