AlpinDale 058e629f8e chore: refactor marlin python utils 6 months ago
adapter_commons 99680b2d23 feat: soft prompts (#589) 6 months ago
attention 2105e4fd6b feat: correctly invoke prefill & decode kernels for cross-attention 6 months ago
common 16dff9babc chore: enable bonus token in spec decoding for KV cache based models 6 months ago
distributed dba22e4f83 fix: add zeromq fallback for broadcasting large objects (e.g. vlm images) 6 months ago
endpoints a3b56353fa fix: another one missed 6 months ago
engine c0c2b1ac20 fix: get_and_reset only when scheduler outputs are not empty 6 months ago
executor 4501ae5f15 fix: neuron executor for adapters 6 months ago
inputs 4f7d212b70 feat: remove vision language config 6 months ago
kv_quant e42a78381a feat: switch from pylint to ruff (#322) 1 year ago
lora 99680b2d23 feat: soft prompts (#589) 6 months ago
modeling db73f03cdc fix: use ParallelLMHead for MLPSpeculator 6 months ago
multimodal c11a8bdaad fix: calculate max number of multi-modal tokens automatically 6 months ago
platforms 1a40bf438b fix: incorrect gpu capability when used mixed gpus 6 months ago
processing 99680b2d23 feat: soft prompts (#589) 6 months ago
prompt_adapter 99680b2d23 feat: soft prompts (#589) 6 months ago
quantization 058e629f8e chore: refactor marlin python utils 6 months ago
spec_decode 16dff9babc chore: enable bonus token in spec decoding for KV cache based models 6 months ago
task_handler d9f4c36edd feat: Medusa speculative decoding support (#590) 6 months ago
transformers_utils d9f4c36edd feat: Medusa speculative decoding support (#590) 6 months ago
__init__.py a07fc83bc8 chore: proper util for aphrodite version 7 months ago
_custom_ops.py ad24e74a99 feat: FP8 weight-only quantization support for Ampere GPUs 6 months ago
_ipex_ops.py 6a57861fca feat: initial XPU support via intel_extension_for_pytorch (#571) 7 months ago
py.typed 1c988a48b2 fix logging and add py.typed 1 year ago
version.py 7e54c3916d chore: factor out epilogues from cutlass kernels 7 months ago