AlpinDale 1efd0f89b7 feat: support FP8 for DeepSeekV2 MoE 6 months ago
adapter_commons 99680b2d23 feat: soft prompts (#589) 6 months ago
attention d8f9f0ec16 fix: prefix prefill kernels for fp32 data type 6 months ago
common bf15e1b4e8 chore: deprecation warning for beam search 6 months ago
distributed cc6399792f fix: keep consistent with how pytorch finds libcudart.so 6 months ago
endpoints a3b56353fa fix: another one missed 6 months ago
engine 0c17c2a8a7 chore: add commit hash, clean up engine logs 6 months ago
executor 23408b9b2b chore: skip the driver worker 6 months ago
inputs 4f7d212b70 feat: remove vision language config 6 months ago
kv_quant e42a78381a feat: switch from pylint to ruff (#322) 1 year ago
lora 99680b2d23 feat: soft prompts (#589) 6 months ago
modeling 1efd0f89b7 feat: support FP8 for DeepSeekV2 MoE 6 months ago
multimodal c11a8bdaad fix: calculate max number of multi-modal tokens automatically 6 months ago
platforms 1a40bf438b fix: incorrect gpu capability when used mixed gpus 6 months ago
processing 99680b2d23 feat: soft prompts (#589) 6 months ago
prompt_adapter 99680b2d23 feat: soft prompts (#589) 6 months ago
quantization 1efd0f89b7 feat: support FP8 for DeepSeekV2 MoE 6 months ago
spec_decode 16dff9babc chore: enable bonus token in spec decoding for KV cache based models 6 months ago
task_handler ddb28a80a3 fix: bump torch for rocm, unify CUDA_VISIBLE_DEVICES for cuda and rocm 6 months ago
transformers_utils 63becc67c0 fix: prompt logprob detokenization 6 months ago
__init__.py 0c17c2a8a7 chore: add commit hash, clean up engine logs 6 months ago
_custom_ops.py ad24e74a99 feat: FP8 weight-only quantization support for Ampere GPUs 6 months ago
_ipex_ops.py 6a57861fca feat: initial XPU support via intel_extension_for_pytorch (#571) 7 months ago
py.typed 1c988a48b2 fix: logging and add py.typed 1 year ago
version.py 0c17c2a8a7 chore: add commit hash, clean up engine logs 6 months ago