AlpinDale 017b42c517 chore: use fork as the default method for mp backend 7 months ago
..
attention 6a57861fca feat: initial XPU support via intel_extension_for_pytorch (#571) 7 months ago
common 0613d91551 fix: kv head calculation with MPT GQA 7 months ago
distributed 017b42c517 chore: use fork as the default method for mp backend 7 months ago
endpoints c05a45f22f chore: minor updates to throughput benchmark and llm class 7 months ago
engine 3c7444c89b fix: asyncio.run hangs in python < 3.12 7 months ago
executor 017b42c517 chore: use fork as the default method for mp backend 7 months ago
kv_quant e42a78381a feat: switch from pylint to ruff (#322) 1 year ago
lora 42d2ee0f43 chore: better error logging for unsupported lora weights 7 months ago
modeling 025322ee5f fix: fp8 kv cache for qwen2 models 7 months ago
multimodal f2e94e2184 chore: minor llava cleanups in preparation for llava-next 7 months ago
processing f9a10145d1 fix: v2 block manager + prefix caching 7 months ago
quantization cd9ed8623b fix: cuda version check for fp8 support in the cutlass kernels 7 months ago
spec_decode 313e6e1ec7 feat: add typical acceptance sampling 7 months ago
task_handler 6a57861fca feat: initial XPU support via intel_extension_for_pytorch (#571) 7 months ago
transformers_utils bba89fc6d3 chore: make the automatic rope scaling behave properly with rope_scaling arg, add rope theta 7 months ago
__init__.py a07fc83bc8 chore: proper util for aphrodite version 7 months ago
_custom_ops.py cd9ed8623b fix: cuda version check for fp8 support in the cutlass kernels 7 months ago
_ipex_ops.py 6a57861fca feat: initial XPU support via intel_extension_for_pytorch (#571) 7 months ago
py.typed 1c988a48b2 fix logging and add py.typed 1 year ago
version.py 7e54c3916d chore: factor out epilogues from cutlass kernels 7 months ago