Latest commit: acbdc50a71 by AlpinDale, "fix: `vocab_size` field access in llava" (5 months ago)
| File | Commit | Message | Last change |
| --- | --- | --- | --- |
| `__init__.py` | fa15bad2ea | chore: minor AMD fixes | 5 months ago |
| `arctic.py` | 0f4a9ee77b | quantized lm_head (#582) | 6 months ago |
| `baichuan.py` | 0f4a9ee77b | quantized lm_head (#582) | 6 months ago |
| `bloom.py` | 0f4a9ee77b | quantized lm_head (#582) | 6 months ago |
| `chameleon.py` | a0d031efcc | feat: initial text-to-text support for Chameleon model | 5 months ago |
| `chatglm.py` | 0f4a9ee77b | quantized lm_head (#582) | 6 months ago |
| `clip.py` | e26a4ac698 | chore: avoid loading the unused layers and init the VLM up to the required feature space | 5 months ago |
| `commandr.py` | 0f4a9ee77b | quantized lm_head (#582) | 6 months ago |
| `dbrx.py` | 0f4a9ee77b | quantized lm_head (#582) | 6 months ago |
| `decilm.py` | 56e0b8223c | chore: add base class for LoRA-supported models | 6 months ago |
| `deepseek.py` | 0f4a9ee77b | quantized lm_head (#582) | 6 months ago |
| `deepseek_v2.py` | 1efd0f89b7 | feat: support FP8 for DeepSeekV2 MoE | 5 months ago |
| `falcon.py` | 0f4a9ee77b | quantized lm_head (#582) | 6 months ago |
| `fuyu.py` | e13a66925c | feat: add fuyu vision model and persimmon language model support | 5 months ago |
| `gemma.py` | 05e45aeb53 | fix: dtype mismatch for paligemma | 5 months ago |
| `gemma2.py` | 5761ef8c35 | feat: gemma-2 support | 5 months ago |
| `gpt2.py` | 0f4a9ee77b | quantized lm_head (#582) | 6 months ago |
| `gpt_bigcode.py` | 0f4a9ee77b | quantized lm_head (#582) | 6 months ago |
| `gpt_j.py` | 0f4a9ee77b | quantized lm_head (#582) | 6 months ago |
| `gpt_neox.py` | 0f4a9ee77b | quantized lm_head (#582) | 6 months ago |
| `interfaces.py` | e76bbe72eb | chore: handle aborted requests for jamba | 5 months ago |
| `internlm2.py` | 0f4a9ee77b | quantized lm_head (#582) | 6 months ago |
| `jais.py` | 0f4a9ee77b | quantized lm_head (#582) | 6 months ago |
| `jamba.py` | f5d52320da | Port mamba kernels to Aphrodite (#595) | 5 months ago |
| `llama.py` | 00503b9fc1 | feat: non-uniform quantization via `compressed-tensors` for llama | 5 months ago |
| `llama_embedding.py` | 50b7c13db0 | refactor: attention selector (#552) | 6 months ago |
| `llava.py` | acbdc50a71 | fix: `vocab_size` field access in llava | 5 months ago |
| `llava_next.py` | acbdc50a71 | fix: `vocab_size` field access in llava | 5 months ago |
| `medusa.py` | d9f4c36edd | feat: Medusa speculative decoding support (#590) | 5 months ago |
| `minicpm.py` | 0f4a9ee77b | quantized lm_head (#582) | 6 months ago |
| `mixtral.py` | 00503b9fc1 | feat: non-uniform quantization via `compressed-tensors` for llama | 5 months ago |
| `mixtral_quant.py` | 0f4a9ee77b | quantized lm_head (#582) | 6 months ago |
| `mlp_speculator.py` | db73f03cdc | fix: use ParallelLMHead for MLPSpeculator | 5 months ago |
| `mpt.py` | 0f4a9ee77b | quantized lm_head (#582) | 6 months ago |
| `olmo.py` | 0f4a9ee77b | quantized lm_head (#582) | 6 months ago |
| `opt.py` | 0f4a9ee77b | quantized lm_head (#582) | 6 months ago |
| `orion.py` | 0f4a9ee77b | quantized lm_head (#582) | 6 months ago |
| `paligemma.py` | 05e45aeb53 | fix: dtype mismatch for paligemma | 5 months ago |
| `persimmon.py` | e13a66925c | feat: add fuyu vision model and persimmon language model support | 5 months ago |
| `phi.py` | 0f4a9ee77b | quantized lm_head (#582) | 6 months ago |
| `phi3_small.py` | 0f4a9ee77b | quantized lm_head (#582) | 6 months ago |
| `phi3v.py` | ad68d149d8 | chore: refactor and decouple phi3v image embedding | 5 months ago |
| `qwen.py` | 0f4a9ee77b | quantized lm_head (#582) | 6 months ago |
| `qwen2.py` | fb4c01740c | feat: add asymmetric TP support for Qwen2 | 5 months ago |
| `qwen2_moe.py` | 1efd0f89b7 | feat: support FP8 for DeepSeekV2 MoE | 5 months ago |
| `stablelm.py` | 0f4a9ee77b | quantized lm_head (#582) | 6 months ago |
| `starcoder2.py` | 0f4a9ee77b | quantized lm_head (#582) | 6 months ago |
| `utils.py` | 00503b9fc1 | feat: non-uniform quantization via `compressed-tensors` for llama | 5 months ago |
| `xverse.py` | 0f4a9ee77b | quantized lm_head (#582) | 6 months ago |