| File | Commit | Last commit message | Last updated |
|------|--------|---------------------|--------------|
| __init__.py | fa15bad2ea | chore: minor AMD fixes | 5 months ago |
| arctic.py | 0f4a9ee77b | quantized lm_head (#582) | 5 months ago |
| baichuan.py | 0f4a9ee77b | quantized lm_head (#582) | 5 months ago |
| bloom.py | 0f4a9ee77b | quantized lm_head (#582) | 5 months ago |
| chameleon.py | a0d031efcc | feat: initial text-to-text support for Chameleon model | 5 months ago |
| chatglm.py | 0f4a9ee77b | quantized lm_head (#582) | 5 months ago |
| clip.py | e26a4ac698 | chore: avoid loading the unused layers and init the VLM up to the required feature space | 5 months ago |
| commandr.py | 0f4a9ee77b | quantized lm_head (#582) | 5 months ago |
| dbrx.py | 0f4a9ee77b | quantized lm_head (#582) | 5 months ago |
| decilm.py | 56e0b8223c | chore: add base class for LoRA-supported models | 6 months ago |
| deepseek.py | 0f4a9ee77b | quantized lm_head (#582) | 5 months ago |
| deepseek_v2.py | 1efd0f89b7 | feat: support FP8 for DeepSeekV2 MoE | 5 months ago |
| falcon.py | 0f4a9ee77b | quantized lm_head (#582) | 5 months ago |
| fuyu.py | e13a66925c | feat: add fuyu vision model and persimmon language model support | 5 months ago |
| gemma.py | 05e45aeb53 | fix: dtype mismatch for paligemma | 5 months ago |
| gemma2.py | 5761ef8c35 | feat: gemma-2 support | 5 months ago |
| gpt2.py | 0f4a9ee77b | quantized lm_head (#582) | 5 months ago |
| gpt_bigcode.py | 0f4a9ee77b | quantized lm_head (#582) | 5 months ago |
| gpt_j.py | 0f4a9ee77b | quantized lm_head (#582) | 5 months ago |
| gpt_neox.py | 0f4a9ee77b | quantized lm_head (#582) | 5 months ago |
| interfaces.py | e76bbe72eb | chore: handle aborted requests for jamba | 5 months ago |
| internlm2.py | 0f4a9ee77b | quantized lm_head (#582) | 5 months ago |
| jais.py | 0f4a9ee77b | quantized lm_head (#582) | 5 months ago |
| jamba.py | f5d52320da | Port mamba kernels to Aphrodite (#595) | 5 months ago |
| llama.py | 00503b9fc1 | feat: non-uniform quantization via `compressed-tensors` for llama | 5 months ago |
| llama_embedding.py | 50b7c13db0 | refactor: attention selector (#552) | 6 months ago |
| llava.py | acbdc50a71 | fix: `vocab_size` field access in llava | 5 months ago |
| llava_next.py | acbdc50a71 | fix: `vocab_size` field access in llava | 5 months ago |
| medusa.py | d9f4c36edd | feat: Medusa speculative decoding support (#590) | 5 months ago |
| minicpm.py | 0f4a9ee77b | quantized lm_head (#582) | 5 months ago |
| mixtral.py | 00503b9fc1 | feat: non-uniform quantization via `compressed-tensors` for llama | 5 months ago |
| mixtral_quant.py | 0f4a9ee77b | quantized lm_head (#582) | 5 months ago |
| mlp_speculator.py | db73f03cdc | fix: use ParallelLMHead for MLPSpeculator | 5 months ago |
| mpt.py | 0f4a9ee77b | quantized lm_head (#582) | 5 months ago |
| olmo.py | 0f4a9ee77b | quantized lm_head (#582) | 5 months ago |
| opt.py | 0f4a9ee77b | quantized lm_head (#582) | 5 months ago |
| orion.py | 0f4a9ee77b | quantized lm_head (#582) | 5 months ago |
| paligemma.py | 05e45aeb53 | fix: dtype mismatch for paligemma | 5 months ago |
| persimmon.py | e13a66925c | feat: add fuyu vision model and persimmon language model support | 5 months ago |
| phi.py | 0f4a9ee77b | quantized lm_head (#582) | 5 months ago |
| phi3_small.py | 0f4a9ee77b | quantized lm_head (#582) | 5 months ago |
| phi3v.py | ad68d149d8 | chore: refactor and decouple phi3v image embedding | 5 months ago |
| qwen.py | 0f4a9ee77b | quantized lm_head (#582) | 5 months ago |
| qwen2.py | fb4c01740c | feat: add asymmetric TP support for Qwen2 | 5 months ago |
| qwen2_moe.py | 1efd0f89b7 | feat: support FP8 for DeepSeekV2 MoE | 5 months ago |
| stablelm.py | 0f4a9ee77b | quantized lm_head (#582) | 5 months ago |
| starcoder2.py | 0f4a9ee77b | quantized lm_head (#582) | 5 months ago |
| utils.py | 00503b9fc1 | feat: non-uniform quantization via `compressed-tensors` for llama | 5 months ago |
| xverse.py | 0f4a9ee77b | quantized lm_head (#582) | 5 months ago |