| File | Commit | Message | Last changed |
|---|---|---|---|
| __init__.py | fa15bad2ea | chore: minor AMD fixes | 5 months ago |
| arctic.py | 0f4a9ee77b | quantized lm_head (#582) | 5 months ago |
| baichuan.py | 0f4a9ee77b | quantized lm_head (#582) | 5 months ago |
| bloom.py | 0f4a9ee77b | quantized lm_head (#582) | 5 months ago |
| chameleon.py | a0d031efcc | feat: initial text-to-text support for Chameleon model | 5 months ago |
| chatglm.py | 0f4a9ee77b | quantized lm_head (#582) | 5 months ago |
| clip.py | e26a4ac698 | chore: avoid loading the unused layers and init the VLM up to the required feature space | 5 months ago |
| commandr.py | 0f4a9ee77b | quantized lm_head (#582) | 5 months ago |
| dbrx.py | 0f4a9ee77b | quantized lm_head (#582) | 5 months ago |
| decilm.py | 56e0b8223c | chore: add base class for LoRA-supported models | 6 months ago |
| deepseek.py | 0f4a9ee77b | quantized lm_head (#582) | 5 months ago |
| deepseek_v2.py | 1efd0f89b7 | feat: support FP8 for DeepSeekV2 MoE | 5 months ago |
| falcon.py | 0f4a9ee77b | quantized lm_head (#582) | 5 months ago |
| fuyu.py | e13a66925c | feat: add fuyu vision model and persimmon language model support | 5 months ago |
| gemma.py | 05e45aeb53 | fix: dtype mismatch for paligemma | 5 months ago |
| gemma2.py | 5761ef8c35 | feat: gemma-2 support | 5 months ago |
| gpt2.py | 0f4a9ee77b | quantized lm_head (#582) | 5 months ago |
| gpt_bigcode.py | 0f4a9ee77b | quantized lm_head (#582) | 5 months ago |
| gpt_j.py | 0f4a9ee77b | quantized lm_head (#582) | 5 months ago |
| gpt_neox.py | 0f4a9ee77b | quantized lm_head (#582) | 5 months ago |
| interfaces.py | e76bbe72eb | chore: handle aborted requests for jamba | 5 months ago |
| internlm2.py | 0f4a9ee77b | quantized lm_head (#582) | 5 months ago |
| jais.py | 0f4a9ee77b | quantized lm_head (#582) | 5 months ago |
| jamba.py | f5d52320da | Port mamba kernels to Aphrodite (#595) | 5 months ago |
| llama.py | 00503b9fc1 | feat: non-uniform quantization via `compressed-tensors` for llama | 5 months ago |
| llama_embedding.py | 50b7c13db0 | refactor: attention selector (#552) | 6 months ago |
| llava.py | acbdc50a71 | fix: `vocab_size` field access in llava | 5 months ago |
| llava_next.py | acbdc50a71 | fix: `vocab_size` field access in llava | 5 months ago |
| medusa.py | d9f4c36edd | feat: Medusa speculative decoding support (#590) | 5 months ago |
| minicpm.py | 0f4a9ee77b | quantized lm_head (#582) | 5 months ago |
| mixtral.py | 00503b9fc1 | feat: non-uniform quantization via `compressed-tensors` for llama | 5 months ago |
| mixtral_quant.py | 0f4a9ee77b | quantized lm_head (#582) | 5 months ago |
| mlp_speculator.py | db73f03cdc | fix: use ParallelLMHead for MLPSpeculator | 5 months ago |
| mpt.py | 0f4a9ee77b | quantized lm_head (#582) | 5 months ago |
| olmo.py | 0f4a9ee77b | quantized lm_head (#582) | 5 months ago |
| opt.py | 0f4a9ee77b | quantized lm_head (#582) | 5 months ago |
| orion.py | 0f4a9ee77b | quantized lm_head (#582) | 5 months ago |
| paligemma.py | 05e45aeb53 | fix: dtype mismatch for paligemma | 5 months ago |
| persimmon.py | e13a66925c | feat: add fuyu vision model and persimmon language model support | 5 months ago |
| phi.py | 0f4a9ee77b | quantized lm_head (#582) | 5 months ago |
| phi3_small.py | 0f4a9ee77b | quantized lm_head (#582) | 5 months ago |
| phi3v.py | ad68d149d8 | chore: refactor and decouple phi3v image embedding | 5 months ago |
| qwen.py | 0f4a9ee77b | quantized lm_head (#582) | 5 months ago |
| qwen2.py | fb4c01740c | feat: add asymmetric TP support for Qwen2 | 5 months ago |
| qwen2_moe.py | 1efd0f89b7 | feat: support FP8 for DeepSeekV2 MoE | 5 months ago |
| stablelm.py | 0f4a9ee77b | quantized lm_head (#582) | 5 months ago |
| starcoder2.py | 0f4a9ee77b | quantized lm_head (#582) | 5 months ago |
| utils.py | 00503b9fc1 | feat: non-uniform quantization via `compressed-tensors` for llama | 5 months ago |
| xverse.py | 0f4a9ee77b | quantized lm_head (#582) | 5 months ago |