AlpinDale 4d4e767838 ci: take one of fixing lint issues 4 months ago

__init__.py                 6b1fdd07bd  chore: add isort and refactor formatting script and utils                                 4 months ago
arctic.py                   0f4a9ee77b  quantized lm_head (#582)                                                                  4 months ago
baichuan.py                 0f4a9ee77b  quantized lm_head (#582)                                                                  4 months ago
blip.py                     9f010deb8a  feat: add blip-2 support                                                                  4 months ago
blip2.py                    9f010deb8a  feat: add blip-2 support                                                                  4 months ago
bloom.py                    4d4e767838  ci: take one of fixing lint issues                                                        4 months ago
chameleon.py                c9310eeb02  fix: skip loading lm_head for tie_word_embeddings models                                  4 months ago
chatglm.py                  0f4a9ee77b  quantized lm_head (#582)                                                                  4 months ago
clip.py                     e26a4ac698  chore: avoid loading the unused layers and init the VLM up to the required feature space  4 months ago
commandr.py                 0f4a9ee77b  quantized lm_head (#582)                                                                  4 months ago
dbrx.py                     0f4a9ee77b  quantized lm_head (#582)                                                                  4 months ago
decilm.py                   56e0b8223c  chore: add base class for LoRA-supported models                                           5 months ago
deepseek.py                 0f4a9ee77b  quantized lm_head (#582)                                                                  4 months ago
deepseek_v2.py              1efd0f89b7  feat: support FP8 for DeepSeekV2 MoE                                                      4 months ago
falcon.py                   4d4e767838  ci: take one of fixing lint issues                                                        4 months ago
fuyu.py                     9e9515f39a  fix: feature size calculation for Llava-next                                              4 months ago
gemma.py                    4d4e767838  ci: take one of fixing lint issues                                                        4 months ago
gemma2.py                   4d4e767838  ci: take one of fixing lint issues                                                        4 months ago
gpt2.py                     4d4e767838  ci: take one of fixing lint issues                                                        4 months ago
gpt_bigcode.py              4d4e767838  ci: take one of fixing lint issues                                                        4 months ago
gpt_j.py                    0f4a9ee77b  quantized lm_head (#582)                                                                  4 months ago
gpt_neox.py                 0f4a9ee77b  quantized lm_head (#582)                                                                  4 months ago
idefics2_vision_model.py    9a50e3b4eb  refactor: minicpmv and port Idefix2VisionTransformer                                      4 months ago
interfaces.py               e76bbe72eb  chore: handle aborted requests for jamba                                                  4 months ago
intern_vit.py               92987963a4  fix: RMSNorm forward in InternViT attention qk_layernorm                                  4 months ago
internlm2.py                9cf1275f03  feat: add internvl support                                                                4 months ago
internvl.py                 9e9515f39a  fix: feature size calculation for Llava-next                                              4 months ago
jais.py                     4d4e767838  ci: take one of fixing lint issues                                                        4 months ago
jamba.py                    f83eb07fd1  feat: use FusedMoE for jamba                                                              4 months ago
llama.py                    4d4e767838  ci: take one of fixing lint issues                                                        4 months ago
llama_embedding.py          50b7c13db0  refactor: attention selector (#552)                                                       5 months ago
llava.py                    acbdc50a71  fix: `vocab_size` field access in llava                                                   4 months ago
llava_next.py               9e9515f39a  fix: feature size calculation for Llava-next                                              4 months ago
medusa.py                   d9f4c36edd  feat: Medusa speculative decoding support (#590)                                          4 months ago
minicpm.py                  c9310eeb02  fix: skip loading lm_head for tie_word_embeddings models                                  4 months ago
minicpmv.py                 9a50e3b4eb  refactor: minicpmv and port Idefix2VisionTransformer                                      4 months ago
mixtral.py                  00503b9fc1  feat: non-uniform quantization via `compressed-tensors` for llama                         4 months ago
mixtral_quant.py            0f4a9ee77b  quantized lm_head (#582)                                                                  4 months ago
mlp_speculator.py           db73f03cdc  fix: use ParallelLMHead for MLPSpeculator                                                 4 months ago
mpt.py                      4d4e767838  ci: take one of fixing lint issues                                                        4 months ago
na_vit.py                   9a50e3b4eb  refactor: minicpmv and port Idefix2VisionTransformer                                      4 months ago
nemotron.py                 18b45266bb  feat: add nemotron HF support (#606)                                                      4 months ago
olmo.py                     c9310eeb02  fix: skip loading lm_head for tie_word_embeddings models                                  4 months ago
opt.py                      4d4e767838  ci: take one of fixing lint issues                                                        4 months ago
orion.py                    0f4a9ee77b  quantized lm_head (#582)                                                                  4 months ago
paligemma.py                c3ee71a437  feat: port SiglipVisionModel from transformers                                            4 months ago
persimmon.py                e13a66925c  feat: add fuyu vision model and persimmon language model support                          4 months ago
phi.py                      0f4a9ee77b  quantized lm_head (#582)                                                                  4 months ago
phi3_small.py               0f4a9ee77b  quantized lm_head (#582)                                                                  4 months ago
phi3v.py                    9e9515f39a  fix: feature size calculation for Llava-next                                              4 months ago
qwen.py                     0f4a9ee77b  quantized lm_head (#582)                                                                  4 months ago
qwen2.py                    d357341203  chore: add pipeline parallel support for Qwen                                             4 months ago
qwen2_moe.py                d357341203  chore: add pipeline parallel support for Qwen                                             4 months ago
siglip.py                   4d4e767838  ci: take one of fixing lint issues                                                        4 months ago
stablelm.py                 0f4a9ee77b  quantized lm_head (#582)                                                                  4 months ago
starcoder2.py               0f4a9ee77b  quantized lm_head (#582)                                                                  4 months ago
utils.py                    165a3aa7b3  fix: fp8 marlin and cpu offloading with fp8 marlin                                        4 months ago
xverse.py                   0f4a9ee77b  quantized lm_head (#582)                                                                  4 months ago