AlpinDale 4d4e767838 ci: take one of fixing lint issues 4 months ago
__init__.py 6b1fdd07bd chore: add isort and refactor formatting script and utils 4 months ago
arctic.py 0f4a9ee77b quantized lm_head (#582) 4 months ago
baichuan.py 0f4a9ee77b quantized lm_head (#582) 4 months ago
blip.py 9f010deb8a feat: add blip-2 support 4 months ago
blip2.py 9f010deb8a feat: add blip-2 support 4 months ago
bloom.py 4d4e767838 ci: take one of fixing lint issues 4 months ago
chameleon.py c9310eeb02 fix: skip loading lm_head for tie_word_embeddings models 4 months ago
chatglm.py 0f4a9ee77b quantized lm_head (#582) 4 months ago
clip.py e26a4ac698 chore: avoid loading the unused layers and init the VLM up to the required feature space 4 months ago
commandr.py 0f4a9ee77b quantized lm_head (#582) 4 months ago
dbrx.py 0f4a9ee77b quantized lm_head (#582) 4 months ago
decilm.py 56e0b8223c chore: add base class for LoRA-supported models 5 months ago
deepseek.py 0f4a9ee77b quantized lm_head (#582) 4 months ago
deepseek_v2.py 1efd0f89b7 feat: support FP8 for DeepSeekV2 MoE 4 months ago
falcon.py 4d4e767838 ci: take one of fixing lint issues 4 months ago
fuyu.py 9e9515f39a fix: feature size calculation for Llava-next 4 months ago
gemma.py 4d4e767838 ci: take one of fixing lint issues 4 months ago
gemma2.py 4d4e767838 ci: take one of fixing lint issues 4 months ago
gpt2.py 4d4e767838 ci: take one of fixing lint issues 4 months ago
gpt_bigcode.py 4d4e767838 ci: take one of fixing lint issues 4 months ago
gpt_j.py 0f4a9ee77b quantized lm_head (#582) 4 months ago
gpt_neox.py 0f4a9ee77b quantized lm_head (#582) 4 months ago
idefics2_vision_model.py 9a50e3b4eb refactor: minicpmv and port Idefix2VisionTransformer 4 months ago
interfaces.py e76bbe72eb chore: handle aborted requests for jamba 4 months ago
intern_vit.py 92987963a4 fix: RMSNorm forward in InternViT attention qk_layernorm 4 months ago
internlm2.py 9cf1275f03 feat: add internvl support 4 months ago
internvl.py 9e9515f39a fix: feature size calculation for Llava-next 4 months ago
jais.py 4d4e767838 ci: take one of fixing lint issues 4 months ago
jamba.py f83eb07fd1 feat: use FusedMoE for jamba 4 months ago
llama.py 4d4e767838 ci: take one of fixing lint issues 4 months ago
llama_embedding.py 50b7c13db0 refactor: attention selector (#552) 5 months ago
llava.py acbdc50a71 fix: `vocab_size` field access in llava 4 months ago
llava_next.py 9e9515f39a fix: feature size calculation for Llava-next 4 months ago
medusa.py d9f4c36edd feat: Medusa speculative decoding support (#590) 4 months ago
minicpm.py c9310eeb02 fix: skip loading lm_head for tie_word_embeddings models 4 months ago
minicpmv.py 9a50e3b4eb refactor: minicpmv and port Idefix2VisionTransformer 4 months ago
mixtral.py 00503b9fc1 feat: non-uniform quantization via `compressed-tensors` for llama 4 months ago
mixtral_quant.py 0f4a9ee77b quantized lm_head (#582) 4 months ago
mlp_speculator.py db73f03cdc fix: use ParallelLMHead for MLPSpeculator 4 months ago
mpt.py 4d4e767838 ci: take one of fixing lint issues 4 months ago
na_vit.py 9a50e3b4eb refactor: minicpmv and port Idefix2VisionTransformer 4 months ago
nemotron.py 18b45266bb feat: add nemotron HF support (#606) 4 months ago
olmo.py c9310eeb02 fix: skip loading lm_head for tie_word_embeddings models 4 months ago
opt.py 4d4e767838 ci: take one of fixing lint issues 4 months ago
orion.py 0f4a9ee77b quantized lm_head (#582) 4 months ago
paligemma.py c3ee71a437 feat: port SiglipVisionModel from transformers 4 months ago
persimmon.py e13a66925c feat: add fuyu vision model and persimmon language model support 4 months ago
phi.py 0f4a9ee77b quantized lm_head (#582) 4 months ago
phi3_small.py 0f4a9ee77b quantized lm_head (#582) 4 months ago
phi3v.py 9e9515f39a fix: feature size calculation for Llava-next 4 months ago
qwen.py 0f4a9ee77b quantized lm_head (#582) 4 months ago
qwen2.py d357341203 chore: add pipeline parallel support for Qwen 4 months ago
qwen2_moe.py d357341203 chore: add pipeline parallel support for Qwen 4 months ago
siglip.py 4d4e767838 ci: take one of fixing lint issues 4 months ago
stablelm.py 0f4a9ee77b quantized lm_head (#582) 4 months ago
starcoder2.py 0f4a9ee77b quantized lm_head (#582) 4 months ago
utils.py 165a3aa7b3 fix: fp8 marlin and cpu offloading with fp8 marlin 4 months ago
xverse.py 0f4a9ee77b quantized lm_head (#582) 4 months ago