Commit History

Author  SHA1  Message  Date
  AlpinDale 4d4e767838 ci: take one of fixing lint issues  4 months ago
  AlpinDale 0e6c400b13 feat: re-add GGUF (#600)  4 months ago
  AlpinDale c9310eeb02 fix: skip loading lm_head for tie_word_embeddings models  4 months ago
  AlpinDale 1441326ac8 fix: cleanup minicpm-v and port na_vit model  4 months ago
  AlpinDale e348ca3540 feat: add support for MiniCPM-V  4 months ago
  AlpinDale cb44c8daa8 feat: support FP8 KV Cache scales from compressed-tensors  4 months ago
  AlpinDale 00503b9fc1 feat: non-uniform quantization via `compressed-tensors` for llama  4 months ago
  AlpinDale 0429cb2229 fix: only create embeddings and lm_head when necessary for PP  4 months ago
  AlpinDale 5289c14b24 feat: Asymmetric Tensor Parallel (#594)  4 months ago
  AlpinDale 9d7beaa5b9 chore: separate kv_scale into k_scale and v_scale  4 months ago
  AlpinDale 497bf64942 chore: simplify pipeline parallel code in llama  4 months ago
  AlpinDale 0f4a9ee77b quantized lm_head (#582)  4 months ago
  AlpinDale ae04f57ec1 feat: Pipeline Parallel support (#581)  4 months ago
  AlpinDale c5d8028668 fix: no need to redefine supports_vision and supports_lora in model class  5 months ago
  AlpinDale 56e0b8223c chore: add base class for LoRA-supported models  5 months ago
  AlpinDale 690110a051 feat: bitsandbytes quantization  5 months ago
  AlpinDale ac79d115b3 add guards for prefix caching, fp8, chunked, etc  5 months ago
  AlpinDale f4ea11b982 feat: initial support for activation quantization  5 months ago
  AlpinDale c1ed789835 fix: typo in llama.py  5 months ago
  AlpinDale 656459fd84 make fp8_e4m3 work on nvidia  5 months ago
  AlpinDale 9e73559eba make use of batched rotary embedding kernels to support long context lora  5 months ago
  AlpinDale 2ecfa98da9 re-fix mistral nemo  5 months ago
  AlpinDale 50b7c13db0 refactor: attention selector (#552)  5 months ago
  AlpinDale 54a4cef647 add bias and tie word embedding support for llama  5 months ago
  AlpinDale 639e48e47d fix: mistral nemo  5 months ago
  AlpinDale b178ae4b4a chore: generalize linear_method to be quant_method (#540)  5 months ago
  AlpinDale e7b1368156 feat: Phi3 support  6 months ago
  AlpinDale fca911ee0a vLLM Upstream Sync (#526)  6 months ago
  AlpinDale 9d81716bfd [v0.5.3] Release Candidate (#388)  8 months ago
  AlpinDale f8dfac6372 chore: attention refactor and upstream sync apr01 (#365)  9 months ago