Commit history

Author     SHA1        Message                                                                          Date
AlpinDale  0e6c400b13  feat: re-add GGUF (#600)                                                         5 months ago
AlpinDale  9a50e3b4eb  refactor: minicpmv and port Idefix2VisionTransformer                             5 months ago
AlpinDale  6b1fdd07bd  chore: add isort and refactor formatting script and utils                        5 months ago
AlpinDale  2226c1b7bd  fix: replicatedlinear weight loading                                             5 months ago
AlpinDale  ba371fbbbd  feat: AWQ marlin kernels (#603)                                                  5 months ago
AlpinDale  08373fd1ee  fix: asymmetric TP changes breaking the gptq and awq quants (#602)               5 months ago
AlpinDale  9be43994fe  feat: fbgemm quantization support (#601)                                         5 months ago
AlpinDale  6600c082bc  chore: pass bias to quant_method.apply                                           5 months ago
AlpinDale  00503b9fc1  feat: non-uniform quantization via `compressed-tensors` for llama                5 months ago
AlpinDale  5289c14b24  feat: Asymmetric Tensor Parallel (#594)                                          5 months ago
AlpinDale  9d7beaa5b9  chore: separate kv_scale into k_scale and v_scale                                5 months ago
AlpinDale  d2f38f6f81  chore: remove separate bias add                                                  5 months ago
AlpinDale  6abf4e3883  fix: needs_scalar_to_array logic check in linear layer                           5 months ago
AlpinDale  ddb3323f94  refactor: have w8a8 compressed tensors use `process_weights_after_load` for fp8  5 months ago
AlpinDale  272c64ab88  chore: allow loading fp8 models with fused qkv/mlp                               5 months ago
AlpinDale  772a126c08  chore: simplify fp8 weight loading                                               5 months ago
AlpinDale  9b4c72a801  feat: support channel-wise quant for w8a8 dynamic per token activation quant     5 months ago
AlpinDale  690110a051  feat: bitsandbytes quantization                                                  6 months ago
AlpinDale  f4ea11b982  feat: initial support for activation quantization                                6 months ago
AlpinDale  6fc1ec6e9a  fix redirects and improve low level debugging                                    6 months ago
AlpinDale  7d23892501  static and dynamic fp8                                                           6 months ago
AlpinDale  b178ae4b4a  chore: generalize linear_method to be quant_method (#540)                        6 months ago
AlpinDale  fca911ee0a  vLLM Upstream Sync (#526)                                                        7 months ago
AlpinDale  9d81716bfd  [v0.5.3] Release Candidate (#388)                                                8 months ago
AlpinDale  e42a78381a  feat: switch from pylint to ruff (#322)                                          10 months ago
AlpinDale  c2d77b1822  chore: logging refactor (#302)                                                   11 months ago
AlpinDale  705821a7fe  feat: AQLM quantization support (#293)                                           11 months ago
AlpinDale  72229a94da  feat: better marlin kernels (#285)                                               11 months ago
AlpinDale  ea0f57b233  feat: allow further support for non-cuda devices (#247)                          1 year ago
AlpinDale  c3a221eb02  feat: GGUF, QuIP#, and Marlin support (#228)                                     1 year ago