Commit History

Author SHA1 Message Date
  AlpinDale fb4c01740c feat: add asymmetric TP support for Qwen2 5 months ago
  AlpinDale 9d7beaa5b9 chore: separate kv_scale into k_scale and v_scale 5 months ago
  AlpinDale 0f4a9ee77b quantized lm_head (#582) 5 months ago
  AlpinDale ae04f57ec1 feat: Pipeline Parallel support (#581) 6 months ago
  AlpinDale c5d8028668 fix: no need to redefine supports_vision and supports_lora in model class 6 months ago
  AlpinDale 56e0b8223c chore: add base class for LoRA-supported models 6 months ago
  AlpinDale 025322ee5f fix: fp8 kv cache for qwen2 models 6 months ago
  AlpinDale ac79d115b3 add guards for prefix caching, fp8, chunked, etc 6 months ago
  AlpinDale 656459fd84 make fp8_e4m3 work on nvidia 6 months ago
  AlpinDale 295cfb2f39 add rope scaling for qwen2 6 months ago
  AlpinDale 50b7c13db0 refactor: attention selector (#552) 6 months ago
  AlpinDale b178ae4b4a chore: generalize linear_method to be quant_method (#540) 6 months ago
  AlpinDale fca911ee0a vLLM Upstream Sync (#526) 7 months ago
  AlpinDale 9d81716bfd [v0.5.3] Release Candidate (#388) 9 months ago
  AlpinDale f8dfac6372 chore: attention refactor and upstream sync apr01 (#365) 10 months ago
  AlpinDale da223153c6 feat&fix: cohere support and missing GPU blocks (#333) 11 months ago
  AlpinDale e42a78381a feat: switch from pylint to ruff (#322) 11 months ago
  AlpinDale e31c6f0b45 feat: refactor modeling logic and support more models (#274) 11 months ago