Commit History

| Author | SHA1 | Message | Date |
|--------|------|---------|------|
| AlpinDale | fb4c01740c | feat: add asymmetric TP support for Qwen2 | 5 months ago |
| AlpinDale | 9d7beaa5b9 | chore: separate kv_scale into k_scale and v_scale | 5 months ago |
| AlpinDale | 0f4a9ee77b | quantized lm_head (#582) | 5 months ago |
| AlpinDale | ae04f57ec1 | feat: Pipeline Parallel support (#581) | 6 months ago |
| AlpinDale | c5d8028668 | fix: no need to redefine supports_vision and supports_lora in model class | 6 months ago |
| AlpinDale | 56e0b8223c | chore: add base class for LoRA-supported models | 6 months ago |
| AlpinDale | 025322ee5f | fix: fp8 kv cache for qwen2 models | 6 months ago |
| AlpinDale | ac79d115b3 | add guards for prefix caching, fp8, chunked, etc | 6 months ago |
| AlpinDale | 656459fd84 | make fp8_e4m3 work on nvidia | 6 months ago |
| AlpinDale | 295cfb2f39 | add rope scaling for qwen2 | 6 months ago |
| AlpinDale | 50b7c13db0 | refactor: attention selector (#552) | 6 months ago |
| AlpinDale | b178ae4b4a | chore: generalize linear_method to be quant_method (#540) | 6 months ago |
| AlpinDale | fca911ee0a | vLLM Upstream Sync (#526) | 7 months ago |
| AlpinDale | 9d81716bfd | [v0.5.3] Release Candidate (#388) | 9 months ago |
| AlpinDale | f8dfac6372 | chore: attention refactor and upstream sync apr01 (#365) | 10 months ago |
| AlpinDale | da223153c6 | feat&fix: cohere support and missing GPU blocks (#333) | 11 months ago |
| AlpinDale | e42a78381a | feat: switch from pylint to ruff (#322) | 11 months ago |
| AlpinDale | e31c6f0b45 | feat: refactor modeling logic and support more models (#274) | 11 months ago |