Commit History

Author     SHA1        Message                                                                    Date
AlpinDale  0f4a9ee77b  quantized lm_head (#582)                                                   5 months ago
AlpinDale  ae04f57ec1  feat: Pipeline Parallel support (#581)                                     5 months ago
AlpinDale  c5d8028668  fix: no need to redefine supports_vision and supports_lora in model class  5 months ago
AlpinDale  56e0b8223c  chore: add base class for LoRA-supported models                            5 months ago
AlpinDale  ac79d115b3  add guards for prefix caching, fp8, chunked, etc                           6 months ago
AlpinDale  656459fd84  make fp8_e4m3 work on nvidia                                               6 months ago
AlpinDale  50b7c13db0  refactor: attention selector (#552)                                        6 months ago
AlpinDale  b178ae4b4a  chore: generalize linear_method to be quant_method (#540)                  6 months ago
AlpinDale  fca911ee0a  vLLM Upstream Sync (#526)                                                  6 months ago