Commit History

Author     SHA1        Message                                                       Date
AlpinDale  c9310eeb02  fix: skip loading lm_head for tie_word_embeddings models      4 months ago
AlpinDale  0f4a9ee77b  quantized lm_head (#582)                                      5 months ago
AlpinDale  ae04f57ec1  feat: Pipeline Parallel support (#581)                        5 months ago
AlpinDale  656459fd84  make fp8_e4m3 work on nvidia                                  6 months ago
AlpinDale  50b7c13db0  refactor: attention selector (#552)                           6 months ago
AlpinDale  b178ae4b4a  chore: generalize linear_method to be quant_method (#540)     6 months ago
AlpinDale  fca911ee0a  vLLM Upstream Sync (#526)                                     6 months ago
AlpinDale  9d81716bfd  [v0.5.3] Release Candidate (#388)                             8 months ago
AlpinDale  f8dfac6372  chore: attention refactor and upstream sync apr01 (#365)      9 months ago
AlpinDale  e42a78381a  feat: switch from pylint to ruff (#322)                       10 months ago
AlpinDale  c41462cfcd  feat: exllamav2 quantization (#305)                           10 months ago
AlpinDale  e31c6f0b45  feat: refactor modeling logic and support more models (#274)  11 months ago