Commit History

Author  SHA1  Message  Date
AlpinDale  00503b9fc1  feat: non-uniform quantization via `compressed-tensors` for llama  4 months ago
AlpinDale  6c4c20652b  feat: pipeline parallel support for mixtral  4 months ago
AlpinDale  9d7beaa5b9  chore: separate kv_scale into k_scale and v_scale  4 months ago
AlpinDale  1efd0f89b7  feat: support FP8 for DeepSeekV2 MoE  4 months ago
AlpinDale  9622c59f8f  chore: support 2D input shape in MoE layer  4 months ago
AlpinDale  0f4a9ee77b  quantized lm_head (#582)  4 months ago
AlpinDale  cf472315cc  refactor: isolate FP8 from mixtral  4 months ago
AlpinDale  ae04f57ec1  feat: Pipeline Parallel support (#581)  4 months ago
AlpinDale  c5d8028668  fix: no need to redefine supports_vision and supports_lora in model class  5 months ago
AlpinDale  56e0b8223c  chore: add base class for LoRA-supported models  5 months ago
AlpinDale  ab5ffb228c  fp8: `act_scale` -> `input_scale`  5 months ago
AlpinDale  39b36efabf  fix: mixtral fp8 ckpt loading  5 months ago
AlpinDale  67084aca5b  do not build cutlass kernels if cuda version is too low  5 months ago
AlpinDale  ac79d115b3  add guards for prefix caching, fp8, chunked, etc  5 months ago
AlpinDale  656459fd84  make fp8_e4m3 work on nvidia  5 months ago
AlpinDale  d7c0dd5b50  fix: do not set the weight to fp8 for fp16 checkpoints  5 months ago
AlpinDale  50b7c13db0  refactor: attention selector (#552)  5 months ago
AlpinDale  40a59cca1d  support fp8 mixtral checkpoints with static weights and dynamic/statics acts  5 months ago
AlpinDale  a12a1bbf82  fix mixtral for non-cuda devices  5 months ago
AlpinDale  36660b55c2  chore: mixtral fp8 w/ static scales (#542)  5 months ago
AlpinDale  b178ae4b4a  chore: generalize linear_method to be quant_method (#540)  5 months ago
AlpinDale  46159b107a  formatting: pt1  6 months ago
AlpinDale  fca911ee0a  vLLM Upstream Sync (#526)  6 months ago
AlpinDale  9d81716bfd  [v0.5.3] Release Candidate (#388)  8 months ago
AlpinDale  f8dfac6372  chore: attention refactor and upstream sync apr01 (#365)  9 months ago
AlpinDale  da223153c6  feat&fix: cohere support and missing GPU blocks (#333)  10 months ago
AlpinDale  e42a78381a  feat: switch from pylint to ruff (#322)  10 months ago
AlpinDale  e31c6f0b45  feat: refactor modeling logic and support more models (#274)  10 months ago
AlpinDale  7d6ba53602  feat: fused top-k kernels for MoE (#273)  10 months ago
AlpinDale  224b87b484  feat: add fused mixtral moe support (#238)  10 months ago