| Author | Commit | Message | Date |
| --- | --- | --- | --- |
| AlpinDale | 0f4a9ee77b | quantized lm_head (#582) | 5 months ago |
| AlpinDale | ae04f57ec1 | feat: Pipeline Parallel support (#581) | 5 months ago |
| AlpinDale | c5d8028668 | fix: no need to redefine supports_vision and supports_lora in model class | 5 months ago |
| AlpinDale | 56e0b8223c | chore: add base class for LoRA-supported models | 5 months ago |
| AlpinDale | 656459fd84 | make fp8_e4m3 work on nvidia | 6 months ago |
| AlpinDale | 8077af0b2f | add lora support for phi | 6 months ago |
| AlpinDale | 50b7c13db0 | refactor: attention selector (#552) | 6 months ago |
| AlpinDale | 9fba7f1d36 | remove quant_config from a few legacy models | 6 months ago |
| AlpinDale | b178ae4b4a | chore: generalize linear_method to be quant_method (#540) | 6 months ago |
| AlpinDale | fca911ee0a | vLLM Upstream Sync (#526) | 6 months ago |
| AlpinDale | 9d81716bfd | [v0.5.3] Release Candidate (#388) | 8 months ago |
| AlpinDale | f8dfac6372 | chore: attention refactor and upstream sync apr01 (#365) | 9 months ago |
| AlpinDale | da223153c6 | feat&fix: cohere support and missing GPU blocks (#333) | 10 months ago |
| AlpinDale | e42a78381a | feat: switch from pylint to ruff (#322) | 10 months ago |
| AlpinDale | e31c6f0b45 | feat: refactor modeling logic and support more models (#274) | 11 months ago |
| AlpinDale | 842912d022 | feat: on-the-fly gguf conversion (#250) | 11 months ago |
| AlpinDale | c3a221eb02 | feat: GGUF, QuIP#, and Marlin support (#228) | 1 year ago |