Author | Commit | Message | Date
--- | --- | --- | ---
AlpinDale | 00503b9fc1 | feat: non-uniform quantization via `compressed-tensors` for llama | 5 months ago
AlpinDale | 0429cb2229 | fix: only create embeddings and lm_head when necessary for PP | 5 months ago
AlpinDale | 5289c14b24 | feat: Asymmetric Tensor Parallel (#594) | 5 months ago
AlpinDale | 9d7beaa5b9 | chore: separate kv_scale into k_scale and v_scale | 5 months ago
AlpinDale | 497bf64942 | chore: simplify pipeline parallel code in llama | 5 months ago
AlpinDale | 0f4a9ee77b | quantized lm_head (#582) | 5 months ago
AlpinDale | ae04f57ec1 | feat: Pipeline Parallel support (#581) | 6 months ago
AlpinDale | c5d8028668 | fix: no need to redefine supports_vision and supports_lora in model class | 6 months ago
AlpinDale | 56e0b8223c | chore: add base class for LoRA-supported models | 6 months ago
AlpinDale | 690110a051 | feat: bitsandbytes quantization | 6 months ago
AlpinDale | ac79d115b3 | add guards for prefix caching, fp8, chunked, etc | 6 months ago
AlpinDale | f4ea11b982 | feat: initial support for activation quantization | 6 months ago
AlpinDale | c1ed789835 | fix: typo in llama.py | 6 months ago
AlpinDale | 656459fd84 | make fp8_e4m3 work on nvidia | 6 months ago
AlpinDale | 9e73559eba | make use of batched rotary embedding kernels to support long context lora | 6 months ago
AlpinDale | 2ecfa98da9 | re-fix mistral nemo | 6 months ago
AlpinDale | 50b7c13db0 | refactor: attention selector (#552) | 6 months ago
AlpinDale | 54a4cef647 | add bias and tie word embedding support for llama | 6 months ago
AlpinDale | 639e48e47d | fix: mistral nemo | 6 months ago
AlpinDale | b178ae4b4a | chore: generalize linear_method to be quant_method (#540) | 6 months ago
AlpinDale | e7b1368156 | feat: Phi3 support | 7 months ago
AlpinDale | fca911ee0a | vLLM Upstream Sync (#526) | 7 months ago
AlpinDale | 9d81716bfd | [v0.5.3] Release Candidate (#388) | 9 months ago
AlpinDale | f8dfac6372 | chore: attention refactor and upstream sync apr01 (#365) | 10 months ago
sgsdxzy | 6ebac34dc1 | chore: cleaner pre-llamafied Yi implementation (#352) | 10 months ago
AlpinDale | 681e94611f | fix: restore backwards compatibility with old Yi models (#351) | 10 months ago
AlpinDale | da223153c6 | feat&fix: cohere support and missing GPU blocks (#333) | 11 months ago
AlpinDale | e42a78381a | feat: switch from pylint to ruff (#322) | 11 months ago
AlpinDale | 9810daa699 | feat: INT8 KV Cache (#298) | 11 months ago
AlpinDale | e31c6f0b45 | feat: refactor modeling logic and support more models (#274) | 11 months ago