| Author | Commit | Message | Date |
|---|---|---|---|
| AlpinDale | 842912d022 | feat: on-the-fly gguf conversion (#250) | 11 months ago |
| AlpinDale | d9b65e6c5f | feat: DeepSeek MoE support (#237) | 11 months ago |
| AlpinDale | c3a221eb02 | feat: GGUF, QuIP#, and Marlin support (#228) | 11 months ago |
| AlpinDale | 8fa608aeb7 | feat: replace Ray with NCCL for control plane comms (#221) | 11 months ago |
| AlpinDale | 97f37c1cb2 | fix: use tensor parallel for quantized mixtral (#213) | 11 months ago |
| AlpinDale | 193287b2ef | fix: mixtral unused import | 1 year ago |
| AlpinDale | 53d391e1f2 | merge 'dev' into 'main' | 1 year ago |
| AlpinDale | 7e72ce0a73 | feat: mixtral tensor parallelism (#193) | 1 year ago |
| AlpinDale | b9b295d74e | chore: backlogs 1 (#191) | 1 year ago |
| AlpinDale | f013d714c0 | chore: merge dev branch into main (#177) | 1 year ago |
| g4rg | fe57bb7ad2 | feat: add rope scaling to mixtral (#174) | 1 year ago |
| AlpinDale | 7d91e9e0f2 | feat: CUDA graphs (#172) | 1 year ago |
| AlpinDale | 725be3e0de | feat: mixtral HF with expert parallelism (#167) | 1 year ago |
| AlpinDale | 730357c7d5 | chore: implement lazy module loader for models (#165) | 1 year ago |
| AlpinDale | 2755a48d51 | merge dev branch into main (#153) | 1 year ago |
| AlpinDale | 87277c76e4 | feat: Mixtral 8x7B support (#155) | 1 year ago |