| Name | Commit | Commit message | Last updated |
|---|---|---|---|
| compressed_tensors | 058e629f8e | chore: refactor marlin python utils | 6 months ago |
| gguf_utils | 9d81716bfd | [v0.5.3] Release Candidate (#388) | 10 months ago |
| utils | 058e629f8e | chore: refactor marlin python utils | 6 months ago |
| __init__.py | 517676249c | chore: update the compressed-tensors config | 7 months ago |
| aqlm.py | 156f577f79 | feat: switch from `PYBIND11_MODULE` to `TORCH_LIBRARY` (#569) | 7 months ago |
| autoquant.py | 156f577f79 | feat: switch from `PYBIND11_MODULE` to `TORCH_LIBRARY` (#569) | 7 months ago |
| awq.py | 17f7089e26 | fix: `get_min_capability` for all quants | 7 months ago |
| base_config.py | 0f4a9ee77b | quantized lm_head (#582) | 6 months ago |
| bitsandbytes.py | 17f7089e26 | fix: `get_min_capability` for all quants | 7 months ago |
| deepspeedfp.py | 4acf34417a | feat: add DeepSpeedFP quantization for all models | 7 months ago |
| eetq.py | 17f7089e26 | fix: `get_min_capability` for all quants | 7 months ago |
| exl2.py | 156f577f79 | feat: switch from `PYBIND11_MODULE` to `TORCH_LIBRARY` (#569) | 7 months ago |
| fp8.py | 058e629f8e | chore: refactor marlin python utils | 6 months ago |
| gguf.py | 17f7089e26 | fix: `get_min_capability` for all quants | 7 months ago |
| gptq.py | 0f4a9ee77b | quantized lm_head (#582) | 6 months ago |
| gptq_marlin.py | 058e629f8e | chore: refactor marlin python utils | 6 months ago |
| gptq_marlin_24.py | 156f577f79 | feat: switch from `PYBIND11_MODULE` to `TORCH_LIBRARY` (#569) | 7 months ago |
| hadamard.safetensors | 9d81716bfd | [v0.5.3] Release Candidate (#388) | 10 months ago |
| marlin.py | 0f4a9ee77b | quantized lm_head (#582) | 6 months ago |
| quip.py | 156f577f79 | feat: switch from `PYBIND11_MODULE` to `TORCH_LIBRARY` (#569) | 7 months ago |
| quip_utils.py | 9d81716bfd | [v0.5.3] Release Candidate (#388) | 10 months ago |
| schema.py | 9d81716bfd | [v0.5.3] Release Candidate (#388) | 10 months ago |
| squeezellm.py | 17f7089e26 | fix: `get_min_capability` for all quants | 7 months ago |