Latest commit: cd9ed8623b by AlpinDale, "fix: cuda version check for fp8 support in the cutlass kernels" (7 months ago)

====================  ==========  ==============================================================  =============
File                  Commit      Last commit message                                             Last changed
====================  ==========  ==============================================================  =============
compressed_tensors    cd9ed8623b  fix: cuda version check for fp8 support in the cutlass kernels  7 months ago
gguf_utils            9d81716bfd  [v0.5.3] Release Candidate (#388)                               10 months ago
__init__.py           517676249c  chore: update the compressed-tensors config                     7 months ago
aqlm.py               156f577f79  feat: switch from `PYBIND11_MODULE` to `TORCH_LIBRARY` (#569)   7 months ago
autoquant.py          156f577f79  feat: switch from `PYBIND11_MODULE` to `TORCH_LIBRARY` (#569)   7 months ago
awq.py                156f577f79  feat: switch from `PYBIND11_MODULE` to `TORCH_LIBRARY` (#569)   7 months ago
base_config.py        c66b1b57b1  Marlin 2:4 sparsity (#555)                                      7 months ago
bitsandbytes.py       690110a051  feat: bitsandbytes quantization                                 7 months ago
deepspeedfp.py        4acf34417a  feat: add DeepSpeedFP quantization for all models               7 months ago
eetq.py               b178ae4b4a  chore: generalize linear_method to be quant_method (#540)       8 months ago
exl2.py               156f577f79  feat: switch from `PYBIND11_MODULE` to `TORCH_LIBRARY` (#569)   7 months ago
fp8.py                cd9ed8623b  fix: cuda version check for fp8 support in the cutlass kernels  7 months ago
gguf.py               156f577f79  feat: switch from `PYBIND11_MODULE` to `TORCH_LIBRARY` (#569)   7 months ago
gptq.py               156f577f79  feat: switch from `PYBIND11_MODULE` to `TORCH_LIBRARY` (#569)   7 months ago
gptq_marlin.py        156f577f79  feat: switch from `PYBIND11_MODULE` to `TORCH_LIBRARY` (#569)   7 months ago
gptq_marlin_24.py     156f577f79  feat: switch from `PYBIND11_MODULE` to `TORCH_LIBRARY` (#569)   7 months ago
hadamard.safetensors  9d81716bfd  [v0.5.3] Release Candidate (#388)                               10 months ago
marlin.py             156f577f79  feat: switch from `PYBIND11_MODULE` to `TORCH_LIBRARY` (#569)   7 months ago
quip.py               156f577f79  feat: switch from `PYBIND11_MODULE` to `TORCH_LIBRARY` (#569)   7 months ago
quip_utils.py         9d81716bfd  [v0.5.3] Release Candidate (#388)                               10 months ago
schema.py             9d81716bfd  [v0.5.3] Release Candidate (#388)                               10 months ago
squeezellm.py         156f577f79  feat: switch from `PYBIND11_MODULE` to `TORCH_LIBRARY` (#569)   7 months ago
====================  ==========  ==============================================================  =============