| Name | Commit | Last commit message | Last updated |
| --- | --- | --- | --- |
| attention | 1270b5567e | triton compile error for flash_attn | 8 months ago |
| common | fcfb72af24 | Support arbitrary model in GGUF. (#381) | 8 months ago |
| distributed | b1caee23a6 | cache the p2p access check for memory saving | 8 months ago |
| endpoints | b1caee23a6 | cache the p2p access check for memory saving | 8 months ago |
| engine | bd0ddf1cfe | feat: EETQ quantization (#408) | 8 months ago |
| executor | 373e0d3c01 | fix neuron | 8 months ago |
| kv_quant | e42a78381a | feat: switch from pylint to ruff (#322) | 9 months ago |
| lora | fe17712f29 | fully working chunked prefill | 8 months ago |
| modeling | fcfb72af24 | Support arbitrary model in GGUF. (#381) | 8 months ago |
| processing | fe17712f29 | fully working chunked prefill | 8 months ago |
| spec_decode | 4d33ce60da | feat: Triton flash attention backend for ROCm (#407) | 8 months ago |
| task_handler | 6e0761ba5d | make init_distributed_environment compatible with init_process_group | 8 months ago |
| transformers_utils | c18bf116da | fix stop strings not being excluded from outputs | 8 months ago |
| __init__.py | c2aaaefd57 | allow out-of-tree model registry | 8 months ago |
| py.typed | 1c988a48b2 | fix logging and add py.typed | 1 year ago |