| Directory / File | Last commit | Message | Updated |
| --- | --- | --- | --- |
| attention | 1270b5567e | triton compile error for flash_attn | 8 months ago |
| common | 6c43e00e60 | add jamba modeling code | 8 months ago |
| distributed | b1caee23a6 | cache the p2p access check for memory saving | 8 months ago |
| endpoints | b1caee23a6 | cache the p2p access check for memory saving | 8 months ago |
| engine | a1f18f17e6 | modify the cache engine and model runner/worker to support mamba states | 8 months ago |
| executor | a1f18f17e6 | modify the cache engine and model runner/worker to support mamba states | 8 months ago |
| kv_quant | e42a78381a | feat: switch from pylint to ruff (#322) | 9 months ago |
| lora | fe17712f29 | fully working chunked prefill | 8 months ago |
| modeling | 65cd99ba89 | fix KVCache type | 8 months ago |
| processing | fe17712f29 | fully working chunked prefill | 8 months ago |
| spec_decode | 4d33ce60da | feat: Triton flash attention backend for ROCm (#407) | 8 months ago |
| task_handler | a1f18f17e6 | modify the cache engine and model runner/worker to support mamba states | 8 months ago |
| transformers_utils | 4fbb052b34 | add jamba config file | 8 months ago |
| __init__.py | c2aaaefd57 | allow out-of-tree model registry | 8 months ago |
| py.typed | 1c988a48b2 | fix logging and add py.typed | 1 year ago |