.. list-table::
   :header-rows: 1

   * - File
     - Commit
     - Message
     - Last updated
   * - ``backends``
     - de341ffb00
     - fix: ensure multistep lookahead allocation is compatible with cugraph max capture (#1008)
     - 2 months ago
   * - ``ops``
     - e200775863
     - feat: enable using fp8 kv and prefix caching with chunked prefill (#668)
     - 6 months ago
   * - ``__init__.py``
     - 1405051912
     - attention: add `AttentionState` abstraction (#863)
     - 3 months ago
   * - ``layer.py``
     - bf88c8567e
     - feat: mamba model support (#674)
     - 6 months ago
   * - ``selector.py``
     - 4ddc14d653
     - core: use flashinfer for FP8 KV when available (#944)
     - 2 months ago