.. list-table:: Directory listing with latest commits
   :header-rows: 1

   * - Name
     - Commit
     - Last commit message
     - Last updated
   * - backends
     - a985143768
     - core: add cuda graph support for encoder-decoder models (#1051)
     - 1 week ago
   * - ops
     - e200775863
     - feat: enable using fp8 kv and prefix caching with chunked prefill (#668)
     - 4 months ago
   * - __init__.py
     - 1405051912
     - attention: add ``AttentionState`` abstraction (#863)
     - 1 month ago
   * - layer.py
     - bf88c8567e
     - feat: mamba model support (#674)
     - 4 months ago
   * - selector.py
     - 4ddc14d653
     - core: use flashinfer for FP8 KV when available (#944)
     - 2 weeks ago