| Name | Commit | Last commit message | Last commit date |
| --- | --- | --- | --- |
| backends | 1390915778 | multi-step: add support for flashinfer attention backend (#1033) | 1 week ago |
| ops | e200775863 | feat: enable using fp8 kv and prefix caching with chunked prefill (#668) | 4 months ago |
| __init__.py | 1405051912 | attention: add `AttentionState` abstraction (#863) | 1 month ago |
| layer.py | bf88c8567e | feat: mamba model support (#674) | 4 months ago |
| selector.py | 4ddc14d653 | core: use flashinfer for FP8 KV when available (#944) | 2 weeks ago |