This CUDA extension implements fused matmul + bias (forward and backward), and fused matmul + bias + gelu (forward and backward), adapted from Apex's FusedDense. We make it work for bfloat16.
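As a reference for what the fused kernel computes, here is an unfused NumPy sketch of the matmul + bias + GELU forward pass. The function names are illustrative only (not this extension's API), and the tanh GELU approximation is an assumption; the actual kernel fuses these steps into a single pass.

```python
import numpy as np

def gelu_tanh(x):
    # tanh approximation of GELU, commonly used in fused kernels
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def dense_gelu_forward(x, weight, bias):
    # Unfused reference: matmul + bias, then GELU.
    # pre_act is returned because the backward pass needs it.
    pre_act = x @ weight.T + bias
    return gelu_tanh(pre_act), pre_act

# Toy shapes: batch of 4, in_features=8, out_features=16
x = np.random.randn(4, 8).astype(np.float32)
w = np.random.randn(16, 8).astype(np.float32)
b = np.zeros(16, dtype=np.float32)
out, pre = dense_gelu_forward(x, w, b)
```

The fused version avoids materializing `pre_act` in global memory between the matmul and the activation, which is where the speedup comes from.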
For best performance, you should use CUDA >= 11.8: cuBLAS versions before this don't have the best matmul + bias + GELU performance for bfloat16.
It has only been tested on A100s.
To install:
```sh
cd csrc/fused_dense_lib && pip install .
```