
README.md

This CUDA extension implements fused matmul + bias (forward and backward), and fused matmul + bias + gelu (forward and backward), adapted from Apex's FusedDense. We make it work for bfloat16.
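For reference, the fused operation is mathematically equivalent to the following unfused sketch (plain NumPy, using the exact-erf GELU; function names here are illustrative, not the extension's API, and the extension itself operates on PyTorch tensors on GPU):

```python
import math
import numpy as np

def gelu(x):
    # Exact GELU: 0.5 * x * (1 + erf(x / sqrt(2)))
    return 0.5 * x * (1.0 + np.vectorize(math.erf)(x / math.sqrt(2.0)))

def dense_gelu_reference(x, weight, bias):
    # Unfused reference for what the extension fuses into fewer kernels:
    # matmul + bias, then GELU on the result.
    pre_act = x @ weight.T + bias
    # pre_act is also what the backward pass needs to recompute gradients.
    return gelu(pre_act), pre_act

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))      # (batch, in_features)
w = rng.standard_normal((16, 8))     # (out_features, in_features)
b = rng.standard_normal(16)          # (out_features,)
out, pre = dense_gelu_reference(x, w, b)
print(out.shape)  # (4, 16)
```

The fused CUDA version computes the same result but avoids materializing the intermediate activation in separate kernel launches.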

For best performance, use CUDA >= 11.8; cuBLAS versions before that don't have the best matmul + bias + gelu performance for bfloat16.

It has only been tested on A100s.

```sh
cd csrc/fused_dense_lib && pip install .
```