
Mention that some CUDA extensions have only been tested on A100s

Tri Dao, 2 years ago
parent commit 43ab0b5205

+ 3 - 0
csrc/fused_dense_lib/README.md

@@ -5,6 +5,9 @@ We make it work for bfloat16.
 
 For best performance, you should use CUDA >= 11.8. CuBLAS versions before
 this don't have the best matmul + bias + gelu performance for bfloat16.
+
+It has only been tested on A100s.
+
 ```sh
 cd csrc/fused_dense_lib && pip install .
 ```
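For reference, the snippet below shows the unfused PyTorch computation that this extension fuses into a single GEMM epilogue (matmul + bias + GELU in bfloat16). The shapes are illustrative, and the extension's own Python wrapper may expose a different interface; this is only a sketch of what the kernel computes.

```python
import torch
import torch.nn.functional as F

# Illustrative shapes: (batch, in_features) input, (out_features, in_features) weight.
x = torch.randn(8, 1024, dtype=torch.bfloat16, device="cuda")
weight = torch.randn(4096, 1024, dtype=torch.bfloat16, device="cuda")
bias = torch.randn(4096, dtype=torch.bfloat16, device="cuda")

# Unfused reference: three logical ops (matmul, bias add, GELU) that
# csrc/fused_dense_lib executes as one fused kernel.
out = F.gelu(F.linear(x, weight, bias))
```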

+ 3 - 0
csrc/layer_norm/README.md

@@ -1,6 +1,9 @@
 This CUDA extension implements fused dropout + residual + LayerNorm, based on
 Apex's [FastLayerNorm](https://github.com/NVIDIA/apex/tree/master/apex/contrib/layer_norm).
 We add dropout and residual, and make it work for both pre-norm and post-norm architecture.
+
+It has only been tested on A100s.
+
 ```sh
 cd csrc/layer_norm && pip install .
 ```
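As a reference for what the fused kernel computes, here is the unfused dropout + residual-add + LayerNorm in plain PyTorch, written in the pre-norm style where the updated residual stream is kept alongside the normalized output. The shapes and dropout probability are illustrative, not part of the extension's API.

```python
import torch
import torch.nn.functional as F

hidden = 1024
x = torch.randn(8, 128, hidden, device="cuda")         # sublayer output
residual = torch.randn(8, 128, hidden, device="cuda")  # residual stream
weight = torch.ones(hidden, device="cuda")
bias = torch.zeros(hidden, device="cuda")

# Unfused reference for the pre-norm case: dropout, add residual, then LayerNorm.
# A fused implementation can produce both `out` and `new_residual` in one pass.
new_residual = F.dropout(x, p=0.1) + residual
out = F.layer_norm(new_residual, (hidden,), weight, bias, eps=1e-5)
```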

+ 3 - 0
csrc/xentropy/README.md

@@ -1,6 +1,9 @@
 This CUDA extension implements optimized cross-entropy loss, adapted from Apex's
 [Xentropy](https://github.com/NVIDIA/apex/tree/master/apex/contrib/xentropy).
 We make it work for bfloat16 and support in-place backward to save memory.
+
+It has only been tested on A100s.
+
 ```sh
 cd csrc/xentropy && pip install .
 ```
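For reference, the equivalent unfused computation in plain PyTorch is sketched below. The sizes are illustrative; per the README, the extension computes the same loss for bfloat16 logits and supports an in-place backward so that no separate full-size gradient buffer for the logits is needed.

```python
import torch
import torch.nn.functional as F

vocab_size, num_tokens = 50257, 8192  # illustrative sizes
logits = torch.randn(num_tokens, vocab_size, dtype=torch.bfloat16,
                     device="cuda", requires_grad=True)
labels = torch.randint(0, vocab_size, (num_tokens,), device="cuda")

# Unfused reference: upcast for a numerically stable softmax, then cross-entropy.
# The extension computes the same loss directly on bfloat16 logits and can write
# the logits gradient in place during backward to save memory.
loss = F.cross_entropy(logits.float(), labels)
loss.backward()
```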