Files in this directory:

README.md
attention_dtypes.h
attention_generic.cuh
attention_kernels.cu
attention_utils.cuh
dtype_bfloat16.cuh
dtype_complex64.cuh
dtype_float16.cuh
dtype_float32.cuh
dtype_fp8.cuh
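
For orientation, the layout suggests a common pattern: attention_generic.cuh declares generic vector and accumulator templates, each dtype_*.cuh header specializes them for one element type, and attention_dtypes.h presumably aggregates the headers so attention_kernels.cu can be written once against the generic interface. The sketch below illustrates that specialization pattern only; the `Vec`/`FloatVec` names and the `fma` overload are assumptions modeled on the file names, not the actual definitions in this directory.

```cpp
#include <cuda_fp16.h>

// attention_generic.cuh-style declarations: generic templates with no
// definitions; each dtype_*.cuh header supplies the specializations.
template <typename T, int VEC_SIZE>
struct Vec {};

template <typename T>
struct FloatVec {};

// Hypothetical dtype_float16.cuh-style specializations: pack two fp16
// values into a half2, and accumulate in fp32 for numerical stability.
template <>
struct Vec<half, 2> {
  using Type = half2;
};

template <>
struct FloatVec<half2> {
  using Type = float2;
};

// A per-dtype fused multiply-add the kernels could call generically.
inline __device__ float2 fma(half2 a, half2 b, float2 c) {
  float2 af = __half22float2(a);
  float2 bf = __half22float2(b);
  return make_float2(af.x * bf.x + c.x, af.y * bf.y + c.y);
}
```

Under this scheme, adding a new element type (as dtype_fp8.cuh and dtype_complex64.cuh appear to do) means adding one header of specializations rather than touching the kernel code itself.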

README.md

The attention code in this directory is adapted from NVIDIA's FasterTransformer.

Copyright (c) 2020-2023, NVIDIA CORPORATION. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

  http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.