| File | Commit | Message | Last updated |
| --- | --- | --- | --- |
| README.md | 4061b1721f | chore: add NVIDIA's license to README | 1 year ago |
| attention_dtypes.h | 5e82533d02 | upstream: add option to specify tokenizer | 1 year ago |
| attention_generic.cuh | 081545bde6 | fix: various CUDA kernel tweaks | 1 year ago |
| attention_kernels.cu | b9b295d74e | chore: backlogs 1 (#191) | 1 year ago |
| attention_utils.cuh | 1334a833a4 | feat: AMD ROCm support (#95) | 1 year ago |
| dtype_bfloat16.cuh | 1334a833a4 | feat: AMD ROCm support (#95) | 1 year ago |
| dtype_complex64.cuh | e2f3ee4e29 | add complex64 datatype kernel | 1 year ago |
| dtype_float16.cuh | 1334a833a4 | feat: AMD ROCm support (#95) | 1 year ago |
| dtype_float32.cuh | 23389d0108 | zero out a variable instead of vector in kernels | 1 year ago |

# README.md

The attention code here is adapted from NVIDIA's FasterTransformer.

Copyright (c) 2020-2023, NVIDIA CORPORATION. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

  http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.