Latest commit: AlpinDale 4e71bd1d12 feat: add PagedAttention V2 kernels (#76), 1 year ago
| File | Commit | Message | Age |
|------|--------|---------|-----|
| README.md | 4061b1721f | chore: add NVIDIA's license to README | 1 year ago |
| attention_dtypes.h | 5e82533d02 | upstream: add option to specify tokenizer | 1 year ago |
| attention_generic.cuh | 081545bde6 | fix: various CUDA kernel tweaks | 1 year ago |
| attention_kernels.cu | 4e71bd1d12 | feat: add PagedAttention V2 kernels (#76) | 1 year ago |
| attention_utils.cuh | 081545bde6 | fix: various CUDA kernel tweaks | 1 year ago |
| dtype_bfloat16.cuh | 4e71bd1d12 | feat: add PagedAttention V2 kernels (#76) | 1 year ago |
| dtype_complex64.cuh | e2f3ee4e29 | add complex64 datatype kernel | 1 year ago |
| dtype_float16.cuh | 23389d0108 | zero out a variable instead of vector in kernels | 1 year ago |
| dtype_float32.cuh | 23389d0108 | zero out a variable instead of vector in kernels | 1 year ago |

README.md

The attention code in this directory is adapted from NVIDIA's FasterTransformer.

Copyright (c) 2020-2023, NVIDIA CORPORATION. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

  http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.