Commit History

Author SHA1 Message Date
  Tri Dao 844912dca0 [CI] Switch from CUDA 12.2 to 12.3 5 months ago
  Tri Dao 908511b2b6 Split into more .cu files to speed up compilation 5 months ago
  Tri Dao beb2bf2a32 Drop support for pytorch 1.12, 1.13, and python 3.7 5 months ago
  Nicolas Patry 8f873cc6ac Implement softcapping. (#1025) 5 months ago
  Corey James Levinson beb8b8ba9f add exception to Timeout Error (#963) 6 months ago
  Wei Ji 9c0e9ee86d Move packaging and ninja from install_requires to setup_requires (#937) 7 months ago
  Tri Dao 2aea958f89 [CI] Compile with torch 2.3.0.dev20240207 8 months ago
  Arvind Sundararajan 26c9e82743 Support ARM builds (#757) 9 months ago
  Chirag Jain 50896ec574 Make nvcc threads configurable via environment variable (#885) 9 months ago
  Qubitium f45bbb4c94 Optimize compilation to (1) avoid OOM, (2) minimize swap usage, and (3) avoid thread starvation, whether ninja decides how many workers to spawn or MAX_JOBS is "guessed" manually. The logic takes the minimum of the MAX_JOBS values auto-calculated from two metrics: (1) CPU cores and (2) free memory. This should let flash-attn compile close to the most efficient manner in any consumer/server environment. (#832) 10 months ago
  Tri Dao d4a7c8ffbb [CI] Only compile for CUDA 11.8 & 12.2, MAX_JOBS=2, add torch-nightly 1 year ago
  Tri Dao 5e525a8dc8 [CI] Use official Pytorch 2.1, add CUDA 11.8 for Pytorch 2.1 1 year ago
  Tri Dao 1879e089c7 Reduce number of templates for headdim > 128 1 year ago
  Tri Dao bff3147175 Re-enable compilation for Hopper 1 year ago
  Tri Dao dfe29f5e2b [Gen] Don't use ft_attention, use flash_attn_with_kvcache instead 1 year ago
  Federico Berto fa3ddcbaaa [Minor] add nvcc note on bare_metal_version `RuntimeError` (#552) 1 year ago
  Tri Dao 799f56fa90 Don't compile for Pytorch 2.1 on CUDA 12.1 due to nvcc segfaults 1 year ago
  Tri Dao bb9beb3645 Remove some unused headers 1 year ago
  Tri Dao 0c04943fa2 Require CUDA 11.6+, clean up setup.py 1 year ago
  Tri Dao b1fbbd8337 Implement splitKV attention 1 year ago
  Tri Dao cbb4cf5f46 Don't need to set TORCH_CUDA_ARCH_LIST in setup.py 1 year ago
  Aman Gupta Karmani aab603af4f fix binary wheel installation when nvcc is not available (#448) 1 year ago
  Tri Dao 9c531bdc0a Use single thread compilation for cuda12.1, torch2.1 to avoid OOM CI 1 year ago
  Tri Dao 2ddeaa406c Fix wheel building 1 year ago
  Tri Dao 3c458cff77 Merge branch 'feature/demo-wheels' of https://github.com/piercefreeman/flash-attention into piercefreeman-feature/demo-wheels 1 year ago
  Tri Dao 1c41d2b0e5 Fix race condition in bwd (overwriting sK) 1 year ago
  Tri Dao 4f285b3547 FlashAttention-2 release 1 year ago
  Pierce Freeman 9af165c389 Clean setup.py imports 1 year ago
  Pierce Freeman 494b2aa486 Add notes to github action workflow 1 year ago
  Pierce Freeman ea2ed88623 Refactor and clean of setup.py 1 year ago
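
Commit f45bbb4c94 above describes a MAX_JOBS heuristic: take the minimum of a job count derived from CPU cores and one derived from free memory. A minimal sketch of that idea follows; the `auto_max_jobs` function and the 9 GB per-job memory estimate are illustrative assumptions, not the constants actually used in flash-attn's setup.py.

```python
import os

GB = 1024 ** 3
# Assumed peak memory of one nvcc compile job (hypothetical figure,
# chosen only to illustrate the min(cpu, memory) logic).
MEM_PER_JOB = 9 * GB

def auto_max_jobs(cpu_cores: int, free_mem_bytes: int) -> int:
    """Pick a parallel-job count capped by both CPU cores and free memory,
    so compilation neither starves threads nor runs out of memory."""
    by_cpu = max(1, cpu_cores)
    by_mem = max(1, free_mem_bytes // MEM_PER_JOB)
    return int(min(by_cpu, by_mem))

# Respect an explicit MAX_JOBS "guess" from the environment if present,
# otherwise auto-calculate (free memory shown here as a fixed example value).
env_jobs = os.environ.get("MAX_JOBS")
jobs = int(env_jobs) if env_jobs else auto_max_jobs(os.cpu_count() or 1, 32 * GB)
```

On a memory-constrained box the memory bound wins (e.g. 16 cores but 18 GB free yields 2 jobs), while on a memory-rich box the core count is the limit.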