# Ring Flash Attention

This repo implements RingAttention with FlashAttention. It currently provides the following functions (a usage sketch follows the list):

- `ring_flash_attn_qkvpacked_func`: corresponding to `flash_attn_qkvpacked_func`
- `ring_flash_attn_varlen_qkvpacked_func`: corresponding to `flash_attn_varlen_qkvpacked_func`
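
As an illustration, here is a minimal usage sketch. It assumes the ring functions mirror the corresponding `flash_attn` signatures, that the package is importable as `ring_flash_attn`, and that the default process group is used; check the source for the exact interface.

```python
import torch
import torch.distributed as dist
from ring_flash_attn import ring_flash_attn_qkvpacked_func  # assumed import path

# One process per GPU, e.g. launched with torchrun.
dist.init_process_group("nccl")
torch.cuda.set_device(dist.get_rank())

# Each rank holds its own shard of the sequence:
# qkv has shape (batch, local_seqlen, 3, nheads, headdim).
qkv = torch.randn(
    1, 1024, 3, 8, 64, dtype=torch.bfloat16, device="cuda", requires_grad=True
)
out = ring_flash_attn_qkvpacked_func(qkv, causal=True)  # same call shape as flash_attn_qkvpacked_func
out.sum().backward()

dist.destroy_process_group()
```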

The main idea is to use the `softmax_lse` output from the flash attention kernels: the per-block log-sum-exp statistics make it possible to rescale and combine partial attention outputs as blocks of keys and values arrive around the ring.
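
A minimal sketch of that rescaling (the function name and shapes here are illustrative, not the repo's actual internals):

```python
import torch

def merge_attn_outputs(out_a, lse_a, out_b, lse_b):
    # Combine two partial attention outputs computed over disjoint key blocks.
    # out_*: (..., seqlen, headdim); lse_*: (..., seqlen) log-sum-exp per query.
    lse = torch.logaddexp(lse_a, lse_b)  # normalizer over the union of keys
    out = (
        torch.exp(lse_a - lse).unsqueeze(-1) * out_a
        + torch.exp(lse_b - lse).unsqueeze(-1) * out_b
    )
    return out, lse
```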

The current implementation has some arithmetic errors. The likely cause is that flash attention returns bf16 values for each block, so we cannot accumulate them into the original fp32 values without losing precision.
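
A small standalone example of the kind of error this introduces (not the repo's code, just the underlying numeric effect):

```python
import torch

torch.manual_seed(0)
blocks = [torch.randn(4096) for _ in range(8)]  # per-block partial results, fp32

fp32_sum = sum(blocks)  # accumulate in fp32
bf16_sum = sum(b.to(torch.bfloat16) for b in blocks).float()  # each block rounded to bf16 first

print((fp32_sum - bf16_sum).abs().max())  # nonzero: bf16 round-off accumulates
```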

## TODOs

- [x] Implement `ring_flash_attn_varlen_qkvpacked_func`
- [ ] Implement zigzag block partitioning (issue#2); one possible reading is sketched after this list
- [ ] Try to upstream to flash attention.
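
The zigzag item above is only a pointer to issue#2; as a loudly labeled assumption about what it means, here is one common zigzag layout for causal ring attention, where rank `i` gets chunks `i` and `2*world_size-1-i` so per-rank work is roughly balanced:

```python
# Assumed zigzag layout (illustrative only): pair the lightest and heaviest
# causal chunks on each rank so attention FLOPs are roughly equal per device.
def zigzag_chunks(world_size: int) -> list[tuple[int, int]]:
    return [(i, 2 * world_size - 1 - i) for i in range(world_size)]

print(zigzag_chunks(4))  # [(0, 7), (1, 6), (2, 5), (3, 4)]
```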

## Test

```bash
torchrun --nproc_per_node 8 test_qkvpacked_func.py
torchrun --nproc_per_node 8 test_varlen_qkvpacked_func.py
```