This repo implements RingAttention with FlashAttention. Currently, it provides:
- `ring_flash_attn_qkvpacked_func`: corresponding to `flash_attn_qkvpacked_func`
- `ring_flash_attn_varlen_qkvpacked_func`: corresponding to `flash_attn_varlen_qkvpacked_func`
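A usage sketch, assuming the ring variants keep the `flash_attn` calling convention (the import path and keyword arguments below are assumptions, not confirmed API):

```python
# Run with: torchrun --nproc_per_node <N> this_script.py
import torch
import torch.distributed as dist
from ring_flash_attn import ring_flash_attn_qkvpacked_func  # assumed import path

dist.init_process_group("nccl")
torch.cuda.set_device(dist.get_rank())

batch, local_seqlen, nheads, headdim = 2, 1024, 8, 64
# Each rank holds only its local shard of the sequence, packed as (q, k, v).
qkv = torch.randn(batch, local_seqlen, 3, nheads, headdim,
                  dtype=torch.bfloat16, device="cuda", requires_grad=True)
out = ring_flash_attn_qkvpacked_func(qkv, causal=True)
out.sum().backward()
```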
The main idea is to use the `softmax_lse` output from the flash attention kernels to rescale and combine each block's partial attention output.
There are some arithmetic errors in the current implementation, probably because the flash attention kernels return bf16 values for each block, so we cannot accumulate them with the original fp32 ones.
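As a rough sketch of the idea (the function name and tensor shapes here are illustrative, not the repo's actual internals), merging one block's result into a running output via `softmax_lse` looks like:

```python
import torch

def merge_block_out(out, lse, block_out, block_lse):
    """Fold one block's attention result into the running output,
    rescaling both sides by their softmax log-sum-exp (lse).

    out:       (batch, seqlen, nheads, headdim), fp32 accumulator
    lse:       (batch, nheads, seqlen), fp32 running log-sum-exp
    block_out: same shape as out, bf16 straight from the kernel
    block_lse: same shape as lse
    """
    # Upcast before accumulating, to avoid the bf16 rounding issue above.
    block_out = block_out.to(torch.float32)
    # Numerically stable log(exp(lse) + exp(block_lse)).
    new_lse = torch.maximum(lse, block_lse) + torch.log1p(
        torch.exp(-torch.abs(lse - block_lse))
    )
    # Reshape the (batch, nheads, seqlen) scales to broadcast over out.
    def scale(x):
        return torch.exp(x - new_lse).transpose(1, 2).unsqueeze(-1)
    out = scale(lse) * out + scale(block_lse) * block_out
    return out, new_lse
```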
TODOs:

- Implement `ring_flash_attn_varlen_qkvpacked_func`
- Implement the zigzag block strategy (issue #2); see the sketch below.
- Try to upstream to flash attention.
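For reference, the zigzag idea (my reading of issue #2, not something implemented here) balances the causal-attention workload by pairing an early sequence chunk with a late one on each rank:

```python
import torch

def zigzag_shard(x, rank, world_size):
    # Cut the sequence (dim 1) into 2 * world_size chunks and give rank i
    # chunks i and (2 * world_size - 1 - i). With a causal mask, cheap early
    # chunks pair with expensive late ones, so every rank does similar work.
    chunks = x.chunk(2 * world_size, dim=1)
    return torch.cat([chunks[rank], chunks[2 * world_size - 1 - rank]], dim=1)
```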
To test the implementation, run:

```bash
torchrun --nproc_per_node 8 test_qkvpacked_func.py
torchrun --nproc_per_node 8 test_varlen_qkvpacked_func.py
```