Ring attention implementation with flash attention

reyoung/ring-flash-attention

Ring Flash Attention

This repo implements RingAttention with FlashAttention. Currently, it provides:

  • ring_flash_attn_qkvpacked_func: corresponding to flash_attn_qkvpacked_func
  • ring_flash_attn_varlen_qkvpacked_func: corresponding to flash_attn_varlen_qkvpacked_func

The main idea is to use the softmax_lse output from the flash attention kernels.
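A minimal sketch of this idea: each rank computes attention over its local key/value block with FlashAttention, and the per-block outputs are combined using their log-sum-exp statistics. The names `merge_attn_outputs` and `block_attn` below are illustrative, not this repo's actual API, and the sketch uses plain softmax in place of the FlashAttention kernel.

```python
import torch

def merge_attn_outputs(out1, lse1, out2, lse2):
    """Merge two per-block attention outputs into the exact full-attention output.

    out*: (seqlen_q, dim) partial attention outputs for two key/value blocks
    lse*: (seqlen_q,) log-sum-exp of the raw attention scores per query
    """
    # Combined log-sum-exp over both blocks.
    lse = torch.logaddexp(lse1, lse2)
    # Reweight each block's output by its share of the total softmax mass.
    out = torch.exp(lse1 - lse)[:, None] * out1 + torch.exp(lse2 - lse)[:, None] * out2
    return out, lse

def block_attn(q, k, v):
    """Reference single-block attention that also returns softmax_lse."""
    scores = q @ k.T
    lse = torch.logsumexp(scores, dim=-1)
    out = torch.softmax(scores, dim=-1) @ v
    return out, lse
```

Merging the outputs of two disjoint key/value blocks this way reproduces attention over the full sequence exactly (up to floating-point error), which is what lets the ring exchange k/v blocks one step at a time.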

There are some arithmetic errors in the current implementation. The likely reason is that FlashAttention returns bf16 values for each block, so we cannot accumulate them with the original fp32 ones without losing precision.
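The precision issue can be demonstrated in isolation: once a bf16 accumulator grows large, small per-step contributions fall below its ~8-bit mantissa resolution and are rounded away. This standalone demo (not code from this repo) shows the drift:

```python
import torch

# 4096 small values whose exact sum is ~409.6.
x = torch.full((4096,), 0.1)

# Accumulate in bf16: once the running sum is large, each 0.1
# increment is smaller than half a bf16 ulp and rounds to nothing,
# so the accumulator stalls far below the true sum.
acc = torch.zeros((), dtype=torch.bfloat16)
for v in x.to(torch.bfloat16):
    acc = acc + v

ref = x.sum()  # fp32 reference
rel_err = (abs(acc.float() - ref) / ref).item()
```

Accumulating in fp32 and casting to bf16 only at the end avoids this, which is why the block outputs would ideally be accumulated at fp32 precision.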

TODOs

  • Implement ring_flash_attn_varlen_qkvpacked_func
  • Implement zigzag block issue#2
  • Try to upstream to flash attention.

Test

torchrun --nproc_per_node 8 test_qkvpacked_func.py
torchrun --nproc_per_node 8 test_varlen_qkvpacked_func.py
