
Commit

Update README.md
jzhang38 authored Apr 6, 2024
1 parent 9fb990b commit 6574a80
Showing 1 changed file with 2 additions and 2 deletions.
README.md
```diff
@@ -45,8 +45,8 @@ We then proceed to train Llama-2-7B on 8 A100 by gradually increasing its rope b
 from easy_context.zigzag_ring_attn.monkey_patch import apply_zigzag_ring_attn_monkey_patch
 from easy_context.zigzag_ring_attn.prepare_inputs import prepare_zigzag_ring_attn_inputs
 # Alternatively, you can use dist flash attn
-# from easy_context.dist_flash_attn.monkey_patch import apply_dist_flash_attn_monkey_patch
-# from easy_context.dist_flash_attn.prepare_inputs import prepare_dist_flash_attn_inputs
+from easy_context.dist_flash_attn.monkey_patch import apply_dist_flash_attn_monkey_patch
+from easy_context.dist_flash_attn.prepare_inputs import prepare_dist_flash_attn_inputs
 from transformers import LlamaForCausalLM
 # Swap attention implementation from flash attn to flash ring attn
 apply_zigzag_ring_attn_monkey_patch()
```
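The `apply_*_monkey_patch()` calls in the diff rely on Python's ability to rebind a method on a class at runtime, so that every model instance created afterwards uses the replacement attention. A minimal, self-contained sketch of that pattern (the `Attention` class and function names here are hypothetical stand-ins, not EasyContext's real internals):

```python
# Hypothetical illustration of the monkey-patch pattern used above:
# rebind a class's forward method so all instances pick up the new
# implementation without changing the class's source code.

class Attention:
    def forward(self, x):
        return f"flash_attn({x})"

def ring_attn_forward(self, x):
    # stand-in for a sequence-parallel ring-attention implementation
    return f"zigzag_ring_attn({x})"

def apply_zigzag_ring_attn_monkey_patch():
    # rebinding the attribute on the class affects existing and
    # future instances alike, since methods are looked up on the class
    Attention.forward = ring_attn_forward

apply_zigzag_ring_attn_monkey_patch()
attn = Attention()
print(attn.forward("q"))  # zigzag_ring_attn(q)
```

This is why the patch must be applied before (or at least independently of) constructing the model: the swap happens on the class object itself, not on any particular instance.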
