
Pinned

  1. flash-linear-attention Public

🚀 Efficient implementations of state-of-the-art linear attention models in PyTorch and Triton (a pure-PyTorch sketch of the core recurrence follows this list)

Python · 2.3k stars · 153 forks

  2. flame Public

🔄 A minimal training framework for scaling FLA models

Python · 107 stars · 15 forks

  3. native-sparse-attention Public

    🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention"

Python · 633 stars · 30 forks
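For orientation, here is a minimal pure-PyTorch sketch of the simplest causal linear-attention case that flash-linear-attention's kernels generalize (no feature map, gating, or decay): first the token-by-token recurrence, then the chunkwise-parallel form that fused Triton kernels build on. This is an illustrative sketch under those assumptions, not the library's API; the function names are hypothetical.

```python
import torch

def causal_linear_attention(q, k, v):
    """Token-by-token reference: S_t = S_{t-1} + k_t v_t^T, o_t = q_t^T S_t.
    Linear in sequence length, but sequential. Shapes: (B, H, T, D)."""
    B, H, T, D = q.shape
    S = q.new_zeros(B, H, D, v.shape[-1])            # running state: sum_i k_i v_i^T
    out = torch.empty_like(v)
    for t in range(T):
        S = S + k[:, :, t, :, None] * v[:, :, t, None, :]   # outer product k_t v_t^T
        out[:, :, t] = torch.einsum('bhd,bhde->bhe', q[:, :, t], S)
    return out

def chunked_linear_attention(q, k, v, chunk=64):
    """Chunkwise-parallel form (the idea behind fused kernels): dense masked
    attention inside each chunk, plus a carried state for all earlier chunks.
    Assumes T is divisible by `chunk`; computes the same thing as above."""
    B, H, T, D = q.shape
    S = q.new_zeros(B, H, D, v.shape[-1])
    out = torch.empty_like(v)
    mask = torch.tril(torch.ones(chunk, chunk, dtype=torch.bool, device=q.device))
    for s in range(0, T, chunk):
        qc, kc, vc = (x[:, :, s:s + chunk] for x in (q, k, v))
        intra = (qc @ kc.transpose(-1, -2)).masked_fill(~mask, 0.0) @ vc
        out[:, :, s:s + chunk] = intra + qc @ S      # intra-chunk + inter-chunk history
        S = S + kc.transpose(-1, -2) @ vc            # fold this chunk into the state
    return out
```

The two functions compute the same quantity, differing only in floating-point accumulation order; the chunked form replaces the sequential scan with dense intra-chunk matmuls, which is what makes a hardware-efficient kernel possible.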

Repositories

Showing 8 of 8 repositories
  • flash-linear-attention Public

🚀 Efficient implementations of state-of-the-art linear attention models in PyTorch and Triton

Python · 2,304 stars · MIT license · 153 forks · 36 issues · 3 pull requests · Updated Apr 22, 2025
  • fla-zoo Public

    Flash-Linear-Attention models beyond language

Python · 12 stars · 1 fork · 0 issues · 0 pull requests · Updated Apr 22, 2025
  • flame Public

🔄 A minimal training framework for scaling FLA models

Python · 107 stars · MIT license · 15 forks · 2 issues · 0 pull requests · Updated Apr 12, 2025
  • fla-rl Public

A minimal RL framework for scaling FLA models in long-horizon reasoning and agentic scenarios.

4 stars · MIT license · 0 forks · 0 issues · 0 pull requests · Updated Apr 2, 2025
  • ThunderKittens Public Forked from HazyResearch/ThunderKittens

    Tile primitives for speedy kernels

Cuda · 2 stars · MIT license · 135 forks · 0 issues · 0 pull requests · Updated Mar 27, 2025
  • native-sparse-attention Public

🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" (a toy sketch of the block-selection idea follows this list)

Python · 633 stars · MIT license · 30 forks · 7 issues · 0 pull requests · Updated Mar 19, 2025
• 7 stars · 0 forks · 0 issues · 0 pull requests · Updated Mar 5, 2025
  • flash-bidirectional-linear-attention Public

Triton implementation of bidirectional (non-causal) linear attention (a plain-PyTorch sketch follows this list)

Python · 46 stars · MIT license · 1 fork · 1 issue · 0 pull requests · Updated Feb 4, 2025
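As a companion to the native-sparse-attention entry above, here is a toy dense-PyTorch illustration of the block-selection idea from the NSA paper: score key blocks coarsely, keep the top-k per query block, and attend only within the selection. This is a sketch under simplifying assumptions (the paper's full method adds compressed-attention and sliding-window branches plus token-level causal masking, and the repo implements fused Triton kernels); the function name is hypothetical.

```python
import torch
import torch.nn.functional as F

def topk_block_sparse_attention(q, k, v, block=64, topk=4):
    """Each query block scores all causal key blocks by mean-pooled
    similarity, keeps the top-k, and runs dense attention on those blocks
    only. Token-level causal masking inside the diagonal block is omitted
    for brevity. Shapes: (B, H, T, D); assumes T % block == 0."""
    B, H, T, D = q.shape
    nb = T // block
    qb = q.view(B, H, nb, block, D)
    kb = k.view(B, H, nb, block, D)
    vb = v.view(B, H, nb, block, D)
    # Coarse block-level scores from mean-pooled queries and keys.
    scores = qb.mean(3) @ kb.mean(3).transpose(-1, -2) / D ** 0.5   # (B,H,nb,nb)
    causal = torch.tril(torch.ones(nb, nb, dtype=torch.bool, device=q.device))
    scores = scores.masked_fill(~causal, float('-inf'))
    sel = scores.topk(min(topk, nb), dim=-1).indices                # (B,H,nb,k)
    # Gather the selected key/value blocks for every query block.
    idx = sel[..., None, None].expand(-1, -1, -1, -1, block, D)
    k_sel = torch.gather(kb[:, :, None].expand(-1, -1, nb, -1, -1, -1), 3, idx)
    v_sel = torch.gather(vb[:, :, None].expand(-1, -1, nb, -1, -1, -1), 3, idx)
    k_sel, v_sel = k_sel.flatten(3, 4), v_sel.flatten(3, 4)         # (B,H,nb,k*block,D)
    # Early query blocks have fewer than top-k causal candidates, so mask out
    # key blocks that were only selected because every alternative was -inf.
    valid = torch.gather(causal.expand(B, H, nb, nb), 3, sel)       # (B,H,nb,k)
    key_mask = valid[..., :, None].expand(-1, -1, -1, -1, block).flatten(3)
    logits = qb @ k_sel.transpose(-1, -2) / D ** 0.5                # (B,H,nb,block,k*block)
    logits = logits.masked_fill(~key_mask[..., None, :], float('-inf'))
    return (F.softmax(logits, dim=-1) @ v_sel).reshape(B, H, T, D)
```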
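Finally, for flash-bidirectional-linear-attention: without causality there is no scan to compute, so linear attention collapses to two matrix products. Below is a plain-PyTorch sketch assuming the common elu+1 feature map; illustrative only, not the repo's API, and the function name is hypothetical.

```python
import torch
import torch.nn.functional as F

def bidirectional_linear_attention(q, k, v, eps=1e-6):
    """Non-causal linear attention:
    out_t = phi(q_t)^T (sum_i phi(k_i) v_i^T) / (phi(q_t) . sum_i phi(k_i)).
    Two matmuls, O(T * D^2) overall. Shapes: (B, H, T, D)."""
    phi = lambda x: F.elu(x) + 1.0                 # positive feature map
    q, k = phi(q), phi(k)
    kv = torch.einsum('bhtd,bhte->bhde', k, v)     # sum_i phi(k_i) v_i^T
    z = k.sum(dim=2)                               # sum_i phi(k_i), the normalizer term
    num = torch.einsum('bhtd,bhde->bhte', q, kv)
    den = torch.einsum('bhtd,bhd->bht', q, z).clamp_min(eps)
    return num / den.unsqueeze(-1)
```

Because every token attends to every other token symmetrically, the key-value summary `kv` is shared across all queries; that is why the non-causal case needs no chunking or recurrence at all.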