DL-ViT

Popular repositories

  1. NATTEN Public

    Forked from SHI-Labs/NATTEN

    Neighborhood Attention Extension. Bringing attention to a neighborhood near you!

    Cuda · 1 star

  2. ViLT Public

    Forked from dandelin/ViLT

    Code for the ICML 2021 (long talk) paper: "ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision"

    Python

  3. early_convolutions_vit_pytorch Public

    Forked from Jack-Etheredge/early_convolutions_vit_pytorch

    (Unofficial) PyTorch implementation of the paper "Early Convolutions Help Transformers See Better"

    Jupyter Notebook

  4. Fastformer-PyTorch Public

    Forked from wilile26811249/Fastformer-PyTorch

    Unofficial PyTorch implementation of Fastformer, based on the paper "Fastformer: Additive Attention Can Be All You Need" (see the additive-attention sketch after this list).

    Python

  5. TransT Public

    Forked from chenxin-dlut/TransT

    Transformer Tracking (CVPR2021)

    Python

  6. Video-Swin-Transformer Public

    Forked from SwinTransformer/Video-Swin-Transformer

    This is an official implementation of "Video Swin Transformer".

    Python
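
Several of the popular repositories above are attention variants. For item 4 (Fastformer), here is a minimal, illustrative PyTorch sketch of the additive-attention idea named in the paper title. It is not code from the fork: the class and variable names are assumptions, and the value projection and output transform of the full model are omitted.

    # Minimal sketch of Fastformer-style additive attention (illustrative only;
    # not code from DL-ViT/Fastformer-PyTorch). Per-token scalar scores are
    # softmaxed and used to pool the sequence into a single global query, which
    # then modulates every key element-wise.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class AdditiveAttentionSketch(nn.Module):
        def __init__(self, dim: int):
            super().__init__()
            self.to_q = nn.Linear(dim, dim)
            self.to_k = nn.Linear(dim, dim)
            self.q_score = nn.Linear(dim, 1)  # scalar score per query token
            self.k_score = nn.Linear(dim, 1)  # scalar score per key token

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, seq_len, dim)
            q, k = self.to_q(x), self.to_k(x)
            # Global query: softmax-weighted sum over the sequence.
            q_weights = F.softmax(self.q_score(q), dim=1)        # (B, N, 1)
            global_q = (q_weights * q).sum(dim=1, keepdim=True)  # (B, 1, D)
            # Element-wise interaction with every key, then a second additive
            # pooling to get a global key.
            p = k * global_q                                     # (B, N, D)
            k_weights = F.softmax(self.k_score(p), dim=1)
            global_k = (k_weights * p).sum(dim=1, keepdim=True)  # (B, 1, D)
            # Token-wise mixture plus residual; the full model also projects values.
            return x + global_k * q

    x = torch.randn(2, 16, 64)
    print(AdditiveAttentionSketch(64)(x).shape)  # torch.Size([2, 16, 64])

Each pooling step is a softmax over scalar per-token scores followed by a weighted sum, so the cost grows linearly with sequence length rather than quadratically as in standard self-attention.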

Repositories

Showing 10 of 428 repositories
  • DeBiFormer Public Forked from maclong01/DeBiFormer

    [ACCV 2024] Official code for "DeBiFormer: Vision Transformer with Deformable Agent Bi-level Routing Attention"

    Python · 0 stars · Apache-2.0 · 2 forks · 0 issues · 0 PRs · Updated Oct 14, 2024
  • Dynamic-Tuning Public Forked from NUS-HPC-AI-Lab/Dynamic-Tuning

    Official implementation of the 2024 arXiv paper "Dynamic Tuning Towards Parameter and Inference Efficiency for ViT Adaptation"

    Python · 0 stars · 1 fork · 0 issues · 0 PRs · Updated Sep 26, 2024
  • Mixture-of-Depths Public Forked from kyegomez/Mixture-of-Depths

    Implementation of the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" (see the routing sketch after this list)

    Python · 0 stars · MIT · 6 forks · 0 issues · 0 PRs · Updated Sep 23, 2024
  • Gradient-Transformer Public

    Code for the paper "Gradient is All You Need to Fuse", which proposes GradFormer, a new module that performs well on infrared small target detection (IRSTD).

    Python · 0 stars · 0 forks · 0 issues · 0 PRs · Updated Sep 14, 2024
  • LowFormer Public Forked from altair199797/LowFormer
    Python · 0 stars · Apache-2.0 · 2 forks · 0 issues · 0 PRs · Updated Sep 10, 2024
  • transformer-explainer Public Forked from poloclub/transformer-explainer

    Transformer Explained Visually: Learn How LLM Transformer Models Work with Interactive Visualization

    JavaScript · 0 stars · MIT · 349 forks · 0 issues · 0 PRs · Updated Aug 16, 2024
  • DiT-MoE Public Forked from feizc/DiT-MoE

    Scaling Diffusion Transformers with Mixture of Experts

    Python · 0 stars · 11 forks · 0 issues · 0 PRs · Updated Jul 21, 2024
  • Fair-Vision-Transformer Public
    Python · 0 stars · 2 forks · 0 issues · 0 PRs · Updated Jul 7, 2024
  • SwinJSCC Public Forked from semcomm/SwinJSCC
    Python · 0 stars · 5 forks · 0 issues · 0 PRs · Updated Jul 5, 2024
  • Switch-DiT Public Forked from byeongjun-park/Switch-DiT

    [ECCV 2024] Official PyTorch implementation of "Switch Diffusion Transformer: Synergizing Denoising Tasks with Sparse Mixture-of-Experts"

    Python · 0 stars · MIT · 6 forks · 0 issues · 0 PRs · Updated Jul 4, 2024
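
For the Mixture-of-Depths fork listed above, the following is a minimal, illustrative PyTorch sketch of the token-routing idea (dynamically allocating compute): a learned router scores tokens, only a top-k subset per sequence passes through the expensive block, and the remaining tokens ride the residual path. This is not the repository's API; the capacity ratio, the sigmoid gate, and the MLP block are assumptions.

    # Minimal sketch of Mixture-of-Depths style routing (illustrative only; not
    # the DL-ViT/Mixture-of-Depths API). A linear router scores every token,
    # the top-k tokens per sequence go through the block, and the rest pass
    # through unchanged.
    import torch
    import torch.nn as nn

    class MoDLayerSketch(nn.Module):
        def __init__(self, dim: int, capacity: float = 0.5):
            super().__init__()
            self.router = nn.Linear(dim, 1)
            self.block = nn.Sequential(
                nn.LayerNorm(dim), nn.Linear(dim, 4 * dim),
                nn.GELU(), nn.Linear(4 * dim, dim),
            )
            self.capacity = capacity  # fraction of tokens that receive compute

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, seq_len, dim)
            _, n, d = x.shape
            k = max(1, int(n * self.capacity))
            scores = self.router(x).squeeze(-1)          # (B, N) per-token scores
            top = scores.topk(k, dim=1).indices          # (B, k) tokens to process
            idx = top.unsqueeze(-1).expand(-1, -1, d)    # (B, k, D) gather index
            selected = x.gather(1, idx)
            # Gate the block output by the router score so the router receives
            # gradient through the tokens it selected.
            gate = torch.sigmoid(scores.gather(1, top)).unsqueeze(-1)
            processed = selected + gate * self.block(selected)
            # Write processed tokens back; unselected tokens keep their inputs.
            return x.scatter(1, idx, processed)

    x = torch.randn(2, 16, 64)
    print(MoDLayerSketch(64)(x).shape)  # torch.Size([2, 16, 64])

Scaling the block output by a gate derived from the router score keeps the selection differentiable, so the router can be trained end to end with the rest of the network.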

People

This organization has no public members.
