lxtGH/README.md


My name is Xiangtai. My research focuses on computer vision, deep learning, and multi-modal models.

Currently, I am working as a Research Scientist at ByteDance Seed in Singapore.

I was a research fellow at MMLab@NTU, supervised by Prof. Chen Change Loy. Before that, I worked as a research scientist/associate at several organizations, including JD, SenseTime, and Shanghai AI Laboratory. I obtained my Ph.D. from Peking University.

My publications are listed on my homepage, and I am open to discussing potential remote research collaborations. Please feel free to email me at [email protected].

I love coding and building universal, large, and efficient models, both pure-vision models and multi-modal large language models.

Moreover, most of my work, including projects I have contributed to substantially, is open-sourced on GitHub.

Pinned repositories

  1. Awesome-Segmentation-With-Transformer Public

    [T-PAMI-2024] Transformer-Based Visual Segmentation: A Survey

    739 stars · 53 forks

  2. OMG-Seg Public

    OMG-LLaVA and OMG-Seg codebase [CVPR-24 and NeurIPS-24]

    Python · 1.3k stars · 49 forks

  3. SFSegNets Public

    [ECCV-2020 Oral] Semantic Flow for Fast and Accurate Scene Parsing

    Python · 378 stars · 44 forks

  4. Tube-Link Public

    [ICCV-2023] Universal Video Segmentation for VSS, VPS, and VIS

    Python · 110 stars · 3 forks

  5. HarborYuan/ovsam Public

    [ECCV 2024] The official code of paper "Open-Vocabulary SAM".

    Python · 958 stars · 32 forks

  6. magic-research/Sa2VA Public

    🔥 Sa2VA: Marrying SAM2 with LLaVA for Dense Grounded Understanding of Images and Videos

    Python · 1.1k stars · 67 forks

73 contributions in the last year


Activity overview

Contributed to magic-research/Sa2VA, lxtGH/OMG-Seg, jianzongwu/Awesome-Open-Vocabulary, and 13 other repositories. From April 21, 2024 to April 21, 2025, contributions were 94% commits, 6% issues, 0% pull requests, and 0% code review.

Contribution activity

April 2025

Created 3 commits in 1 repository