- 👋 Hi, I'm Qingyun Li (李青云).
- 👀 I'm interested in Multimodal Models/Data, Detection/Segmentation, Weakly Supervised Learning, and Remote Sensing.
- 🌱 I'm currently a PhD candidate at Harbin Institute of Technology (HIT), supervised by Prof. Yushi Chen.
- 🌳 I've participated in research at OpenGVLab, collaborating with Xue Yang, Wenhai Wang, and Jifeng Dai.
- 🏗️ I've been an active contributor to MMDetection, collaborating with Shilong Zhang and Haian Huang.
- 👨‍💻 I'm expected to graduate in June 2026 and am seeking job opportunities for the summer/autumn of 2025. Contact me via email at [email protected] or WeChat at 18545525156.
- 🎮 Hail Black Myth: Wukong, Zelda, and Elden Ring 🙏
Pinned repositories:
- OpenGVLab/OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text
- OpenGVLab/all-seeing: [ICLR 2024 & ECCV 2024] The All-Seeing Projects: Towards Panoptic Visual Recognition & Understanding and General Relation Comprehension of the Open World
- sam-mmrotate: SAM (Segment Anything Model) for generating rotated bounding boxes with MMRotate; a comparison method of H2RBox-v2
- open-mmlab/mmdetection: OpenMMLab Detection Toolbox and Benchmark
- mllm-mmrotate: A Simple Aerial Detection Baseline of Multimodal Language Models (Python)