- National University of Singapore
- Singapore
- https://xiangwang1223.github.io/
Stars
The implementation of the paper "Language Representations Can be What Recommenders Need: Findings and Potentials"
[NeurIPS 2024] Official code for $\beta$-DPO: Direct Preference Optimization with Dynamic $\beta$
[NeurIPS 2024] The implementation of the paper "On Softmax Direct Preference Optimization for Recommendation"
Official implementation for "ALI-Agent: Assessing LLMs' Alignment with Human Values via Agent-based Evaluation"
[ACL 2024] ReactXT: Understanding Molecular “Reaction-ship” via Reaction-Contextualized Molecule-Text Pretraining, by Zhiyuan Liu*, Yaorui Shi*, An Zhang, Sihang Li, Enzhi Zhang, Xiang Wang, Kenji …
Code for the EMNLP 2023 paper "MolCA: Molecular Graph-Language Modeling with Cross-Modal Projector and Uni-Modal Adapter".
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
Refine high-quality datasets and visual AI models
[NeurIPS 2023] "Unleashing the Power of Graph Data Augmentation on Covariate Distribution Shift" by Yongduo Sui, Qitian Wu, Jiancan Wu, Qing Cui, Longfei Li, Jun Zhou, Xiang Wang, Xiangnan He.
[EMNLP 2023] ReLM: Leveraging Language Models for Enhanced Chemical Reaction Prediction.
[NeurIPS 2023] The implementation of the paper "Empowering Collaborative Filtering Generalization via Principled Adversarial Contrastive Loss"
[NeurIPS 2023] "Rethinking Tokenizer and Decoder in Masked Graph Modeling for Molecules"
[SIGIR 2024 Perspective] The implementation of the paper "On Generative Agents in Recommendation"
Code for the paper "A Comprehensive Evaluation of Large Language Models on Legal Judgment Prediction"
[NeurIPS 2023] Official code for "Understanding Contrastive Learning via Distributionally Robust Optimization"
✨✨ Latest Advances on Multimodal Large Language Models
[ACL 2023 Findings] BertNet: Harvesting Knowledge Graphs from Pretrained Language Models
[ICLR 2024] Fine-tuning LLaMA to Follow Instructions within 1 Hour and 1.2M Parameters