Tsinghua University - Beijing - cgq15.github.io
Stars
Scalable RL solution for advanced reasoning of language models
Repo of paper "Free Process Rewards without Process Labels"
Data processing for and with foundation models! 🍎 🍋 🌽 ➡️ ➡️🍸 🍹 🍷
Robust recipes to align language models with human and AI preferences
800,000 step-level correctness labels on LLM solutions to MATH problems
This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks.
Reproduce results and replicate training of T0 (Multitask Prompted Training Enables Zero-Shot Task Generalization)
This repository holds code and other relevant files for the NeurIPS 2022 tutorial: Foundational Robustness of Foundation Models.
A list of backdoor learning resources
Reading notes on a wide range of research topics: domain generalization, domain adaptation, causality, robustness, prompting, optimization, and generative models
Code for EMNLP 2021 paper: Backdoor Attacks on Pre-trained Models by Layerwise Weight Poisoning
TrojanLM: Trojaning Language Models for Fun and Profit
Code for the paper "Be Careful about Poisoned Word Embeddings: Exploring the Vulnerability of the Embedding Layers in NLP Models" (NAACL-HLT 2021)
ICCV 2021. We find that most existing triggers of backdoor attacks in deep learning contain severe artifacts in the frequency domain. This repo explores how we can use these artifacts to develop strong…
DimeNet and DimeNet++ models, as proposed in "Directional Message Passing for Molecular Graphs" (ICLR 2020) and "Fast and Uncertainty-Aware Directional Message Passing for Non-Equilibrium Molecules…
Documents used for grad school application
The source code for Fine-grained Fact Verification with Kernel Graph Attention Network.
An Open-Source Package for Information Retrieval.
Shared repository for open-sourced projects from the Google AI Language team.
PyTorch implementation of Contrastive Learning methods