Lists (3)
👍 Crowdsourcing Dataset

GLAD Implementation
Implementation code for the GLAD model from "Whose Vote Should Count More: Optimal Integration of Labels from Labelers of Unknown Expertise" (Whitehill et al.)

✨ My Papers
A list of my published & working papers.

Starred repositories
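The GLAD entry above refers to a probabilistic model in which the chance that worker j labels item i correctly is sigmoid(alpha_j * beta_i), with alpha_j the worker's expertise and beta_i the item's inverse difficulty. A minimal sketch of the EM-style estimation for binary tasks, under simplifying assumptions: item difficulty is held fixed at beta = 1, the M-step is a single gradient-ascent step on alpha, and the function name, learning rate, and iteration count are illustrative, not taken from the linked implementation:

```python
import numpy as np

def glad_em(labels, n_iters=50, lr=0.1):
    """Simplified GLAD-style EM for binary crowdsourced labels.

    labels: (n_items, n_workers) array of 0/1 answers.
    Returns (posterior P(true label = 1) per item, worker ability alpha).
    Item difficulty is held fixed at beta = 1 for brevity.
    """
    n_items, n_workers = labels.shape
    alpha = np.ones(n_workers)          # worker expertise
    beta = np.ones(n_items)             # inverse item difficulty (fixed here)
    for _ in range(n_iters):
        # P(worker j answers item i correctly) = sigmoid(alpha_j * beta_i)
        a = 1.0 / (1.0 + np.exp(-np.outer(beta, alpha)))
        a = np.clip(a, 1e-6, 1 - 1e-6)  # guard the logs below
        # E-step: log-likelihood of the observed labels under z_i = 1 vs z_i = 0
        ll1 = (labels * np.log(a) + (1 - labels) * np.log(1 - a)).sum(axis=1)
        ll0 = ((1 - labels) * np.log(a) + labels * np.log(1 - a)).sum(axis=1)
        p1 = 1.0 / (1.0 + np.exp(ll0 - ll1))   # posterior P(z_i = 1)
        # M-step (simplified): one gradient step on alpha, using the
        # expected correctness of each (item, worker) pair under the posterior
        ec = p1[:, None] * labels + (1 - p1)[:, None] * (1 - labels)
        alpha += lr * (beta[:, None] * (ec - a)).sum(axis=0)
    return p1, alpha
```

A worker who answers at random or adversarially ends up with a low or negative alpha, so the E-step discounts, or even inverts, that worker's votes.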
Google Chrome, Firefox, and Thunderbird extension that lets you write email in Markdown and render it before sending.
Template for a clear GitHub README (markdown)
Implementation of Attention-based Deep Multiple Instance Learning in PyTorch
English Template for PKU Doctoral Dissertation
Awesome Active Learning Paper List
A curated list of resources for Learning with Noisy Labels
This project reproduces the book Dive Into Deep Learning (https://d2l.ai/), adapting the code from MXNet into PyTorch.
This project converts the MXNet implementations in the original book Dive into Deep Learning into PyTorch implementations.
Benchmarking algorithms for assessing quality of data labeled by multiple annotators
Database for AI. Store Vectors, Images, Texts, Videos, etc. Use with LLMs/LangChain. Store, query, version, & visualize any AI data. Stream data in real-time to PyTorch/TensorFlow. https://activelo…
SSR: An Efficient and Robust Framework for Learning with Unknown Label Noise (BMVC2022)
Our crowdsourcing datasets for diverse formats of data, e.g., categorical label in difficult tasks, text sequence, pairwise preference, triplet similarity.
A curated list of awesome datasets with human label variation (un-aggregated labels) in Natural Language Processing and Computer Vision, accompanying The 'Problem' of Human Label Variation: On Grou…
The source code of CVPR 2019 paper "Leveraging Crowdsourced GPS Data for Road Extraction from Aerial Imagery"
A package for dealing with crowdsourced big data. Website: https://enriquegrodrigo.github.io/spark-crowd/
DynaSent: Dynamic Sentiment Analysis Dataset
Some experiments with CIFAR-10 dataset
A curated (most recent) list of resources for Learning with Noisy Labels
Crowd Labeling For Continuous Valued Annotations
This is the code of the article "A clarity and fairness aware framework for selecting workers in competitive crowdsourcing tasks".
Estimating true labels from multiple workers in crowdsourcing tasks
Human annotated noisy labels for CIFAR-10 and CIFAR-100. The website of CIFAR-N is available at http://www.noisylabels.com/.
Crowdsourcing CIFAR-10 with Amazon Mechanical Turk
Dataset for the paper: The Effectiveness of Peer Prediction in Long-Term Forecasting
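Several entries above (the GLAD implementation, the aggregation benchmarks, "Estimating true labels from multiple workers") address the same core problem: recovering one true label per item from redundant, noisy annotations. The usual baseline these methods are compared against is per-item majority voting; a small sketch (the function name is illustrative):

```python
import numpy as np

def majority_vote(labels):
    """labels: (n_items, n_workers) array of integer class ids.
    Returns the most frequent label per item (ties broken by lowest id)."""
    n_classes = labels.max() + 1
    # count votes per class for each item, then pick the plurality winner
    counts = np.apply_along_axis(np.bincount, 1, labels, minlength=n_classes)
    return counts.argmax(axis=1)
```

Model-based aggregators such as GLAD or Dawid-Skene improve on this baseline by down-weighting unreliable workers instead of counting every vote equally.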