Stars
TensorFlow code and pre-trained models for BERT
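A minimal sketch of loading the released BERT weights — via the Hugging Face Transformers API rather than the repo's original TensorFlow code, assumed here for brevity:

```python
from transformers import BertTokenizer, BertModel

# Standard released checkpoint; Hugging Face Transformers is used here
# for brevity instead of the original TensorFlow training code.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("Hello, BERT!", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, seq_len, hidden_size)
```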
955: a list of companies that don't require overtime - a 955 schedule (9 a.m. to 5 p.m., five days a week) and work-life balance
Code for the ACL 2020 paper FLAT: Chinese NER Using Flat-Lattice Transformer
State-of-the-Art Text Embeddings
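A minimal usage sketch for Sentence-Transformers; the checkpoint name `all-MiniLM-L6-v2` is one commonly used pre-trained model, assumed here for illustration:

```python
from sentence_transformers import SentenceTransformer, util

# Load a pre-trained embedding model (checkpoint name assumed for illustration).
model = SentenceTransformer("all-MiniLM-L6-v2")

# Encode two sentences into dense vectors and compare them.
embeddings = model.encode(["How old are you?", "What is your age?"])
similarity = util.cos_sim(embeddings[0], embeddings[1])
print(float(similarity))  # cosine similarity; close to 1.0 for paraphrases
```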
Tensors and Dynamic neural networks in Python with strong GPU acceleration
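A tiny PyTorch sketch of the dynamic-graph autograd and GPU acceleration the description refers to:

```python
import torch

# Place the tensor on GPU if available, with gradient tracking enabled.
device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(3, requires_grad=True, device=device)

# Build a small dynamic computation graph and backpropagate through it.
y = (x ** 2).sum()
y.backward()
print(x.grad)  # dy/dx = 2x
```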
pkuseg: a toolkit for multi-domain Chinese word segmentation
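A minimal pkuseg sketch, assuming the default mixed-domain model:

```python
import pkuseg

# Default (mixed-domain) model; domain-specific models such as
# 'medicine' or 'tourism' can be selected via model_name.
seg = pkuseg.pkuseg()
print(seg.cut("我爱北京天安门"))  # e.g. ['我', '爱', '北京', '天安门']
```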
Must-read papers on improving efficiency for pre-trained language models.
A comprehensive beginner's guide to NLP, including a roundup of SOTA models for each task (text classification, text matching, sequence labeling, text generation, language modeling), plus code and tips
QAmatch (qa_match): text matching / text classification / text embeddings / text clustering / text retrieval (BoW / TF-IDF / n-gram TF-IDF / BERT / ALBERT / BM25 / … / NN / GBDT / XGBoost / k-means / DBSCAN / Faiss / ….)
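A sketch of one of the techniques listed above (TF-IDF retrieval), written with scikit-learn rather than the repo's own code; the toy corpus is a stand-in:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = ["how to reset my password", "where is my order", "change delivery address"]
query = ["reset password"]

# Fit TF-IDF on the corpus, then rank documents by cosine similarity to the query.
vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(corpus)
query_vector = vectorizer.transform(query)
scores = cosine_similarity(query_vector, doc_vectors)[0]
print(scores.argmax(), corpus[scores.argmax()])
```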
Interactive deep learning book with multi-framework code, math, and discussions. Adopted at 500 universities from 70 countries including Stanford, MIT, Harvard, and Cambridge.
Dive into Deep Learning (Chinese edition): written for Chinese readers, runnable, and open for discussion. The Chinese and English editions are used for teaching at 500+ universities in 70+ countries.
Code for ACL 2021 paper: Accelerating BERT Inference for Sequence Labeling via Early-Exit
PyTorch implementation of the ACL 2019 paper RankQA: Neural Question Answering with Answer Re-Ranking.
Code for using and evaluating SpanBERT.
Scalable, Portable and Distributed Gradient Boosting (GBDT, GBRT or GBM) Library, for Python, R, Java, Scala, C++ and more. Runs on single machine, Hadoop, Spark, Dask, Flink and DataFlow
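A minimal XGBoost sketch using the scikit-learn-style Python API; the bundled dataset is a stand-in for illustration:

```python
import xgboost as xgb
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

# Train a gradient-boosted tree classifier on a toy binary task.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = xgb.XGBClassifier(n_estimators=100, max_depth=4)
model.fit(X_train, y_train)
print(model.score(X_test, y_test))  # mean accuracy on held-out data
```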
Facilitating the design, comparison and sharing of deep text matching models.
Transformers for Information Retrieval, Text Classification, NER, QA, Language Modelling, Language Generation, T5, Multi-Modal, and Conversational AI
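A minimal sketch of the typical Simple Transformers text-classification workflow; the tiny DataFrame is a stand-in, and defaults may differ across versions:

```python
import pandas as pd
from simpletransformers.classification import ClassificationModel

# Two-column DataFrame: text and integer labels.
train_df = pd.DataFrame(
    [["great movie", 1], ["terrible plot", 0]], columns=["text", "labels"]
)

# BERT-based classifier; use_cuda=False keeps the sketch CPU-only.
model = ClassificationModel("bert", "bert-base-cased", use_cuda=False)
model.train_model(train_df)
predictions, raw_outputs = model.predict(["what a fantastic film"])
print(predictions)
```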
Open source annotation tool for machine learning practitioners.
Keyphrase or Keyword Extraction: Chinese keyphrase extraction based on pre-trained language models (the Chinese implementation of the paper SIFRank: A New Baseline for Unsupervised Keyphrase Extraction Based on Pre-trained Language Model)
Paper list of KBQA (RUCAIBox/Awesome-KBQA, forked from JBoRu/Awesome-KBQA)
Mastering C++ :punch:. A study repo for C++ Primer, 5th edition (Chinese translation), including notes and solutions to the exercises.
Chinese version of GPT2 training code, using BERT tokenizer.
A very simple framework for state-of-the-art Natural Language Processing (NLP)
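A minimal Flair sketch for pre-trained NER tagging; the `"ner"` shortcut loads a standard English model:

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load a pre-trained English NER tagger and annotate a sentence.
tagger = SequenceTagger.load("ner")
sentence = Sentence("George Washington went to Washington.")
tagger.predict(sentence)
for entity in sentence.get_spans("ner"):
    print(entity)  # span text plus predicted tag, e.g. PER, LOC
```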
An open source AutoML toolkit for automating the machine learning lifecycle, including feature engineering, neural architecture search, model compression, and hyper-parameter tuning.
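A sketch of NNI trial code for hyper-parameter tuning; `train_and_evaluate` is a hypothetical stand-in for real training, and the parameter names are assumptions (the search space is defined separately in the experiment config):

```python
import nni

def train_and_evaluate(lr, hidden_size):
    # Hypothetical stand-in for real training; returns a mock metric.
    return 1.0 - abs(lr - 0.01) - abs(hidden_size - 128) / 1000.0

# Receive a hyper-parameter set from the tuner, evaluate it, and report back.
params = nni.get_next_parameter()  # empty dict when run standalone
accuracy = train_and_evaluate(params.get("lr", 0.01), params.get("hidden_size", 128))
nni.report_final_result(accuracy)
```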