Stable Diffusion web UI
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
Official Code for DragGAN (SIGGRAPH 2023)
RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable). We are at RWKV-7 "Goose". It combines the best of RNN and transformer.
Implementation of DALL-E 2, OpenAI's updated text-to-image synthesis neural network, in PyTorch
An open-source implementation of CLIP.
Code for the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer"
XLNet: Generalized Autoregressive Pretraining for Language Understanding
TensorFlow solution for the NER task using a BiLSTM-CRF model with Google BERT fine-tuning, plus private server services
Pretrained language model with 100B parameters
ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators
[ECCV 2022] XMem: Long-Term Video Object Segmentation with an Atkinson-Shiffrin Memory Model
Source code and dataset for ACL 2019 paper "ERNIE: Enhanced Language Representation with Informative Entities"
An ultra fast cross-platform multiple screenshots module in pure Python using ctypes.
Self-contained, minimalistic implementation of diffusion models with PyTorch.
ROCKET: Exceptionally fast and accurate time series classification using random convolutional kernels
Japanese Riichi Mahjong AI agent. (Feel free to extend this agent or develop your own agent)
Memory-mapped NumPy arrays of varying shapes
Original implementation of Spatially Invariant Attend, Infer, Repeat (SPAIR) in TensorFlow.