Stars
Readable Conditional Denoising Diffusion
An OpenAI-compatible exllamav2 API that's both lightweight and fast
Annotated Flow Matching paper
Zstandard - Fast real-time compression algorithm
A mini-library for training consistency models.
Implementation of a k-space cold diffusion model for accelerated MRI reconstruction.
Hunt down social media accounts by username across social networks
Official PyTorch implementation of the paper "Wavelet Diffusion Models Are Fast and Scalable Image Generators" (CVPR 2023)
Stable diffusion for inpainting
🦙 LaMa Image Inpainting, Resolution-robust Large Mask Inpainting with Fourier Convolutions, WACV 2022
Official PyTorch Code and Models of "RePaint: Inpainting using Denoising Diffusion Probabilistic Models", CVPR 2022
Xray, Penetrates Everything. Also the best v2ray-core, with XTLS support. Fully compatible configuration.
Predicting molecular structure from NMR spectra
Datagrok repository for ADMET property evaluation
A fast solid-state NMR spectrum simulation and analysis library.
A lightweight Python-only library for reading and writing SMILES strings
A curated list of papers on constrained decoding of LLMs, along with relevant code and resources.
Written in C++ and using SDL, The Powder Toy is a desktop version of the classic "falling sand" physics sandbox; it simulates air pressure, velocity, and heat.
Open-source Windows and Office activator featuring HWID, Ohook, KMS38, and Online KMS activation methods, along with advanced troubleshooting.
Awesome multilingual OCR toolkits based on PaddlePaddle (practical ultra-lightweight OCR system, supports recognition of 80+ languages, provides data annotation and synthesis tools, supports training and…
The repository provides code for running inference with the Meta Segment Anything Model 2 (SAM 2), links for downloading the trained model checkpoints, and example notebooks showing how to use th…
Efficient and general syntactical decoding for Large Language Models
A fast inference library for running LLMs locally on modern consumer-class GPUs