- New Delhi, India
- http://thevasudevgupta.com
- @thevasudevgupta
- in/thevasudevgupta
- https://unboxai.com/
Stars
- Recipes to train reward models for RLHF.
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch.
- An extremely fast Python package and project manager, written in Rust.
- Ongoing research training transformer models at scale.
- Implementation of a Transformer, but completely in Triton.
- FLOPs counter for convolutional networks in the PyTorch framework.
- Code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
- Simple and efficient PyTorch-native transformer text generation in under 1000 lines of Python.
- Helpful tools and examples for working with flex-attention.
- Minimalistic large language model 3D-parallelism training.
- A library for efficient similarity search and clustering of dense vectors.
- Dataframes powered by a multithreaded, vectorized query engine, written in Rust.
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
- A data streaming library for efficient neural network training.
- Fast inference solutions for BLOOM.
- DataComp: In search of the next generation of multimodal datasets.
- Serving an ONNX-optimized image classification model as a web service with FastAPI, Docker, and Kubernetes.
- An open-source framework for training large multimodal models.
- Code and documentation to train Stanford's Alpaca models and generate the data.
- Instruct-tune LLaMA on consumer hardware.
- GSoC'2021 | TensorFlow implementation of Wav2Vec2.
- Using low-rank adaptation (LoRA) to quickly fine-tune diffusion models.
- Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models".
- The simplest, fastest repository for training/finetuning medium-sized GPTs.
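Several of the starred projects above (loralib, the diffusion-model fine-tuner, 🤗 PEFT) center on LoRA. As a rough illustration of the idea, here is a minimal NumPy sketch, not any of those libraries' actual APIs: the pretrained weight `W` stays frozen, and only a low-rank update `B @ A` is learned. All names and the `alpha` scaling value below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, r = 64, 64, 4                  # rank r << min(d_in, d_out)
W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight (not trained)
A = rng.standard_normal((r, d_in)) * 0.01   # trainable, initialized small
B = np.zeros((d_out, r))                    # trainable, initialized zero
alpha = 8.0                                 # illustrative scaling hyperparameter

def lora_forward(x):
    # y = W x + (alpha / r) * B A x; only A and B receive gradient updates
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
y = lora_forward(x)

# Because B starts at zero, the adapted layer initially matches the frozen one.
assert np.allclose(y, W @ x)
```

The payoff is parameter count: training `A` and `B` costs `r * (d_in + d_out)` parameters (512 here) instead of the full `d_in * d_out` (4096), which is why LoRA appears across so many of these fine-tuning projects.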