- Bangalore, India
- @DataSenseiAryan
Stars
- Ground-up implementation of LLMs with minimal dependencies
- Ongoing research training transformer models at scale
- Everything you need to build state-of-the-art foundation models, end-to-end.
- Self-Supervised Speech Pre-training and Representation Learning Toolkit
- TIGER: Time-frequency Interleaved Gain Extraction and Reconstruction for Efficient Speech Separation
- Inframon - Local Network Server Monitor for macOS and Linux AMD Machines!
- Repo to accompany my mastering LLM engineering course
- A one-stop repository for generative AI research updates, interview resources, notebooks, and much more!
- All Algorithms implemented in Python
- Phase-Aware Speech Enhancement with Deep Complex U-Net
- Implementation of Deep Complex U-Net using PyTorch
- The implementation of "Dual-branch Attention-In-Attention Transformer for single-channel speech enhancement"
- Conformer-based Metric GAN for speech enhancement
- Explicit Estimation of Magnitude and Phase Spectra in Parallel for High-Quality Speech Enhancement
- Speech enhancement / speech separation / sound source localization
- Implementation of Wave-U-Net in PyTorch, adapted for speech enhancement
- 120+ interactive Python coding interview challenges (algorithms and data structures). Includes Anki flashcards.
- Learn Low Level Design (LLD) and prepare for interviews using free resources.
- A 4-hour coding workshop to understand how LLMs are implemented and used
- 🔥 Highlighting the top ML papers every week.