
๐Ÿค– ๐—Ÿ๐—ฒ๐—ฎ๐—ฟ๐—ป for ๐—ณ๐—ฟ๐—ฒ๐—ฒ how to ๐—ฏ๐˜‚๐—ถ๐—น๐—ฑ an end-to-end ๐—ฝ๐—ฟ๐—ผ๐—ฑ๐˜‚๐—ฐ๐˜๐—ถ๐—ผ๐—ป-๐—ฟ๐—ฒ๐—ฎ๐—ฑ๐˜† ๐—Ÿ๐—Ÿ๐—  & ๐—ฅ๐—”๐—š ๐˜€๐˜†๐˜€๐˜๐—ฒ๐—บ using ๐—Ÿ๐—Ÿ๐— ๐—ข๐—ฝ๐˜€ best practices: ~ ๐˜ด๐˜ฐ๐˜ถ๐˜ณ๐˜ค๐˜ฆ ๐˜ค๐˜ฐ๐˜ฅ๐˜ฆ + 12 ๐˜ฉ๐˜ข๐˜ฏ๐˜ฅ๐˜ด-๐˜ฐ๐˜ฏ ๐˜ญ๐˜ฆ๐˜ด๐˜ด๐˜ฐ๐˜ฏ๐˜ด


LLM Twin Course: Building Your Production-Ready AI Replica

Learn to architect and implement a production-ready LLM & RAG system by building your LLM Twin

From data gathering to productionizing LLMs using LLMOps good practices.

by Decoding ML


Why is this course different?

By finishing the "LLM Twin: Building Your Production-Ready AI Replica" free course, you will learn how to design, train, and deploy a production-ready LLM twin of yourself powered by LLMs, vector DBs, and LLMOps good practices.

Why should you care? 🫵

→ No more isolated scripts or notebooks! Learn production ML by building and deploying an end-to-end production-grade LLM system.

What will you learn to build by the end of this course?

You will learn how to architect and build a real-world LLM system from start to finish, from data collection to deployment.

You will also learn to leverage MLOps best practices, such as experiment trackers, model registries, prompt monitoring, and versioning.

The end goal? Build and deploy your own LLM twin.

What is an LLM Twin? It is an AI character that learns to write like somebody by incorporating their style and personality into an LLM.

The architecture of the LLM Twin is split into 4 Python microservices:

LLM Twin Architecture

The data collection pipeline

  • Crawl your digital data from various social media platforms, such as Medium, Substack and GitHub.
  • Clean, normalize and load the data to a Mongo NoSQL DB through a series of ETL pipelines.
  • Send database changes to a RabbitMQ queue using the CDC pattern.
  • Learn to package the crawlers as AWS Lambda functions.
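The CDC step above can be sketched as follows. This is a minimal illustration with hypothetical field names and a hypothetical queue name, not the course's exact schema; the actual pipeline publishes these messages to RabbitMQ via a client library such as pika.

```python
import json
from datetime import datetime, timezone

def build_cdc_event(collection: str, operation: str, document: dict) -> str:
    """Serialize a MongoDB change into a JSON message for the queue.

    Field names here are illustrative, not the course's exact schema.
    """
    event = {
        "collection": collection,   # e.g. "articles", "posts", "repositories"
        "operation": operation,     # "insert", "update" or "delete"
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "payload": document,
    }
    return json.dumps(event)

# In the real pipeline, this string would then be published to RabbitMQ, e.g.
# channel.basic_publish(exchange="", routing_key="mongo_changes", body=message)
message = build_cdc_event("articles", "insert", {"_id": "abc123", "content": "..."})
```

Keeping the message a plain JSON string decouples the producer (the ETL/CDC side) from the downstream streaming consumer.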

The feature pipeline

  • Consume messages in real-time from a queue through a Bytewax streaming pipeline.
  • Every message will be cleaned, chunked, embedded and loaded into a Qdrant vector DB.
  • In the bonus series, we refactor the cleaning, chunking, and embedding logic using Superlinked, a specialized vector compute engine. We will also load and index the vectors to a Redis vector DB.
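As a rough sketch of the chunking step, the snippet below splits cleaned text into overlapping windows before embedding. The character-based splitting and the size values are illustrative assumptions; the course uses its own chunking logic and an embedding model for the vectors.

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split cleaned text into overlapping character windows.

    chunk_size / overlap values are illustrative, not the course's settings.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks

# Each chunk would then be embedded (e.g. with a sentence-transformer model)
# and upserted into the Qdrant vector DB together with its metadata.
```

The overlap preserves context across chunk boundaries, which helps retrieval when a relevant sentence straddles two chunks.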

The training pipeline

  • Create a custom instruction dataset from your digital data for supervised fine-tuning (SFT).
  • Fine-tune an LLM using LoRA or QLoRA.
  • Use Comet ML's experiment tracker to monitor the experiments.
  • Evaluate the LLM using Opik.
  • Save and version the best model to the Hugging Face model registry.
  • Run and automate the training pipeline using AWS SageMaker.
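To make the dataset step concrete, here is a minimal sketch of shaping one SFT sample. The Alpaca-style prompt template and the field names are assumptions for illustration, not necessarily the exact format used in the course.

```python
def format_sft_sample(instruction: str, response: str) -> dict:
    """Shape one instruction/response pair for supervised fine-tuning (SFT).

    The Alpaca-style template below is a common convention, not necessarily
    the exact template used in the course.
    """
    prompt = (
        "### Instruction:\n"
        f"{instruction}\n\n"
        "### Response:\n"
    )
    return {"prompt": prompt, "completion": response}

sample = format_sft_sample(
    "Write a paragraph about vector DBs in my writing style.",
    "Vector DBs store embeddings so you can search by meaning...",
)
```

A collection of such samples, built from your crawled content, becomes the dataset the LoRA/QLoRA fine-tuning job consumes.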

The inference pipeline

  • Load the fine-tuned LLM from the Hugging Face model registry.
  • Deploy the LLM as a scalable REST API using AWS SageMaker inference endpoints.
  • Enhance the prompts using advanced RAG techniques.
  • Monitor the prompts and the LLM-generated results using Opik.
  • In the bonus series, we refactor the advanced RAG layer to write more optimal queries using Superlinked.
  • Wrap up everything with a Gradio UI (as seen below) where you can start playing around with the LLM Twin to generate content that follows your writing style.

Gradio UI
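The prompt-enhancement step above boils down to assembling retrieved context into the final prompt. Below is a minimal sketch with hypothetical wording and an illustrative context budget; the course's advanced RAG layer adds techniques such as query expansion and re-ranking on top of this basic pattern.

```python
def build_rag_prompt(query: str, retrieved_chunks: list[str],
                     max_context_chars: int = 2000) -> str:
    """Assemble the final LLM prompt from the user query plus retrieved context.

    Chunks are assumed to be sorted by relevance; the budget keeps the
    context within a fixed character limit (an illustrative value).
    """
    context, used = [], 0
    for chunk in retrieved_chunks:
        if used + len(chunk) > max_context_chars:
            break  # drop lower-ranked chunks that exceed the budget
        context.append(chunk)
        used += len(chunk)
    return (
        "Answer the question using only the context below.\n\n"
        "Context:\n" + "\n---\n".join(context) + "\n\n"
        f"Question: {query}\nAnswer:"
    )
```

The resulting string is what gets sent to the SageMaker inference endpoint and logged by the prompt-monitoring layer.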

Alongside the 4 microservices, you will learn to integrate 4 serverless tools:

  • Comet ML as your experiment tracker and data registry;
  • Qdrant as your vector DB;
  • AWS SageMaker as your ML infrastructure;
  • Opik as your prompt evaluation and monitoring tool.

Who is this for?

Audience: MLEs, DEs, DSs, or SWEs who want to learn to engineer production-ready LLM & RAG systems using LLMOps best practices.

Level: intermediate

Prerequisites: basic knowledge of Python and ML

How will you learn?

The course contains 10 hands-on written lessons and the open-source code you can access on GitHub, showing how to build an end-to-end LLM system.

Also, it includes 2 bonus lessons on how to improve the RAG system.

You can read everything at your own pace.

Costs?

The articles and code are completely free. They will always remain free.

If you plan to run the code while reading, be aware that we use several cloud tools that might generate additional costs.

Pay as you go

  • AWS offers accessible plans to new users.
    • For a new first-time account, you could get up to $300 in free credits, valid for 6 months. For more details, consult the AWS Offerings page.

Freemium (Free-of-Charge)

  • Qdrant, Comet ML, and Opik offer freemium plans that are free of charge.

Questions and troubleshooting

Please ask us any questions if anything is confusing while studying the articles or running the code.

You can ask any question by opening an issue in this GitHub repository.

Lessons

This self-paced course consists of 12 comprehensive lessons covering theory, system design, and hands-on implementation.

Our recommendation for each lesson:

  1. Read the article
  2. Run the code
  3. Review the source code
| Module | Article | Category | Description | Source Code |
| --- | --- | --- | --- | --- |
| 1 | An End-to-End Framework for Production-Ready LLM Systems | System Design | Learn the overall architecture and design principles of production LLM systems | No code |
| 2 | Your Content is Gold | Data Engineering | Learn to crawl and process blog posts for LLM training | src/data_crawling |
| 3 | CDC Magic | Data Engineering | Learn to implement Change Data Capture for efficient data pipelines | src/data_cdc |
| 4 | SOTA Python Streaming Pipelines | Feature Pipeline | Build real-time streaming pipelines for LLM data processing | src/feature_pipeline |
| 5 | Advanced RAG Algorithms | Feature Pipeline | Implement advanced RAG techniques for better retrieval | src/feature_pipeline |
| 6 | Fine-Tuning Datasets | Training Pipeline | Create custom datasets for LLM fine-tuning | src/training_pipeline/datasets |
| 7 | LLM Fine-tuning Pipeline | Training Pipeline | Build an end-to-end LLM fine-tuning pipeline | src/training_pipeline |
| 8 | LLM & RAG Evaluation | Training Pipeline | Learn to evaluate LLM and RAG system performance | src/inference_pipeline/evaluation |
| 9 | Scalable RAG Systems | Inference Pipeline | Design and implement production-grade RAG systems | src/inference_pipeline |
| 10 | Prompt Monitoring | Inference Pipeline | Build robust prompt monitoring systems | src/inference_pipeline |
| 11 | Scalable RAG Ingestion | RAG Optimization | Optimize RAG ingestion pipelines | src/bonus_superlinked_rag |
| 12 | Multi-Index RAG Apps | RAG Optimization | Build advanced multi-index RAG applications | src/bonus_superlinked_rag |

Note

Check the INSTALL_AND_USAGE doc for a step-by-step installation and usage guide.

Project Structure

At Decoding ML, we teach how to build production ML systems, so the course follows the structure of a real-world Python project:

llm-twin-course/
├── src/                       # Source code for all microservices
│   ├── data_crawling/         # Data collection pipeline code
│   ├── data_cdc/              # Change Data Capture pipeline code
│   ├── feature_pipeline/      # Feature engineering pipeline code
│   ├── training_pipeline/     # Training pipeline code
│   ├── inference_pipeline/    # Inference service code
│   └── bonus_superlinked_rag/ # Bonus RAG optimization code
├── .env.example               # Example environment variables template
├── Makefile                   # Commands to build and run the project
└── pyproject.toml             # Project dependencies

Install & Usage

To understand how to install and run the LLM Twin code end-to-end, go to the dedicated INSTALL_AND_USAGE document.

Note

Even though you can run everything solely using the INSTALL_AND_USAGE document, we recommend reading the articles to fully understand the LLM Twin system and its design choices.

Bonus Superlinked series

The bonus Superlinked series has an extra dedicated README that you can access under the src/bonus_superlinked_rag directory.

In that section, we explain how to run it with the improved RAG layer powered by Superlinked.

License

This course is an open-source project released under the MIT license. As long as you distribute our LICENSE and acknowledge our work, you can safely clone or fork this project and use it as a source of inspiration for whatever you want (e.g., university projects, college degree projects, personal projects, etc.).

Contributors

A big "Thank you 🙏" to all our contributors! This course is possible only because of their efforts.

Sponsors

Also, another big "Thank you 🙏" to all our sponsors who supported our work and made this course possible.

Comet Opik Bytewax Qdrant Superlinked

Next steps

Our LLM Engineer's Handbook inspired the open-source LLM Twin course.

Consider supporting our work by getting our book to learn a complete framework for building and deploying production LLM & RAG systems, from data to deployment.

Perfect for practitioners who want both theory and hands-on expertise by connecting the dots between DE, research, MLE and MLOps:

Buy the LLM Engineer's Handbook

LLM Engineer's Handbook
