
๐Ÿค– ๐—Ÿ๐—ฒ๐—ฎ๐—ฟ๐—ป for ๐—ณ๐—ฟ๐—ฒ๐—ฒ how to ๐—ฏ๐˜‚๐—ถ๐—น๐—ฑ an end-to-end ๐—ฝ๐—ฟ๐—ผ๐—ฑ๐˜‚๐—ฐ๐˜๐—ถ๐—ผ๐—ป-๐—ฟ๐—ฒ๐—ฎ๐—ฑ๐˜† ๐—Ÿ๐—Ÿ๐—  & ๐—ฅ๐—”๐—š ๐˜€๐˜†๐˜€๐˜๐—ฒ๐—บ using ๐—Ÿ๐—Ÿ๐— ๐—ข๐—ฝ๐˜€ best practices: ~ ๐˜ด๐˜ฐ๐˜ถ๐˜ณ๐˜ค๐˜ฆ ๐˜ค๐˜ฐ๐˜ฅ๐˜ฆ + 11 ๐˜ฉ๐˜ข๐˜ฏ๐˜ฅ๐˜ด-๐˜ฐ๐˜ฏ ๐˜ญ๐˜ฆ๐˜ด๐˜ด๐˜ฐ๐˜ฏ๐˜ด


shivam1750/llm-twin


LLM Twin Course: Building Your Production-Ready AI Replica

An End-to-End Framework for Production-Ready LLM & RAG Systems by Building Your LLM Twin

From data gathering to productionizing LLMs using LLMOps good practices.

by Paul Iusztin, Alexandru Vesa, and Alexandru Răzvanț



Why is this course different?

By finishing the "LLM Twin: Building Your Production-Ready AI Replica" free course, you will learn how to design, train, and deploy a production-ready LLM twin of yourself powered by LLMs, vector DBs, and LLMOps good practices.

Why should you care? 🫵

→ No more isolated scripts or notebooks! Learn production ML by building and deploying an end-to-end, production-grade LLM system.

What will you learn to build by the end of this course?

You will learn how to architect and build a real-world LLM system from start to finish: from data collection to deployment.

You will also learn to leverage MLOps best practices, such as experiment trackers, model registries, prompt monitoring, and versioning.

The end goal? Build and deploy your own LLM twin.

What is an LLM Twin? It is an AI character that learns to write like someone by incorporating that person's style and personality into an LLM.

The architecture of the LLM twin is split into 4 Python microservices:

The data collection pipeline

  • Crawl your digital data from various social media platforms.
  • Clean, normalize, and load the data into a MongoDB NoSQL database through a series of ETL pipelines.
  • Send database changes to a RabbitMQ queue using the CDC pattern.
  • โ˜๏ธ Deployed on AWS.

The feature pipeline

  • Consume messages from a queue through a Bytewax streaming pipeline.
  • Every message is cleaned, chunked, embedded (using Superlinked), and loaded into a Qdrant vector DB in real time.
  • โ˜๏ธ Deployed on AWS.

The training pipeline

  • Create a custom dataset based on your digital data.
  • Fine-tune an LLM using QLoRA.
  • Use Comet ML's experiment tracker to monitor the experiments.
  • Evaluate and save the best model to Comet's model registry.
  • โ˜๏ธ Deployed on Qwak.

The inference pipeline

  • Load and quantize the fine-tuned LLM from Comet's model registry.
  • Deploy it as a REST API.
  • Enhance the prompts using RAG.
  • Generate content using your LLM twin.
  • Monitor the LLM using Comet's prompt monitoring dashboard.
  • โ˜๏ธ Deployed on Qwak.



Alongside the 4 microservices, you will learn to integrate 3 serverless tools: Comet ML as your ML platform, Qdrant as your vector DB, and Qwak as your ML infrastructure.

Who is this for?

Audience: machine learning engineers, data engineers, data scientists, or software engineers who want to learn how to engineer production-ready LLM systems using sound LLMOps principles.

Level: intermediate

Prerequisites: basic knowledge of Python, ML, and the cloud

How will you learn?

The course contains 11 hands-on written lessons and the open-source code you can access on GitHub.

You can read everything and try out the code at your own pace.

Costs?

The articles and code are completely free. They will always remain free.

If you plan to run the code while reading, note that we use several cloud tools that may generate additional costs.

Pay as you go

  • AWS offers accessible plans to new users.
    • A new first-time account can get up to $300 in free credits, valid for 6 months. For more, consult the AWS Offerings page.
  • Qwak has a QPU-based pricing plan. Here's what you need to know:
    • A QPU (Qwak Processing Unit) is the equivalent of 4 vCPUs with 16 GB of RAM.
    • Qwak offers up to 100 QPU/month for free for up to one year after registration.
    • After that, a pay-as-you-go rate of $1.2/QPU applies.
    • For more about Qwak pricing, consult the Qwak Pricing page.
    • For more about Qwak compute instances, consult the Qwak Instances page.
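Under the figures quoted above (100 free QPU/month, then $1.2/QPU), a month's bill is simple arithmetic. The helper below is a sketch with those numbers as defaults; verify them against Qwak's pricing page before budgeting:

```python
def monthly_qwak_cost(qpu_used: float, free_qpu: float = 100.0, rate_usd: float = 1.2) -> float:
    """Estimate a monthly Qwak bill: free tier first, then pay-as-you-go.

    Defaults mirror the figures quoted in this README (100 free QPU/month,
    $1.2 per QPU after); rates may change, so treat them as assumptions.
    """
    billable = max(0.0, qpu_used - free_qpu)
    return round(billable * rate_usd, 2)


# Example: 150 QPU used in a month leaves 50 billable QPU,
# so the bill is 50 * $1.2 = $60.
```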

Freemium (Free-of-Charge)

  • Qdrant and Comet ML offer freemium plans that cover everything needed to follow the course.

Lessons

Important

To understand the entire code step-by-step, check out our articles ↓

The course is split into 11 lessons; each Medium article is its own lesson.

System Design

  1. An End-to-End Framework for Production-Ready LLM Systems by Building Your LLM Twin

Data Engineering: Gather & store the data for your LLM twin

  2. The Importance of Data Pipelines in the Era of Generative AI
  3. Change Data Capture: Enabling Event-Driven Architectures

Feature Pipeline: prepare data for LLM fine-tuning & RAG

  4. SOTA Python Streaming Pipelines for Fine-tuning LLMs and RAG — in Real-Time!
  5. The 4 Advanced RAG Algorithms You Must Know to Implement

Training Pipeline: fine-tune your LLM twin

  6. The Role of Feature Stores in Fine-Tuning LLMs: From raw data to instruction dataset
  7. How to fine-tune LLMs on custom datasets at Scale using Qwak and CometML
  8. Best Practices when evaluating fine-tuned LLMs

Inference Pipeline: serve your LLM twin

  9. Architect scalable and cost-effective LLM & RAG inference pipelines
  10. How to evaluate your RAG using RAGAs Framework

Grand Finale

  11. The LLM-Twin Free Course on Production-Ready RAG applications
  12. Ending Notes (TBD)

Meet your teachers!

The course is created under the Decoding ML umbrella by:

  • Paul Iusztin: Senior ML & MLOps Engineer
  • Alexandru Vesa: Senior AI Engineer
  • Răzvanț Alexandru: Senior ML Engineer

License

This course is an open-source project released under the MIT license. Thus, as long as you distribute our LICENSE and acknowledge our work, you can safely clone or fork this project and use it as a source of inspiration for whatever you want (e.g., university projects, college degree projects, personal projects, etc.).

๐Ÿ† Contribution

A big "Thank you ๐Ÿ™" to all our contributors! This course is possible only because of their efforts.
