Local Chatbot with RAG: A privacy-focused AI chatbot using Retrieval-Augmented Generation. Features FastAPI backend, React UI, and Docker deployment. Processes uploaded documents for contextual responses. Ideal for custom, locally-deployed AI solutions.


Local Chatbot with RAG

This project implements a local chatbot using Retrieval-Augmented Generation (RAG) technology. It consists of three main components: a chatbot API, a user interface, and the infrastructure to support the chatbot system.

Ideal for developers and researchers looking to implement a customizable, privacy-focused chatbot solution that combines the power of large language models with the flexibility of local deployment.

demo

Chatbot API

The chatbot-api component is responsible for handling the core logic of the chatbot, including:

  • Processing user inputs
  • Generating appropriate responses using RAG technology
  • Managing conversation context
  • Integrating with external services and databases
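The context-management step above can be sketched in a few lines of plain Python. This is a hypothetical illustration, not the project's actual implementation (which may count tokens rather than characters): keep only the most recent turns that fit within a budget before sending the conversation to the model.

```python
# Hypothetical sketch of conversation-context management; the real
# chatbot-api may use token counting or a different trimming strategy.

def trim_history(history, budget=2000):
    """Keep the most recent (role, text) turns whose combined length
    fits within `budget` characters, preserving chronological order."""
    kept, used = [], 0
    for role, text in reversed(history):
        if used + len(text) > budget:
            break
        kept.append((role, text))
        used += len(text)
    return list(reversed(kept))

history = [
    ("user", "a" * 1500),      # an old, long message that gets dropped
    ("assistant", "b" * 400),
    ("user", "What does the uploaded PDF say about pricing?"),
]
trimmed = trim_history(history, budget=500)
```

Trimming from the newest turn backwards guarantees the latest user message always survives, which matters more for answer quality than retaining distant history.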

Key Features

  • RESTful API endpoints for chatbot interactions
  • Document upload and text extraction (PDF, DOCX, TXT)
  • Vector-based document search
  • Integration with Ollama for language model inference
  • Database storage for chat history and documents
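Between text extraction and vector search sits a chunking step: extracted document text is split into overlapping pieces before embedding. The sketch below is illustrative only — the chunk size, overlap, and function name are assumptions, not the project's actual code.

```python
# Hypothetical chunking step between text extraction and indexing;
# size and overlap values here are illustrative.

def chunk_text(text, size=200, overlap=50):
    """Split `text` into chunks of at most `size` characters,
    with `overlap` characters shared between adjacent chunks."""
    if size <= overlap:
        raise ValueError("size must exceed overlap")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks

doc = "x" * 450
chunks = chunk_text(doc, size=200, overlap=50)
```

Overlap ensures a sentence falling on a chunk boundary still appears intact in at least one chunk, so it remains findable by the vector search.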

Technologies Used

  • Python 3.12
  • FastAPI
  • SQLAlchemy
  • Alembic for database migrations
  • FAISS for vector storage
  • Sentence Transformers for text embedding
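The embedding pipeline — Sentence Transformers to produce vectors, FAISS to search them — can be illustrated with a NumPy stand-in. On normalized vectors, FAISS's `IndexFlatIP` performs exact inner-product search, i.e. cosine similarity, which the few lines below reproduce on toy vectors (the real project would call `model.encode(...)` and a FAISS index instead).

```python
import numpy as np

# Toy stand-in for the Sentence Transformers + FAISS pipeline:
# normalized vectors + exact inner-product search, which is what
# IndexFlatIP computes on normalized embeddings (cosine similarity).

def normalize(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Pretend these are embeddings of three document chunks.
doc_vecs = normalize(np.array([
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.9, 0.1, 0.0],
]))

query = normalize(np.array([[0.8, 0.3, 0.0]]))

scores = doc_vecs @ query.T   # inner products = cosine similarities
best = int(np.argmax(scores)) # index of the most similar chunk
```

The highest-scoring chunks are then inserted into the prompt as retrieved context — that is the "retrieval" half of Retrieval-Augmented Generation.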

Chatbot UI

The chatbot-ui component provides the user interface for interacting with the chatbot. It includes:

  • A responsive web interface built with React.js
  • Real-time chat functionality
  • Model selection from available models
  • Document upload functionality
  • Chat history display
  • Dark mode UI using Material-UI

Technologies Used

  • React.js
  • Material-UI
  • Axios for API communication

Chatbot Infrastructure

The chatbot-infrastructure component manages the deployment, scaling, and monitoring of the chatbot system. It includes:

  • Containerization using Docker
  • Docker Compose for local development and testing
  • PostgreSQL database setup
  • Environment configuration for API and UI components
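A Docker Compose file along these lines ties the pieces together. The service names, ports, images, and environment variables below are illustrative assumptions — see the chatbot-infrastructure directory for the real configuration:

```yaml
# Illustrative sketch only; not the project's actual compose file.
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_DB: chatbot
      POSTGRES_USER: chatbot
      POSTGRES_PASSWORD: chatbot
    volumes:
      - pgdata:/var/lib/postgresql/data
  api:
    build: ../chatbot-api
    environment:
      DATABASE_URL: postgresql://chatbot:chatbot@db:5432/chatbot
      OLLAMA_HOST: http://host.docker.internal:11434
    depends_on:
      - db
    ports:
      - "8000:8000"
  ui:
    build: ../chatbot-ui
    ports:
      - "3000:3000"
volumes:
  pgdata:

```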

Key Components

  • Docker containers for API, UI, and database
  • Makefile for easy management of services
  • Integration with locally running Ollama instance
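The Makefile typically wraps Docker Compose commands so that one target brings everything up. The `run` target matches the Getting Started instructions below; the other targets here are hypothetical examples of what such a Makefile might contain:

```makefile
# Hypothetical sketch; the repository's Makefile may differ.
run:    ## build and start all services in the background
	docker compose up --build -d

stop:   ## stop and remove the containers
	docker compose down

logs:   ## follow logs from all services
	docker compose logs -f
```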

Getting Started

To get started with the project, follow these steps:

  1. Clone the repository:

    git clone https://github.com/doanhat/local-chatbot-with-rag-with-fastapi-ollama-react-faiss-postgres.git
    cd local-chatbot-with-rag-with-fastapi-ollama-react-faiss-postgres
  2. Set up and start the infrastructure:

    cd chatbot-infrastructure
    make run
  3. Access the components once the containers are running (see each component's README for the exact URLs and ports).

For detailed setup instructions, refer to the individual README files in the chatbot-api, chatbot-ui, and chatbot-infrastructure directories.
