fix: update default settings and local model guide (Cinnamon#156)
taprosoft authored Aug 30, 2024
1 parent 4b2b334 commit 9354ad8
Showing 9 changed files with 170 additions and 5 deletions.
12 changes: 8 additions & 4 deletions README.md
@@ -3,7 +3,7 @@
An open-source clean & customizable RAG UI for chatting with your documents. Built with both end users and
developers in mind.

![Preview](docs/images/preview-graph.png)
![Preview](https://raw.githubusercontent.com/Cinnamon/kotaemon/main/docs/images/preview-graph.png)

[Live Demo](https://huggingface.co/spaces/taprosoft/kotaemon) |
[Source Code](https://github.com/Cinnamon/kotaemon)
@@ -68,7 +68,7 @@ appreciated.

- **Extensible**. Being built on Gradio, you are free to customize / add any UI elements as you like. Also, we aim to support multiple strategies for document indexing & retrieval. `GraphRAG` indexing pipeline is provided as an example.

![Preview](docs/images/preview.png)
![Preview](https://raw.githubusercontent.com/Cinnamon/kotaemon/main/docs/images/preview.png)

## Installation

@@ -114,7 +114,7 @@ pip install -e "libs/ktem"

- (Optional) To enable in-browser PDF_JS viewer, download [PDF_JS_DIST](https://github.com/mozilla/pdf.js/releases/download/v4.0.379/pdfjs-4.0.379-dist.zip) and extract it to `libs/ktem/ktem/assets/prebuilt`

<img src="docs/images/pdf-viewer-setup.png" alt="pdf-setup" width="300">
<img src="https://raw.githubusercontent.com/Cinnamon/kotaemon/main/docs/images/pdf-viewer-setup.png" alt="pdf-setup" width="300">

- Start the web server:

@@ -128,6 +128,10 @@ Default username / password are: `admin` / `admin`. You can setup additional use

![Chat tab](https://raw.githubusercontent.com/Cinnamon/kotaemon/main/docs/images/chat-tab.png)

## Setup local models (for local / private RAG)

See [Local model setup](docs/local_model.md).

## Customize your application

By default, all application data is stored in the `./ktem_app_data` folder. You can back up or copy this folder to move your installation to a new machine.
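
For example, assuming a Unix shell and a hypothetical archive name:

```
# archive the app data, then restore it on the new machine
tar -czf ktem_app_data_backup.tar.gz ./ktem_app_data
```
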
@@ -225,7 +229,7 @@ ollama pull nomic-embed-text

Set the model names in the web UI and mark them as default.

![Models](docs/images/models.png)
![Models](https://raw.githubusercontent.com/Cinnamon/kotaemon/main/docs/images/models.png)

##### Using GGUF with llama-cpp-python

1 change: 1 addition & 0 deletions doc_env_reqs.txt
@@ -3,6 +3,7 @@ mkdocstrings[python]
mkdocs-material
mkdocs-gen-files
mkdocs-literate-nav
mkdocs-video
mkdocs-git-revision-date-localized-plugin
mkdocs-section-index
mdx_truly_sane_lists
Binary file added docs/images/index-embedding.png
Binary file added docs/images/llm-default.png
Binary file added docs/images/retrieval-setting.png
90 changes: 90 additions & 0 deletions docs/local_model.md
@@ -0,0 +1,90 @@
# Setup local LLMs & Embedding models

## Prepare local models

#### NOTE

If you are running the app from a Docker image, replace `http://localhost` with `http://host.docker.internal` to correctly reach services on the host machine. See [more detail](https://stackoverflow.com/questions/31324981/how-to-access-host-port-from-docker-container).
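
On Linux hosts, `host.docker.internal` is not defined by default; a common workaround (Docker 20.10+) is to map it to the host gateway when starting the container. The image name and port below are assumptions for illustration:

```
# --add-host makes host.docker.internal resolve inside the container (Linux)
docker run --add-host=host.docker.internal:host-gateway -p 7860:7860 <kotaemon-image>
```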

### Ollama OpenAI compatible server (recommended)

Install [ollama](https://github.com/ollama/ollama) and start the application.

Pull your models, e.g.:

```
ollama pull llama3.1:8b
ollama pull nomic-embed-text
```

Set up the LLM and embedding models on the Resources tab with type OpenAI. Use these parameters to connect to Ollama:

```
api_key: ollama
base_url: http://localhost:11434/v1/
model: llama3.1:8b (for LLM) | nomic-embed-text (for embedding)
```

![Models](https://raw.githubusercontent.com/Cinnamon/kotaemon/main/docs/images/models.png)
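
To verify the endpoint before wiring it into the UI, a quick sanity check against Ollama's OpenAI-compatible API (assuming the model was pulled as above):

```
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3.1:8b", "messages": [{"role": "user", "content": "Hello"}]}'
```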

### oobabooga/text-generation-webui OpenAI compatible server

Install [oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui/).

Follow the setup guide to download your models (GGUF, HF).
Also take a look at [OpenAI compatible server](https://github.com/oobabooga/text-generation-webui/wiki/12-%E2%80%90-OpenAI-API) for detailed instructions.

Here is a short version:

```
# install sentence-transformers for embedding creation
pip install sentence_transformers
# start the server from the text-generation-webui source directory
cd text-generation-webui
python server.py --api
```

Use the `Models` tab to download a new model, then press Load.

Set up the LLM and embedding models on the Resources tab with type OpenAI. Use these parameters to connect to `text-generation-webui`:

```
api_key: dummy
base_url: http://localhost:5000/v1/
model: any
```
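
A similar sanity check works here once the server is running with `--api`; listing the available models should respond (the `dummy` key is ignored by the server):

```
curl http://localhost:5000/v1/models -H "Authorization: Bearer dummy"
```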

### llama-cpp-python server (LLM only)

See [llama-cpp-python OpenAI server](https://llama-cpp-python.readthedocs.io/en/latest/server/).

Download any GGUF model weights from HuggingFace or another source and place the file somewhere on your local machine.

Run:

```
LOCAL_MODEL=<path/to/GGUF> python scripts/serve_local.py
```

Set up the LLM on the Resources tab with type OpenAI. Use these parameters to connect to `llama-cpp-python`:

```
api_key: dummy
base_url: http://localhost:8000/v1/
model: model_name
```
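
Alternatively, llama-cpp-python ships its own OpenAI-compatible server module, which can be launched directly; this assumes the server extra is installed (`pip install "llama-cpp-python[server]"`):

```
# serves an OpenAI-compatible API on port 8000
python -m llama_cpp.server --model <path/to/GGUF> --port 8000
```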

## Use local models for RAG

- Set the default LLM and embedding models to local variants.

![Models](https://raw.githubusercontent.com/Cinnamon/kotaemon/main/docs/images/llm-default.png)

- Set the embedding model for the File Collection to a local model (e.g. `ollama`).

![Index](https://raw.githubusercontent.com/Cinnamon/kotaemon/main/docs/images/index-embedding.png)

- In Retrieval settings, choose a local model as the LLM relevance-scoring model (e.g. `ollama`), or disable this feature if your machine cannot handle many parallel LLM requests.

![Settings](https://raw.githubusercontent.com/Cinnamon/kotaemon/main/docs/images/retrieval-setting.png)

You are set! Start a new conversation to test your local RAG pipeline.
2 changes: 2 additions & 0 deletions flowsettings.py
@@ -152,6 +152,7 @@
"__type__": "kotaemon.llms.ChatOpenAI",
"base_url": "http://localhost:11434/v1/",
"model": config("LOCAL_MODEL", default="llama3.1:8b"),
"api_key": "ollama",
},
"default": False,
}
@@ -160,6 +161,7 @@
"__type__": "kotaemon.embeddings.OpenAIEmbeddings",
"base_url": "http://localhost:11434/v1/",
"model": config("LOCAL_MODEL_EMBEDDINGS", default="nomic-embed-text"),
"api_key": "ollama",
},
"default": False,
}
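
These defaults can be overridden without editing `flowsettings.py`: assuming `config()` reads environment variables or a `.env` file (python-decouple style), entries like the following would switch the local models. The values are illustrative:

```
# hypothetical .env entries picked up by config() at startup
LOCAL_MODEL=llama3.1:8b
LOCAL_MODEL_EMBEDDINGS=nomic-embed-text
```
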
36 changes: 35 additions & 1 deletion scripts/run_linux.sh
Expand Up @@ -74,7 +74,7 @@ function activate_conda_env() {

source "$conda_root/etc/profile.d/conda.sh" # conda init
conda activate "$env_dir" || {
echo "Failed to activate environment. Please remove $env_dir and run the installer again"
echo "Failed to activate environment. Please remove $env_dir and run the installer again."
exit 1
}
echo "Activate conda environment at $CONDA_PREFIX"
@@ -143,6 +143,32 @@ function setup_local_model() {
python $(pwd)/scripts/serve_local.py
}

function download_and_unzip() {
local url=$1
local dest_dir=$2

# Skip the download if the destination directory already exists
if [ -d "$dest_dir" ]; then
echo "Destination directory $dest_dir already exists. Skipping download."
return
fi

mkdir -p "$dest_dir"

# Download the ZIP file
local zip_file="${dest_dir}/downloaded.zip"
echo "Downloading $url to $zip_file"
curl -L -o "$zip_file" "$url"

# Unzip the file to the destination directory
echo "Unzipping $zip_file to $dest_dir"
unzip -o "$zip_file" -d "$dest_dir"

# Clean up the downloaded ZIP file
rm "$zip_file"
echo "Download and unzip completed successfully."
}

function launch_ui() {
python $(pwd)/app.py || {
echo "" && echo "Will exit now..."
@@ -167,6 +193,11 @@ conda_root="${install_dir}/conda"
env_dir="${install_dir}/env"
python_version="3.10"

pdf_js_version="4.0.379"
pdf_js_dist_name="pdfjs-${pdf_js_version}-dist"
pdf_js_dist_url="https://github.com/mozilla/pdf.js/releases/download/v${pdf_js_version}/${pdf_js_dist_name}.zip"
target_pdf_js_dir="$(pwd)/libs/ktem/ktem/assets/prebuilt/${pdf_js_dist_name}"

check_path_for_spaces

print_highlight "Setting up Miniconda"
@@ -179,6 +210,9 @@ activate_conda_env
print_highlight "Installing requirements"
install_dependencies

print_highlight "Downloading and unzipping PDF.js"
download_and_unzip "$pdf_js_dist_url" "$target_pdf_js_dir"

print_highlight "Setting up a local model"
setup_local_model

34 changes: 34 additions & 0 deletions scripts/run_macos.sh
@@ -144,6 +144,32 @@ function setup_local_model() {
python $(pwd)/scripts/serve_local.py
}

function download_and_unzip() {
local url=$1
local dest_dir=$2

# Skip the download if the destination directory already exists
if [ -d "$dest_dir" ]; then
echo "Destination directory $dest_dir already exists. Skipping download."
return
fi

mkdir -p "$dest_dir"

# Download the ZIP file
local zip_file="${dest_dir}/downloaded.zip"
echo "Downloading $url to $zip_file"
curl -L -o "$zip_file" "$url"

# Unzip the file to the destination directory
echo "Unzipping $zip_file to $dest_dir"
unzip -o "$zip_file" -d "$dest_dir"

# Clean up the downloaded ZIP file
rm "$zip_file"
echo "Download and unzip completed successfully."
}

function launch_ui() {
python $(pwd)/app.py || {
echo "" && echo "Will exit now..."
@@ -171,6 +197,11 @@ conda_root="${install_dir}/conda"
env_dir="${install_dir}/env"
python_version="3.10"

pdf_js_version="4.0.379"
pdf_js_dist_name="pdfjs-${pdf_js_version}-dist"
pdf_js_dist_url="https://github.com/mozilla/pdf.js/releases/download/v${pdf_js_version}/${pdf_js_dist_name}.zip"
target_pdf_js_dir="$(pwd)/libs/ktem/ktem/assets/prebuilt/${pdf_js_dist_name}"

check_path_for_spaces

print_highlight "Setting up Miniconda"
@@ -183,6 +214,9 @@ activate_conda_env
print_highlight "Installing requirements"
install_dependencies

print_highlight "Downloading and unzipping PDF.js"
download_and_unzip "$pdf_js_dist_url" "$target_pdf_js_dir"

print_highlight "Setting up a local model"
setup_local_model

