
[pull] main from zylon-ai:main #34

Open · wants to merge 50 commits into base: main
Conversation

pull[bot]

@pull pull bot commented Jul 8, 2024

See Commits and Changes for more details.


Created by pull[bot]

Can you help keep this open source service alive? 💖 Please sponsor : )

shane-huang and others added 5 commits July 8, 2024 09:42
* Support for Google Gemini LLMs and Embeddings

Initial support for Gemini, enables usage of Google LLMs and embedding models (see settings-gemini.yaml)

Install via `poetry install --extras "llms-gemini embeddings-gemini"`

Notes:
* had to bump llama-index-core to a later version that supports Gemini
* `poetry --no-update` did not work: Gemini/llama_index seem to require more (transitive) dependency updates to make it work...

* fix: crash when gemini is not selected

* docs: add gemini llm

---------

Co-authored-by: Javier Martinez <[email protected]>
* Added ClickHouse vector store support

* port fix

* updated lock file

* fix: mypy

* fix: mypy

---------

Co-authored-by: Valery Denisov <[email protected]>
Co-authored-by: Javier Martinez <[email protected]>
* Update settings.mdx

* docs: add cmd

---------

Co-authored-by: Javier Martinez <[email protected]>
* Fix/update concepts.mdx referencing to installation page

The link for `/installation` is broken in the "Main Concepts" page.

The correct path would be `./installation` or  maybe `/installation/getting-started/installation`

* fix: docs

---------

Co-authored-by: Javier Martinez <[email protected]>
@pull pull bot added the ⤵️ pull label Jul 8, 2024
fern-support and others added 24 commits July 9, 2024 08:48
* docs: update project links

...

* docs: update citation
#1998)

* docs: add troubleshooting

* fix: pass HF token to setup script and prevent downloading the tokenizer when the token is empty

* fix: improve log and disable specific tokenizer by default

* chore: change HF_TOKEN environment to be aligned with default config

* fix: mypy
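
The HF-token guard described above can be sketched as follows. Function and parameter names here are assumptions for illustration, not the repository's actual helpers:

```python
import os


def maybe_download_tokenizer(model_name: str, download) -> bool:
    """Skip the tokenizer download when no HF token is configured.

    `download` is a callable standing in for the real download step
    (hypothetical; the actual setup script's helper may differ).
    """
    token = os.environ.get("HF_TOKEN", "").strip()
    if not token:
        print(f"HF_TOKEN is empty; skipping tokenizer download for {model_name}")
        return False
    download(model_name, token=token)
    return True
```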
* docs: add missing configurations

* docs: replace HF embeddings with Ollama

* docs: add disclaimer about Gradio UI

* docs: improve readability in concepts

* docs: reorder `Fully Local Setups`

* docs: improve setup instructions

* docs: avoid duplicate documentation and use a table to show the different options

* docs: rename privateGpt to PrivateGPT

* docs: update ui image

* docs: remove useless header

* docs: convert ingestion disclaimers to alerts

* docs: add UI alternatives

* docs: reference UI alternatives in disclaimers

* docs: fix table

* chore: update doc preview version

* chore: add permissions

* chore: remove useless line

* docs: fixes

...
* integrate Milvus into Private GPT

* adjust milvus settings

* update doc info and reformat

* adjust milvus initialization

* adjust import error

* minor update

* adjust format

* adjust the db storing path

* update doc
* Update README.md

Remove the outdated contact form and point to Zylon website for those looking for a ready-to-use enterprise solution built on top of PrivateGPT

* Update README.md

Update text to address the comments

* Update README.md

Improve text
* chore: add pull request template

* chore: add issue templates

* chore: require more information in bugs
* fix: ffmpy dependency

* fix: pin ffmpy to a commit sha
* chore: update ollama (llm)

* feat: allow autopulling ollama models

* fix: mypy

* chore: always install ollama client

* refactor: move check-connection and pull-ollama methods to utils

* docs: update ollama config with autopulling info
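
The autopull utility above can be sketched roughly like this. The `client` shape mirrors the ollama Python client (`.list()` / `.pull()`), but the function name and exact structure are assumptions, not the repository's code:

```python
def pull_missing_models(client, required: list[str]) -> list[str]:
    """Pull any required models the Ollama server doesn't have yet.

    `client` only needs `.list()` returning {"models": [{"name": ...}]}
    and `.pull(name)` — a sketch of the autopull idea; the real utility
    in the codebase may differ.
    """
    installed = {m["name"] for m in client.list().get("models", [])}
    pulled = []
    for name in required:
        if name not in installed:
            client.pull(name)
            pulled.append(name)
    return pulled
```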
* fix: when two user messages were sent

* fix: add source divider

* fix: add favicon

* fix: add zylon link

* refactor: update label
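
The "two user messages" fix above amounts to collapsing consecutive turns from the same role before sending the history to the LLM. A minimal sketch of that idea (the actual implementation in the codebase may differ):

```python
def merge_consecutive(messages: list[dict]) -> list[dict]:
    """Collapse consecutive messages from the same role into one,
    so the model never sees two user turns in a row."""
    merged: list[dict] = []
    for msg in messages:
        if merged and merged[-1]["role"] == msg["role"]:
            # Join the contents instead of emitting a second same-role turn.
            merged[-1] = {
                **merged[-1],
                "content": merged[-1]["content"] + "\n" + msg["content"],
            }
        else:
            merged.append(dict(msg))
    return merged
```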
* added llama3 prompt

* more fixes to pass tests; changed type VectorStore -> BasePydanticVectorStore, see https://github.com/run-llama/llama_index/blob/main/CHANGELOG.md#2024-05-14

* fix: new llama3 prompt

---------

Co-authored-by: Javier Martinez <[email protected]>
* `UID` and `GID` build arguments for `worker` user

* `POETRY_EXTRAS` build argument with default values

* Copy `Makefile` for `make ingest` command

* Do NOT copy markdown files
I doubt anyone reads a markdown file within a Docker image

* Fix PYTHONPATH value

* Set home directory to `/home/worker` when creating user

* Combine `ENV` instructions together

* Define environment variables with their defaults
- For documentation purposes
- Reflect defaults set in settings-docker.yml

* `PGPT_EMBEDDING_MODE` to define embedding mode

* Remove ineffective `python3 -m pipx ensurepath`

* Use `&&` instead of `;` to chain commands to detect failure better

* Add `--no-root` flag to poetry install commands

* Set PGPT_PROFILES to docker

* chore: remove envs

* chore: update to use ollama in docker-compose

* chore: don't copy makefile

* chore: don't copy fern

* fix: tiktoken cache

* fix: docker compose port

* fix: ffmpy dependency (#2020)

* fix: ffmpy dependency

* fix: pin ffmpy to a commit sha

* feat(llm): autopull ollama models (#2019)

* chore: update ollama (llm)

* feat: allow autopulling ollama models

* fix: mypy

* chore: always install ollama client

* refactor: move check-connection and pull-ollama methods to utils

* docs: update ollama config with autopulling info

...

* chore: autopull ollama models

* chore: add GID/UID comment

...

---------

Co-authored-by: Javier Martinez <[email protected]>
* feat: prevent local ingestion (by default) and add white-list

* docs: add local ingestion warning

* docs: add missing comment

* fix: update exception error

* fix: black
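
The white-list check above can be sketched as a path containment test. Function and parameter names are illustrative assumptions, not the repository's API:

```python
from pathlib import Path


def is_ingestion_allowed(path: str, white_list: list[str]) -> bool:
    """Reject local file ingestion unless the resolved path sits under
    an allowed folder. An empty white-list means local ingestion is
    disabled, matching the "by default" behavior described above."""
    if not white_list:
        return False
    resolved = Path(path).resolve()
    return any(
        resolved.is_relative_to(Path(allowed).resolve())
        for allowed in white_list
    )
```

Resolving both sides before comparing guards against `..` traversal out of the allowed folders.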
* feat: change ollama default model to llama3.1

* chore: bump versions

* feat: Change default model in local mode to llama3.1

* chore: make sure the latest poetry version is used

* fix: mypy

* fix: do not add BOS (with the latest llamacpp-python version)
* feat: unify embedding model to nomic

* docs: add embedding dimensions mismatch

* docs: fix fern
* feat: add summary recipe

* test: add summary tests

* docs: move all recipes docs

* docs: add recipes and summarize doc

* docs: update openapi reference

* refactor: split method into two methods (summary)

* feat: add initial summarize ui

* feat: add mode explanation

* fix: mypy

* feat: allow configuring the async property in summarize

* refactor: move modes to enum and update mode explanations

* docs: fix url

* docs: remove list-llm pages

* docs: remove double header

* fix: summary description
* fix: allow configuring trust_remote_code

based on: #1893 (comment)

* fix: nomic hf embeddings
* docs: update Readme

* style: refactor image

* docs: change important to tip
* fix: add ollama progress bar when pulling models

* feat: add ollama queue

* fix: mypy
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
itsliamdowd and others added 10 commits August 5, 2024 16:30
Fixing the error I encountered while using the azopenai mode
* chore: update docker-compose with profiles

* docs: add quick start doc

* chore: generate docker release when new version is released

* chore: add dockerhub image in docker-compose

* docs: update quickstart with local/remote images

* chore: update docker tag

* chore: refactor dockerfile names

* chore: update docker-compose names

* docs: update llamacpp naming

* fix: naming

* docs: fix llamacpp command
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
* chore: block matplotlib to fix installation on Windows machines

* chore: remove workaround, just update poetry.lock

* fix: update matplotlib to the latest version
* docs: add numpy issue to troubleshooting

* fix: troubleshooting link

...
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
…2062)

* Fix: Rectify ffmpy 0.3.2 poetry config

* keep optional set to false for ffmpy

* Updating ffmpy to version 0.4.0

* Remove comment about a fix

github-actions bot commented Sep 6, 2024

Stale pull request

@github-actions github-actions bot added the stale label Sep 6, 2024
@github-actions github-actions bot removed the stale label Sep 10, 2024
jaluma and others added 6 commits September 16, 2024 16:43
* feat: add retry connection to ollama

When Ollama is running in the docker-compose setup, traefik is sometimes not ready to route the request, and it fails

* fix: mypy
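
The retry behavior above can be sketched as a simple retry loop around the connection attempt. The function name and signature are assumptions for illustration; the real retry logic in the codebase may differ:

```python
import time


def retry_connect(connect, retries: int = 5, delay: float = 1.0):
    """Retry a connection attempt with a fixed delay, covering the
    window where traefik isn't routing to Ollama yet."""
    last_exc: ConnectionError | None = None
    for _ in range(retries):
        try:
            return connect()
        except ConnectionError as exc:
            last_exc = exc
            time.sleep(delay)
    # All attempts failed: surface the last connection error.
    raise last_exc
```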
* fix: missing depends_on

* chore: update copy permissions

* chore: update entrypoint

* Revert "chore: update entrypoint"

This reverts commit f73a36a.

* Revert "chore: update copy permissions"

This reverts commit fabc3f6.

* style: fix docker warning

* fix: multiple fixes

* fix: user permissions writing local_data folder
* Adding MistralAI mode

* Update embedding_component.py

* Update ui.py

* Update settings.py

* Update embedding_component.py

* Update settings.py

* Update settings.py

* Update settings-mistral.yaml

* Update llm_component.py

* Update settings-mistral.yaml

* Update settings.py

* Update settings.py

* Update ui.py

* Update embedding_component.py

* Delete settings-mistral.yaml

---------

Co-authored-by: SkiingIsFun123 <[email protected]>
Co-authored-by: Javier Martinez <[email protected]>
* Add default mode option to settings

* Revise default_mode to Literal (enum) and add to settings.yaml

* Revise to pass make check/test

* Default mode: RAG

---------

Co-authored-by: Jason <[email protected]>
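
The default-mode setting described above, revised to a `Literal` with "RAG" as default, can be sketched like this. The project itself uses pydantic settings and the exact mode names may differ; this stdlib-only version is illustrative:

```python
from dataclasses import dataclass
from typing import Literal, get_args

# Assumed mode names, for illustration only.
Mode = Literal["RAG", "Search", "Basic", "Summarize"]


@dataclass
class UISettings:
    """Sketch of a settings field restricted to an enum of modes."""
    default_mode: Mode = "RAG"

    def __post_init__(self) -> None:
        if self.default_mode not in get_args(Mode):
            raise ValueError(f"invalid default_mode: {self.default_mode}")
```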
* Sanitize null bytes before ingestion

* Added comments
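
The null-byte sanitization above is a one-line transformation; NUL bytes commonly break downstream storage (e.g. Postgres rejects `\x00` in text columns). A minimal sketch, with an assumed function name:

```python
def sanitize_text(text: str) -> str:
    """Strip NUL bytes from document text before ingestion."""
    return text.replace("\x00", "")
```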
* chore: update libraries

* fix: mypy

* chore: more updates

* fix: mypy/black

* chore: fix docker warnings

* fix: mypy

* fix: black

Stale pull request

@github-actions github-actions bot added the stale label Oct 12, 2024
When running PrivateGPT with an external Ollama API, the ollama service
returns 503 on startup because the ollama service (traefik) might not be
ready.

- Add healthcheck to ollama service to test for connection to external
ollama
- private-gpt-ollama service depends on ollama being service_healthy

Co-authored-by: Koh Meng Hui <[email protected]>
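
The healthcheck and `service_healthy` dependency described above look roughly like the following compose fragment. Service names and the exact healthcheck command are assumptions for illustration, not the repository's actual file:

```yaml
services:
  ollama:
    healthcheck:
      # Ollama answers plain HTTP on its API port when it is up.
      test: ["CMD", "curl", "-f", "http://localhost:11434/"]
      interval: 10s
      retries: 5
  private-gpt-ollama:
    depends_on:
      ollama:
        condition: service_healthy
```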
@github-actions github-actions bot removed the stale label Oct 18, 2024