forked from zylon-ai/private-gpt
[pull] main from zylon-ai:main #34
Open: pull wants to merge 50 commits into dumpmemory:main from zylon-ai:main
+6,643 −3,642
Conversation
* Support for Google Gemini LLMs and Embeddings: initial support for Gemini, enabling usage of Google LLMs and embedding models (see settings-gemini.yaml). Install via `poetry install --extras "llms-gemini embeddings-gemini"`. Notes: had to bump llama-index-core to a later version that supports Gemini; `poetry --no-update` did not work, as Gemini/llama_index seem to require more (transitive) updates to make it work.
* fix: crash when Gemini is not selected
* docs: add Gemini LLM

Co-authored-by: Javier Martinez <[email protected]>

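For context, a minimal sketch of enabling Gemini through the llama-index integrations once those extras are installed; the environment variable and reliance on default model names are assumptions here, not values taken from this PR:

```python
# Minimal sketch, assuming the llama-index Gemini integrations installed via
# `poetry install --extras "llms-gemini embeddings-gemini"`.
# Uses the integrations' default models; API key handling is illustrative.
import os

from llama_index.embeddings.gemini import GeminiEmbedding
from llama_index.llms.gemini import Gemini

llm = Gemini(api_key=os.environ["GOOGLE_API_KEY"])
embed_model = GeminiEmbedding(api_key=os.environ["GOOGLE_API_KEY"])

print(llm.complete("Say hello in one word."))
```
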
* Added ClickHouse vector store support
* port fix
* updated lock file
* fix: mypy
* fix: mypy

Co-authored-by: Valery Denisov <[email protected]>
Co-authored-by: Javier Martinez <[email protected]>

* Update settings.mdx
* docs: add cmd

Co-authored-by: Javier Martinez <[email protected]>

* Fix/update the concepts.mdx reference to the installation page: the `/installation` link is broken on the "Main Concepts" page; the correct path would be `./installation` or perhaps `/installation/getting-started/installation`.
* fix: docs

Co-authored-by: Javier Martinez <[email protected]>
Co-authored-by: chdeskur <[email protected]>

* docs: update project links ...
* docs: update citation

* docs: add troubleshooting (#1998)
* fix: pass the HF token to the setup script and prevent downloading the tokenizer when the token is empty
* fix: improve logging and disable the specific tokenizer by default
* chore: change the HF_TOKEN environment variable to align with the default config
* fix: mypy

* docs: add missing configurations
* docs: replace HF embeddings with Ollama
* docs: add disclaimer about Gradio UI
* docs: improve readability in concepts
* docs: reorder `Fully Local Setups`
* docs: improve setup instructions
* docs: prevent duplicate documentation and use a table to show the different options
* docs: rename privateGpt to PrivateGPT
* docs: update UI image
* docs: remove useless header
* docs: convert ingestion disclaimers to alerts
* docs: add UI alternatives
* docs: reference UI alternatives in disclaimers
* docs: fix table
* chore: update doc preview version
* chore: add permissions
* chore: remove useless line
* docs: fixes ...

* integrate Milvus into PrivateGPT
* adjust Milvus settings
* update doc info and reformat
* adjust Milvus initialization
* adjust import error
* minor update
* adjust format
* adjust the DB storage path
* update doc

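As background, a minimal sketch of what the llama-index Milvus integration looks like; the local URI and dimension below are illustrative assumptions, not values from this PR:

```python
# Minimal sketch, assuming the llama-index Milvus integration
# (llama-index-vector-stores-milvus). URI and dim are illustrative.
from llama_index.vector_stores.milvus import MilvusVectorStore

vector_store = MilvusVectorStore(
    uri="./milvus_local.db",  # a local file path selects embedded Milvus Lite
    dim=768,                  # must match the embedding model's output dimensions
    overwrite=False,          # keep existing collections between runs
)
```
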
* Update README.md: remove the outdated contact form and point to the Zylon website for those looking for a ready-to-use enterprise solution built on top of PrivateGPT
* Update README.md: update text to address the comments
* Update README.md: improve text

* chore: add pull request template
* chore: add issue templates
* chore: require more information in bug reports

* fix: ffmpy dependency
* fix: pin ffmpy to a commit SHA

* chore: update Ollama (LLM)
* feat: allow autopulling Ollama models
* fix: mypy
* chore: always install the Ollama client
* refactor: move the Ollama connection check and pull-model logic to utils
* docs: update Ollama config with autopulling info

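A minimal sketch of the check-connection-and-autopull behavior described above, using the official `ollama` Python client; the host and model name are illustrative assumptions:

```python
# Minimal sketch of an Ollama connection check + model autopull, assuming the
# official `ollama` Python client. Host and model name are illustrative.
import ollama

def check_connection(client: ollama.Client) -> bool:
    try:
        client.list()  # any cheap call fails if the server is unreachable
        return True
    except Exception:
        return False

client = ollama.Client(host="http://localhost:11434")
if check_connection(client):
    client.pull("llama3.1")  # pull is incremental: existing layers are skipped
```
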
* fix: when two user messages were sent
* fix: add source divider
* fix: add favicon
* fix: add Zylon link
* refactor: update label

* added llama3 prompt
* more fixes to pass tests; changed type VectorStore -> BasePydanticVectorStore, see https://github.com/run-llama/llama_index/blob/main/CHANGELOG.md#2024-05-14
* fix: new llama3 prompt

Co-authored-by: Javier Martinez <[email protected]>

* `UID` and `GID` build arguments for the `worker` user
* `POETRY_EXTRAS` build argument with default values
* Copy `Makefile` for the `make ingest` command
* Do NOT copy markdown files (I doubt anyone reads a markdown file within a Docker image)
* Fix PYTHONPATH value
* Set home directory to `/home/worker` when creating the user
* Combine `ENV` instructions together
* Define environment variables with their defaults, for documentation purposes and to reflect the defaults set in settings-docker.yml
* `PGPT_EMBEDDING_MODE` to define the embedding mode
* Remove ineffective `python3 -m pipx ensurepath`
* Use `&&` instead of `;` to chain commands, to detect failures better
* Add the `--no-root` flag to poetry install commands
* Set PGPT_PROFILES to docker
* chore: remove envs
* chore: update to use Ollama in docker-compose
* chore: don't copy Makefile
* chore: don't copy fern
* fix: tiktoken cache
* fix: docker compose port
* fix: ffmpy dependency (#2020)
* fix: pin ffmpy to a commit SHA
* feat(llm): autopull Ollama models (#2019)
* chore: update Ollama (LLM)
* feat: allow autopulling Ollama models
* fix: mypy
* chore: always install the Ollama client
* refactor: move the Ollama connection check and pull-model logic to utils
* docs: update Ollama config with autopulling info ...
* chore: autopull Ollama models
* chore: add GID/UID comment ...

Co-authored-by: Javier Martinez <[email protected]>

* feat: prevent local ingestion (by default) and add a white-list
* docs: add local ingestion warning
* docs: add missing comment
* fix: update exception error
* fix: black

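A minimal sketch of the kind of white-list check this change introduces; the setting and helper names are hypothetical, for illustration only:

```python
# Minimal sketch of a local-ingestion white-list. `allowed_local_folders` is a
# hypothetical setting name; with the default empty list, all local ingestion
# is rejected, matching the "prevent by default" behavior described above.
from pathlib import Path

allowed_local_folders: list[str] = []

def validate_local_path(path: str) -> None:
    resolved = Path(path).resolve()
    for folder in allowed_local_folders:
        if resolved.is_relative_to(Path(folder).resolve()):
            return
    raise ValueError(f"Local ingestion from {resolved} is not allowed")
```
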
* feat: change Ollama default model to llama3.1
* chore: bump versions
* feat: change default model in local mode to llama3.1
* chore: make sure the latest poetry version is used
* fix: mypy
* fix: do not add BOS (with the latest llama-cpp-python version)

* feat: unify embedding model to nomic
* docs: add note on embedding dimensions mismatch
* docs: fix fern

* feat: add summary recipe
* test: add summary tests
* docs: move all recipes docs
* docs: add recipes and summarize doc
* docs: update openapi reference
* refactor: split method into two methods (summary)
* feat: add initial summarize UI
* feat: add mode explanation
* fix: mypy
* feat: allow configuring the async property in summarize
* refactor: move modes to an enum and update mode explanations
* docs: fix url
* docs: remove list-llm pages
* docs: remove double header
* fix: summary description

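A minimal sketch of calling the new summarize recipe over HTTP; the route, port, and payload fields are assumptions based on the description above, not confirmed from the diff:

```python
# Minimal sketch of calling the summarize recipe. The /v1/summarize route and
# the payload fields are assumptions, not confirmed from this PR.
import requests

resp = requests.post(
    "http://localhost:8001/v1/summarize",
    json={"text": "Long document text to summarize..."},
    timeout=120,
)
resp.raise_for_status()
print(resp.json())
```
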
* fix: allow configuring trust_remote_code, based on: #1893 (comment)
* fix: nomic HF embeddings

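For illustration, a minimal sketch of passing `trust_remote_code` through to a Hugging Face embedding model such as nomic; the model name is an example, and the exact wiring inside PrivateGPT may differ:

```python
# Minimal sketch, assuming llama-index's HuggingFaceEmbedding wrapper.
# nomic embedding models ship custom model code, so loading them requires
# trust_remote_code=True (an explicit opt-in to run repo code locally).
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

embed_model = HuggingFaceEmbedding(
    model_name="nomic-ai/nomic-embed-text-v1.5",
    trust_remote_code=True,
)
```
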
* docs: update Readme
* style: refactor image
* docs: change important to tip

* fix: add Ollama progress bar when pulling models
* feat: add Ollama queue
* fix: mypy

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
* chore: update docker-compose with profiles
* docs: add quick start doc

Fixing the error I encountered while using the azopenai mode
* chore: update docker-compose with profiles
* docs: add quick start doc
* chore: generate a docker release when a new version is released
* chore: add the Docker Hub image to docker-compose
* docs: update quickstart with local/remote images
* chore: update docker tag
* chore: refactor dockerfile names
* chore: update docker-compose names
* docs: update llamacpp naming
* fix: naming
* docs: fix llamacpp command

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
* chore: pin matplotlib to fix installation on Windows machines
* chore: remove workaround, just update poetry.lock
* fix: update matplotlib to the latest version

* docs: add numpy issue to troubleshooting
* fix: troubleshooting link ...

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
* Fix: rectify ffmpy 0.3.2 poetry config (…2062)
* keep optional set to false for ffmpy
* update ffmpy to version 0.4.0
* remove comment about a fix

* feat: add retry connection to Ollama. When Ollama is running in docker-compose, traefik is sometimes not ready to route the request, and it fails.
* fix: mypy

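A minimal sketch of the retry-on-startup idea described above; the attempt count, delay, and helper name are illustrative assumptions:

```python
# Minimal sketch of retrying the initial Ollama connection while the
# docker-compose proxy (traefik) comes up. Attempts/delay are illustrative.
import time

import ollama

def connect_with_retry(host: str, attempts: int = 5, delay: float = 2.0) -> ollama.Client:
    client = ollama.Client(host=host)
    for attempt in range(1, attempts + 1):
        try:
            client.list()  # raises while the proxy is not routing yet
            return client
        except Exception:
            if attempt == attempts:
                raise
            time.sleep(delay)
    return client

client = connect_with_retry("http://ollama:11434")
```
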
* fix: missing depends_on
* chore: update copy permissions
* chore: update entrypoint
* Revert "chore: update entrypoint" (reverts commit f73a36a)
* Revert "chore: update copy permissions" (reverts commit fabc3f6)
* style: fix docker warning
* fix: multiple fixes
* fix: user permissions when writing to the local_data folder

* Adding MistralAI mode
* Update embedding_component.py
* Update ui.py
* Update settings.py
* Update embedding_component.py
* Update settings.py
* Update settings.py
* Update settings-mistral.yaml
* Update llm_component.py
* Update settings-mistral.yaml
* Update settings.py
* Update settings.py
* Update ui.py
* Update embedding_component.py
* Delete settings-mistral.yaml

Co-authored-by: SkiingIsFun123 <[email protected]>
Co-authored-by: Javier Martinez <[email protected]>

* Add default mode option to settings
* Revise default_mode to a Literal (enum) and add it to settings.yaml
* Revise to pass make check/test
* Default mode: RAG

Co-authored-by: Jason <[email protected]>

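A minimal sketch of constraining a settings field to an enumerated set with `typing.Literal`, in the spirit of the change above; the class name, field placement, and exact mode names are assumptions:

```python
# Minimal sketch of a Literal-typed default mode, in the spirit of the change
# above. The class name and mode values are assumptions, not from the diff.
from typing import Literal

from pydantic import BaseModel, Field

class UISettings(BaseModel):
    default_mode: Literal["RAG", "Search", "Basic", "Summarize"] = Field(
        "RAG",
        description="Mode the UI starts in.",
    )

settings = UISettings()                    # default_mode == "RAG"
other = UISettings(default_mode="Search")  # any other value fails validation
```
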
* Sanitize null bytes before ingestion
* Added comments

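A minimal sketch of the sanitization described; stripping NUL bytes before ingestion avoids errors in downstream stores (Postgres, for one, rejects `\x00` in text). The helper name is hypothetical:

```python
# Minimal sketch: strip NUL bytes from document text before ingestion.
# The helper name is hypothetical; some backends (e.g. Postgres) reject "\x00".
def sanitize_text(text: str) -> str:
    return text.replace("\x00", "")

assert sanitize_text("he\x00llo") == "hello"
```
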
* chore: update libraries
* fix: mypy
* chore: more updates
* fix: mypy/black
* chore: fix docker warnings
* fix: mypy
* fix: black

When running PrivateGPT with an external Ollama API, the ollama service returns 503 on startup because the service behind it (traefik) might not be ready.
- Add a healthcheck to the ollama service that tests the connection to the external Ollama
- Make the private-gpt-ollama service depend on ollama being service_healthy

Co-authored-by: Koh Meng Hui <[email protected]>

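A minimal sketch of the readiness test such a healthcheck can run; the URL and timeout are illustrative, and the actual compose healthcheck command may differ (e.g. curl or wget inside the container):

```python
# Minimal sketch of the readiness probe behind such a healthcheck.
# URL/timeout are illustrative; the compose file may use curl or wget instead.
import urllib.request

def ollama_ready(base_url: str = "http://ollama:11434") -> bool:
    try:
        with urllib.request.urlopen(base_url, timeout=2) as resp:
            return resp.status == 200  # Ollama's root endpoint answers 200 when up
    except OSError:
        return False
```
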
See Commits and Changes for more details.
Created by pull[bot]
Can you help keep this open source service alive? 💖 Please sponsor : )