[pull] main from zylon-ai:main #34

Open · wants to merge 50 commits into base: main

Changes shown below are from 1 commit.

50 commits:
19a7c06
feat(docs): update doc for ipex-llm (#1968)
shane-huang Jul 8, 2024
fc13368
feat(llm): Support for Google Gemini LLMs and Embeddings (#1965)
uw4 Jul 8, 2024
2612928
feat(vectorstore): Add clickhouse support as vectore store (#1883)
Proger666 Jul 8, 2024
067a5f1
feat(docs): Fix setup docu (#1926)
martinzrrl Jul 8, 2024
dde0224
fix(docs): Fix concepts.mdx referencing to installation page (#1779)
mtulio Jul 8, 2024
187bc93
(feat): add github button (#1989)
fern-support Jul 9, 2024
15f73db
docs: update repo links, citations (#1990)
jaluma Jul 9, 2024
01b7ccd
fix(config): make tokenizer optional and include a troubleshooting do…
jaluma Jul 17, 2024
4523a30
feat(docs): update documentation and fix preview-docs (#2000)
jaluma Jul 18, 2024
43cc31f
feat(vectordb): Milvus vector db Integration (#1996)
Jacksonxhx Jul 18, 2024
90d211c
Update README.md (#2003)
imartinez Jul 18, 2024
2c78bb2
docs: add PR and issue templates (#2002)
jaluma Jul 18, 2024
b626697
docs: update welcome page (#2004)
jaluma Jul 18, 2024
05a9862
Add proper param to demo urls (#2007)
imartinez Jul 22, 2024
dabf556
fix: ffmpy dependency (#2020)
jaluma Jul 29, 2024
20bad17
feat(llm): autopull ollama models (#2019)
jaluma Jul 29, 2024
d4375d0
fix(ui): gradio bug fixes (#2021)
jaluma Jul 29, 2024
d080969
added llama3 prompt (#1962)
hirschrobert Jul 29, 2024
65c5a17
chore(docker): dockerfiles improvements and fixes (#1792)
qdm12 Jul 30, 2024
1020cd5
fix: light mode (#2025)
jaluma Jul 31, 2024
e54a8fe
fix: prevent to ingest local files (by default) (#2010)
jaluma Jul 31, 2024
9027d69
feat: make llama3.1 as default (#2022)
jaluma Jul 31, 2024
40638a1
fix: unify embedding models (#2027)
jaluma Jul 31, 2024
8119842
feat(recipe): add our first recipe `Summarize` (#2028)
jaluma Jul 31, 2024
5465958
fix: nomic embeddings (#2030)
jaluma Aug 1, 2024
50b3027
docs: update docs and capture (#2029)
jaluma Aug 1, 2024
cf61bf7
feat(llm): add progress bar when ollama is pulling models (#2031)
jaluma Aug 1, 2024
e44a7f5
chore: bump version (#2033)
jaluma Aug 2, 2024
6674b46
chore(main): release 0.6.0 (#1834)
github-actions[bot] Aug 2, 2024
dae0727
fix(deploy): improve Docker-Compose and quickstart on Docker (#2037)
jaluma Aug 5, 2024
1d4c14d
fix(deploy): generate docker release when new version is released (#2…
jaluma Aug 5, 2024
1c665f7
fix: Adding azopenai to model list (#2035)
itsliamdowd Aug 5, 2024
f09f6dd
fix: add built image from DockerHub (#2042)
jaluma Aug 5, 2024
ca2b8da
chore(main): release 0.6.1 (#2041)
github-actions[bot] Aug 5, 2024
b16abbe
fix: update matplotlib to 3.9.1-post1 to fix win install
jaluma Aug 7, 2024
4ca6d0c
fix: add numpy issue to troubleshooting (#2048)
jaluma Aug 7, 2024
b1acf9d
fix: publish image name (#2043)
jaluma Aug 7, 2024
7fefe40
fix: auto-update version (#2052)
jaluma Aug 8, 2024
22904ca
chore(main): release 0.6.2 (#2049)
github-actions[bot] Aug 8, 2024
89477ea
fix: naming image and ollama-cpu (#2056)
jaluma Aug 12, 2024
7603b36
fix: Rectify ffmpy poetry config; update version from 0.3.2 to 0.4.0 …
arturmartins Aug 21, 2024
4262859
ci: bump actions/checkout to v4 (#2077)
trivikr Sep 9, 2024
77461b9
feat: add retry connection to ollama (#2084)
jaluma Sep 16, 2024
8c12c68
fix: docker permissions (#2059)
jaluma Sep 24, 2024
f9182b3
feat: Adding MistralAI mode (#2065)
itsliamdowd Sep 24, 2024
fa3c306
fix: Add default mode option to settings (#2078)
basicbloke Sep 24, 2024
5fbb402
fix: Sanitize null bytes before ingestion (#2090)
laoqiu233 Sep 25, 2024
5851b02
feat: update llama-index + dependencies (#2092)
jaluma Sep 26, 2024
940bdd4
fix: 503 when private gpt gets ollama service (#2104)
meng-hui Oct 17, 2024
b7ee437
Update README.md
imartinez Nov 13, 2024
fix: 503 when private gpt gets ollama service (zylon-ai#2104)
When running private-gpt with an external Ollama API, the service returns 503 on startup because the ollama service (a Traefik proxy) might not be ready yet.

- Add a healthcheck to the ollama service that tests the connection to the external Ollama instance
- Make the private-gpt-ollama service depend on ollama being service_healthy
Co-authored-by: Koh Meng Hui <[email protected]>
meng-hui and Koh Meng Hui authored Oct 17, 2024
commit 940bdd49af14d9c1e7fd4af54f12648b5fc1f9c0
9 changes: 8 additions & 1 deletion docker-compose.yaml
```diff
@@ -29,7 +29,8 @@ services:
       - ollama-cuda
       - ollama-api
     depends_on:
-      - ollama
+      ollama:
+        condition: service_healthy

   # Private-GPT service for the local mode
   # This service builds from a local Dockerfile and runs the application in local mode.
@@ -60,6 +61,12 @@ services:
   # This will route requests to the Ollama service based on the profile.
   ollama:
     image: traefik:v2.10
+    healthcheck:
+      test: ["CMD", "sh", "-c", "wget -q --spider http://ollama:11434 || exit 1"]
+      interval: 10s
+      retries: 3
+      start_period: 5s
+      timeout: 5s
     ports:
       - "8080:8080"
     command:
```
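Pieced together, the two changes in this diff leave the affected service definitions looking roughly like the sketch below. This is an abbreviated reconstruction for readability, not the full file: keys not touched by the diff (profiles, build, environment, the Traefik command, and so on) are omitted.

```yaml
# Sketch of the resulting docker-compose.yaml fragments (abbreviated).
services:
  private-gpt-ollama:
    depends_on:
      ollama:
        condition: service_healthy  # start only after the proxy reports healthy

  ollama:
    image: traefik:v2.10
    healthcheck:
      # Probe succeeds once the Ollama API behind the proxy is reachable;
      # wget --spider issues the request without downloading a body.
      test: ["CMD", "sh", "-c", "wget -q --spider http://ollama:11434 || exit 1"]
      interval: 10s     # run the probe every 10 seconds
      retries: 3        # unhealthy after 3 consecutive failures
      start_period: 5s  # grace period before failures count
      timeout: 5s       # each probe must answer within 5 seconds
    ports:
      - "8080:8080"
```

With the long-form `depends_on` condition, Compose waits for the healthcheck to pass before starting private-gpt-ollama, which is what prevents the 503 race on startup.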