Please read the documentation here to understand the concepts of Wren AI Service.
- Python: Install Python 3.12.*
  - Recommended: use `pyenv` to manage Python versions
- Poetry: Install Poetry 1.8.3

  ```shell
  curl -sSL https://install.python-poetry.org | python3 - --version 1.8.3
  ```

- Just: Install the `just` command runner (version 1.36 or higher)
- Install Dependencies: run `poetry install`
- Generate Configuration Files: run `just init`. This creates both `.env.dev` and `config.yaml`. Use `just init --non-dev` to generate only `config.yaml`.
- Configure Environment:
  - Edit `.env.dev` to set environment variables
  - Modify `config.yaml` to configure components, pipelines, and other settings
  - Refer to AI Service Configuration for detailed setup instructions
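If you want to script a quick sanity check of your `.env.dev` before starting the service, a minimal dotenv-style parser is enough. This is an illustrative sketch (no quoting or variable expansion), and the variable names in the sample are only examples; the real file contents come from `just init`.

```python
from pathlib import Path


def parse_env(text: str) -> dict[str, str]:
    """Parse simple KEY=VALUE lines, skipping blanks and comments.

    A sketch: handles just enough of the dotenv format for a sanity check,
    with no quoting or variable expansion.
    """
    env: dict[str, str] = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env


# Against the real file you would call: parse_env(Path(".env.dev").read_text())
sample = """
# sample .env.dev fragment (variable names are illustrative)
WREN_AI_SERVICE_PORT=5556
"""
print(parse_env(sample)["WREN_AI_SERVICE_PORT"])  # -> 5556
```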
- Set Up Development Environment (optional):
  - Install pre-commit hooks: `poetry run pre-commit install`
  - Run initial pre-commit checks: `poetry run pre-commit run --all-files`
- Run Tests (optional): run `just test`
- Start Required Containers: run `just up`
- Launch the AI Service: run `just start`
- Access the Service:
  - API Documentation: `http://WREN_AI_SERVICE_HOST:WREN_AI_SERVICE_PORT` (default: http://localhost:5556)
  - User Interface: `http://WREN_UI_HOST:WREN_UI_PORT` (default: http://localhost:3000)
- Stop the Service: when finished, stop the containers with `just down`
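The service URLs above are built from environment variables that fall back to localhost defaults when unset. A small helper (illustrative only; the service itself reads its own configuration) makes that fallback rule explicit:

```python
import os


def service_url(host_var: str, port_var: str,
                default_host: str, default_port: int) -> str:
    """Build a service URL from environment variables, falling back to the
    documented defaults when the variables are unset."""
    host = os.environ.get(host_var, default_host)
    port = os.environ.get(port_var, str(default_port))
    return f"http://{host}:{port}"


print(service_url("WREN_AI_SERVICE_HOST", "WREN_AI_SERVICE_PORT", "localhost", 5556))
print(service_url("WREN_UI_HOST", "WREN_UI_PORT", "localhost", 3000))
```

With none of the variables set, this prints the two default URLs from the list above.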
This setup ensures a consistent development environment and helps maintain code quality through pre-commit hooks and tests. Follow these steps to get started with local development of the Wren AI Service.
For a comprehensive understanding of how to evaluate the pipelines, please refer to the evaluation framework. That document covers the evaluation process in detail: how to set up and run evaluations, how to interpret results, and how to use the evaluation metrics effectively.
- To run the load test:
  - Set `DATASET_NAME` in `.env.dev`
  - Adjust the test config if needed:
    - Adjust the user count in `tests/locust/config_users.json`
  - In the `wren-ai-service` folder, run `just up` to start the Docker containers
  - In the `wren-ai-service` folder, run `just start` to start the AI service
  - Run `just load-test`
  - Check the reports in the `/outputs/locust` folder. There are 3 files named `locust_report_{test_timestamp}`:
    - `.json`: test report in JSON format, including info such as the LLM provider and version
    - `.html`: test report in HTML format, showing tables and charts
    - `.log`: test log
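To pull a quick summary out of the `.json` report programmatically, a few lines of standard-library Python suffice. The field names below (`llm_provider`, `version`) are illustrative assumptions; inspect your own `outputs/locust` JSON files for the actual schema.

```python
import json


def summarize_report(raw: str) -> str:
    """Extract a one-line summary from a load-test JSON report.

    Sketch only: the field names are assumed, not taken from the real
    report schema -- check your generated .json files for the real keys.
    """
    report = json.loads(raw)
    provider = report.get("llm_provider", "unknown")
    version = report.get("version", "unknown")
    return f"provider={provider} version={version}"


sample = '{"llm_provider": "openai", "version": "0.1.0"}'
print(summarize_report(sample))  # -> provider=openai version=0.1.0
```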
- Go to the `demo` folder and run `poetry install` to install the dependencies
- In the `wren-ai-service` folder, open three terminals:
  - In the first terminal, run `just up` to start the Docker containers
  - In the second terminal, run `just start` to start the wren-ai service
  - In the third terminal, run `just demo` to start the demo service
- Expected ports of the services:
  - wren-engine: 8080
  - wren-ai-service: 5556
  - wren-ui: 3000
  - qdrant: 6333 and 6334
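To check which of these services are actually listening, a simple TCP probe works. This is a convenience sketch, not part of the repo; it only tests whether something accepts a connection on each port.

```python
import socket


def is_port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if something is accepting TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    expected = {
        "wren-engine": [8080],
        "wren-ai-service": [5556],
        "wren-ui": [3000],
        "qdrant": [6333, 6334],
    }
    for name, ports in expected.items():
        for port in ports:
            state = "open" if is_port_open("localhost", port) else "closed"
            print(f"{name}:{port} -> {state}")
```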
Thank you for investing your time in contributing to our project! Please read this for more information!