Lightweight backend service for a grammar assistant app. It can also serve as a reference for streaming LLM tokens with the OpenAI SDK and FastAPI.
These instructions will get you a copy of the project up and running on your local machine for development and testing purposes.
The project requires Python and pip installed on your system. The required Python packages are listed in the `requirements.txt` file.
Copy the `.env.example` file to `.env` and fill in the required values:

```shell
cp .env.example .env
```
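The exact variables are defined in `.env.example`. Since the service talks to the OpenAI API, the file will typically contain at least an API key; the following is a hypothetical example, not the repository's actual contents:

```shell
# Hypothetical .env entry — check .env.example for the real variable names
OPENAI_API_KEY=sk-...
```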
To configure the application, especially the LLM prompts, copy the `config.example.yaml` file to `config.yaml` and fill in the required values:

```shell
cp config.example.yaml config.yaml
```
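The exact structure is defined in `config.example.yaml`. As a rough illustration only, a prompt-configuration file for a grammar assistant might look something like this (all keys below are hypothetical):

```yaml
# Hypothetical sketch — see config.example.yaml for the real schema
llm:
  model: gpt-4o-mini
  temperature: 0.2
prompts:
  grammar_check:
    system: >
      You are a grammar assistant. Correct the user's text and
      briefly explain each change.
```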
- Clone the repository to your local machine.
- Navigate to the project directory.
- Install the required packages using pip:

```shell
pip install -r requirements.txt
```
To run the application, use the following command:

```shell
uvicorn main:app --reload
```
Or you can run the application with Docker:

```shell
docker-compose up
```
The application will be available at http://localhost:80, exposed via Nginx.
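For orientation, a Compose setup matching the description above (an app container behind Nginx on port 80) often looks roughly like this. This is a hypothetical sketch, not the repository's actual `docker-compose.yml`:

```yaml
# Hypothetical docker-compose.yml — the repository's real file may differ
services:
  app:
    build: .
    env_file: .env
    command: uvicorn main:app --host 0.0.0.0 --port 8000
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    depends_on:
      - app
```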
The project is structured into several modules and services. For readers interested only in the LLM integration, the most interesting parts will be:
Endpoint documentation is available at `/docs` when the application is running.