- MySQL must be installed and running before you start the application.

Ensure that MySQL is installed and that a database has been created for this application. For a detailed guide on installing and setting up MySQL, refer to the official MySQL documentation. You can create the database with the following SQL command:

```sql
CREATE DATABASE genai_tests;
```
To check whether MySQL is running, use one of the following commands, depending on your operating system:

- On Windows (PowerShell): open PowerShell and run:

  ```powershell
  netstat -aon | findstr :3306
  ```

- On Windows (Command Prompt): open Command Prompt and run:

  ```cmd
  net start | find "MySQL"
  ```

- On macOS: open Terminal and run:

  ```bash
  brew services list | grep mysql
  ```

- On Linux: open Terminal and run:

  ```bash
  systemctl status mysql
  ```

If MySQL is running, these commands will show an active service (or a process listening on port 3306).
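If you prefer to verify this from code, here is a minimal sketch (assuming the default host `localhost` and port `3306`; adjust for your setup) that simply checks whether anything is accepting connections on the MySQL port:

```python
# Minimal sketch: check whether something is listening on the MySQL port.
# Assumes the default host/port (localhost:3306); adjust for your setup.
import socket

def mysql_port_open(host: str = "localhost", port: int = 3306, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    print("MySQL reachable:", mysql_port_open())
```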
To set up the MySQL database for this application, you can use the following SQL commands:
```sql
-- Create the database
CREATE DATABASE IF NOT EXISTS genai_tests;

-- Use the newly created database
USE genai_tests;

-- Create the test_results table
CREATE TABLE IF NOT EXISTS test_results (
    id INT AUTO_INCREMENT PRIMARY KEY,
    prompt TEXT NOT NULL,
    expected_output TEXT NOT NULL,
    model_response TEXT NOT NULL,
    relevancy_score FLOAT NOT NULL,
    accuracy_score FLOAT NOT NULL,
    bleu_score FLOAT NOT NULL,
    bert_score FLOAT NOT NULL,
    response_time FLOAT NOT NULL,
    created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
    advance_score FLOAT NOT NULL
);
```
- Open your MySQL command line or a database management tool (like MySQL Workbench).
- Copy and paste the above SQL commands into the query window.
- Execute the commands to create the database and the table.
This will set up the database structure required for the application to function correctly.
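Once the table exists, you can sanity-check it from Python. The sketch below is only an example, assuming the `mysql-connector-python` package and placeholder local credentials (adjust the user, password, and host for your environment); it inserts one dummy row into `test_results` and reads back the row count:

```python
# Minimal sketch: verify the genai_tests schema from Python.
# Assumes `pip install mysql-connector-python`; the credentials below are
# placeholders, not values taken from this project.
import mysql.connector

conn = mysql.connector.connect(
    host="localhost", user="root", password="your_password", database="genai_tests"
)
cursor = conn.cursor()

# Insert one dummy row (created_at uses its DEFAULT CURRENT_TIMESTAMP).
cursor.execute(
    """
    INSERT INTO test_results
        (prompt, expected_output, model_response, relevancy_score,
         accuracy_score, bleu_score, bert_score, response_time, advance_score)
    VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s)
    """,
    ("What is 2 + 2?", "4", "4", 1.0, 1.0, 1.0, 1.0, 0.42, 1.0),
)
conn.commit()

cursor.execute("SELECT COUNT(*) FROM test_results")
print("rows in test_results:", cursor.fetchone()[0])

cursor.close()
conn.close()
```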
This guide provides step-by-step instructions to set up and use the Ollama CLI with the Llama 3.2 model.
- Ensure your system meets the requirements to run the Ollama CLI.
- A stable internet connection is required to download the model.
Follow these steps to install the Ollama CLI:
- Visit the official Ollama website: https://ollama.com
- Download the appropriate installer for your operating system.
- Follow the installation instructions provided for your OS.
Alternatively, install Ollama via a package manager (if supported).
Once installed, verify that Ollama is set up correctly by checking the version:

```bash
ollama --version
```

If the version is displayed, Ollama is successfully installed.
Before you can use Llama 3.2, download the model by running:

```bash
ollama pull llama3.2
```

This downloads the Llama 3.2 model to your local system.
To interact with the model, start the Ollama server:

```bash
ollama serve
```
- If the `ollama` command is not recognized, ensure the CLI is installed and added to your system's PATH.
- Ensure the Ollama server is running (`ollama serve`) before sending queries.
- For further assistance, visit the Ollama Documentation.
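As a quick end-to-end check once the server is running, the following minimal sketch sends a prompt to the local Ollama HTTP API (which listens on http://localhost:11434 by default) using the `requests` package. It illustrates one way to query the model and is not the application's own client code:

```python
# Minimal sketch: query the llama3.2 model through the local Ollama server.
# Assumes `ollama serve` is running and the model was pulled with `ollama pull llama3.2`;
# requires the `requests` package (pip install requests).
import requests

response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.2",
        "prompt": "Reply with a single word: ping",
        "stream": False,  # return a single JSON object instead of a token stream
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["response"])
```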
- Navigate to the `backend` directory:

  ```bash
  cd backend
  ```

- Create the virtual environment:

  ```bash
  python -m venv venv
  ```

- Activate the virtual environment:

  - On Windows:

    ```bash
    venv\Scripts\activate
    ```

  - On macOS/Linux:

    ```bash
    source venv/bin/activate
    ```

- Install the required packages (this will take some time to finish):

  ```bash
  pip install -r requirements.txt
  ```

- Run the Python application:

  ```bash
  python app.py
  ```
- Install Node.js: ensure you have the latest stable version of Node.js installed. You can download it from the Node.js website.
- Install npm or Yarn: npm ships with Node.js (Yarn can be installed separately). Verify the installation:

  ```bash
  node -v   # e.g. v22.11.0
  npm -v    # e.g. 11.0.0
  ```
Move into the project directory:

```bash
cd frontend
```

Install the project dependencies:

- Using npm:

  ```bash
  npm install
  ```
Run the development server:
- Using npm:

  ```bash
  npm run dev
  ```

- Example output:

  ```
  > [email protected] dev
  > next dev --turbopack

    ▲ Next.js 15.0.3 (Turbopack)
    - Local: http://localhost:3000

   ✓ Starting...
   ✓ Ready in 4.8s
  ```

By default, the server runs on http://localhost:3000.