This directory contains integration tests for the AI Gateway that validate the functionality of the API endpoints and metrics collection.
The tests are organized as follows:
- `run_integration_tests.rs` - Main entry point for running the integration tests
- `integration/` - Directory containing the integration test modules
  - `mod.rs` - Module definitions
  - `common.rs` - Common test utilities and shared test logic
  - `openai_test.rs` - Tests for the OpenAI provider
  - `anthropic_test.rs` - Tests for the Anthropic provider
  - Additional provider-specific test files
To run these tests, you need:
- A running instance of the AI Gateway (either locally or in a development environment)
- API keys for the providers you want to test
- A configured ElasticSearch instance for metrics collection
- A `.env.test` file with the required environment variables
The AI Gateway uses different environment files for different purposes:
- Production: Uses the standard `.env` file in the project root
- Tests: Uses a separate `.env.test` file to avoid conflicts with production settings

For running tests, creating a `.env.test` file is recommended to keep your test configuration separate from your production settings.
- Copy the sample environment file and modify it with your actual keys:

  ```bash
  cp tests/.env.test.example .env.test
  ```

- Edit the `.env.test` file to include your actual API keys and configuration:

  ```bash
  # Gateway URL (default: http://localhost:3000)
  GATEWAY_URL=http://localhost:3000

  # ElasticSearch Configuration
  ELASTICSEARCH_URL=http://localhost:9200
  ELASTICSEARCH_USERNAME=elastic
  ELASTICSEARCH_PASSWORD=your_password
  ELASTICSEARCH_INDEX=ai-gateway-metrics

  # Provider API Keys
  OPENAI_API_KEY=your_openai_api_key
  ANTHROPIC_API_KEY=your_anthropic_api_key
  GROQ_API_KEY=your_groq_api_key
  FIREWORKS_API_KEY=your_fireworks_api_key
  TOGETHER_API_KEY=your_together_api_key

  # AWS Bedrock Credentials
  AWS_ACCESS_KEY_ID=your_aws_access_key_id
  AWS_SECRET_ACCESS_KEY=your_aws_secret_access_key
  AWS_REGION=us-east-1
  ```
Note: You can place the `.env.test` file either in the project root directory or in the `tests` directory. The tests will check both locations, prioritizing `.env.test` over the standard `.env` file.
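The lookup order described above can be sketched in plain Rust. This is an illustrative, std-only sketch, not the gateway's actual loader; the `resolve_env_file` helper and the exact candidate paths are assumptions based on the documented priority.

```rust
use std::path::{Path, PathBuf};

/// Return the first environment file that exists, mirroring the documented
/// priority: `.env.test` in the project root, then `tests/.env.test`,
/// and finally the standard `.env` file.
fn resolve_env_file(root: &Path) -> Option<PathBuf> {
    let candidates = [
        root.join(".env.test"),
        root.join("tests/.env.test"),
        root.join(".env"),
    ];
    candidates.into_iter().find(|p| p.exists())
}

fn main() {
    match resolve_env_file(Path::new(".")) {
        Some(p) => println!("Loading environment from {}", p.display()),
        None => eprintln!("Warning: Neither .env.test nor .env files were found"),
    }
}
```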
The AWS Bedrock integration test uses the Claude 3 Sonnet model (`anthropic.claude-3-sonnet-20240229-v1:0`) by default. Unlike other providers that use API keys, Bedrock uses AWS credentials for authentication:
- `AWS_ACCESS_KEY_ID`: Your AWS access key with Bedrock permissions
- `AWS_SECRET_ACCESS_KEY`: Your AWS secret key
- `AWS_REGION`: The AWS region where Bedrock is available (e.g., us-east-1)
The test validates that:
- Request IDs are properly extracted from the AWS Bedrock response headers
- Streaming and non-streaming modes work correctly
- Token usage and metrics are properly captured in ElasticSearch
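The request-ID extraction mentioned above can be illustrated with a small std-only sketch. AWS services generally return the request ID in an `x-amzn-requestid` response header; the `extract_request_id` helper and the exact header name used by the gateway's Bedrock integration are assumptions here, not the actual implementation.

```rust
use std::collections::HashMap;

/// Pull a request ID out of AWS-style response headers, matching the
/// header name case-insensitively (HTTP header names are case-insensitive).
fn extract_request_id(headers: &HashMap<String, String>) -> Option<String> {
    headers
        .iter()
        .find(|(name, _)| name.eq_ignore_ascii_case("x-amzn-requestid"))
        .map(|(_, value)| value.clone())
}

fn main() {
    let mut headers = HashMap::new();
    headers.insert("X-Amzn-RequestId".to_string(), "abc-123".to_string());
    println!("{:?}", extract_request_id(&headers));
}
```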
First, start the AI Gateway in a separate terminal with ElasticSearch enabled:
```bash
# Start the gateway with ElasticSearch enabled
ENABLE_ELASTICSEARCH=true cargo run
```
The easiest way to run the tests is to use the provided script, which handles the setup and execution:
```bash
# Make the script executable
chmod +x tests/run_tests.sh

# Run all tests
./tests/run_tests.sh

# Run tests for a specific provider
./tests/run_tests.sh openai
./tests/run_tests.sh anthropic
```
This script will:
- Check if a `.env.test` file exists and create one from the template if it doesn't
- Run the specified tests with the proper configuration
- Display helpful output about the test execution
If you prefer to run the tests manually:
```bash
# Run all integration tests
cargo test --test run_integration_tests -- --nocapture
```

The `--nocapture` flag ensures that test output (e.g., request/response details) is printed to the console, which is helpful for debugging.
- Set up your test environment:

  ```bash
  # Copy the sample test environment file
  cp tests/.env.test.example .env.test

  # Edit the file to add your API keys for the providers you want to test
  nano .env.test
  ```

- Start the gateway with ElasticSearch enabled:

  ```bash
  ENABLE_ELASTICSEARCH=true cargo run
  ```

- Run the integration tests:

  ```bash
  # Run all tests
  cargo test --test run_integration_tests -- --nocapture

  # Run tests for specific providers
  cargo test --test run_integration_tests openai -- --nocapture
  cargo test --test run_integration_tests anthropic -- --nocapture
  cargo test --test run_integration_tests groq -- --nocapture
  cargo test --test run_integration_tests fireworks -- --nocapture
  cargo test --test run_integration_tests together -- --nocapture
  cargo test --test run_integration_tests bedrock -- --nocapture
  ```
If you encounter test failures, check the following:
- Environment Variables: Ensure your `.env.test` file exists and contains valid API keys for the providers you're testing. The test output will show which file was loaded.
- Gateway Status: Make sure the AI Gateway is running with `ENABLE_ELASTICSEARCH=true`.
- ElasticSearch Setup: Verify that ElasticSearch is properly configured and accessible.
- Console Output: Look at the test output for detailed error messages, which often point to specific configuration issues.
- Environment File Not Found: The tests will show which environment file was loaded. If you see "Warning: Neither .env.test nor .env files were found", you need to create one of these files with your test configuration.
To add tests for a new provider:
- Create a new test file in the `tests/integration/` directory, e.g., `groq_test.rs`
- Use the common test module to implement the tests:

  ```rust
  use super::common::{ProviderTestConfig, run_non_streaming_test, run_streaming_test};

  #[tokio::test]
  async fn test_groq_non_streaming() {
      let config = ProviderTestConfig::new("groq", "GROQ_API_KEY", "llama2-70b-4096");
      run_non_streaming_test(&config).await;
  }

  #[tokio::test]
  async fn test_groq_streaming() {
      let config = ProviderTestConfig::new("groq", "GROQ_API_KEY", "llama2-70b-4096");
      run_streaming_test(&config).await;
  }
  ```
- Add the new module to `tests/integration/mod.rs`:

  ```rust
  pub mod common;
  pub mod openai_test;
  pub mod anthropic_test;
  pub mod groq_test; // Add the new module here
  ```
- Gateway Not Running: Ensure the AI Gateway is running and accessible at the URL specified in `.env.test`.
- Authentication Errors: Make sure your API keys in `.env.test` are valid and have the necessary permissions.
- ElasticSearch Connectivity: Verify that ElasticSearch is properly configured and accessible. Check the gateway logs for any connection errors.
- Test Failures: The tests validate a variety of metrics and response fields. If tests fail, review the test output for details on which validation failed.
- Environment File Not Found: The tests will show which environment file was loaded. If you see "Warning: Neither .env.test nor .env files were found", you need to create one of these files with your test configuration.
To see detailed logs from the gateway during test execution, adjust the log level when starting the gateway:
```bash
RUST_LOG=debug ENABLE_ELASTICSEARCH=true cargo run
```
This will provide more information about request processing, metric extraction, and ElasticSearch integration.
The common test module provides a flexible way to customize test cases. You can adjust the test parameters using the fluent interface provided by `ProviderTestConfig`:

```rust
let config = ProviderTestConfig::new("openai", "OPENAI_API_KEY", "gpt-4")
    .with_prompt("Explain quantum computing in simple terms")
    .with_max_tokens(200);
```
This allows you to test specific models or use cases with minimal code duplication.
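A consuming-builder pattern like the one used above might look roughly as follows. This is a hypothetical sketch inferred from the usage shown; the actual fields, defaults, and method signatures in `common.rs` may differ.

```rust
/// Hypothetical shape of the fluent config builder; field names and the
/// default prompt/max_tokens values are assumptions, not the real code.
#[derive(Debug, Clone, PartialEq)]
struct ProviderTestConfig {
    provider: String,
    api_key_var: String,
    model: String,
    prompt: String,
    max_tokens: u32,
}

impl ProviderTestConfig {
    fn new(provider: &str, api_key_var: &str, model: &str) -> Self {
        Self {
            provider: provider.to_string(),
            api_key_var: api_key_var.to_string(),
            model: model.to_string(),
            prompt: "Say hello.".to_string(), // assumed default
            max_tokens: 100,                  // assumed default
        }
    }

    // Each `with_*` method consumes `self` and returns it, which is what
    // allows the chained call style shown above.
    fn with_prompt(mut self, prompt: &str) -> Self {
        self.prompt = prompt.to_string();
        self
    }

    fn with_max_tokens(mut self, max_tokens: u32) -> Self {
        self.max_tokens = max_tokens;
        self
    }
}

fn main() {
    let config = ProviderTestConfig::new("openai", "OPENAI_API_KEY", "gpt-4")
        .with_prompt("Explain quantum computing in simple terms")
        .with_max_tokens(200);
    println!("{:?}", config);
}
```

Consuming builders keep each test's configuration to a single expression, which is why adding a provider only takes the few lines shown in the `groq_test.rs` example.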