A modern issue tracking application with voice control capabilities. Built with Hilla (Vaadin), Spring Boot, and OpenAI's real-time voice API.
- Voice-controlled interface for hands-free operation
- Real-time issue management
- Filter issues by assignee
- Create, delete, and select issues using voice commands
- Update issue properties (title, description, status, assignee) with voice
- Responsive web interface
Frontend:
- React + TypeScript
- Hilla framework
- WebRTC for real-time voice communication
- OpenAI's real-time API for voice processing
Backend:
- Spring Boot
- Java
Prerequisites:
- Java 17 or newer
- Node.js 18 or newer
- OpenAI API key with access to real-time voice models
- Clone the repository
- Set your OpenAI API key as an environment variable:
  ```bash
  export OPENAI_API_KEY=your_api_key_here
  ```
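  On Windows (PowerShell), the equivalent is `$env:OPENAI_API_KEY = "your_api_key_here"`.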
The project is a standard Maven project. To run it from the command line:
- Windows: `mvnw`
- Mac & Linux: `./mvnw`
Then open http://localhost:8080 in your browser.
You can also import the project into your IDE of choice, as you would with any Maven project.
- Click the "Enable Voice Control" button in the application
- Once activated, you can use voice commands such as:
  - "Filter issues assigned to [name]"
  - "Show all issues"
  - "Create a new issue"
  - "Delete current issue"
  - "Select issue number [id]"
  - "Update the current issue's title to [title]"
  - "Change the status to in progress"
  - "Assign this issue to [name]"
  - "Update the description to [description]"
| Directory | Description |
|---|---|
| `src/main/frontend/` | Client-side source directory |
| `components/` | React components, including voice control |
| `views/` | UI view components |
| `themes/` | Custom CSS styles |
| `src/main/java/` | Server-side source directory |
| `application/` | Java services and models |
`VoiceControl` is a React component that enables real-time voice control in the application using WebRTC and OpenAI's real-time voice API. It handles audio streaming, WebRTC connection management, and function execution based on voice commands.
```tsx
import { VoiceControl } from './components/VoiceControl';

// Define functions that can be called via voice commands
const functions = [
  {
    name: 'filterIssues',
    description: 'Filter issues by assignee',
    parameters: {
      type: 'object',
      properties: {
        assignee: {
          type: 'string',
          description: 'Name of the person to filter by'
        }
      }
    },
    execute: async (args) => {
      // Implementation of the filter function
    }
  }
];

function App() {
  return (
    <div>
      <VoiceControl functions={functions} />
      {/* Rest of your application */}
    </div>
  );
}
```
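As a concrete sketch of what `execute` might do in a Hilla application: Hilla generates a TypeScript client for each `@BrowserCallable` Java service, so the filter function can delegate straight to the server. `IssueService` and `findByAssignee` below are hypothetical names used for illustration:

```tsx
import { useState } from 'react';
import { VoiceControl } from './components/VoiceControl';
// Hilla generates a TypeScript client per @BrowserCallable Java service.
// IssueService and findByAssignee are hypothetical names for illustration.
import { IssueService } from 'Frontend/generated/endpoints';

export default function IssuesView() {
  const [issues, setIssues] = useState<unknown[]>([]);

  const functions = [
    {
      name: 'filterIssues',
      description: 'Filter issues by assignee',
      parameters: {
        type: 'object',
        properties: {
          assignee: { type: 'string', description: 'Name of the person to filter by' }
        }
      },
      // Call the server, update local state, and return a short
      // confirmation that the model can read back to the user.
      execute: async (args: { assignee?: string }) => {
        const result = await IssueService.findByAssignee(args.assignee ?? '');
        setIssues(result);
        return `Showing ${result.length} issues`;
      }
    }
  ];

  return <VoiceControl functions={functions} />;
}
```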
- Real-time voice processing using WebRTC
- Automatic audio streaming setup and management
- Bidirectional communication channel for voice commands and responses
- Function registration system for voice-controlled actions
- Built-in UI for enabling/disabling voice control
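Under the hood, this amounts to a standard WebRTC offer/answer exchange with OpenAI's realtime endpoint, plus a data channel that carries events such as function calls. The following is a minimal sketch of that wiring, not the component's actual code; the model name, event shapes, and the server-minted ephemeral key are assumptions based on OpenAI's realtime API documentation:

```ts
// Minimal sketch of the WebRTC wiring, not the component's implementation.
type VoiceFn = {
  name: string;
  execute: (args: unknown) => Promise<unknown>;
};

async function connectVoice(ephemeralKey: string, functions: VoiceFn[]) {
  const pc = new RTCPeerConnection();

  // Stream the microphone to the model and play its audio replies.
  const mic = await navigator.mediaDevices.getUserMedia({ audio: true });
  mic.getTracks().forEach((track) => pc.addTrack(track, mic));
  pc.ontrack = (e) => {
    const audio = new Audio();
    audio.srcObject = e.streams[0];
    audio.play();
  };

  // Commands and responses travel over a data channel.
  const channel = pc.createDataChannel('oai-events');
  channel.onmessage = async (msg) => {
    const event = JSON.parse(msg.data);
    // Assumed event shape for completed function-call arguments.
    if (event.type === 'response.function_call_arguments.done') {
      const fn = functions.find((f) => f.name === event.name);
      if (fn) await fn.execute(JSON.parse(event.arguments));
      // A full implementation would also send the result back to the model.
    }
  };

  // SDP offer/answer handshake with the realtime endpoint.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  const resp = await fetch(
    'https://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview',
    {
      method: 'POST',
      body: offer.sdp,
      headers: {
        Authorization: `Bearer ${ephemeralKey}`,
        'Content-Type': 'application/sdp',
      },
    }
  );
  await pc.setRemoteDescription({ type: 'answer', sdp: await resp.text() });
  return pc;
}
```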
The component accepts the following props:

- `functions`: An array of function definitions that can be triggered by voice commands. Each function should have:
  - `name`: Function identifier
  - `description`: Description of what the function does (used by the AI)
  - `parameters`: JSON Schema of the function parameters
  - `execute`: The actual function implementation
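In TypeScript terms, each entry might be typed roughly as follows; this is an illustrative sketch, and the authoritative definitions live in `src/main/frontend/components/`:

```ts
// Illustrative typing of the `functions` prop (not the component's
// actual declarations; see src/main/frontend/components/ for those).
interface VoiceFunction {
  name: string;                        // Function identifier
  description: string;                 // What the function does (used by the AI)
  parameters: Record<string, unknown>; // JSON Schema describing the arguments
  execute: (args: any) => Promise<unknown>; // The actual implementation
}

interface VoiceControlProps {
  functions: VoiceFunction[];
}
```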
To create a production build:
- Windows: `mvnw clean package -Pproduction`
- Mac & Linux: `./mvnw clean package -Pproduction`
This will build a JAR file with all the dependencies and front-end resources, ready to be deployed. The file can be found in the `target` folder after the build completes.
To run the production build:
```bash
java -jar target/voice-crud-1.0-SNAPSHOT.jar
```