A no-frills, developer-focused client for interacting with multiple Large Language Models (LLMs) directly from your favorite text editor.
LLMinster is built on the premise that developers don't need fancy web interfaces to interact with LLMs - they need tools that integrate seamlessly with their existing workflow. Key principles:
- Simplicity Over Features: Do one thing well - get LLM responses into your editor
- Independent Tools Over Monoliths: Each script is standalone and focuses on one model
- Developer Workflow First: Works with your existing tools and patterns
- Cost-Aware Design: Mini variants available for cost-effective development and testing
- Power Through Composition: Combine different models and variants to create sophisticated workflows
- No Web UI Required: Work directly from your development environment
- Multi-Monitor Support: Spread questions and answers across different screens
- Cross-Model Verification: Easy to get second opinions from different LLMs
- Local Control: All conversation history saved to disk
- IDE Integration: Leverage your editor's features for working with LLM responses
- Low Ceremony: Focus on getting results, not fighting with interfaces
- Claude (Anthropic): uses `claude.csx` to process `.claudeq` files
- Gemini (Google): uses `flash.csx` to process `.flashq` files
- GPT-4 (OpenAI): uses `gpt4o.csx` to process `.gpt4o-q` files; a mini variant is available via `gpt4o-mini.csx` (`.gpt4o-mini-q` files)
- O1 (OpenAI): preview version via `o1-preview.csx` (`.o1-preview-q` files); mini version via `o1-mini.csx` (`.o1-mini-q` files)
- Install the required dependencies:
  - .NET Core SDK, plus the `dotnet-script` global tool (`dotnet tool install -g dotnet-script`)
  - Required NuGet packages (automatically restored on first run)
- Create `api-keys.json`:

  ```json
  {
    "ClaudeApiKey": "your-anthropic-key",
    "GeminiKey": "your-google-key",
    "OpenAIKey": "your-openai-key"
  }
  ```
- Create `config.json`:

  ```json
  {
    "WatchDirectory": "path/to/your/working/directory",
    "ClaudeModel": "claude-3-opus-20240229"
  }
  ```

  (`ClaudeModel` is only needed for `claude.csx`.)
- Start the desired model watcher:

  ```bash
  dotnet script claude.csx   # For Claude
  dotnet script flash.csx    # For Gemini
  dotnet script gpt4o.csx    # For GPT-4
  # etc.
  ```
- Create a question file in your watch directory:

  ```bash
  echo "What is the meaning of life?" > myquestion.claudeq
  ```
- The script will process the question and create two files:
  - `myquestion.answer.md`: contains the model's response
  - `myquestion.context.md`: maintains the conversation history
- Open these files in your preferred editor and watch the answers update in real time.
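Putting the setup steps together, each watcher script is conceptually a small load-config, watch, respond loop. The sketch below shows that general shape in C# (runnable with `dotnet script`); it is an illustration rather than the actual `claude.csx`: the record shapes simply mirror the JSON files above, and `AskModel` is a hypothetical placeholder for the provider-specific API call, not a real SDK method.

```csharp
// Conceptual sketch of a watcher's structure; not the actual claude.csx.
using System;
using System.IO;
using System.Text.Json;

// Hypothetical record shapes mirroring api-keys.json and config.json.
record ApiKeys(string ClaudeApiKey, string GeminiKey, string OpenAIKey);
record Config(string WatchDirectory, string? ClaudeModel);

var keys = JsonSerializer.Deserialize<ApiKeys>(File.ReadAllText("api-keys.json"))!;
var config = JsonSerializer.Deserialize<Config>(File.ReadAllText("config.json"))!;

var watcher = new FileSystemWatcher(config.WatchDirectory, "*.claudeq");
watcher.Created += (_, e) => Process(e.FullPath);
watcher.Changed += (_, e) => Process(e.FullPath);
watcher.EnableRaisingEvents = true;

void Process(string questionFile)
{
    // myquestion.claudeq -> myquestion.answer.md / myquestion.context.md
    var baseName = Path.Combine(
        Path.GetDirectoryName(questionFile)!,
        Path.GetFileNameWithoutExtension(questionFile));
    var contextFile = baseName + ".context.md";

    var question = File.ReadAllText(questionFile);
    var context = File.Exists(contextFile) ? File.ReadAllText(contextFile) : "";

    // Hypothetical placeholder: send prior context plus the new question
    // to the provider's API and return the completion text.
    var answer = AskModel(keys.ClaudeApiKey, config.ClaudeModel, context, question);

    File.WriteAllText(baseName + ".answer.md", answer);
    File.AppendAllText(contextFile, $"\n## Question\n{question}\n\n## Answer\n{answer}\n");
}

string AskModel(string apiKey, string? model, string context, string question) =>
    throw new NotImplementedException("Provider-specific API call goes here.");

Console.WriteLine($"Watching {config.WatchDirectory} for *.claudeq files...");
Console.ReadLine();
```

A real watcher would also debounce duplicate `Changed` events and avoid reading files mid-write; those details are omitted here for brevity.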
- `*.claudeq`: Questions for Claude
- `*.flashq`: Questions for Gemini
- `*.gpt4o-q`: Questions for GPT-4
- `*.gpt4o-mini-q`: Questions for GPT-4 (mini version)
- `*.o1-preview-q`: Questions for O1 (preview)
- `*.o1-mini-q`: Questions for O1 (mini)
- `*.answer.md`: Model responses
- `*.context.md`: Conversation history
Start multiple watchers to compare responses:
```bash
# Terminal 1
dotnet script claude.csx

# Terminal 2
dotnet script gpt4o.csx
```
Create questions:
echo "Explain quantum computing" > quantum.claudeq
echo "Verify the above explanation" > quantum.gpt4o-q
Now you can see how different models approach the same topic or verify each other's outputs.
This pattern starts with simpler, cheaper models and progressively moves to more sophisticated ones as complexity increases. Best for:
- Starting new projects where requirements may evolve
- Working through problems where complexity isn't fully known
- Prototyping solutions before committing to expensive model usage
Example Workflow:
```bash
# 1. Start with basic implementation using cheaper model
echo "Create a basic Node.js web scraper" > scraper.flashq

# 2. Escalate when hitting edge cases
echo "Handle rate limiting and dynamic content in the scraper" > scraper.claudeq

# 3. Final optimization with premium model
echo "Optimize for parallel processing and add error resilience" > scraper.gpt4o-q
```
Start with premium models for complex architectural decisions, then use simpler models for implementation details. Best for:
- Complex system design projects
- When initial architecture quality is crucial
- Projects where implementation details are straightforward
Example Workflow:
```bash
# 1. Premium model for architecture
echo "Design a distributed caching system architecture" > cache.gpt4o-q

# 2. Step down to simpler model for component implementation
echo "Implement the cache invalidation mechanism based on above design" > cache.flashq

# 3. Use mini variant for routine additions
echo "Add logging and monitoring to cache components" > cache.gpt4o-mini-q
```
Use this pattern when errors or edge cases emerge in your initial implementation. Best for:
- Debugging complex issues
- When simpler models start producing inconsistent results
- For critical review of implementation details
Example Workflow:
```bash
# 1. Initial implementation
echo "Write a regex parser for log files" > parser.gpt4o-mini-q

# 2. Escalate when edge cases appear
echo "Fix parsing errors for multiline log entries" > parser.claudeq

# 3. Premium review for reliability
echo "Audit parser for all potential edge cases and performance issues" > parser.gpt4o-q
```
Balance cost and quality by strategically using premium models only where they add the most value. Best for:
- Long development sessions
- Projects with tight budget constraints
- Iterative development phases
Example Workflow:
```bash
# 1. Architecture with premium model
echo "Design a scalable event processing system" > events.gpt4o-q

# 2. Iterate with mini variant
echo "Implement event validation module" > events.gpt4o-mini-q
echo "Add event transformation logic" > events.gpt4o-mini-q

# 3. Premium model for critical reviews
echo "Review entire system for race conditions" > events.claudeq
```
Use multiple models to verify and validate critical solutions. Best for:
- Mission-critical code
- Security-sensitive implementations
- Complex algorithmic solutions
Example Workflow:
```bash
# 1. Initial implementation
echo "Implement AES encryption wrapper" > crypto.claudeq

# 2. Security review with different model
echo "Review encryption implementation for vulnerabilities" > crypto.gpt4o-q

# 3. Additional verification
echo "Verify the security assessment and suggest improvements" > crypto.flashq
```
Mix different models based on their strengths for different aspects of development. Best for:
- Complex projects with varying requirements
- When different models excel at different tasks
- Balancing cost and quality across project phases
Example Workflow:
```bash
# 1. Use Claude for detailed planning
echo "Create detailed technical specification for authentication system" > auth.claudeq

# 2. GPT-4 for security-critical components
echo "Implement password hashing and verification" > auth.gpt4o-q

# 3. Mini variant for routine endpoints
echo "Implement user profile CRUD endpoints" > auth.gpt4o-mini-q

# 4. Gemini for performance optimization
echo "Optimize database queries and caching" > auth.flashq
```
Switch between models based on context-window limitations. This pattern is crucial when conversations grow beyond a model's token limit. Best for:
- Long development sessions with extensive context
- When approaching the 128k-token limit typical of many models
- Projects requiring retention of large amounts of previous conversation
- When working with large codebases or extensive documentation
Example Workflow:
```bash
# 1. Start with GPT-4 for initial development
echo "Design a complex React application architecture" > app.gpt4o-q
echo "Implement core components" > app.gpt4o-q
echo "Add state management" > app.gpt4o-q

# 2. When approaching the context limit, migrate to Flash (1M-token context)
echo "Review all previous implementations and extend the application with additional features" > app.flashq

# 3. Continue development with the larger context window
echo "Implement complex features building on all previous context" > app.flashq
echo "Add comprehensive testing suite for all components" > app.flashq
```
- Intentional Code Duplication: Each script is standalone and was generated by an LLM, eliminating manual maintenance overhead.
- Strategic Model Selection: Easy switching between models based on task complexity and cost considerations.
- Flexible Workflows: Support for both escalation (simple→complex) and de-escalation (complex→simple) patterns.
- Cost Optimization: Use premium models for critical design decisions and simpler models for implementation details.
- Independent Scripts: Each model gets its own script, making it easy to modify or replace individual components.
- File-Based Interface: Follows Unix philosophy of using text files as interfaces.
- Local First: All data stored locally, no remote persistence required.
- Generated Code: All scripts were created using LLMs, demonstrating practical AI-assisted development.
This project intentionally keeps things simple. Each script is standalone and generated via LLM. If you'd like to contribute:
- Focus on documentation and usage examples
- Report issues with specific models/providers
- Share interesting use cases and workflows
MIT License.
The scripts in this project were generated using AI assistance, demonstrating how LLMs can be used to create tools for working with LLMs.