An AutoHotkey v2 application that utilizes OpenRouter.ai to seamlessly integrate Large Language Models into your daily workflow. Process text with customizable prompts at the press of a hotkey and interact with multiple AI models simultaneously.
Tip
Want to ask questions about how to use this app? Download this documentation and include it in your prompt when using your preferred AI chat app!
Navigate through this page by clicking the menu button in the upper-right corner.
Simply highlight any text and press a hotkey to access AI-powered text processing.
Demo videos:
- Summarize.mp4
- Translate.mp4
- Define.mp4
- With.copied.text.mp4
- Without.copied.text.mp4
- Chat.mp4
- Copy.mp4
- Retry.mp4
- Chat.History.and.Latest.Response.mp4
- Auto.paste.mp4
- Auto.paste.with.custom.prompt.mp4
- Two.models.mp4
- Reply.to.two.models.mp4
- Conversing.with.10.models.at.once.mp4
- Two.models.online.search.mp4
- AutoHotkey v2 (requires version 2.0.18 or later)
- Windows OS
- API key from OpenRouter.ai
- Run the LLM AutoHotkey Assistant.ahk script and press the backtick hotkey.
- Select Options → Edit prompts.
- Enter your OpenRouter.ai API key within the quotation marks (an illustrative example of this line appears after the note below). Then, press CTRL + S to save the file automatically and reload the application.
Note
To ensure the API key is automatically applied and the application reloads, use the keyboard shortcut CTRL + S to save. Saving via File → Save will not trigger the automatic reload.
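For orientation, the line you are editing is just an ordinary quoted string assignment. The variable name below is only illustrative; the actual name used in the script may differ:

; Illustrative only - the variable name in the actual script may differ.
; Paste your OpenRouter.ai API key between the quotation marks.
apiKey := "your-openrouter-api-key-here"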
- You can now use the app! If you want to further enhance your experience and customize your prompts, press the backtick hotkey and select Options → Edit prompts again. See Editing prompts for more info.
Note
The app icon will appear in your system tray and will indicate that the script is running in the background.
To terminate the script, right-click the icon and select Exit.
- Highlight any text.
- Press the backtick hotkey to bring up the prompt menu.
- Select a prompt to process the text.
- View and interact with the AI response in the Response Window.
- If you want to type the backtick character itself, you can press CapsLock + Backtick to suspend and unsuspend the script. A message will be displayed at the bottom indicating that the app is suspended.
- Backtick: Show prompt menu
- Ctrl + S: Automatically saves and reloads the script when editing in Notepad (or any other editing tool whose window title matches LLM AutoHotkey Assistant.ahk)
- CapsLock + backtick: Suspend/resume hotkeys
- ESC: Cancel ongoing requests
- CTRL + W: Close the following windows:
  - Custom prompt
  - Chat
  - Chat with specific prompt
  - Response Window
You can automatically run the script at startup by following the steps below:
- Copy LLM AutoHotkey Assistant.ahk.
- Enter shell:startup in the File Explorer address bar and press Enter.
- Right-click → Paste shortcut.
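If you would rather script this step, AutoHotkey's built-in FileCreateShortcut can create the same startup shortcut. This is only an optional sketch; it assumes the snippet is run from the folder that contains the script:

; Optional alternative: create the startup shortcut from an AutoHotkey v2 script.
; Assumes the assistant script sits next to this snippet (A_ScriptDir).
FileCreateShortcut(A_ScriptDir "\LLM AutoHotkey Assistant.ahk", A_Startup "\LLM AutoHotkey Assistant.lnk")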
Edit the prompts array in the script to add your own prompts.
prompts := [{
promptName: "Your Prompt Name",
menuText: "&1 - Menu Text",
systemPrompt: "Your system prompt",
APIModels: "model-name",
copyAsMarkdown: true,
isAutoPaste: true,
isCustomPrompt: true,
customPromptInitialMessage: "Initial message that will show on Custom Prompt window",
tags: ["&tag1", "&tag2"],
skipConfirmation: true
}]
The name of the prompt. This will also be shown in the tooltip, Send message to, Activate, Minimize, and Close menus. In addition, this will also show in the Response Window title together with the chosen API model.
The name of the prompt that will appear when you press the hotkey to bring up the menu. The ampersand (&) marks a shortcut key: after bringing up the menu, pressing the character that follows the ampersand selects the prompt.
Note
You can have duplicate shortcut keys for the prompts. Pressing the shortcut key will highlight the first prompt, and pressing the shortcut key again will highlight the second prompt. Pressing Enter afterwards will select the prompt and initiate the request.
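Put together, a minimal single-prompt entry using these two fields might look like the sketch below; the model name is only an example:

; Pressing the backtick hotkey and then S highlights this entry; Enter selects it.
prompts := [{
    promptName: "Summarize",
    menuText: "&S - Summarize",
    systemPrompt: "Summarize the following text in three sentences.",
    APIModels: "google/gemini-2.0-flash-001"
}]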
This will be the initial prompt and will set the tone and context of the conversation.
Long prompts can be divided into new lines to improve readability.
prompts := [{
promptName: "Multi-line prompt example",
menuText: "Multi-line prompt example",
systemPrompt: "
(
This prompt is broken down into multiple lines.
Here is the second sentence.
And the third one.
As long as the prompt is inside the quotes and the opening and closing parenthesis,
it will be valid.
)",
APIModels: "
(
google/gemini-2.0-flash-thinking-exp:free
)"
}]
The API model that will be used to process the prompt.
Get your desired model from the OpenRouter models website, click on the clipboard icon beside the name, and paste it here.
Some example models:
- openai/o3-mini-high
- anthropic/claude-3.5-sonnet
- google/gemini-2.0-flash-001
- deepseek/deepseek-r1
In addition, you can append :online to any model to give it web search capability. Check here to learn how it works.
- openai/o3-mini-high:online
- anthropic/claude-3.5-sonnet:online
- google/gemini-2.0-flash-001:online
- deepseek/deepseek-r1:online
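For example, a prompt entry that lets the model search the web before answering might look like this sketch (the field values are illustrative):

; The :online suffix gives the chosen model web search capability.
prompts := [{
    promptName: "Research assistant",
    menuText: "&R - Research assistant",
    systemPrompt: "Answer the question using up-to-date information from the web.",
    APIModels: "google/gemini-2.0-flash-001:online"
}]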
To enable multi-model functionality, you can specify different API models, separating them with a comma and a space:
prompts := [{
promptName: "Deep thinking multi-model custom prompt",
menuText: "&1 - Deep thinking multi-model custom prompt",
systemPrompt: "You are a helpful assistant. Follow the instructions that I will provide or answer any questions that I will ask. My first query is the following:",
APIModels: "perplexity/r1-1776, openai/o3-mini-high, anthropic/claude-3.7-sonnet:thinking, google/gemini-2.0-flash-thinking-exp:free",
isCustomPrompt: true,
customPromptInitialMessage: "This is a message template."
}]
You can also enter them into new lines for better readability:
prompts := [{
promptName: "Deep thinking multi-model custom prompt",
menuText: "&1 - Deep thinking multi-model custom prompt",
systemPrompt: "You are a helpful assistant. Follow the instructions that I will provide or answer any questions that I will ask. My first query is the following:",
APIModels: "
(
perplexity/r1-1776,
openai/o3-mini-high,
anthropic/claude-3.7-sonnet:thinking,
google/gemini-2.0-flash-thinking-exp:free
)",
isCustomPrompt: true,
customPromptInitialMessage: "This is a message template."
}]
After you select a prompt with multiple API models, the Send message to, Activate, Minimize, and Close menu options become available when you press the backtick hotkey.
Since this app uses OpenRouter.ai service, you get access to the latest models as soon as they're available.
Tip
Feeling overwhelmed by the number of models to choose from? Take a look at OpenRouter.ai's ranking page to discover the best models for each task. You can also find benchmarks across various models at LiveBench.ai.
Your prompt will be processed by a meta-model and routed to one of dozens of models, optimizing for the best possible output. To use it, just enter openrouter/auto in the APIModels field.
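A prompt entry that delegates model selection to the Auto Router could therefore look like this sketch (the other field values are illustrative):

; openrouter/auto lets OpenRouter route the request to a suitable model.
prompts := [{
    promptName: "Auto-routed prompt",
    menuText: "&A - Auto-routed prompt",
    systemPrompt: "You are a helpful assistant.",
    APIModels: "openrouter/auto"
}]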
Setting copyAsMarkdown: true will enable the Copy button in the Response Window to copy content in Markdown format. This is especially useful for responses that need Markdown content, such as code for programming.
If you'd rather copy the response as plain text or HTML-formatted text (default behavior), simply remove this setting.
Setting isAutoPaste: true will automatically paste the model's response in Markdown format. Remove this if you don't need auto-paste functionality.
Note
The app will automatically disable the Auto Paste functionality if more than one model is set, and will show the Response Window instead.
Default behavior of copied content between isAutoPaste: true and Copy:

Setting | Format
---|---
Copy button from the Response Window | HTML
isAutoPaste: true | Markdown
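For instance, a prompt aimed at code-heavy answers might combine both settings, as in this illustrative sketch:

; copyAsMarkdown makes the Copy button return Markdown;
; isAutoPaste pastes the response automatically (single-model prompts only).
prompts := [{
    promptName: "Refactor code",
    menuText: "&C - Refactor code",
    systemPrompt: "Refactor the following code and explain the changes.",
    APIModels: "google/gemini-2.0-flash-001",
    copyAsMarkdown: true,
    isAutoPaste: true
}]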
Setting isCustomPrompt: true will allow the prompt to show an input box to write custom prompts. Remove this if you don't need Custom Prompt functionality.
An optional message that you can set to be displayed when the Custom Prompt window is shown. Remove this if you don't want to show a message whenever you open the Custom Prompt.
Initial.message.mp4
Tip
You can also split a long message into a series of multiple lines. See Splitting a long prompt into a series of multiple lines for more info.
Important
Make sure to add a comma at the end of the line that precedes the Auto Paste, Custom Prompt, copyAsMarkdown, and similar settings:
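For example (the field values are illustrative):

; Note the comma after the APIModels line - without it, the entry will not load.
prompts := [{
    promptName: "Custom prompt example",
    menuText: "&C - Custom prompt example",
    systemPrompt: "You are a helpful assistant.",
    APIModels: "google/gemini-2.0-flash-001",
    isCustomPrompt: true
}]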
Enabling this feature will sort and group the prompts by their tags.
For example, this will show &1 - Gemini, GPT-4o, Claude under both the &Custom prompts and &Multi-models submenus:
prompts := [{
promptName: "Multi-model custom prompt",
menuText: "&1 - Gemini, GPT-4o, Claude",
systemPrompt: "System prompt",
APIModels: "google/gemini-2.0-flash-thinking-exp:free, openai/gpt-4o, anthropic/claude-3.7-sonnet",
isCustomPrompt: true,
customPromptInitialMessage: "How can I leverage the power of AI in my everyday tasks?",
tags: ["&Custom prompts", "&Multi-models"]
}, {
promptName: "Auto-paste custom prompt",
menuText: "&5 - Auto-paste custom prompt",
systemPrompt: "You are a helpful assistant. Follow the instructions that I will provide or answer any questions that I will ask.",
APIModels: "google/gemini-2.0-flash-thinking-exp:free",
isCustomPrompt: true,
isAutoPaste: true,
tags: ["&Custom prompts", "&Auto paste"]
}]
Setting skipConfirmation: true will skip confirmation messages when closing the following windows:
- Custom prompt
- Chat
- Chat with specific prompt
- Response Window
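A sketch of a prompt entry that closes these windows without asking (the other values are illustrative):

; skipConfirmation suppresses the confirmation messages listed above.
prompts := [{
    promptName: "Quick chat",
    menuText: "&Q - Quick chat",
    systemPrompt: "You are a helpful assistant.",
    APIModels: "google/gemini-2.0-flash-001",
    isCustomPrompt: true,
    skipConfirmation: true
}]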
Do you have prompts and settings you'd like to share? Check here to share your prompts!
The app does not collect logs, prompts, or copied text. It simply bundles up the conversation between you and your chosen API model and sends the request to OpenRouter.ai. See their privacy policy here. Adjust your OpenRouter privacy settings here.
Four temporary files are written to the temp folder (C:\Users\username\AppData\Local\Temp) for each API model after you select a prompt:
- chatHistoryJSONRequest contains the conversation between you and the model.
- cURLOutput contains the model's response.
- responseWindowData contains the data the Response Window needs to display and interact with the model's response.
- cURLCommand contains the cURL command that will be executed against the API.
These files will be created after you select a prompt and will be deleted when any of the following actions are performed:
- Pasting the response when isAutoPaste: true is set
- Pressing the ESC key after selecting a prompt but before receiving the model's response (for example, if the Response Window has not yet opened)
- Closing the Response Window
Yes, you can use your own keys, but there is a caveat: You must use OpenRouter's API keys in the app, then configure your provider's API settings on the Integrations page. This ensures OpenRouter will prioritize using your key.
More information here.
Usage costs vary per model. The model's input/output token price is indicated below its name.
Tip
Search for free models to avoid any charges on your credits when using the app. These free models are particularly helpful when you want to explore the app's features or experiment with different models.
See OpenRouter's documentation for their limits.
Note
Negative credit balance: If your account has a negative credit balance, you may receive 402 errors, even when using free models. Add credits to bring your balance above zero to resolve this and regain access.
I'm uncertain if it will work, as I don't have a local AI setup on my machine to test it myself. However, it's highly likely to work if your local AI uses the same format as the OpenAI SDK. OpenRouter relies on the OpenAI SDK for request processing. I followed the OpenRouter documentation to configure the app to connect to their API.
To understand how the app sends and receives requests through the OpenRouter API, open the Config.ahk file in the lib folder. If you successfully set up the app to connect to your local LLM, please let me know, and I will update this information.
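If you would like to experiment outside the app first, the request it builds is an ordinary OpenAI-style chat completion sent to OpenRouter's /api/v1/chat/completions endpoint with cURL. The standalone AutoHotkey v2 sketch below is not the app's actual Config.ahk code; the model name and temp file names are placeholders, and it assumes curl.exe is available (it ships with Windows 10 and later):

; Standalone sketch: send one chat completion request to OpenRouter via cURL
; and show the raw JSON response. This is not the app's implementation.
apiKey := "your-openrouter-api-key-here"
body := '{"model": "google/gemini-2.0-flash-001", "messages": [{"role": "user", "content": "Say hello."}]}'

bodyFile := A_Temp "\openRouterRequest.json"
outFile  := A_Temp "\openRouterResponse.json"
FileOpen(bodyFile, "w", "UTF-8-RAW").Write(body)   ; UTF-8 without BOM so the JSON parses cleanly

RunWait('curl -s "https://openrouter.ai/api/v1/chat/completions"'
    . ' -H "Authorization: Bearer ' apiKey '"'
    . ' -H "Content-Type: application/json"'
    . ' --data "@' bodyFile '" -o "' outFile '"', , "Hide")

MsgBox FileRead(outFile)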
Yes, you can modify the LLM AutoHotkey Assistant.ahk script to point to your portable AutoHotkey64.exe file. Just change this line:
Run("lib\Response Window.ahk " "`"" dataObjToJSONStrFile)
to
Run('"C:\path\to\AutoHotkey64.exe" "' A_ScriptDir '\lib\Response Window.ahk" "' dataObjToJSONStrFile '"')
Thanks to @WhazZzZzup25 for testing this out.
Check out their documentation to learn more about their service.
- Timestamp messages in Chat History
- File upload (e.g., md, txt, images, etc.)
- An option to select an area of the screen and automatically upload it to the Response Window as an image
- Importing and exporting conversations
- Conversation log viewer
- Delete individual messages
Contributions are welcome! Feel free to report bugs and suggest features.
- AutoXYWH - Move control automatically when GUI resizes (converted to v2 by Relayer and code improvements by autoexec)
- The-CoDingman/WebViewToo - Allows for use of the WebView2 Framework within AHK to create Web-based GUIs
- GroggyOtter/jsongo_AHKv2 - JSON support for AHKv2 written completely in AHK
- nperovic/DarkMsgBox - Apply dark theme to your built-in MsgBox and InputBox
- nperovic/SystemThemeAwareToolTip - Make your ToolTip style conform to the current system theme
- nperovic/ToolTipEx - Enable the ToolTip to track the mouse cursor smoothly and permit the ToolTip to be moved by dragging
- overflowy/chat-key - Supercharge your productivity with ChatGPT and AutoHotkey
- htadashi/GPT3-AHK - An AutoHotKey script that enables you to use GPT3 in any input field on your computer
- ecornell/ai-tools-ahk - AI Tools - AutoHotkey - Enable global hotkeys to run custom OpenAI prompts on text in any window.
- kdalanon/ChatGPT-AutoHotkey-Utility - An AutoHotkey script that uses ChatGPT API to process text.
This project uses CSS files from highlight.js.
Copyright (c) 2006, Ivan Sagalaev. All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
- Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
- Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
- Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.