Simple plugin to enable interactions with AI models such as OpenAI ChatGPT, Anthropic Claude, OpenAI DALL·E, OpenAI Whisper, and local LLMs via Ollama directly from your Obsidian notes.
The currently available features of this plugin are:
- 🤖 Text assistant with OpenAI GPTs, Anthropic Claude models, and local LLMs via Ollama,
- 🖼 Image generation with DALL·E 3 and DALL·E 2,
- 🗣 Speech to text with Whisper.
- Claude 3.5 Sonnet and GPT-4o are available.
- The new OpenAI model GPT-4o-mini is now available.
- Added support for local LLMs via Ollama.
You have two commands to interact with the text assistant:
- Chat mode,
- Prompt mode.
| Chat Mode | Prompt Mode |
| --- | --- |
| Chat with the AI assistant from your Vault to generate content for your notes. From the chat, you can click on any interaction to copy it directly to your clipboard, or copy the whole conversation. Chat mode also lets you upload images to interact with GPT-4 Vision or Claude models. | Prompt mode lets you use a selected piece of text from your note as input for the assistant. From there, you can ask the assistant to translate, summarize, generate code, etc. |
Generate images for your notes.
In the result window, select the images you want to keep.
They will automatically be downloaded to your vault and their paths copied to your clipboard.
Then, you can paste the images anywhere in your notes.
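For example, wrapping a pasted path in Obsidian's embed syntax, such as `![[image.png]]`, displays the image inline; the file name here is only illustrative.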
Launch the Speech to Text command and start dictating your notes.
The transcript will be immediately added to your note at your cursor location.
For the text assistant:
- Model choice: choice of the text model. Currently `gpt-3.5-turbo`, `gpt-4-turbo`, `gpt-4`, `gpt-4o-mini`, Claude models, and local LLMs via Ollama are supported.
- Maximum number of tokens in the generated answer.
- Replace or Add below: in prompt mode, after selecting text from your note and entering your prompt, you can choose to replace your text with the assistant's answer or to paste the answer below it.
- Ollama API Address: if you use Ollama, specify the API address (default: `http://localhost:11434`) and the model (e.g. `llama3.1`, `gemma2`, `mistral-nemo`).
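If the assistant cannot reach your local models, you can check from a terminal that the Ollama server is running and that the model has been pulled. This is a plain Ollama health check, independent of the plugin:

```bash
# List the models currently available on the local Ollama server
curl http://localhost:11434/api/tags

# Pull a model if it is missing (llama3.1 is just an example)
ollama pull llama3.1

# Send a minimal non-streaming test prompt to confirm the model answers
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1",
  "prompt": "Say hello",
  "stream": false
}'
```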
For image generation, you can:
- Switch between DALL·E 3 and DALL·E 2,
- Change the default folder for generated images.
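For reference, DALL·E image generation goes through OpenAI's standard Images API. Assuming the plugin relies on this endpoint, an equivalent standalone request looks like this (`$OPENAI_API_KEY` is your own key; the prompt is just an example):

```bash
# Minimal DALL·E 3 request against OpenAI's Images API
curl https://api.openai.com/v1/images/generations \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "dall-e-3",
    "prompt": "A watercolor sketch of a mountain lake",
    "n": 1,
    "size": "1024x1024"
  }'
```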
For speech to text:
- The model used is Whisper,
- You can set a default language to improve the model's accuracy and latency. If you leave it empty, the language is detected automatically.
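The language setting corresponds to the optional `language` field of OpenAI's transcription endpoint (an ISO-639-1 code such as `fr`). Assuming the plugin uses this standard endpoint, an equivalent standalone call is:

```bash
# Transcribe a local audio file with Whisper; the optional "language"
# field mirrors the plugin's default-language setting
curl https://api.openai.com/v1/audio/transcriptions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -F model=whisper-1 \
  -F language=fr \
  -F file=@note.m4a
```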
You can install the AI Assistant directly from the Obsidian community plugins. To install it manually instead, clone the repository into your vault's plugins folder and build it:
```bash
cd path/to/vault/.obsidian/plugins
git clone https://github.com/qgrail/obsidian-ai-assistant.git && cd obsidian-ai-assistant
npm install && npm run build
```
- Open Obsidian Preferences -> Community plugins
- Refresh Installed plugins and activate AI Assistant.