AI Chat as Markdown lets GPT-4 Omni / Claude 3.5 talk directly into your Obsidian Markdown notes.
Because it relies on nesting of headings, you can have multiple conversations, and even branching conversations, in the same note.
Please see the documented branching conversation example to understand how that works; a rough sketch also follows below.
The plugin supports images, so that you can talk about the images and diagrams embedded in your markdown, with models that support this, such as GPT-4 Omni and Claude 3.5.
It can be configured via the Obsidian plugin settings to use any OpenAI-compatible API server.
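To give an idea of what this looks like, here is a rough, illustrative sketch of a single note holding one conversation that splits into two branches. The `AI` reply headings follow the default behaviour described in the usage steps below; the other heading titles, the reply text, and the exact placement of the branches are assumptions on my part, so treat the documented branching example as authoritative.

```markdown
# Trip planning
Can you help me plan a weekend trip?

## AI
Happy to help. Where would you like to go?

### Porto by train
Which route would you suggest from Lisbon?

#### AI
(the reply for the train branch would appear here)

### Porto by car
How long would the drive take from Lisbon?

#### AI
(the reply for the car branch would appear here)
```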
- Install the plugin via community plugins (once approved).
- In Obsidian settings, under `AI Chat as Markdown`, configure the API Host (e.g. `https://api.openai.com/`), API key (`sk-xxxxx`), and model name (e.g. `gpt-4o`).
- In your Obsidian note, add the example text `# My heading` followed by `Are you there?` on the next line, position the cursor inside it, then, in edit mode, invoke `AI Chat as Markdown: Send current thread to AI` via the command palette. The answer should appear under a new sub-heading titled `## AI` (see the example after this list).
- You could also just select some text, and then invoke `AI Chat as Markdown: Send selected text to AI and append the response`.
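For instance, after running `AI Chat as Markdown: Send current thread to AI` on the two-line example note above, the note would look roughly like this (the reply text is, of course, illustrative):

```markdown
# My heading
Are you there?

## AI
Yes, I'm here. What would you like to talk about?
```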
- To install the plugin manually, copy over `main.js` and `manifest.json` to your vault at `VaultFolder/.obsidian/plugins/your-plugin-id/`.
This plugin was inspired by:

- the gptel LLM client for Emacs, and especially its branching context feature
- the ChatGPT-MD Obsidian plugin, although I preferred to use the official OpenAI nodejs library and the gptel-style nested-heading approach
To set up a development environment:

- Clone this repo.
- Make sure your NodeJS is at least v16 (`node --version`).
- `corepack enable`
- `yarn` to install dependencies (we use Yarn PnP).
- `yarn run dev` to start compilation in watch mode.
- Send embedded images to the model
- Settings for default model, key, etc.
- Set up Yarn PnP style packages
- Add a README section explaining the nesting-to-conversation algorithm
- Make debug mode configurable, then feature-flag logging behind that
- Enable per-document / YAML-header overrides of model, system prompt, etc. (see the hypothetical sketch after this list)
- Ignore `%...%` comment blocks
- Add the used model as a comment block (or some other mechanism) to each AI response section
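The per-document override item above is not implemented yet; purely to illustrate the idea, such an override might eventually look something like this in a note's front matter (all key names here are hypothetical):

```markdown
---
# Hypothetical keys, shown only to illustrate the idea; no such overrides exist yet.
model: gpt-4o
system_prompt: You are a terse assistant.
---

# My heading
Are you there?
```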
To make a new release:

- Update `manifest.json` and `CHANGELOG.md`.
- `yarn run build`
- Create a new GitHub release and tag it with e.g. 1.1.5.
- Upload the freshly built `main.js` and the updated `manifest.json` as binary attachments.