
Supports OpenAI, Mistral, ollama, oobabooga and more • Multi-user chat • Vision support • Streamed responses • 200 lines of code 🔥

kristianernst/discord-llm-chatbot

 
 


llmcord.py


Talk to LLMs with your friends!

Features

  • Elegant chat system

    Mention (@) the bot and it will reply to your message. Reply to the bot's message to continue from that point. Build conversations with reply chains!

    You can reply to any of the bot's messages to continue from wherever you want. Or reply to your friend's message (and @ the bot) to ask a question about it. There are no limits to this functionality.

    Additionally, you can seamlessly move any conversation into a thread. When you @ the bot in a thread, it remembers the conversation leading up to the message the thread was created from.

  • Choose your LLM

    Supports models from the OpenAI API, Mistral API, ollama, and many more thanks to LiteLLM.

    Also supports:
       - oobabooga/text-generation-webui
       - Jan
       - LM Studio

  • Vision support

    The bot can see image attachments when you choose a vision model.

  • Streamed responses

    The bot's responses are dynamically generated and turn green when complete.
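The streaming behavior above boils down to a simple pattern: accumulate chunks from the model and refresh the visible Discord message only every few chunks (Discord rate-limits edits), with one final edit when the stream ends. Here is a minimal, library-free sketch — the function and callback names are illustrative, not the project's actual code:

```python
def stream_to_message(chunks, edit, every=5):
    """Accumulate streamed text chunks, calling `edit(text, done)` at most
    once per `every` chunks and once more when the stream ends."""
    buffer = ""
    for i, chunk in enumerate(chunks, start=1):
        buffer += chunk
        if i % every == 0:
            edit(buffer, done=False)   # partial update while generating
    edit(buffer, done=True)            # final update (e.g. embed turns green)
    return buffer

# Example: record each edit instead of calling Discord
edits = []
text = stream_to_message(
    ["Hel", "lo ", "wor", "ld", "!"],
    lambda t, done: edits.append((t, done)),
    every=2,
)
```

In the real bot, `edit` would be an async `message.edit(...)` call, and `done` would switch the embed color to green.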

And more...

  • Easily set a custom personality (aka system prompt)
  • DM the bot for private access (no @ required)
  • User identity aware
  • Fully asynchronous
  • 1 Python file, ~200 lines of code
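The reply-chain system described under Features can be sketched without any Discord dependency: walk backwards through message references, then reverse the chain into chat-completion-style messages. This is a simplified model with hypothetical names; the real bot works with discord.py message objects:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Msg:
    """Minimal stand-in for a Discord message in a reply chain."""
    author: str
    content: str
    reference: Optional["Msg"] = None  # the message this one replies to

def build_conversation(msg: Msg, max_messages: int = 20) -> list[dict]:
    """Walk the reply chain newest-to-oldest, then return it in
    chronological order as chat-completion messages."""
    chain = []
    cur: Optional[Msg] = msg
    while cur is not None and len(chain) < max_messages:
        role = "assistant" if cur.author == "bot" else "user"
        chain.append({"role": role, "content": cur.content})
        cur = cur.reference
    return list(reversed(chain))

# Example: user -> bot -> user
root = Msg("alice", "What is Rust?")
bot_reply = Msg("bot", "Rust is a systems language.", reference=root)
followup = Msg("alice", "Is it memory safe?", reference=bot_reply)

history = build_conversation(followup)
```

The `max_messages` cap corresponds to the MAX_MESSAGES setting described in the instructions below.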

Instructions

Before you start, install Python and clone this git repo.

  1. Install Python requirements:

     pip install -r requirements.txt

  2. Create a copy of .env.example named .env and set it up:
DISCORD_BOT_TOKEN
  Create a new Discord application at discord.com/developers/applications and generate a token under the Bot tab. Also enable MESSAGE CONTENT INTENT.

LLM
  For LiteLLM supported providers (OpenAI API, Mistral API, ollama, etc.), follow the LiteLLM instructions for its model name formatting.
  For Jan, set to local/<MODEL_NAME> where <MODEL_NAME> is the name of the model you have loaded.
  For oobabooga and LM Studio, set to local/model regardless of the model you have loaded.

CUSTOM_SYSTEM_PROMPT
  Write practically anything you want to customize the bot's behavior!

ALLOWED_CHANNEL_IDS
  Discord channel IDs where the bot can send messages, separated by commas. Leave blank to allow all channels.

ALLOWED_ROLE_IDS
  Discord role IDs that can use the bot, separated by commas. Leave blank to allow everyone. Specifying at least one role also disables DMs.

MAX_IMAGES
  The maximum number of image attachments allowed in a single message. Only applicable when using a vision model. (Default: 5)

MAX_MESSAGES
  The maximum number of messages allowed in a reply chain. (Default: 20)

LOCAL_SERVER_URL
  The URL of your local API server. Only applicable when using oobabooga, Jan or LM Studio (i.e. when LLM starts with local/). (Default: http://localhost:5000/v1)

OPENAI_API_KEY
  Only required if you choose an OpenAI API model. Generate an OpenAI API key at platform.openai.com/account/api-keys. You must also add a payment method to your OpenAI account at platform.openai.com/account/billing/payment-methods.

MISTRAL_API_KEY
  Only required if you choose a Mistral API model. Generate a Mistral API key at console.mistral.ai/user/api-keys. You must also add a payment method to your Mistral account at console.mistral.ai/billing.

OPENAI_API_KEY and MISTRAL_API_KEY are provided as examples. Add more as needed for other providers.
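Putting the settings together, a filled-in .env might look like this (every value below is a placeholder — substitute your own token, IDs, and model name):

```
DISCORD_BOT_TOKEN=your-bot-token-here
LLM=gpt-3.5-turbo
CUSTOM_SYSTEM_PROMPT=You are a helpful Discord chatbot.
ALLOWED_CHANNEL_IDS=123456789012345678
ALLOWED_ROLE_IDS=
MAX_IMAGES=5
MAX_MESSAGES=20
LOCAL_SERVER_URL=http://localhost:5000/v1
OPENAI_API_KEY=your-openai-key-here
```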

  3. Invite the bot to your Discord server with this URL (replace <CLIENT_ID> with your Discord application's client ID, found under the OAuth2 tab):
https://discord.com/api/oauth2/authorize?client_id=<CLIENT_ID>&permissions=412317273088&scope=bot
  4. Run the bot:
python llmcord.py

Notes

  • Vision support is currently limited to gpt-4-vision-preview from OpenAI API. Support for local vision models like llava is planned.

  • Currently only the OpenAI API supports the name property in user messages, so only OpenAI API models are user identity aware (with the exception of gpt-4-vision-preview, which doesn't support it yet either). I tried the alternate approach of prepending users' names to the message content, but this doesn't seem to work well with all models.

  • I'm interested in using chromadb to enable asking the bot about ANYTHING in the current channel without having to reply to a message. I spent some time prototyping this but couldn't get it to a state I'm happy with.

  • PRs are welcome :)
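For context on the user-identity note above, the OpenAI chat format lets each user message carry a name field identifying the sender. A sketch of building such a message — the helper name is hypothetical, and the sanitization rule is an assumption (OpenAI restricts name to a limited character set):

```python
import re

def user_message(content: str, username: str) -> dict:
    """OpenAI-style user message carrying the sender's identity via the
    `name` field (assumed to allow only word characters and hyphens)."""
    safe = re.sub(r"[^a-zA-Z0-9_-]", "_", username)[:64]
    return {"role": "user", "content": content, "name": safe}

msg = user_message("hello!", "Cool User #42")
```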

