This project is an implementation of a modular Telegram bot based on aiogram, designed for remote and local ML inference. It evolved from its predecessor, Botality I. Currently integrated with:
- Stable Diffusion (using the stable-diffusion-webui API)
- VITS built-in text-to-speech engine (using TTS)
- LLMs such as llama, gpt-j-6b, cerebras-gpt and gpt-2, with assistant mode supported via alpaca-lora, gpt4all-lora, adapter-model and minChatGPT
[Bot]
- User-based queues and delayed task processing
- Multiple modes to filter access scopes (whitelist / blacklist / both / admin-only)
- Support for accelerated inference on M1 Macs
[LLM]
- Supports dialog mode, casually playing a role described in a character file; chat history is kept either with all users in a group chat or with each user separately
- Character files can be easily localized into any language for non-English models
- Assistant mode via the /ask command or with direct replies (configurable)
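For example, a hypothetical assistant-mode exchange (the actual reply depends on the model and character file you use):

```
You: /ask What is the capital of France?
Bot: The capital of France is Paris.
```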
[SD]
- CLI-like way to pass stable diffusion parameters
- pre-defined prompt wrappers
- lora integration with easy syntax: `lora_name100` => `<lora:lora_name:1.0>`, plus custom lora activators
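As an illustration, a made-up prompt using the lora shorthand (send `/tti -h` to the bot for the actual parameter list):

```
/tti a watercolor lighthouse on a cliff lora_name100
```

Here `lora_name100` is expanded to `<lora:lora_name:1.0>` before the prompt is passed to the webui API, assuming a lora called `lora_name` is installed.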
[TTS]
- can be run remotely or on the same machine as the bot
- tts output is sent as voice messages
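Since voice names double as bot commands (see the usage notes at the end of this README), a made-up voice called `eva` listed in `tts_voices` would be used like this:

```
/eva Hello, world!
```

The bot then replies with the synthesized speech as a voice message.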
[Setup]
- rename `.env.example` to `.env`, and do NOT add the `.env` file to your commits!
- set up your Telegram bot token and other configuration options
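A minimal `.env` sketch; the variable name for the token and the value formats are illustrative guesses, so check `.env.example` for the real option names:

```
# Telegram bot token from @BotFather (variable name is a guess, see .env.example)
bot_token=123456789:replace-with-your-token
# modules to enable (module list format may differ in .env.example)
active_modules=sd,tts,llm
```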
- install the requirements: `pip install -r requirements.txt`
- install the optional requirements if you want to use tts and tts_server (`pip install -r requirements-tts.txt`) and llm (`pip install -r requirements-llm.txt`); for llm you'll probably also need a fresh version of pytorch
- for the stable diffusion module, make sure that you have webui installed and that it is running with the `--api` flag
- for the text-to-speech module, download the VITS models, put their names in the `tts_voices` configuration option and the path to their directory in `tts_path`
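For instance, the tts options in `.env` might look like this (the voice names and path are made up, and the exact value format may differ in `.env.example`; the option names come from this README):

```
# names of the downloaded VITS voices, also used as bot commands
tts_voices=eva,adam
# directory containing the downloaded voice models
tts_path=/path/to/vits/models
```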
- for the llm module, see the [LLM Setup] section below
- run the bot with `python bot.py`

Python 3.10+ is recommended, due to aiogram compatibility.
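The whole setup, condensed into one shell session (assuming both optional extras are wanted; `cp` is used here instead of an in-place rename so the example file stays intact):

```sh
cp .env.example .env                  # then edit .env: token, modules, paths
pip install -r requirements.txt
pip install -r requirements-tts.txt   # optional: tts / tts_server
pip install -r requirements-llm.txt   # optional: llm
python bot.py
```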
[LLM Setup]
Supported models:
- original llama (the 7b version was tested on the llama-mps fork for Macs); requires running the bot with `python3.10 -m torch.distributed.launch --use_env bot.py`; assistant mode for original llama is available with LLaMA-Adapter, and to use both chat and assistant mode, some changes are necessary for non-Mac users
- hf llama by decapoda-research (outputs are way worse than original llama on Mac) + alpaca-lora (outputs are ok) / gpt4all-lora (outputs are ok)
- gpt-2 (tested on ru-gpt3), nanoGPT (tested on minChatGPT [weights])
- gpt-j (tested on a custom model)
- Cerebras-GPT (tested on the 1.3B model)
- Make sure that you have enough RAM / VRAM to run the models.
- Download the weights (and the code if needed) for any large language model
- in the `.env` file, make sure that `"llm"` is in `active_modules`, then set:
  - `llm_paths` - change the path(s) of the model(s) that you downloaded
  - `llm_active_model_type` = the model type that you want to use; it can be `gpt2`, `gptj`, `llama_orig`, `llama_hf` or `cerebras_gpt`
  - `llm_character` = a character of your choice from the `characters` directory, for example `characters.gptj_6B_default`; character files also contain model configuration options optimal for a specific model, so feel free to change the character files, edit their personality and use them with other models
  - `llm_assistant_chronicler` = an input/output formatter/parser for the assistant task; can be `alpaca`, `minchatgpt` or `gpt4all`
  - `llm_history_grouping` = `user` to store history with each user separately, or `chat` to store group chat history with all users in that chat
  - `llm_assistant_use_in_chat_mode` = `True`/`False`; when False, use the /ask command to use alpaca-lora in assistant mode; when True, all messages are treated as questions
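Putting the options above together, a sketch of the llm part of a `.env` file (the path and the exact value format of `llm_paths` are illustrative guesses; the option names and allowed values are from the list above):

```
llm_paths={"gptj": "/path/to/gpt-j-6b"}   # value format is a guess, check .env.example
llm_active_model_type=gptj
llm_character=characters.gptj_6B_default
llm_assistant_chronicler=alpaca
llm_history_grouping=user
llm_assistant_use_in_chat_mode=False
```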
Send your bot a message with the command /tti -h for more info on how to use stable diffusion, and /tts -h for the tts module; for tts, the bot uses the voice names from the configuration file as commands. Try the /llm command for llm module details.
License: the code of this project is currently distributed under the CC BY-NC-SA 4.0 license; third-party libraries might have different licenses.