NOTE: this is Egg's fork of https://github.com/kharvd/gpt-cli for purposes of supporting Ophira in a jailbreaking experiment. It differs from the original primarily in that:
- It provides the LLM with a function it can call, `release_plague`.
- TBD: maybe watch for the function to be called?
- It names the executable `gpt-cli` rather than `gpt` to avoid conflict with /usr/local/sbin/gpt (GUID partition table maintenance utility).
See the bottom of this file for configuration specific to the jailbreaking experiment.
Command-line interface for chat LLMs.
- OpenAI
- Anthropic
- Google Gemini
- Cohere
- Other APIs compatible with OpenAI (e.g. Together, OpenRouter, local models with LM Studio)
[NOTE on screenshot: start the script with `gpt-cli`, not `gpt` as in the screenshot]
- Command-Line Interface: Interact with ChatGPT or Claude directly from your terminal.
- Model Customization: Override the default model, temperature, and top_p values for each assistant, giving you fine-grained control over the AI's behavior.
- Usage tracking: Track your API usage with token count and price information.
- Keyboard Shortcuts: Use Ctrl-C, Ctrl-D, and Ctrl-R shortcuts for easier conversation management and input control.
- Multi-Line Input: Enter multi-line mode for more complex queries or conversations.
- Markdown Support: Enable or disable markdown formatting for chat sessions to tailor the output to your preferences.
- Predefined Messages: Set up predefined messages for your custom assistants to establish context or role-play scenarios.
- Multiple Assistants: Easily switch between different assistants, including general, dev, and custom assistants defined in the config file.
- Flexible Configuration: Define your assistants, model parameters, and API key in a YAML configuration file, allowing for easy customization and management.
Prerequisites (reach out to Egg if you're missing these and have trouble installing them from the provided links):
- homebrew
- git
- python (ensure that the version is at least 3.11; check with `python --version`)
- pip (this should be installed automatically along with python; check with `pip --version` or `python3 -m pip --version`)
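A quick way to confirm the prerequisites are in place (the exact versions printed will vary by machine):
brew --version
git --version
python --version
python3 -m pip --version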
Install by cloning the repository manually:
git clone https://github.com/eggsyntax/gpt-cli
cd gpt-cli
pip install .
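If the install succeeded, the `gpt-cli` executable should be on your PATH and print its help text (this assumes pip's bin directory is already on your PATH):
which gpt-cli
gpt-cli --help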
Add the OpenAI API key to your `.bashrc` file (in the root of your home folder).
In this example we use nano, but you can use any text editor.
nano ~/.bashrc
export OPENAI_API_KEY=<your_key_here>
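After saving the file, reload it in your current shell and confirm the variable is set (standard shell commands, nothing gpt-cli-specific):
source ~/.bashrc
echo $OPENAI_API_KEY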
NOTE: for purposes of the experiment I've created a small script to run an interactive chat against claude-3.5-sonnet, with the evil function provided.
Run a chat with bioengineered plague capabilities:
./claude
Or run the tool with various other options:
gpt-cli
You can also use a `gpt.yml` file for configuration. See the Configuration section below.
Make sure to set the `OPENAI_API_KEY` environment variable to your OpenAI API key (or put it in the `~/.config/gpt-cli/gpt.yml` file as described below).
usage: gpt-cli [-h] [--no_markdown] [--model MODEL] [--temperature TEMPERATURE] [--top_p TOP_P]
[--log_file LOG_FILE] [--log_level {DEBUG,INFO,WARNING,ERROR,CRITICAL}]
[--prompt PROMPT] [--execute EXECUTE] [--no_stream]
[{dev,general,bash}]
Run a chat session with ChatGPT. See https://github.com/kharvd/gpt-cli for more information.
positional arguments:
{dev,general,bash}
The name of assistant to use. `general` (default) is a generally helpful
assistant, `dev` is a software development assistant with shorter
responses. You can specify your own assistants in the config file
~/.config/gpt-cli/gpt.yml. See the README for more information.
optional arguments:
-h, --help show this help message and exit
--no_markdown Disable markdown formatting in the chat session.
--model MODEL The model to use for the chat session. Overrides the default model defined
for the assistant.
--temperature TEMPERATURE
The temperature to use for the chat session. Overrides the default
temperature defined for the assistant.
--top_p TOP_P The top_p to use for the chat session. Overrides the default top_p defined
for the assistant.
--log_file LOG_FILE The file to write logs to. Supports strftime format codes.
--log_level {DEBUG,INFO,WARNING,ERROR,CRITICAL}
The log level to use
--prompt PROMPT, -p PROMPT
If specified, will not start an interactive chat session and instead will
print the response to standard output and exit. May be specified multiple
times. Use `-` to read the prompt from standard input. Implies
--no_markdown.
--execute EXECUTE, -e EXECUTE
If specified, passes the prompt to the assistant and allows the user to
edit the produced shell command before executing it. Implies --no_stream.
Use `-` to read the prompt from standard input.
--no_stream If specified, will not stream the response to standard output. This is
useful if you want to use the response in a script. Ignored when the
--prompt option is not specified.
--no_price Disable price logging.
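The `--prompt`/`--no_stream` combination makes the tool usable from shell scripts; a minimal sketch (the file names are just placeholders):
gpt-cli -p "Explain the difference between TCP and UDP in one paragraph" > answer.txt
cat notes.txt | gpt-cli -p - --no_stream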
Type `:q` or Ctrl-D to exit, `:c` or Ctrl-C to clear the conversation, `:r` or Ctrl-R to re-generate the last response.
To enter multi-line mode, enter a backslash `\` followed by a new line. Exit multi-line mode by pressing ESC and then Enter.
You can override the model parameters using the `--model`, `--temperature` and `--top_p` arguments at the end of your prompt. For example:
> What is the meaning of life? --model gpt-4 --temperature 2.0
The meaning of life is subjective and can be different for diverse human beings and unique-phil ethics.org/cultuties-/ it that reson/bdstals89im3_jrf334;mvs-bread99ef=g22me
The `dev` assistant is instructed to be an expert in software development and provide short responses.
$ gpt-cli dev
The `bash` assistant is instructed to be an expert in bash scripting and provide only bash commands. Use the `--execute` option to execute the commands. It works best with the `gpt-4` model.
gpt-cli bash -e "How do I list files in a directory?"
This will prompt you to edit the command in your `$EDITOR` before executing it.
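The editor comes from your `$EDITOR` environment variable, so you can override it for a single invocation in the usual shell way (the query text is just an example):
EDITOR=nano gpt-cli bash -e "Find all .log files under /var/log modified in the last day"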
You can configure the assistants in the config file `~/.config/gpt-cli/gpt.yml`. The file is a YAML file with the following structure (see also config.py):
default_assistant: <assistant_name>
markdown: False
openai_api_key: <openai_api_key>
anthropic_api_key: <anthropic_api_key>
log_file: <path>
log_level: <DEBUG|INFO|WARNING|ERROR|CRITICAL>
assistants:
  <assistant_name>:
    model: <model_name>
    temperature: <temperature>
    top_p: <top_p>
    messages:
      - { role: <role>, content: <message> }
      - ...
  <assistant_name>:
    ...
You can override the parameters for the pre-defined assistants as well.
You can specify the default assistant to use by setting the `default_assistant` field. If you don't specify it, the default assistant is `general`. You can also specify the `model`, `temperature` and `top_p` to use for the assistant. If you don't specify them, the default values are used. These parameters can also be overridden by the command-line arguments.
Example:
default_assistant: dev
markdown: True
openai_api_key: <openai_api_key>
assistants:
  pirate:
    model: gpt-4
    temperature: 1.0
    messages:
      - { role: system, content: "You are a pirate." }
$ gpt-cli pirate
> Arrrr
Ahoy, matey! What be bringing ye to these here waters? Be it treasure or adventure ye seek, we be sailing the high seas together. Ready yer map and compass, for we have a long voyage ahead!
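Custom assistants also work with the non-interactive flags described above; for example, combining the `pirate` assistant with `--prompt`:
gpt-cli pirate -p "Describe the weather today"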
If you are using other models compatible with the OpenAI Python SDK, you can configure them by modifying the `openai_base_url` setting in the config file or using the `OPENAI_BASE_URL` environment variable.
Example:
openai_base_url: https://your-custom-api-url.com/v1
Use the `oai-compat:` prefix for the model name to pass non-GPT model names to the API. For example, to chat with Llama3-70b on https://together.ai, use the following command:
OPENAI_API_KEY=$TOGETHER_API_KEY OPENAI_BASE_URL=https://api.together.xyz/v1 gpt-cli general --model oai-compat:meta-llama/Llama-3-70b-chat-hf
The prefix is stripped before sending the request to the API.
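The same mechanism works for a local OpenAI-compatible server such as LM Studio. A sketch, assuming the server is listening on LM Studio's default port 1234 and that the model name shown in LM Studio is used verbatim (local servers typically ignore the API key, so a placeholder value is fine):
OPENAI_API_KEY=placeholder OPENAI_BASE_URL=http://localhost:1234/v1 gpt-cli general --model oai-compat:<model_name_from_lm_studio>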
To use Claude, you should have an API key from Anthropic (currently there is a waitlist for API access). After getting the API key, you can add an environment variable
export ANTHROPIC_API_KEY=<your_key_here>
or a config line in `~/.config/gpt-cli/gpt.yml`:
anthropic_api_key: <your_key_here>
Now you should be able to run `gpt-cli` with `--model claude-3-(opus|sonnet|haiku)-<date>`.
gpt-cli --model claude-3-5-sonnet-20240620
To use Google Gemini, add an environment variable
export GOOGLE_API_KEY=<your_key_here>
or a config line in `~/.config/gpt-cli/gpt.yml`:
google_api_key: <your_key_here>
To use Cohere, add an environment variable
export COHERE_API_KEY=<your_key_here>
or a config line:
cohere_api_key: <your_key_here>
This directory contains two files, `bad_function.json` and `good_function.json`. The first contains the function `release_plague`; the experiment is about whether the LLM will call that function under any discoverable circumstances. The second contains the function `retrieve_password` and is just for control purposes, for checking whether the model will call a provided function at all.