This project implements a ChatGPT client with streaming support for the command-line interface (CLI).
- Features
- Installation
- Getting Started
- Configuration
- Development
- Reporting Issues and Contributing
- Uninstallation
- Useful Links
- Streaming mode: Real-time interaction with the GPT model.
- Query mode: Single input-output interactions with the GPT model.
- Interactive mode: A conversational, multi-turn experience with the model. Exit interactive mode by typing 'exit'.
- Context management: Seamless conversations with the GPT model by maintaining message history across CLI calls.
- Sliding window history: Automatically trims conversation history while maintaining context to stay within token limits.
- Custom context from local files: Provide a custom context for the GPT model to reference during the conversation by piping it in.
- Custom chat models: Use a custom chat model by specifying the model name with the --set-model flag. Ensure that the model exists in the OpenAI model list.
- Model listing: Get a list of available models by using the -l or --list-models flag.
- Viper integration: Robust configuration management.
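To give a quick feel for these modes before the detailed steps under Getting Started, the basic invocations look like this (the question text is just an illustration):
chatgpt what is the capital of the Netherlands   # query mode: a single question and answer
chatgpt --interactive                            # interactive mode: a multi-turn session; type 'exit' to quit
chatgpt --list-models                            # list the available models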
You can install chatgpt-cli using Homebrew:
brew tap kardolus/chatgpt-cli && brew install chatgpt-cli
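To update to a newer release later, a standard Homebrew upgrade applies:
brew upgrade chatgpt-cli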
For a quick and easy installation without compiling, you can directly download the pre-built binary for your operating system and architecture:
For macOS on Apple Silicon (arm64):
curl -L -o chatgpt https://github.com/kardolus/chatgpt-cli/releases/download/v1.1.0/chatgpt-darwin-arm64 && chmod +x chatgpt && sudo mv chatgpt /usr/local/bin/
For macOS on Intel (amd64):
curl -L -o chatgpt https://github.com/kardolus/chatgpt-cli/releases/download/v1.1.0/chatgpt-darwin-amd64 && chmod +x chatgpt && sudo mv chatgpt /usr/local/bin/
For Linux (amd64):
curl -L -o chatgpt https://github.com/kardolus/chatgpt-cli/releases/download/v1.1.0/chatgpt-linux-amd64 && chmod +x chatgpt && sudo mv chatgpt /usr/local/bin/
For Linux (arm64):
curl -L -o chatgpt https://github.com/kardolus/chatgpt-cli/releases/download/v1.1.0/chatgpt-linux-arm64 && chmod +x chatgpt && sudo mv chatgpt /usr/local/bin/
For Windows: download the binary from this link and add it to your PATH.
Choose the appropriate command for your system; it downloads the binary, makes it executable, and moves it to /usr/local/bin (or a directory on your %PATH% on Windows) for easy access.
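After installation, you can confirm the binary is on your PATH and working, for example:
which chatgpt    # should print /usr/local/bin/chatgpt if you used the commands above
chatgpt --help   # prints usage information and the available flags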
- Set the OPENAI_API_KEY environment variable to your ChatGPT secret key. To set the environment variable, you can add the following line to your shell profile (e.g., ~/.bashrc, ~/.zshrc, or ~/.bash_profile), replacing your_api_key with your actual key:
export OPENAI_API_KEY="your_api_key"
- To enable history tracking across CLI calls, create a ~/.chatgpt-cli directory using the command:
mkdir -p ~/.chatgpt-cli
With this directory in place, the CLI will automatically manage message history for seamless conversations with the GPT
model. The history acts as a sliding window, maintaining a maximum of 4096
tokens to ensure optimal performance and
interaction quality.
- Try it out:
chatgpt what is the capital of the Netherlands
- To start interactive mode, use the -i or --interactive flag:
chatgpt --interactive
- To use the pipe feature, create a text file containing some context. For example, create a file named context.txt with the following content:
Kya is a playful dog who loves swimming and playing fetch.
Then, use the pipe feature to provide this context to ChatGPT:
cat context.txt | chatgpt "What kind of toy would Kya enjoy?"
- To set a specific model, use the --set-model flag followed by the model name:
chatgpt --set-model gpt-3.5-turbo-0301
Remember to check that the model exists in the OpenAI model list before setting it.
- To list all available models, use the -l or --list-models flag:
chatgpt --list-models
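With the ~/.chatgpt-cli history directory in place, answers can build on earlier questions across separate CLI calls; for example (the follow-up question is illustrative):
chatgpt what is the capital of the Netherlands
chatgpt how many people live in that city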
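A typical model workflow is to list the available names first and then set one of them (the model shown here is just the one used elsewhere in this README):
chatgpt --list-models                   # inspect the available model names
chatgpt --set-model gpt-3.5-turbo-0301  # choose a name that appears in the list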
The ChatGPT CLI uses a two-level configuration system. The default configuration is read from the file utils/constants.go located within the package. These default values are:
model: gpt-3.5-turbo
max_tokens: 4096
url: https://api.openai.com
completions_path: /v1/chat/completions
models_path: /v1/models
These default settings can be overwritten by user-defined configuration options. The user configuration file is .chatgpt-cli/config.yaml and is expected to be in the user's home directory.
The user configuration file follows the same structure as the default configuration file. Here is an example of how to override the model and max_tokens values:
model: gpt-3.5-turbo-16k
max_tokens: 8192
In this example, the model is changed to gpt-3.5-turbo-16k, and max_tokens is set to 8192. Other options such as url, completions_path, and models_path can be adjusted in the same manner if needed.
Note: If the user configuration file is not found or cannot be read for any reason, the application will fall back to the default configuration.
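As a sketch, the override example above can also be created from the command line; url, completions_path, and models_path could be added in the same way if you want to change them:
mkdir -p ~/.chatgpt-cli
cat > ~/.chatgpt-cli/config.yaml <<'EOF'
model: gpt-3.5-turbo-16k
max_tokens: 8192
EOF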
As a more immediate and flexible alternative to changing the configuration file manually, the CLI offers command-line flags for overwriting specific configuration values. For instance, the model can be changed using the --model flag. This is particularly useful for temporary adjustments or testing different configurations.
chatgpt --model gpt-3.5-turbo-16k What are some fun things to do in Red Hook?
This command will temporarily overwrite the model value for the duration of the current command. We're currently working on adding similar flags for other configuration values, which will allow you to adjust most aspects of the configuration directly from the command line.
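In other words, the override does not persist; running the same question again without the flag uses the configured model:
chatgpt What are some fun things to do in Red Hook?   # without --model, the model from your configuration is used again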
To start developing, set the OPENAI_API_KEY environment variable to your ChatGPT secret key. Follow these steps for running tests and building the application:
- Run the tests using the following scripts:
For unit tests, run:
./scripts/unit.sh
For integration tests, run:
./scripts/integration.sh
For contract tests, run:
./scripts/contract.sh
To run all tests, use:
./scripts/all-tests.sh
- Build the app using the installation script:
./scripts/install.sh
- After a successful build, test the application with the following command:
./bin/chatgpt what type of dog is a Jack Russell?
- As mentioned before, to enable history tracking across CLI calls, create a ~/.chatgpt-cli directory using the command:
mkdir -p ~/.chatgpt-cli
With this directory in place, the CLI will automatically manage message history for seamless conversations with the GPT model. The history acts as a sliding window, maintaining a maximum of 4096 tokens to ensure optimal performance and interaction quality.
For more options, see:
./bin/chatgpt --help
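Putting these steps together, a typical development loop might look like this (a sketch using the scripts listed above):
export OPENAI_API_KEY="your_api_key"   # as noted above, set this before running the app
./scripts/unit.sh                      # quick feedback while iterating
./scripts/install.sh                   # build the binary
./bin/chatgpt --help                   # inspect the flags of your local build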
If you encounter any issues or have suggestions for improvements, please submit an issue on GitHub. We appreciate your feedback and contributions to help make this project better.
If for any reason you wish to uninstall the ChatGPT CLI application from your system, you can do so by following these steps:
If you installed the CLI using Homebrew, you can run:
brew uninstall chatgpt-cli
And to remove the tap:
brew untap kardolus/chatgpt-cli
If you installed the binary directly, follow these steps:
- Remove the binary:
sudo rm /usr/local/bin/chatgpt
- Optionally, if you wish to remove the history tracking directory, you can also delete the ~/.chatgpt-cli directory:
rm -rf ~/.chatgpt-cli
If the binary is installed elsewhere on your system (for example, on Windows):
- Navigate to the location of the chatgpt binary, which should be in your PATH.
- Delete the chatgpt binary.
- Optionally, if you wish to remove the history tracking, navigate to the ~/.chatgpt-cli directory (where ~ refers to your user's home directory) and delete it.
Please note that the history tracking directory ~/.chatgpt-cli only contains conversation history and no personal data. If you have any concerns about this, please feel free to delete this directory during uninstallation.
Thank you for using ChatGPT CLI!