Commit

Merge pull request jucasoliveira#69 from petrgazarov/enable-engine-and-temperature-options

Enable --engine and --temperature options; fix docs
jucasoliveira authored Jul 10, 2023
2 parents 95ccbd0 + 08617cd commit d566ee2
Showing 5 changed files with 44 additions and 92 deletions.
16 changes: 3 additions & 13 deletions CONTRIBUTING.md
@@ -1,3 +1,5 @@
 # CONTRIBUTING
 
+## 😎 Contribute your first spec in < 3 minutes
+
 Use the steps below:
@@ -23,18 +25,6 @@ Use the steps below:

 ```
 
-3. On your terminal, type `npm run chat`. Your terminalGPT will start. 😊
+3. On your terminal, type `npm run tgpt -- chat`. Your terminalGPT will start. 😊
 
-<br>
-
-## Extra / Remove from your computer
-
-'npx terminalgpt' doesn't install the terminalgpt package, instead it downloads the package to your pc and directly executes it from the cache.
-
-You can find the package using
-
-`ls ~/.npm/_npx/*/node_modules`
-
-To delete the package, you can use
-
-`rm -r ~/.npm/_npx/*/node_modules/terminalgpt`
89 changes: 27 additions & 62 deletions README.md
@@ -14,27 +14,25 @@
 Get GPT-like chatGPT on your terminal
 </p>
 
-> Note this doesn't use OpenAI [ChatGPT](https://openai.com/blog/chatgpt/), it uses [text-davinci-003](https://platform.openai.com/docs/models/davinci) model (by default)
 ![Screenshot 2023-01-05 at 09 24 10](https://user-images.githubusercontent.com/11979969/210746185-69722c94-b073-4863-82bc-b662236c8305.png)

<p align="center">
<a href="https://www.producthunt.com/posts/terminalgpt?utm_source=badge-featured&utm_medium=badge&utm_souce=badge-terminalgpt" target="_blank"><img src="https://api.producthunt.com/widgets/embed-image/v1/featured.svg?post_id=373888&theme=light" alt="terminalGPT - Use&#0032;OpenAi&#0032;like&#0032;chatGPT&#0044;&#0032;on&#0032;your&#0032;terminal | Product Hunt" style="width: 250px; height: 54px;" width="250" height="54" /></a>

</p>

-# Pre-requisite
+## Prerequisites

You'll need to have your own `OpenAi` apikey to operate this package.

-1. Go to <https://beta.openai.com>
-2. Select you profile menu and go to `View API Keys`
+1. Go to <https://platform.openai.com>
+2. Select your profile menu and go to `View API Keys`
3. Select `+ Create new secret key`
4. Copy generated key

-# Get Started
+# Installation
 
-# Using tgpt
+Install terminalGPT globally:

```bash
npm -g install terminalgpt
@@ -46,87 +44,50 @@ or
yarn global add terminalgpt
```

-## Run
+## Start chat

```bash
tgpt chat
```

-ps.: If it is your first time running it, it will ask for open AI key , `paste generated key from pre-requisite steps`
+PS: If it is your first time running it, it will ask for open AI key, **paste generated key from pre-requisite steps**.

+## Options
 
-## Changing engine and temperature
+### Change engine and temperature

```bash
-tgpt chat --engine "text-davinci-002" --temperature 0.7
+tgpt chat --engine "gpt-4" --temperature 0.7
```

-### Changing api key
+Note this library uses [Chat Completions API](https://platform.openai.com/docs/api-reference/chat).
+The `engine` parameter is the same as the `model` parameter in the API. The default value is `gpt-3.5-turbo`.

-It you are not satisfy or added a wrong api key , run
+### Use markdown

```bash
-tgpt delete
+tgpt chat --markdown
```

-# Using with npx
-
-```bash
-npx terminalgpt
-```
+## Change or delete api key

-## Run
+It you are not satisfied or added a wrong api key, run

```bash
-npx terminalgpt chat
+tgpt delete
```

-ps.: If it is your first time running it, it will ask for open AI key , `paste generated key from pre-requisite steps`
-
-### Changing engine and temperature
+## Using with npx

```bash
-npx terminalgpt chat --engine "text-davinci-002" --temperature 0.7
+npx terminalgpt
```

-### Changing api key
-
-If you are not satisfied or entered a wrong api key, run
-
-```
-npx terminalgpt delete
-```
+```bash
+npx terminalgpt <command>
+```

-## 😎 Contribute your first spec in < 3 minutes
-
-Use the steps below:
-
-<br/>
-
-**Steps**
-
-1. Click [here](https://github.com/jucasoliveira/terminalGPT/fork) to fork this repo.
-
-2. Clone your forked repo and create an example spec
-
-```bash
-# Replace `YOUR_GITHUB_USERNAME` with your own github username
-git clone https://github.com/YOUR_GITHUB_USERNAME/terminalGPT.git terminalGPT
-cd terminalGPT
-
-# Add jucasoliveira/terminalGPT as a remote
-git remote add upstream https://github.com/jucasoliveira/terminalGPT.git
-
-# Install packages
-npm install
-```
-
-3. On your terminal, type `npm run chat`. Your terminalGPT will start. 😊
-
-<br>

## Extra / Remove from your computer

-`npx terminalgpt` doesn't install the terminalgpt package, instead it downloads the package to your pc and directly executes it from the cache.
+Note `npx terminalgpt` doesn't install the terminalgpt package, instead it downloads the package to your computer and directly executes it from the cache.

You can find the package using

@@ -136,6 +97,10 @@ To delete the package, you can use

`rm -r ~/.npm/_npx/*/node_modules/terminalgpt`

+## Contributing
+
+Refer to CONTRIBUTING.md 😎

## ✨ Contributors

<a href="https://github.com/jucasoliveira/terminalGPT/graphs/contributors">
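The option-to-parameter mapping the updated README documents (`--engine` is passed through as the API's `model` field, defaulting to `gpt-3.5-turbo`) can be sketched as a small helper. `buildRequestParams` is an illustrative name, not a function in terminalGPT:

```javascript
// Sketch: resolve CLI options into Chat Completions request parameters.
// Mirrors the behavior the README documents; the function name is hypothetical.
function buildRequestParams(opts) {
  return {
    // --engine maps onto the API's `model` parameter
    model: opts.engine || "gpt-3.5-turbo",
    // CLI option values arrive as strings, so coerce to a number;
    // a missing temperature falls back to the API default of 1
    temperature: opts.temperature ? Number(opts.temperature) : 1,
  };
}
```

Called as `buildRequestParams({ engine: "gpt-4", temperature: "0.7" })`, this yields `{ model: "gpt-4", temperature: 0.7 }`; with no options, the documented defaults apply.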
7 changes: 4 additions & 3 deletions bin/gpt.js
@@ -4,7 +4,7 @@ const { loadWithRocketGradient } = require("./gradient");
const { getContext, addContext } = require("./context");


-const generateCompletion = async (apiKey, prompt) => {
+const generateCompletion = async (apiKey, prompt, opts) => {
try {

const configuration = new Configuration({
@@ -18,8 +18,9 @@ const generateCompletion = async (apiKey, prompt) => {
addContext({"role": "system", "content": "Read the context, when returning the answer ,always wrapping block of code exactly within triple backticks"});

const request = await openai.createChatCompletion({
-    model:"gpt-3.5-turbo",
-    messages:getContext(),
+    model: opts.engine || "gpt-3.5-turbo",
+    messages: getContext(),
+    temperature: opts.temperature ? Number(opts.temperature) : 1,
})
.then((res) => {
addContext(res.data.choices[0].message);
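`generateCompletion` above relies on `getContext`/`addContext` from `./context` to accumulate the conversation it sends as `messages`. A minimal sketch of such a store, assuming the simplest possible shape (the real module's internals are not shown in this diff and may differ):

```javascript
// Minimal conversation-context store in the spirit of bin/context.js.
// Illustrative only.
const messages = [];

// Append one chat message ({ role, content }) to the running history.
function addContext(message) {
  messages.push(message);
}

// Return the accumulated history, suitable for the `messages` field
// of openai.createChatCompletion.
function getContext() {
  return messages;
}

addContext({ role: "system", content: "Wrap blocks of code in triple backticks." });
addContext({ role: "user", content: "Hello" });
```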
9 changes: 2 additions & 7 deletions bin/index.js
@@ -11,13 +11,8 @@ const { deleteApiKey } = require("./encrypt");

commander
.command("chat")
-  .option("-e, --engine <engine>", "GPT-3 model to use")
+  .option("-e, --engine <engine>", "GPT model to use")
   .option("-t, --temperature <temperature>", "Response temperature")
-  .option(
-    "-f,--finetunning <finetunning>",
-    "Opt in to pretrain the model with a prompt"
-  )
-  .option("-l,--limit <limit>", "The limit of prompts to train the model with")
.option("-m,--markdown", "Show markdown in the terminal")
.usage(`"<project-directory>" [options]`)
.action(async (opts) => {
@@ -44,7 +39,7 @@ commander
case "clear":
return process.stdout.write("\x1Bc");
default:
-        generateResponse(apiKey, prompt, response, opts.markdown);
+        generateResponse(apiKey, prompt, response, opts);
return;
}
};
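The options registered above are parsed by commander; a dependency-free sketch of the same flag handling, for illustration only (the real CLI does not hand-roll this):

```javascript
// Dependency-free sketch of how the chat flags could be parsed from argv.
// The real CLI uses commander; this only illustrates the option handling.
function parseChatOptions(argv) {
  const opts = {};
  for (let i = 0; i < argv.length; i++) {
    switch (argv[i]) {
      case "-e":
      case "--engine":
        opts.engine = argv[++i]; // GPT model to use
        break;
      case "-t":
      case "--temperature":
        opts.temperature = argv[++i]; // response temperature, kept as a string
        break;
      case "-m":
      case "--markdown":
        opts.markdown = true; // boolean flag, takes no value
        break;
    }
  }
  return opts;
}
```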
15 changes: 8 additions & 7 deletions bin/utils.js
@@ -56,11 +56,12 @@ const checkBlockOfCode = async (text, prompt) => {
}
};

-const generateResponse = async (apiKey, prompt, response, isShowMarkdown = false) => {
+const generateResponse = async (apiKey, prompt, response, opts) => {
try {
const request = await generateCompletion(
apiKey,
-      response.value
+      response.value,
+      opts
);

if (request == undefined || !request?.content) {
@@ -70,17 +71,17 @@ const generateResponse = async (apiKey, prompt, response, isShowMarkdown = false
// map all choices to text
const getText = [request.content];

-    console.log(`${chalk.cyan("GPT-3: ")}`);
+    console.log(`${chalk.cyan("GPT: ")}`);

-    if (isShowMarkdown) {
+    if (opts.isShowMarkdown) {
console.log(marked.parse(getText[0]))
checkBlockOfCode(getText[0], prompt);
} else {
// console log each character of the text with a delay and then call prompt when it finished
let i = 0;
const interval = setInterval(() => {
-        if (i < getText[0].length) {
-          process.stdout.write(marked.parse(getText[0][i]));
+        if (i < getText.length) {
+          process.stdout.write(marked.parse(getText[i]));
i++;
} else {
clearInterval(interval);
@@ -109,7 +110,7 @@
case "yes":
default:
// call the function again
-      generateResponse(apiKey, prompt, response);
+      generateResponse(apiKey, prompt, response, opts);
break;
}
}
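The interval change in bin/utils.js is worth spelling out: the old code stepped through the characters of `getText[0]`, while the new code steps through the elements of the `getText` array, so a single-element array now prints the whole parsed response in one tick. A sketch of the corrected indexing, with the timer and `marked` stripped out:

```javascript
// Sketch of the fixed indexing in bin/utils.js: iterate over the
// getText array (one chunk per tick), not over characters of getText[0].
function collectChunks(getText) {
  const out = [];
  let i = 0;
  while (i < getText.length) { // was: i < getText[0].length
    out.push(getText[i]);      // was: getText[0][i], a single character
    i++;
  }
  return out;
}
```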
