
Merge pull request OpenInterpreter#1089 from tyfiero/Interactive-local-mode

Interactive local mode. Fixes errors from 0.2.1 change
KillianLucas authored Mar 18, 2024
2 parents 553a588 + fe2355c commit 6d12384
Showing 8 changed files with 374 additions and 70 deletions.
8 changes: 4 additions & 4 deletions docs/language-models/local-models/janai.mdx
@@ -14,19 +14,19 @@ To run Open Interpreter with Jan.ai, follow these steps:

4. Click the 'Advanced' button under the GENERAL section, and toggle on the "Enable API Server" option. This will start a local server that you can use to interact with your model.

5. Now we fire up Open Interpreter with this custom model. To do so, run this command, but replace `<model_name>` with the name of the model you downloaded:
5. Now we fire up Open Interpreter with this custom model. Either run `interpreter --local` in the terminal to set it up interactively, or run the following command, replacing `<model_id>` with the ID of the model you downloaded:

<CodeGroup>

```bash Terminal
interpreter --api_base http://localhost:1337/v1 --model <model_name>
interpreter --api_base http://localhost:1337/v1 --model <model_id>
```

```python Python
from interpreter import interpreter

interpreter.offline = True # Disables online features like Open Procedures
interpreter.llm.model = "<model-name>"
interpreter.llm.model = "<model_id>"
interpreter.llm.api_base = "http://localhost:1337/v1"

interpreter.chat()
@@ -39,7 +39,7 @@ If your model can handle a longer context window than the default 3000, you can
<CodeGroup>

```bash Terminal
interpreter --api_base http://localhost:1337/v1 --model <model_name> --context_window 5000
interpreter --api_base http://localhost:1337/v1 --model <model_id> --context_window 5000
```

```python Python
6 changes: 4 additions & 2 deletions docs/language-models/local-models/llamafile.mdx
@@ -2,7 +2,9 @@
title: LlamaFile
---

To use LlamaFile with Open Interpreter, you'll need to download the model and start the server by running the file in the terminal. You can do this with the following commands:
The easiest way to get started with local models in Open Interpreter is to run `interpreter --local` in the terminal, select LlamaFile, then go through the interactive setup process. This will download the model and start the server for you. If you choose to do it manually, you can follow the instructions below.

To use LlamaFile manually with Open Interpreter, you'll need to download the model and start the server by running the file in the terminal. You can do this with the following commands:

```bash
# Download Mixtral
@@ -22,4 +24,4 @@ chmod +x mixtral-8x7b-instruct-v0.1.Q5_K_M-server.llamafile
interpreter --api_base https://localhost:8080/v1
```

Please note that if you are using a Mac with Apple Silicon, you'll need to have Xcode installed.
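
If you prefer to drive the same server from Python, here is a minimal sketch that mirrors the terminal command above — it assumes the llamafile server started earlier is listening on localhost port 8080 over plain HTTP:

```python
from interpreter import interpreter

interpreter.offline = True  # Disables online features like Open Procedures

# Point Open Interpreter at the llamafile server started above.
# Assumption: the server is listening on localhost:8080 over plain HTTP.
interpreter.llm.api_base = "http://localhost:8080/v1"

interpreter.chat()
```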
8 changes: 6 additions & 2 deletions docs/language-models/local-models/lm-studio.mdx
@@ -27,9 +27,13 @@ for a more detailed guide check out [this video by Mike Bird](https://www.youtub

Once the server is running, you can begin your conversation with Open Interpreter.

(When you run the command `interpreter --local`, the steps above will be displayed.)
(When you run the command `interpreter --local` and select LM Studio, these steps will be displayed.)

<Info>Local mode sets your `context_window` to 3000, and your `max_tokens` to 1000. If your model has different requirements, [set these parameters manually.](/settings#language-model)</Info>
<Info>
Local mode sets your `context_window` to 3000, and your `max_tokens` to 1000.
If your model has different requirements, [set these parameters
manually.](/settings#language-model)
</Info>
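
As a rough sketch, overriding those two defaults from Python looks like this — the localhost:1234 address is an assumption (LM Studio's usual default), and the Python section below covers the full setup:

```python
from interpreter import interpreter

interpreter.offline = True  # Disables online features like Open Procedures

interpreter.llm.api_base = "http://localhost:1234/v1"  # assumption: LM Studio's default server address
interpreter.llm.context_window = 8000  # raise only if your model supports a larger window
interpreter.llm.max_tokens = 2000      # completion limit; keep it well under the context window

interpreter.chat()
```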

# Python

2 changes: 1 addition & 1 deletion docs/language-models/local-models/ollama.mdx
@@ -16,7 +16,7 @@ To run Ollama with Open interpreter:
ollama run <model-name>
```

4. It will likely take a while to download, but once it does, we are ready to use it with Open Interpreter.
4. It will likely take a while to download, but once it does, we are ready to use it with Open Interpreter. You can either run `interpreter --local` to set it up interactively in the terminal, or do it manually:

<CodeGroup>

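As a rough sketch, the manual route in Python looks something like this — the `ollama/<model-name>` identifier and the localhost:11434 endpoint are assumptions based on LiteLLM's Ollama support:

```python
from interpreter import interpreter

interpreter.offline = True  # Disables online features like Open Procedures

interpreter.llm.model = "ollama/<model-name>"        # assumption: LiteLLM-style ollama/<model-name> naming
interpreter.llm.api_base = "http://localhost:11434"  # assumption: Ollama's default local endpoint

interpreter.chat()
```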
18 changes: 13 additions & 5 deletions docs/settings/all-settings.mdx
@@ -4,11 +4,7 @@ title: All Settings

<CardGroup cols={3}>

<Card
title="Language Model Settings"
icon="microchip"
href="#language-model"
>
<Card title="Language Model Settings" icon="microchip" href="#language-model">
Set your `model`, `api_key`, `temperature`, etc.
</Card>

@@ -304,6 +300,18 @@ interpreter --version

</CodeGroup>

### Open Local Models Directory

Opens the models directory. All downloaded Llamafiles are saved here.

<CodeGroup>

```bash Terminal
interpreter --local_models
```

</CodeGroup>

### Open Profiles Directory

Opens the profiles directory. New yaml profile files can be added to this directory.
