Merge pull request OpenInterpreter#1074 from aj47/lm-studio-llamafile
Change docs as --local no longer uses LM Studio
KillianLucas authored Mar 14, 2024
2 parents a619575 + fc4bd57 commit 4c24c34
Showing 6 changed files with 50 additions and 32 deletions.
18 changes: 12 additions & 6 deletions README.md
@@ -202,15 +202,23 @@ interpreter.llm.model = "gpt-3.5-turbo"

#### Terminal

Open Interpreter uses [LM Studio](https://lmstudio.ai/) to connect to local language models (experimental).
Open Interpreter can use an OpenAI-compatible server to run models locally (LM Studio, jan.ai, ollama, etc.).

Simply run `interpreter` in local mode from the command line:
Simply run `interpreter` with the `api_base` URL of your inference server (for LM Studio it is `http://localhost:1234/v1` by default):

```shell
interpreter --api_base "http://localhost:1234/v1" --api_key "fake_key"
```
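
The same pattern should work with any other OpenAI-compatible server. As a hedged sketch, connecting to a local Ollama server might look like the following — the endpoint and the model identifier are assumptions, so adjust them to whatever your server actually exposes:

```shell
# Hypothetical example: Ollama's OpenAI-compatible endpoint is typically served at /v1 on port 11434.
# The model name (and its "openai/" prefix) is an assumption — use the identifier your server expects.
interpreter --api_base "http://localhost:11434/v1" --api_key "fake_key" --model "openai/llama3"
```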

Alternatively, you can use Llamafile without installing any third-party software just by running:

```shell
interpreter --local
```

**You will need to run LM Studio in the background.**
For a more detailed guide, check out [this video by Mike Bird](https://www.youtube.com/watch?v=CEs51hGWuGU?si=cN7f6QhfT4edfG5H).

**How to run LM Studio in the background.**

1. Download [https://lmstudio.ai/](https://lmstudio.ai/) then start it.
2. Select a model then click **↓ Download**.
@@ -219,13 +227,11 @@ interpreter --local

Once the server is running, you can begin your conversation with Open Interpreter.

(When you run the command `interpreter --local`, the steps above will be displayed.)

> **Note:** Local mode sets your `context_window` to 3000, and your `max_tokens` to 1000. If your model has different requirements, set these parameters manually (see below).
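
As a minimal sketch, overriding those defaults from the command line might look like this — it assumes your build exposes `--context_window` and `--max_tokens` flags alongside `--api_base`, so check `interpreter --help` if in doubt:

```shell
# Hedged example: manually raise the context window and cap the output tokens.
# The flag names are assumptions based on the settings described above.
interpreter --api_base "http://localhost:1234/v1" --api_key "fake_key" --context_window 8000 --max_tokens 1000
```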
#### Python

Our Python package gives you more control over each setting. To replicate `--local` and connect to LM Studio, use these settings:
Our Python package gives you more control over each setting. To connect to LM Studio, use these settings:

```python
from interpreter import interpreter
Expand Down
20 changes: 13 additions & 7 deletions docs/README_JA.md
@@ -199,15 +199,23 @@ interpreter.llm.model = "gpt-3.5-turbo"

### ローカルのモデルを実行する

Open Interpreter は、ローカルの言語モデルへの接続に [LM Studio](https://lmstudio.ai/) を実験的に使用しています。
Open Interpreter は、OpenAI 互換サーバー (LM Studio、jan.ai、ollama など) を使用してモデルをローカルで実行できます。

コマンドラインから `interpreter` をローカルモードで実行するだけです:
推論サーバーの api_base URL を指定して `interpreter` を実行するだけです (LM Studio の場合、デフォルトでは `http://localhost:1234/v1` です):

```shell
interpreter --local
```shell
interpreter --api_base "http://localhost:1234/v1" --api_key "fake_key"
```

あるいは、サードパーティのソフトウェアをインストールせずに、次のコマンドを実行するだけで Llamafile を使用することもできます:

```shell
interpreter --local
```

**バックグラウンドで LM Studio を実行する必要があります。**
より詳細なガイドについては、[Mike Bird によるこのビデオ](https://www.youtube.com/watch?v=CEs51hGWuGU?si=cN7f6QhfT4edfG5H) をご覧ください。

**LM Studioをバックグラウンドで使用する方法**

1. [https://lmstudio.ai/](https://lmstudio.ai/)からダウンロードして起動します。
2. モデルを選択し、**↓ ダウンロード** をクリックします。
@@ -216,8 +224,6 @@ interpreter --local

サーバーが稼働を開始したら、Open Interpreter との会話を開始できます。

(`interpreter --local` コマンドを実行した際にも、上記の手順が表示されます。)

> **注意:** ローカルモードでは、`context_window` を 3000 に、`max_tokens` を 1000 に設定します。モデルによって異なる要件がある場合、これらのパラメータを手動で設定してください(下記参照)。
#### コンテキストウィンドウ、最大トークン数
Expand Down
19 changes: 13 additions & 6 deletions docs/README_VN.md
@@ -195,15 +195,23 @@ interpreter.llm.model = "gpt-3.5-turbo"

### Chạy Open Interpreter trên máy cục bộ

Open Interpreter sử dụng [LM Studio](https://lmstudio.ai/) để kết nối tới các mô hình cục bộ (thử nghiệm).
Open Interpreter có thể sử dụng máy chủ tương thích với OpenAI để chạy các mô hình cục bộ. (LM Studio, jan.ai, ollama, v.v.)

Cơ bản chạy `interpreter` trong chế độ cục bộ từ command line:
Chỉ cần chạy `interpreter` với URL api_base của máy chủ suy luận của bạn (đối với LM Studio, nó là `http://localhost:1234/v1` theo mặc định):

```shell
interpreter --local
```shell
interpreter --api_base "http://localhost:1234/v1" --api_key "fake_key"
```

**Bạn sẽ cần chạy LM Studio trong nền.**
Ngoài ra, bạn có thể sử dụng Llamafile mà không cần cài đặt bất kỳ phần mềm bên thứ ba nào chỉ bằng cách chạy:

```shell
interpreter --local
```

Để biết hướng dẫn chi tiết hơn, hãy xem [video này của Mike Bird](https://www.youtube.com/watch?v=CEs51hGWuGU?si=cN7f6QhfT4edfG5H).

**Để chạy LM Studio ở chế độ nền.**

1. Tải [https://lmstudio.ai/](https://lmstudio.ai/) và khởi động.
2. Chọn một mô hình rồi nhấn **↓ Download**.
@@ -212,7 +220,6 @@ interpreter --local

Một khi server chạy, bạn có thể bắt đầu trò chuyện với Open Interpreter.

(Khi bạn chạy lệnh `interpreter --local`, các bước ở dưới sẽ được hiện ra.)

> **Lưu ý:** Chế độ cục bộ chỉnh `context_window` của bạn tới 3000, và `max_tokens` của bạn tới 600. Nếu mô hình của bạn có các yêu cầu khác, thì hãy chỉnh các tham số thủ công (xem bên dưới).
Expand Down
16 changes: 11 additions & 5 deletions docs/language-models/local-models/lm-studio.mdx
@@ -2,17 +2,23 @@
title: LM Studio
---

# Terminal
Open Interpreter can use an OpenAI-compatible server to run models locally (LM Studio, jan.ai, ollama, etc.).

By default, Open Interpreter's terminal interface uses [LM Studio](https://lmstudio.ai/) to connect to local language models.
Simply run `interpreter` with the `api_base` URL of your inference server (for LM Studio it is `http://localhost:1234/v1` by default):

Simply run `interpreter` in local mode from the command line:
```shell
interpreter --api_base "http://localhost:1234/v1" --api_key "fake_key"
```

Alternatively, you can use Llamafile without installing any third-party software just by running:

```shell
interpreter --local
```

**You will need to run LM Studio in the background.**
For a more detailed guide, check out [this video by Mike Bird](https://www.youtube.com/watch?v=CEs51hGWuGU?si=cN7f6QhfT4edfG5H).

**How to run LM Studio in the background.**

1. Download [https://lmstudio.ai/](https://lmstudio.ai/) then start it.
2. Select a model then click **↓ Download**.
@@ -31,7 +37,7 @@ Compared to the terminal interface, our Python package gives you more granular c

You can point `interpreter.llm.api_base` at any OpenAI compatible server (including one running locally).

For example, to replicate [`--local` mode](/language-model-setup/local-models/overview) and connect to [LM Studio](https://lmstudio.ai/), use these settings:
For example, to connect to [LM Studio](https://lmstudio.ai/), use these settings:

```python
from interpreter import interpreter
Expand Down
7 changes: 0 additions & 7 deletions interpreter/core/respond.py
@@ -115,13 +115,6 @@ def respond(interpreter):
raise Exception(
"Error occurred. "
+ str(e)
+ """
If you're running `interpreter --local`, please make sure LM Studio's local server is running.
If LM Studio's local server is running, please try a language model with a different architecture.
"""
)
else:
raise
Expand Down
2 changes: 1 addition & 1 deletion interpreter/terminal_interface/start_terminal_interface.py
@@ -194,7 +194,7 @@ def start_terminal_interface(interpreter):
{
"name": "local",
"nickname": "l",
"help_text": "experimentally run the LLM locally via LM Studio (this changes many more settings than `--offline`)",
"help_text": "experimentally run the LLM locally via Llamafile (this changes many more settings than `--offline`)",
"type": bool,
},
{
Expand Down
