Merge branch 'main' into undo-cli
oliverpalonkorp authored Sep 13, 2023
2 parents 548608c + 3871114 commit 5630989
Showing 11 changed files with 277 additions and 128 deletions.
5 changes: 4 additions & 1 deletion .gitignore
@@ -26,4 +26,7 @@ dist/
# Ignore misc directory
misc/

.vscode/
.vscode/

# Ignore litellm_uuid.txt
litellm_uuid.txt
28 changes: 23 additions & 5 deletions README.md
@@ -2,11 +2,11 @@

<p align="center">
<a href="https://discord.gg/6p3fD6rBVm">
<img alt="Discord" src="https://img.shields.io/discord/1146610656779440188?logo=discord&style=flat&logoColor=white">
<img alt="Discord" src="https://img.shields.io/discord/1146610656779440188?logo=discord&style=flat&logoColor=white"/>
</a>
<a href="README_JA.md"><img src="https://img.shields.io/badge/ドキュメント-日本語-white.svg" alt="JA doc"></a>
<a href="README_ZH.md"><img src="https://img.shields.io/badge/文档-中文版-white.svg" alt="ZH doc"></a>
<img src="https://img.shields.io/static/v1?label=license&message=MIT&color=white&style=flat" alt="License">
<a href="README_JA.md"><img src="https://img.shields.io/badge/ドキュメント-日本語-white.svg" alt="JA doc"/></a>
<a href="README_ZH.md"><img src="https://img.shields.io/badge/文档-中文版-white.svg" alt="ZH doc"/></a>
<img src="https://img.shields.io/static/v1?label=license&message=MIT&color=white&style=flat" alt="License"/>
<br><br>
<b>Let language models run code on your computer.</b><br>
An open-source, locally running implementation of OpenAI's Code Interpreter.<br>
@@ -218,9 +218,27 @@ You can activate debug mode by using its flag (`interpreter --debug`), or mid-chat:
```shell
$ interpreter
...
> %debug # <- Turns on debug mode
> %debug true <- Turns on debug mode

> %debug false <- Turns off debug mode
```

### Interactive Mode Commands

In interactive mode, you can use the following commands to enhance your experience:

**Available Commands:**

- `%debug [true/false]`: Toggle debug mode. Without arguments or with 'true', it enters debug mode. With 'false', it exits debug mode.
- `%reset`: Resets the current session.
- `%save_message [path]`: Saves messages to a specified JSON path. If no path is provided, it defaults to 'messages.json'.
- `%load_message [path]`: Loads messages from a specified JSON path. If no path is provided, it defaults to 'messages.json'.
- `%help`: Show the help message.

Feel free to try out these commands and let us know your feedback!
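
For example, a short session using a few of these commands might look like the following. This is a hypothetical transcript: `chat_backup.json` is just an illustrative file name, and the actual output will vary.

```shell
$ interpreter
...
> %save_message chat_backup.json <- Saves the conversation so far to chat_backup.json

> %reset <- Clears the current session

> %load_message chat_backup.json <- Restores the saved conversation

> %debug false <- Makes sure debug mode is off
```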

### Configuration with .env

Open Interpreter allows you to set default behaviors using a .env file. This provides a flexible way to configure the interpreter without changing command-line arguments every time.
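
Here is a minimal sketch of what such a file might contain. The variable names below are illustrative assumptions, not confirmed settings; check the project's documentation for the exact keys it reads.

```
# Hypothetical .env: these key names are assumptions, not confirmed settings
INTERPRETER_CLI_AUTO_RUN=False
INTERPRETER_CLI_DEBUG=False
```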
146 changes: 81 additions & 65 deletions README_JA.md

Large diffs are not rendered by default.

41 changes: 41 additions & 0 deletions SECURITY.md
@@ -0,0 +1,41 @@
# Open Interpreter Security Policy

We take security seriously. Responsible reporting and disclosure of security
vulnerabilities is important for the protection and privacy of our users. If you
discover any security vulnerabilities, please follow these guidelines.

Published security advisories are available on our [GitHub Security Advisories]
page.

To report a vulnerability, please draft a [new security advisory on GitHub]. Any
fields that you are unsure of or don't understand can be left at their default
values. The important part is that the vulnerability is reported. Once the
security advisory draft has been created, we will validate the vulnerability and
coordinate with you to fix it, release a patch, and responsibly disclose the
vulnerability to the public. Read GitHub's documentation on [privately reporting
a security vulnerability] for details.

Please do not report undisclosed vulnerabilities on public sites or forums,
including GitHub issues and pull requests. Reporting vulnerabilities to the
public could allow attackers to exploit vulnerable applications before we have
been able to release a patch and before applications have had time to install
the patch. Once we have released a patch and sufficient time has passed for
applications to install the patch, we will disclose the vulnerability to the
public, at which time you will be free to publish details of the vulnerability
on public sites and forums.

If you have a fix for a security vulnerability, please do not submit a GitHub
pull request. Instead, report the vulnerability as described in this policy.
Once we have verified the vulnerability, we can create a [temporary private
fork] to collaborate on a patch.

We appreciate your cooperation in helping keep our users safe by following this
policy.

[github security advisories]: https://github.com/KillianLucas/open-interpreter/security/advisories
[new security advisory on github]:
https://github.com/KillianLucas/open-interpreter/security/advisories/new
[privately reporting a security vulnerability]:
https://docs.github.com/en/code-security/security-advisories/guidance-on-reporting-and-writing/privately-reporting-a-security-vulnerability
[temporary private fork]:
https://docs.github.com/en/code-security/security-advisories/repository-security-advisories/collaborating-in-a-temporary-private-fork-to-resolve-a-repository-security-vulnerability
23 changes: 7 additions & 16 deletions docs/GPU.md
@@ -26,16 +26,12 @@ benefit from offloading some of the work to your GPU.

You may choose to install additional components if you like.

2. Once the CUDA Toolkit has finished installing, open a Command Prompt or
PowerShell window, and run the corresponding command. This ensures that the
2. Once the CUDA Toolkit has finished installing, open **x64 Native Tools Command
Prompt for VS 2022**, and run the following command. This ensures that the
`CUDA_PATH` environment variable is set.

```
# Command Prompt
echo %CUDA_PATH%
# PowerShell
$env:CUDA_PATH
```
If you don't get back something like this:
@@ -46,24 +42,19 @@ benefit from offloading some of the work to your GPU.
Restart your computer, then repeat this step.
3. Once you have verified that the `CUDA_PATH` environment variable is set, run
the corresponding commands for your shell. This will reinstall the
`llama-cpp-python` package with NVIDIA GPU support.
4. Once you have verified that the `CUDA_PATH` environment variable is set, run
the following commands. This will reinstall the `llama-cpp-python` package
with NVIDIA GPU support.
```
# Command Prompt
set FORCE_CMAKE=1 && set CMAKE_ARGS=-DLLAMA_CUBLAS=on
pip install llama-cpp-python --force-reinstall --upgrade --no-cache-dir -vv
# PowerShell
$env:FORCE_CMAKE=1; $env:CMAKE_ARGS='-DLLAMA_CUBLAS=on'
pip install llama-cpp-python --force-reinstall --upgrade --no-cache-dir -vv
```
The command should complete with no errors. If you receive an error, ask for
help on [the Discord server](https://discord.gg/6p3fD6rBVm).
4. Once `llama-cpp-python` has been reinstalled, you can quickly check whether
6. Once `llama-cpp-python` has been reinstalled, you can quickly check whether
GPU support has been installed and set up correctly by running the following
command.
@@ -86,7 +77,7 @@ benefit from offloading some of the work to your GPU.
False
```
5. Finally, run the following command to use Open Interpreter with a local
7. Finally, run the following command to use Open Interpreter with a local
language model with GPU support.
```
6 changes: 3 additions & 3 deletions docs/WINDOWS.md
@@ -30,10 +30,10 @@ To resolve this issue, perform the following steps.

- C++ CMake tools for Windows

3. Once installed, open the Start menu, search for **Developer Command Prompt
for VS 2022**, and open it.
3. Once installed, open the Start menu, search for **x64 Native Tools Command
Prompt for VS 2022**, and open it.

4. Run the following command.
5. Run the following command.

```
pip install llama-cpp-python --force-reinstall --upgrade --no-cache-dir
6 changes: 3 additions & 3 deletions interpreter/code_interpreter.py
@@ -73,7 +73,7 @@ def run_html(html_content):
"print_cmd": 'log "{}"'
},
"html": {
"open_subrocess": False,
"open_subprocess": False,
"run_function": run_html,
}
}
@@ -150,10 +150,10 @@ def run(self):
"""

# Should we keep a subprocess open? True by default
open_subrocess = language_map[self.language].get("open_subrocess", True)
open_subprocess = language_map[self.language].get("open_subprocess", True)

# Start the subprocess if it hasn't been started
if not self.proc and open_subrocess:
if not self.proc and open_subprocess:
try:
self.start_process()
except:
17 changes: 12 additions & 5 deletions interpreter/get_hf_llm.py
@@ -121,10 +121,15 @@ def get_hf_llm(repo_id, debug_mode, context_window):
if len(split_files) > 1:
# Download splits
for split_file in split_files:
# Do we already have a file split downloaded?
split_path = os.path.join(default_path, split_file)
if os.path.exists(split_path):
if not confirm_action(f"Split file {split_path} already exists. Download again?"):
continue
hf_hub_download(repo_id=repo_id, filename=split_file, local_dir=default_path, local_dir_use_symlinks=False)

# Combine and delete splits
actually_combine_files(selected_model, split_files)
actually_combine_files(default_path, selected_model, split_files)
else:
hf_hub_download(repo_id=repo_id, filename=selected_model, local_dir=default_path, local_dir_use_symlinks=False)

@@ -309,19 +314,21 @@ def group_and_combine_splits(models: List[Dict[str, Union[str, float]]]) -> List
return list(grouped_files.values())


def actually_combine_files(base_name: str, files: List[str]) -> None:
def actually_combine_files(default_path: str, base_name: str, files: List[str]) -> None:
"""
Combines files together and deletes the original split files.
:param base_name: The base name for the combined file.
:param files: List of files to be combined.
"""
files.sort()
with open(base_name, 'wb') as outfile:
base_path = os.path.join(default_path, base_name)
with open(base_path, 'wb') as outfile:
for file in files:
with open(file, 'rb') as infile:
file_path = os.path.join(default_path, file)
with open(file_path, 'rb') as infile:
outfile.write(infile.read())
os.remove(file)
os.remove(file_path)

def format_quality_choice(model, name_override = None) -> str:
"""
