simplify readme (All-Hands-AI#366)
* simplify readme

* Update config.toml.template

* Update vite.config.ts (All-Hands-AI#372)

* Update vite.config.ts

* Update frontend/vite.config.ts

---------

Co-authored-by: Robert Brennan <[email protected]>

* remove old langchains infra

* remove refs to OPENAI_API_KEY

* simplify opendevin readme

---------

Co-authored-by: Engel Nyst <[email protected]>
rbren and enyst authored Mar 30, 2024
1 parent 11ed011 commit 6bd566d
Showing 7 changed files with 18 additions and 100 deletions.
33 changes: 6 additions & 27 deletions README.md
````diff
@@ -23,26 +23,18 @@ OpenDevin is still a work in progress. But you can run the alpha version to see
 * [NodeJS](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) >= 14.8
 
 ### Installation
-First, make sure Docker is running:
-```bash
-docker ps # this should exit successfully
-```
-
-Then pull our latest image [here](https://github.com/opendevin/OpenDevin/pkgs/container/sandbox)
+First, pull our latest sandbox image [here](https://github.com/opendevin/OpenDevin/pkgs/container/sandbox)
 ```bash
 docker pull ghcr.io/opendevin/sandbox
 ```
 
-Then copy `config.toml.template` to `config.toml`. Add an API key to `config.toml`.
-(See below for how to use different models.)
+Then copy `config.toml.template` to `config.toml`. Add an OpenAI API key to `config.toml`,
+or see below for how to use different models.
 ```toml
-OPENAI_API_KEY="..."
-WORKSPACE_DIR="..."
+LLM_API_KEY="sk-..."
 ```
 
-Next, start the backend.
-We manage python packages and the virtual environment with `pipenv`.
-Make sure you have python >= 3.10.
+Next, start the backend:
 ```bash
 python -m pip install pipenv
 python -m pipenv install -v
@@ -56,6 +48,7 @@ cd frontend
 npm install
 npm start
 ```
+You'll see OpenDevin running at localhost:3001
 
 ### Picking a Model
 We use LiteLLM, so you can run OpenDevin with any foundation model, including OpenAI, Claude, and Gemini.
@@ -79,20 +72,6 @@ And you can customize which embeddings are used for the vector database storage:
 LLM_EMBEDDING_MODEL="llama2" # can be "llama2", "openai", "azureopenai", or "local"
 ```
 
-### Running the app
-You should be able to run the backend now
-```bash
-uvicorn opendevin.server.listen:app --port 3000
-```
-Then in a second terminal:
-```bash
-cd frontend
-npm install
-npm run start -- --port 3001
-```
-
-You'll see OpenDevin running at localhost:3001
-
 ### Running on the Command Line
 You can run OpenDevin from your command line:
 ```bash
````
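Taken together, these hunks reduce the README's setup flow to roughly the following shell session. This is a sketch for orientation, not an official script: the backend start command is assumed from the removed "Running the app" section (`uvicorn opendevin.server.listen:app --port 3000`), since the corresponding lines are collapsed out of this diff view.

```bash
# Sketch of the simplified setup flow after this commit (assumptions noted below).
docker pull ghcr.io/opendevin/sandbox        # sandbox image, per the first hunk

cp config.toml.template config.toml          # then set LLM_API_KEY="sk-..." in it

python -m pip install pipenv                 # backend deps, per the diff context
python -m pipenv install -v
# Assumed backend entrypoint, taken from the removed "Running the app" section:
python -m pipenv run uvicorn opendevin.server.listen:app --port 3000
```

In a second terminal, the frontend steps in the second hunk apply unchanged (`cd frontend && npm install && npm start`), after which OpenDevin is served at localhost:3001.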
20 changes: 0 additions & 20 deletions agenthub/langchains_agent/Dockerfile

This file was deleted.

19 changes: 0 additions & 19 deletions agenthub/langchains_agent/build-and-run.sh

This file was deleted.

2 changes: 1 addition & 1 deletion config.toml.template
````diff
@@ -1,4 +1,4 @@
 # This is a template. Run `cp config.toml.template config.toml` to use it.
 
-OPENAI_API_KEY="<YOUR OPENAI API KEY>"
+LLM_API_KEY="<YOUR OPENAI API KEY>"
 WORKSPACE_DIR="./workspace"
````
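Since the template is only two fields after this rename, the resulting `config.toml` can be produced directly; a minimal sketch, with a placeholder key value:

```bash
# Write a minimal config.toml matching the renamed template (placeholder key).
cat > config.toml <<'EOF'
LLM_API_KEY="sk-..."
WORKSPACE_DIR="./workspace"
EOF
```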
1 change: 0 additions & 1 deletion frontend/vite.config.ts
````diff
@@ -7,7 +7,6 @@ export default defineConfig({
   base: "",
   plugins: [react(), viteTsconfigPaths()],
   server: {
-    // this sets a default port to 3000
     port: 3001,
   },
 });
````
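This pairs with the README change above: the explicit `npm run start -- --port 3001` invocation was dropped because the port is pinned by `server.port` in the Vite config (already 3001; this hunk only removes a stale comment referring to port 3000), so a plain start suffices (sketch):

```bash
# The dev server binds to 3001 via server.port in vite.config.ts above.
cd frontend
npm install
npm start
```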
39 changes: 9 additions & 30 deletions opendevin/README.md
````diff
@@ -2,9 +2,16 @@
 
 This is a Python package that contains all the shared abstraction (e.g., Agent) and components (e.g., sandbox, web browser, search API, selenium).
 
-## Sandbox component
+See the [main README](../README.md) for instructions on how to run OpenDevin from the command line.
 
-Run the docker-based sandbox interactive:
+## Sandbox Image
+```bash
+docker build -f opendevin/sandbox/Dockerfile -t opendevin/sandbox:v0.1 .
+```
+
+## Sandbox Runner
+
+Run the docker-based interactive sandbox:
 
 ```bash
 mkdir workspace
@@ -17,31 +24,3 @@ Example screenshot:
 
 <img width="868" alt="image" src="https://github.com/OpenDevin/OpenDevin/assets/38853559/8dedcdee-437a-4469-870f-be29ca2b7c32">
 
-
-## How to run
-
-1. Build the sandbox image local. If you want to use specific image tags, please also fix the variable in code, in code default image tag is `latest`.
-```bash
-docker build -f opendevin/sandbox/Dockerfile -t opendevin/sandbox:v0.1 .
-```
-
-Or you can pull the latest image [here](https://github.com/opendevin/OpenDevin/pkgs/container/sandbox):
-```bash
-docker pull ghcr.io/opendevin/sandbox
-```
-
-2. Set the `OPENAI_API_KEY`, please find more details [here](https://help.openai.com/en/articles/5112595-best-practices-for-api-key-safety). Also, choose the model you want. Default is `gpt-4-0125-preview`
-```bash
-export OPENAI_API_KEY=xxxxxxx
-```
-
-3. Install the requirement package.
-```bash
-pip install -r requirements.txt
-```
-If you still meet problem like `ModuleNotFoundError: No module named 'agenthub'`, try to add the `opendevin` root path into `PATH` env.
-
-4. Run following cmd to start.
-```bash
-PYTHONPATH=`pwd` python ./opendevin/main.py -d ./workspace -t "write a bash script that prints hello world"
-```
````
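For reference, the build command kept above and the invocation from the removed "How to run" steps combine into roughly the following session; treat it as a sketch, since the main README is now the canonical source for CLI usage (and the removed `OPENAI_API_KEY` export is superseded by `LLM_API_KEY` in `config.toml`):

```bash
# Sketch: build the sandbox image, then run the CLI task from the removed step 4.
docker build -f opendevin/sandbox/Dockerfile -t opendevin/sandbox:v0.1 .
mkdir -p workspace
PYTHONPATH=$(pwd) python ./opendevin/main.py -d ./workspace \
  -t "write a bash script that prints hello world"
```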
4 changes: 2 additions & 2 deletions opendevin/server/README.md
````diff
@@ -7,7 +7,7 @@ This is a WebSocket server that executes tasks using an agent.
 Create a `.env` file with the contents
 
 ```sh
-OPENAI_API_KEY=<YOUR OPENAI API KEY>
+LLM_API_KEY=<YOUR OPENAI API KEY>
 ```
 
 Install requirements:
@@ -36,7 +36,7 @@ websocat ws://127.0.0.1:3000/ws
 ## Supported Environment Variables
 
 ```sh
-OPENAI_API_KEY=sk-... # Your OpenAI API Key
+LLM_API_KEY=sk-... # Your OpenAI API Key
 LLM_MODEL=gpt-4-0125-preview # Default model for the agent to use
 WORKSPACE_DIR=/path/to/your/workspace # Default path to model's workspace
 ```
````
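Putting the renamed variables to work, a start-and-connect session looks roughly like this; the `uvicorn` entrypoint is assumed (it matches the command in the main README's removed "Running the app" section), while the `websocat` check comes from this diff's own context line:

```bash
# Sketch: export the renamed variables, start the server, and connect.
export LLM_API_KEY=sk-...                      # placeholder key
export LLM_MODEL=gpt-4-0125-preview
export WORKSPACE_DIR=/path/to/your/workspace
# Assumed server entrypoint, matching the uvicorn command elsewhere in this repo:
uvicorn opendevin.server.listen:app --port 3000

# In another terminal, per the second hunk's context:
websocat ws://127.0.0.1:3000/ws
```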
