
Add link to LiteLLM to make-setup (All-Hands-AI#614)
* Update Makefile

* fix tab

* add note to readme
rbren authored Apr 3, 2024
1 parent 1c6f046 commit 310cd70
Showing 2 changed files with 14 additions and 3 deletions.
9 changes: 6 additions & 3 deletions Makefile
@@ -42,14 +42,17 @@ run:
 # Setup config.toml
 setup-config:
 	@echo "Setting up config.toml..."
-	@read -p "Enter your LLM API key: " llm_api_key; \
-	echo "LLM_API_KEY=\"$$llm_api_key\"" >> $(CONFIG_FILE).tmp
-	@read -p "Enter your LLM Model name [default: $(DEFAULT_MODEL)]: " llm_model; \
+	@read -p "Enter your LLM Model name (see docs.litellm.ai/docs/providers for full list) [default: $(DEFAULT_MODEL)]: " llm_model; \
 	llm_model=$${llm_model:-$(DEFAULT_MODEL)}; \
 	echo "LLM_MODEL=\"$$llm_model\"" >> $(CONFIG_FILE).tmp
+
+	@read -p "Enter your LLM API key: " llm_api_key; \
+	echo "LLM_API_KEY=\"$$llm_api_key\"" >> $(CONFIG_FILE).tmp
+
 	@read -p "Enter your workspace directory [default: $(DEFAULT_WORKSPACE_DIR)]: " workspace_dir; \
 	workspace_dir=$${workspace_dir:-$(DEFAULT_WORKSPACE_DIR)}; \
 	echo "WORKSPACE_DIR=\"$$workspace_dir\"" >> $(CONFIG_FILE).tmp
+
 	@mv $(CONFIG_FILE).tmp $(CONFIG_FILE)
 
 # Help
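For reference, a minimal sketch of the config.toml this target would write, assuming illustrative defaults of gpt-4 for DEFAULT_MODEL and ./workspace for DEFAULT_WORKSPACE_DIR; these values and the placeholder API key are not from the commit, only the key names and their order come from the recipe above.

```
# Illustrative config.toml produced by `make setup-config`
# (placeholder values; keys appear in the order the target prompts for them)
LLM_MODEL="gpt-4"
LLM_API_KEY="<your-llm-api-key>"
WORKSPACE_DIR="./workspace"
```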
8 changes: 8 additions & 0 deletions README.md
@@ -130,6 +130,14 @@ Getting started with the OpenDevin project is incredibly easy. Follow these simp
make setup-config
```

You'll need to choose your LLM model as part of this step. By default, we use OpenAI's gpt-4, but you can
use Anthropic's Claude, ollama, or any other LLM provider supported by LiteLLM. See the full
list of supported models at [docs.litellm.ai/docs/providers](https://docs.litellm.ai/docs/providers).
(Note: alternative models can be hard to work with. We will make LLM-specific documentation available soon.
If you've gotten OpenDevin working with a model other than OpenAI's GPT models,
please [add your setup instructions here](https://github.com/OpenDevin/OpenDevin/issues/417).)
### 3. Run the Application
- **Run the Application:** Once the setup is complete, launching OpenDevin is as simple as running a single command. This command starts both the backend and frontend servers seamlessly, allowing you to interact with OpenDevin without any hassle.
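The README note above points readers to the LiteLLM providers list. As a hedged illustration (not part of the commit), the LLM_MODEL value entered at the setup-config prompt would follow LiteLLM's model naming; the specific identifiers below are examples and should be checked against the providers list.

```
# Illustrative LLM_MODEL values in config.toml using LiteLLM-style names;
# verify exact model identifiers at docs.litellm.ai/docs/providers.
LLM_MODEL="gpt-4"                      # OpenAI (the default)
# LLM_MODEL="claude-3-opus-20240229"   # Anthropic, via LiteLLM
# LLM_MODEL="ollama/llama2"            # local model served by ollama (LiteLLM "ollama/" prefix)
```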

