Ensure that you have the Ollama server up and running. For detailed startup instructions, refer to the Ollama documentation.
This guide assumes you've started Ollama with `ollama serve`. If you're running Ollama differently (e.g. inside Docker), you may need to modify these instructions.
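If you're unsure whether the server is reachable, a quick sanity check (assuming Ollama's default port, 11434) looks like this:

```bash
# Start the Ollama server in one terminal (skip if it's already running):
ollama serve

# In another terminal, confirm it responds on the default port (11434):
curl http://localhost:11434
# Expected response: "Ollama is running"
```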
Ollama model names can be found in the Ollama model library. For a small example, you can use the `codellama:7b` model. Bigger models will generally perform better.
```bash
ollama pull codellama:7b
```
You can check which models you have downloaded like this:
```bash
~$ ollama list
NAME                            ID              SIZE    MODIFIED
codellama:7b                    8fdf8f752f6e    3.8 GB  6 weeks ago
mistral:7b-instruct-v0.2-q4_K_M eb14864c7427    4.4 GB  2 weeks ago
starcoder2:latest               f67ae0f64584    1.7 GB  19 hours ago
```
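The same information is available over HTTP, which is a useful way to confirm that OpenDevin will be able to see your models; the Ollama API lists local models at `/api/tags`:

```bash
# List locally available models as JSON via the Ollama HTTP API.
curl -s http://localhost:11434/api/tags
```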
Use the instructions in README.md to start OpenDevin using Docker. When running `docker run`, you'll need to add a few more arguments:
```bash
--add-host host.docker.internal=host-gateway \
-e LLM_API_KEY="ollama" \
-e LLM_BASE_URL="http://host.docker.internal:11434" \
```
For example:
```bash
# The directory you want OpenDevin to modify. MUST be an absolute path!
export WORKSPACE_DIR=$(pwd)/workspace

docker run \
    --add-host host.docker.internal=host-gateway \
    -e LLM_API_KEY="ollama" \
    -e LLM_BASE_URL="http://host.docker.internal:11434" \
    -e WORKSPACE_MOUNT_PATH=$WORKSPACE_DIR \
    -v $WORKSPACE_DIR:/opt/workspace_base \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -p 3000:3000 \
    ghcr.io/opendevin/opendevin:main
```
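If OpenDevin later fails to reach the model, it's worth confirming that the container can see the Ollama server through `host.docker.internal`. A minimal sketch, assuming `curl` is available inside the image:

```bash
# Find the running container's ID, then curl Ollama from inside it.
docker ps   # note the CONTAINER ID of the opendevin image
docker exec <container-id> curl -s http://host.docker.internal:11434
# Expected response: "Ollama is running"
```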
You should now be able to connect to http://localhost:3000/.
In the OpenDevin UI, click the Settings wheel in the bottom-left corner. Then in the `Model` input, enter `ollama/codellama:7b`, or the name of whichever model you pulled earlier. If it doesn't show up in the dropdown, that's fine; just type it in. Click Save when you're done.
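If you'd rather preset the model instead of typing it into the UI, many OpenDevin builds also read it from the environment; a sketch assuming your version honors an `LLM_MODEL` variable (check your README):

```bash
# Assumption: this OpenDevin build reads the LLM_MODEL environment variable.
# If so, add it alongside the other -e flags in the docker run command above:
-e LLM_MODEL="ollama/codellama:7b" \
```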
And now you're ready to go!