NOTE: This GitHub Action is deprecated. It was born out of a lack of alternatives, but since Pydantic now maintains an Ollama GitHub Action, this one is no longer needed. Check out the Pydantic alternative instead.
A GitHub Action to easily install and run Ollama models in your workflow. Supports Linux, macOS, and Windows runners.
- 🚀 Cross-platform support (Linux, macOS, Windows)
- 🔄 Automatic installation and setup
- 🎯 Run specific models
- ⚡ Fast model pulling and execution
| Runner OS | Architecture | Status |
|---|---|---|
| Ubuntu 20.04+ | x86_64 | ✅ Fully Supported |
| macOS 11+ | x86_64, ARM64 | ✅ Fully Supported |
| Windows Server 2019+ | x86_64 | ✅ Fully Supported |
```yaml
- name: Serve Ollama Model
  uses: phil65/ollama-github-action@v1
  with:
    model: "smollm2:135m"
```
| Input | Description | Required | Default |
|---|---|---|---|
| `model` | Ollama model to use (e.g., llama2, codellama, mistral) | Yes | `smollm2:135m` |
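To run something other than the default, change the `model` input. The tag below is purely illustrative (taken from the examples in the description) and assumes the model exists in the Ollama library.

```yaml
- name: Serve Ollama Model
  uses: phil65/ollama-github-action@v1
  with:
    model: "mistral"  # illustrative tag; any model from the Ollama library should work
```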
| Output | Description |
|---|---|
| `server-url` | URL of the running Ollama server (http://localhost:11434) |
| `status` | Status of the Ollama server (running/failed) |
- **Linux**: Runs natively using the official installer
- **macOS**: Installs via Homebrew; supports both Intel and Apple Silicon
- **Windows**: Uses the latest release from GitHub; custom installation path at `C:\ollama`
```yaml
jobs:
  serve-model:
    runs-on: ubuntu-latest
    steps:
      - name: Start Ollama Server
        id: ollama  # Required to reference outputs
        uses: phil65/ollama-github-action@v1
        with:
          model: "smollm2:135m"

      # Example: Use the Ollama server in subsequent steps
      - name: Use Ollama
        run: |
          echo "Server URL: ${{ steps.ollama.outputs.server-url }}"
          echo "Server Status: ${{ steps.ollama.outputs.status }}"

          # Example API call
          curl "${{ steps.ollama.outputs.server-url }}/api/generate" \
            -d '{
              "model": "smollm2:135m",
              "prompt": "What is GitHub Actions?"
            }'
```
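The `status` output can also gate the rest of the job. A minimal sketch, assuming the literal values `running`/`failed` listed in the outputs table:

```yaml
- name: Fail fast if the server did not start
  if: steps.ollama.outputs.status != 'running'
  run: |
    echo "Ollama server failed to start"
    exit 1
```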
The Ollama server will:
- Start automatically when the action runs
- Remain running for subsequent workflow steps
- Be automatically cleaned up when the job completes
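If a step calls the API immediately after startup and you want to be certain the server is already answering, a small readiness probe against the documented URL can help. This is only a sketch using curl's retry flags; the action itself does not require it.

```yaml
- name: Wait for Ollama to respond
  run: |
    # Poll the tags endpoint until the server answers (roughly 30 seconds at most)
    curl --retry 10 --retry-delay 3 --retry-connrefused -sf \
      "${{ steps.ollama.outputs.server-url }}/api/tags" > /dev/null
```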
Note: If you need to stop the server manually in your workflow, you can use:

```yaml
- name: Stop Ollama Server
  if: always()  # Ensures cleanup even if previous steps fail
  run: |
    pkill ollama || true
```
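`pkill` only exists on the Linux and macOS runners. On Windows, a PowerShell equivalent might look like the following sketch (the process name `ollama` is assumed, mirroring the example above):

```yaml
- name: Stop Ollama Server (Windows)
  if: always() && runner.os == 'Windows'
  shell: pwsh
  run: |
    # Process name "ollama" assumed, mirroring the pkill example above
    Stop-Process -Name ollama -ErrorAction SilentlyContinue
```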
The following environment variables are available during the workflow:

- `OLLAMA_HOST`: localhost
- `OLLAMA_PORT`: 11434
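A step can rebuild the base URL from these variables instead of hard-coding it. A minimal sketch in bash, assuming the action exports them into the step environment as stated above:

```yaml
- name: Call the server via environment variables
  run: |
    # Reconstruct the base URL from the exported host and port
    curl "http://${OLLAMA_HOST}:${OLLAMA_PORT}/api/tags"
```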
```yaml
- name: Generate Text
  run: |
    curl "${{ steps.ollama.outputs.server-url }}/api/generate" \
      -d '{
        "model": "smollm2:135m",
        "prompt": "Write a hello world program",
        "stream": false
      }'
```
```yaml
- name: List Models
  run: |
    curl "${{ steps.ollama.outputs.server-url }}/api/tags"
```
Memory Issues
- Use a runner with more RAM
- Try a smaller model
- Close unnecessary processes
Enable debug logging by setting:
```yaml
env:
  ACTIONS_STEP_DEBUG: true
```
- The server is accessible only on localhost
- Model files are stored in the runner's temporary space
- Cleanup is automatic after workflow completion
- No sensitive data is persisted between runs
Contributions are welcome! Please feel free to submit a Pull Request.
This project is licensed under the MIT License - see the LICENSE file for details.