Update domain name references in docs and install script (ollama#2435)
jmorganca authored Feb 9, 2024
1 parent 42b797e commit 1c8435f
Showing 13 changed files with 43 additions and 40 deletions.
8 changes: 4 additions & 4 deletions README.md
@@ -10,7 +10,7 @@ Get up and running with large language models locally.

### macOS

-[Download](https://ollama.ai/download/Ollama-darwin.zip)
+[Download](https://ollama.com/download/Ollama-darwin.zip)

### Windows

@@ -19,7 +19,7 @@ Coming soon! For now, you can install Ollama on Windows via WSL2.
### Linux & WSL2

```
-curl https://ollama.ai/install.sh | sh
+curl -fsSL https://ollama.com/install.sh | sh
```

[Manual install instructions](https://github.com/jmorganca/ollama/blob/main/docs/linux.md)
@@ -35,15 +35,15 @@ The official [Ollama Docker image](https://hub.docker.com/r/ollama/ollama) `olla

## Quickstart

-To run and chat with [Llama 2](https://ollama.ai/library/llama2):
+To run and chat with [Llama 2](https://ollama.com/library/llama2):

```
ollama run llama2
```

## Model library

-Ollama supports a list of open-source models available on [ollama.ai/library](https://ollama.ai/library 'ollama model library')
+Ollama supports a list of open-source models available on [ollama.com/library](https://ollama.com/library 'ollama model library')

Here are some example open-source models that can be downloaded:

2 changes: 1 addition & 1 deletion docs/README.md
@@ -10,7 +10,7 @@ Create new models or modify models already in the library using the Modelfile. L

Import models using source model weights found on Hugging Face and similar sites by referring to the **[Import Documentation](./import.md)**.

-Installing on Linux in most cases is easy using the script on Ollama.ai. To get more detail about the install, including CUDA drivers, see the **[Linux Documentation](./linux.md)**.
+Installing on Linux in most cases is easy using the script on [ollama.com/download](ollama.com/download). To get more detail about the install, including CUDA drivers, see the **[Linux Documentation](./linux.md)**.

Many of our users like the flexibility of using our official Docker Image. Learn more about using Docker with Ollama using the **[Docker Documentation](https://hub.docker.com/r/ollama/ollama)**.

6 changes: 3 additions & 3 deletions docs/import.md
@@ -123,9 +123,9 @@ ollama run example "What is your favourite condiment?"

Publishing models is in early alpha. If you'd like to publish your model to share with others, follow these steps:

-1. Create [an account](https://ollama.ai/signup)
+1. Create [an account](https://ollama.com/signup)
2. Run `cat ~/.ollama/id_ed25519.pub` to view your Ollama public key. Copy this to the clipboard.
-3. Add your public key to your [Ollama account](https://ollama.ai/settings/keys)
+3. Add your public key to your [Ollama account](https://ollama.com/settings/keys)

Next, copy your model to your username's namespace:

@@ -139,7 +139,7 @@ Then push the model:
ollama push <your username>/example
```

-After publishing, your model will be available at `https://ollama.ai/<your username>/example`.
+After publishing, your model will be available at `https://ollama.com/<your username>/example`.
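
A note on the name format the steps above converge on: the pushed model must be namespaced as `<username>/<model>`. A minimal sketch (the `jdoe` username is hypothetical):

```shell
# Assemble the namespaced model name used when copying and pushing.
# "jdoe" is a hypothetical username; substitute the one from your signup.
USERNAME="jdoe"
MODEL="example"
TARGET="$USERNAME/$MODEL"
echo "$TARGET"
```

With a real account and key configured, you would then run `ollama cp example "$TARGET"` followed by `ollama push "$TARGET"`.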

## Quantization reference

11 changes: 7 additions & 4 deletions docs/linux.md
@@ -3,9 +3,11 @@
## Install

Install Ollama running this one-liner:

->
```bash
-curl https://ollama.ai/install.sh | sh
+curl -fsSL https://ollama.com/install.sh | sh
```

## Manual install
@@ -15,7 +17,7 @@ curl https://ollama.ai/install.sh | sh
Ollama is distributed as a self-contained binary. Download it to a directory in your PATH:

```bash
-sudo curl -L https://ollama.ai/download/ollama-linux-amd64 -o /usr/bin/ollama
+sudo curl -L https://ollama.com/download/ollama-linux-amd64 -o /usr/bin/ollama
sudo chmod +x /usr/bin/ollama
```

@@ -75,13 +77,13 @@ sudo systemctl start ollama
Update ollama by running the install script again:

```bash
-curl https://ollama.ai/install.sh | sh
+curl -fsSL https://ollama.com/install.sh | sh
```

Or by downloading the ollama binary:

```bash
-sudo curl -L https://ollama.ai/download/ollama-linux-amd64 -o /usr/bin/ollama
+sudo curl -L https://ollama.com/download/ollama-linux-amd64 -o /usr/bin/ollama
sudo chmod +x /usr/bin/ollama
```

@@ -110,6 +112,7 @@ sudo rm $(which ollama)
```

Remove the downloaded models and Ollama service user and group:

```bash
sudo rm -r /usr/share/ollama
sudo userdel ollama
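
The uninstall steps in this file can be sanity-checked with a small probe helper. This is a sketch (the `check` helper is ours, not part of Ollama); each probe prints "removed" once the corresponding artifact is gone:

```shell
# Probe helper: runs a test command quietly and reports the result.
check() {
  eval "$1" >/dev/null 2>&1 && echo "$2: still present" || echo "$2: removed"
}
check 'id ollama'                 'ollama service user'
check 'test -d /usr/share/ollama' 'model directory'
check 'command -v ollama'         'ollama binary'
```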
10 changes: 5 additions & 5 deletions docs/modelfile.md
@@ -67,13 +67,13 @@ To use this:

More examples are available in the [examples directory](../examples).

-### `Modelfile`s in [ollama.ai/library][1]
+### `Modelfile`s in [ollama.com/library][1]

-There are two ways to view `Modelfile`s underlying the models in [ollama.ai/library][1]:
+There are two ways to view `Modelfile`s underlying the models in [ollama.com/library][1]:

- Option 1: view a details page from a model's tags page:
-  1. Go to a particular model's tags (e.g. https://ollama.ai/library/llama2/tags)
-  2. Click on a tag (e.g. https://ollama.ai/library/llama2:13b)
+  1. Go to a particular model's tags (e.g. https://ollama.com/library/llama2/tags)
+  2. Click on a tag (e.g. https://ollama.com/library/llama2:13b)
3. Scroll down to "Layers"
- Note: if the [`FROM` instruction](#from-required) is not present,
it means the model was created from a local file
@@ -225,4 +225,4 @@ MESSAGE assistant yes
- the **`Modelfile` is not case sensitive**. In the examples, uppercase instructions are used to make it easier to distinguish it from arguments.
- Instructions can be in any order. In the examples, the `FROM` instruction is first to keep it easily readable.
-[1]: https://ollama.ai/library
+[1]: https://ollama.com/library
2 changes: 1 addition & 1 deletion docs/tutorials/nvidia-jetson.md
@@ -17,7 +17,7 @@ Prerequisites:

Here are the steps:

-- Install Ollama via standard Linux command (ignore the 404 error): `curl https://ollama.ai/install.sh | sh`
+- Install Ollama via standard Linux command (ignore the 404 error): `curl https://ollama.com/install.sh | sh`
- Stop the Ollama service: `sudo systemctl stop ollama`
- Start Ollama serve in a tmux session called ollama_jetson and reference the CUDA libraries path: `tmux has-session -t ollama_jetson 2>/dev/null || tmux new-session -d -s ollama_jetson
'LD_LIBRARY_PATH=/usr/local/cuda/lib64 ollama serve'`
2 changes: 1 addition & 1 deletion examples/jupyter-notebook/ollama.ipynb
@@ -8,7 +8,7 @@
"outputs": [],
"source": [
"# Download and run the Ollama Linux install script\n",
-"!curl https://ollama.ai/install.sh | sh\n",
+"!curl -fsSL https://ollama.com/install.sh | sh\n",
"!command -v systemctl >/dev/null && sudo systemctl stop ollama"
]
},
20 changes: 10 additions & 10 deletions examples/kubernetes/README.md
@@ -2,28 +2,28 @@

## Prerequisites

-- Ollama: https://ollama.ai/download
+- Ollama: https://ollama.com/download
- Kubernetes cluster. This example will use Google Kubernetes Engine.

## Steps

1. Create the Ollama namespace, daemon set, and service

-```bash
-kubectl apply -f cpu.yaml
-```
+   ```bash
+   kubectl apply -f cpu.yaml
+   ```

1. Port forward the Ollama service to connect and use it locally

-```bash
-kubectl -n ollama port-forward service/ollama 11434:80
-```
+   ```bash
+   kubectl -n ollama port-forward service/ollama 11434:80
+   ```

1. Pull and run a model, for example `orca-mini:3b`

-```bash
-ollama run orca-mini:3b
-```
+   ```bash
+   ollama run orca-mini:3b
+   ```

## (Optional) Hardware Acceleration

2 changes: 1 addition & 1 deletion examples/langchain-python-rag-websummary/README.md
@@ -1,6 +1,6 @@
# LangChain Web Summarization

-This example summarizes the website, [https://ollama.ai/blog/run-llama2-uncensored-locally](https://ollama.ai/blog/run-llama2-uncensored-locally)
+This example summarizes the website, [https://ollama.com/blog/run-llama2-uncensored-locally](https://ollama.com/blog/run-llama2-uncensored-locally)

## Running the Example

2 changes: 1 addition & 1 deletion examples/langchain-python-rag-websummary/main.py
@@ -2,7 +2,7 @@
from langchain.document_loaders import WebBaseLoader
from langchain.chains.summarize import load_summarize_chain

-loader = WebBaseLoader("https://ollama.ai/blog/run-llama2-uncensored-locally")
+loader = WebBaseLoader("https://ollama.com/blog/run-llama2-uncensored-locally")
docs = loader.load()

llm = Ollama(model="llama2")
4 changes: 2 additions & 2 deletions examples/python-loganalysis/readme.md
@@ -40,13 +40,13 @@ You are a log file analyzer. You will receive a set of lines from a log file for
"""
```

-This model is available at https://ollama.ai/mattw/loganalyzer. You can customize it and add to your own namespace using the command `ollama create <namespace/modelname> -f <path-to-modelfile>` then `ollama push <namespace/modelname>`.
+This model is available at https://ollama.com/mattw/loganalyzer. You can customize it and add to your own namespace using the command `ollama create <namespace/modelname> -f <path-to-modelfile>` then `ollama push <namespace/modelname>`.

Then loganalysis.py scans all the lines in the given log file and searches for the word 'error'. When the word is found, the 10 lines before and after are set as the prompt for a call to the Generate API.

```python
data = {
-"prompt": "\n".join(error_logs),
+    "prompt": "\n".join(error_logs),
"model": "mattw/loganalyzer"
}
```
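
The context-window step described above (10 lines either side of the match) can be sketched with plain `grep`; the sample log here is hypothetical, and loganalysis.py does the equivalent in Python:

```shell
# Build a small hypothetical log, then pull 10 lines of context around
# "error" — the same window loganalysis.py turns into the prompt.
{ printf 'line %s\n' 1 2 3 4 5
  echo 'error: disk full'
  printf 'line %s\n' 7 8 9 10 11 12
} > sample.log
grep -B 10 -A 10 'error' sample.log > context.txt
wc -l < context.txt  # the whole 12-line file fits inside the window
```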
12 changes: 6 additions & 6 deletions examples/typescript-mentors/README.md
@@ -29,9 +29,9 @@ You can also add your own character to be chosen at random when you ask a questi
```bash
ollama pull stablebeluga2:70b-q4_K_M
```

2. Create a new character:

```bash
npm run charactergen "Lorne Greene"
```
@@ -41,15 +41,15 @@ You can also add your own character to be chosen at random when you ask a questi
3. Now you can create a model with this command:

```bash
-ollama create <YourNamespace>/lornegreene -f lornegreene/Modelfile
+ollama create <username>/lornegreene -f lornegreene/Modelfile
```

-`YourNamespace` is whatever name you set up when you signed up at [https://ollama.ai/signup](https://ollama.ai/signup).
+`username` is whatever name you set up when you signed up at [https://ollama.com/signup](https://ollama.com/signup).

-4. To add this to your mentors, you will have to update the code as follows. On line 8 of `mentors.ts`, add an object to the array, replacing `<YourNamespace>` with the namespace you used above.
+4. To add this to your mentors, you will have to update the code as follows. On line 8 of `mentors.ts`, add an object to the array, replacing `<username>` with the username you used above.

```bash
-{ns: "<YourNamespace>", char: "Lorne Greene"}
+{ns: "<username>", char: "Lorne Greene"}
```

## Review the Code
2 changes: 1 addition & 1 deletion scripts/install.sh
@@ -61,7 +61,7 @@ if [ -n "$NEEDS" ]; then
fi

status "Downloading ollama..."
-curl --fail --show-error --location --progress-bar -o $TEMP_DIR/ollama "https://ollama.ai/download/ollama-linux-$ARCH"
+curl --fail --show-error --location --progress-bar -o $TEMP_DIR/ollama "https://ollama.com/download/ollama-linux-$ARCH"

for BINDIR in /usr/local/bin /usr/bin /bin; do
echo $PATH | grep -q $BINDIR && break || continue
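
Two things happen around the changed line: the download URL is assembled from `$ARCH`, and the loop picks the first standard directory already on `$PATH`. A standalone sketch (the `uname -m` mapping here is an assumption; the real script derives `ARCH` earlier, outside this hunk):

```shell
# Assumed mapping from uname -m to the download suffix.
case "$(uname -m)" in
  x86_64)        ARCH=amd64 ;;
  aarch64|arm64) ARCH=arm64 ;;
  *)             ARCH=amd64 ;;  # fallback for this sketch only
esac
URL="https://ollama.com/download/ollama-linux-$ARCH"
echo "$URL"

# Mirror the BINDIR loop above: take the first candidate already on PATH.
for BINDIR in /usr/local/bin /usr/bin /bin; do
  case ":$PATH:" in *":$BINDIR:"*) break ;; esac
done
echo "install target: $BINDIR"
```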
