config
Mac0q committed Apr 10, 2024
1 parent 8d75f72 commit 998bfc1
Showing 3 changed files with 6 additions and 7 deletions.
4 changes: 2 additions & 2 deletions README.md
@@ -79,9 +79,9 @@ UFO requires **Python >= 3.10** running on **Windows OS >= 10**. It can be insta
git clone https://github.com/microsoft/UFO.git
cd UFO
# install the requirements
-python install.py
+python setup.py
# If you want to use the Qwen and Ollama as your LLMs, run the python file with options
-python install.py -qwen -ollama
+python setup.py -qwen -ollama
```

### ⚙️ Step 2: Configure the LLMs
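For context, the `-qwen` and `-ollama` options install the optional dependencies for those backends. Below is a minimal sketch of how such an installer entry point could work; the package list and the `requirements.txt` file name are assumptions for illustration, not the repository's actual script.

```python
# Hypothetical installer sketch: install base requirements, then optional
# per-backend extras selected by -qwen / -ollama flags. The real script in the
# repository may behave differently.
import subprocess
import sys

OPTIONAL_PACKAGES = {
    "-qwen": ["dashscope"],  # Qwen client library (see model_worker/README.md)
    "-ollama": [],           # Ollama runs as a local server; no pip package assumed
}

def pip_install(packages):
    # Run pip in the current interpreter's environment.
    if packages:
        subprocess.check_call([sys.executable, "-m", "pip", "install", *packages])

if __name__ == "__main__":
    pip_install(["-r", "requirements.txt"])  # assumed base requirements file
    for flag, extras in OPTIONAL_PACKAGES.items():
        if flag in sys.argv[1:]:
            pip_install(extras)
```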
9 changes: 4 additions & 5 deletions model_worker/README.md
@@ -3,7 +3,7 @@ The lite version of the prompt is not fully optimized. To achieve better results
### If you use QWEN as the Agent

1. QWen (Tongyi Qianwen) is a LLM developed by Alibaba. Go to [QWen](https://dashscope.aliyun.com/) and register an account and get the API key. More details can be found [here](https://help.aliyun.com/zh/dashscope/developer-reference/activate-dashscope-and-create-an-api-key?spm=a2c4g.11186623.0.0.7b5749d72j3SYU) (in Chinese).
-2. Install the required packages dashscope or run the `install.py` with `-qwen` options.
+2. Install the required packages dashscope or run the `setup.py` with `-qwen` options.
```bash
pip install dashscope
```
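Once `dashscope` is installed and the API key is in place, a quick call can verify the setup. The snippet below is a minimal sketch using dashscope's `Generation.call` interface; the model name and the environment variable holding the key are placeholders, and the response fields may vary across dashscope versions.

```python
# Minimal smoke test for the Qwen/dashscope setup (sketch, not UFO's own code).
import os

import dashscope
from dashscope import Generation

dashscope.api_key = os.environ["DASHSCOPE_API_KEY"]  # key from the DashScope console

response = Generation.call(
    model="qwen-max",  # pick any model from the QWen LLM model list
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    result_format="message",  # return an OpenAI-style message structure
)

if response.status_code == 200:
    print(response.output.choices[0].message.content)
else:
    print("Request failed:", response.code, response.message)
```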
@@ -23,7 +23,7 @@ You can find the model name in the [QWen LLM model list](https://help.aliyun.com
We provide a short example to show how to configure the ollama in the following, which might change if ollama makes updates.

```bash title="install ollama and serve LLMs in local" showLineNumbers
-## Install ollama on Linux & WSL2 or run the `install.py` with `-ollama` options
+## Install ollama on Linux & WSL2 or run the `setup.py` with `-ollama` options
curl https://ollama.ai/install.sh | sh
## Run the serving
ollama serve
@@ -40,9 +40,8 @@ When serving LLMs via Ollama, it will by default start a server at `http://local
2. Add following configuration to `config.yaml`:
```json showLineNumbers
{
"API_TYPE": "ollama" ,
"API_BASE": "http://localhost:11434",
"API_KEY": "ARBITRARY_STRING",
"API_TYPE": "Ollama" ,
"API_BASE": "YOUR_ENDPOINT",
"API_MODEL": "YOUR_MODEL"
}
```
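As a sanity check for this configuration, the sketch below loads `config.yaml` and sends a non-streaming request to Ollama's `/api/chat` endpoint using the `API_BASE` and `API_MODEL` values; how UFO itself consumes these keys may differ.

```python
# Sketch: exercise the Ollama settings from config.yaml (not UFO's internal client).
import requests  # pip install requests
import yaml      # pip install pyyaml

with open("config.yaml", "r", encoding="utf-8") as f:
    cfg = yaml.safe_load(f)

resp = requests.post(
    f"{cfg['API_BASE'].rstrip('/')}/api/chat",  # e.g. http://localhost:11434/api/chat
    json={
        "model": cfg["API_MODEL"],  # a model previously pulled with `ollama pull <name>`
        "messages": [{"role": "user", "content": "Hello from UFO!"}],
        "stream": False,  # ask for a single JSON response instead of a stream
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```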
File renamed without changes.
