
Commit 998bfc1

committed Apr 10, 2024
config
1 parent 8d75f72 commit 998bfc1

File tree

3 files changed: 6 additions, 7 deletions

 

README.md (+2 −2)

@@ -79,9 +79,9 @@ UFO requires **Python >= 3.10** running on **Windows OS >= 10**. It can be insta
 git clone https://github.com/microsoft/UFO.git
 cd UFO
 # install the requirements
-python install.py
+python setup.py
 # If you want to use the Qwen and Ollama as your LLMs, run the python file with options
-python install.py -qwen -ollama
+python setup.py -qwen -ollama
 ```
 
 ### ⚙️ Step 2: Configure the LLMs
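For reference, the installation snippet in the README reads as follows after this change (reassembled from the hunk above; the surrounding README text is not shown here):

```bash
git clone https://github.com/microsoft/UFO.git
cd UFO
# install the requirements
python setup.py
# If you want to use the Qwen and Ollama as your LLMs, run the python file with options
python setup.py -qwen -ollama
```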

model_worker/README.md (+4 −5)

@@ -3,7 +3,7 @@ The lite version of the prompt is not fully optimized. To achieve better results
 ### If you use QWEN as the Agent
 
 1. QWen (Tongyi Qianwen) is a LLM developed by Alibaba. Go to [QWen](https://dashscope.aliyun.com/) and register an account and get the API key. More details can be found [here](https://help.aliyun.com/zh/dashscope/developer-reference/activate-dashscope-and-create-an-api-key?spm=a2c4g.11186623.0.0.7b5749d72j3SYU) (in Chinese).
-2. Install the required packages dashscope or run the `install.py` with `-qwen` options.
+2. Install the required packages dashscope or run the `setup.py` with `-qwen` options.
 ```bash
 pip install dashscope
 ```
@@ -23,7 +23,7 @@ You can find the model name in the [QWen LLM model list](https://help.aliyun.com
 We provide a short example to show how to configure the ollama in the following, which might change if ollama makes updates.
 
 ```bash title="install ollama and serve LLMs in local" showLineNumbers
-## Install ollama on Linux & WSL2 or run the `install.py` with `-ollama` options
+## Install ollama on Linux & WSL2 or run the `setup.py` with `-ollama` options
 curl https://ollama.ai/install.sh | sh
 ## Run the serving
 ollama serve
@@ -40,9 +40,8 @@ When serving LLMs via Ollama, it will by default start a server at `http://local
 2. Add following configuration to `config.yaml`:
 ```json showLineNumbers
 {
-"API_TYPE": "ollama" ,
-"API_BASE": "http://localhost:11434",
-"API_KEY": "ARBITRARY_STRING",
+"API_TYPE": "Ollama" ,
+"API_BASE": "YOUR_ENDPOINT",
 "API_MODEL": "YOUR_MODEL"
 }
 ```
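The last hunk drops the hard-coded `http://localhost:11434` base URL and the placeholder `API_KEY`, leaving the endpoint for the user to fill in. As a rough sketch (assuming a default local Ollama install, which is only one possible value for `YOUR_ENDPOINT`, and `llama2` as a stand-in model name), the endpoint can be checked before it goes into `config.yaml`:

```bash
# start the Ollama server locally (listens on port 11434 by default)
ollama serve &

# pull a model and confirm the endpoint answers; the model name and URL below
# are assumptions, substitute your own
ollama pull llama2
curl http://localhost:11434/api/generate -d '{"model": "llama2", "prompt": "Hello", "stream": false}'
```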

install.py → setup.py

File renamed without changes.
