The lite version of the prompt is not fully optimized. For better results, it is recommended to adjust the prompt based on the performance you observe.
- Create an account on Google AI and get your API key.
- Install the required package `google-generativeai`, or install from `requirements.txt` with the Gemini entry uncommented:

```bash
pip install -U google-generativeai==0.7.0
```
- Add the following configuration to `config.yaml`:
```
{
    "API_TYPE": "Gemini",
    "API_KEY": "YOUR_KEY",
    "API_MODEL": "YOUR_MODEL"
}
```
NOTE: `API_MODEL` is the model name of the Gemini LLM API. You can find the model name in the Gemini LLM model list. If you encounter `429 Resource has been exhausted (e.g. check quota)`, it is likely caused by the rate limit of your Gemini API key.
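To confirm that the key and model name work before running UFO, you can make a single call with the `google-generativeai` SDK. This is a minimal sketch; `gemini-1.5-flash` is only an example model name, so substitute the `API_MODEL` you put in `config.yaml`.

```python
import google.generativeai as genai

# Example values -- replace with the API_KEY and API_MODEL from config.yaml.
genai.configure(api_key="YOUR_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")  # example model name

# A trivial request; a 429 here indicates you are hitting the rate limit.
response = model.generate_content("Reply with the single word: ok")
print(response.text)
```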
- QWen (Tongyi Qianwen) is an LLM developed by Alibaba. Go to QWen, register an account, and get the API key. More details can be found here (in Chinese).
- Install the required package `dashscope`, or install from `requirements.txt` with the Qwen entry uncommented:

```bash
pip install dashscope
```
- Add the following configuration to `config.yaml`:
```
{
    "API_TYPE": "Qwen",
    "API_KEY": "YOUR_KEY",
    "API_MODEL": "YOUR_MODEL"
}
```
NOTE: `API_MODEL` is the model name of the QWen LLM API. You can find the model name in the QWen LLM model list.
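As with Gemini, you can verify the key and model name outside UFO with a single `dashscope` call. This is a minimal sketch; `qwen-max` is only an example model name and assumes your account has access to it.

```python
import dashscope
from dashscope import Generation

# Example values -- replace with the API_KEY and API_MODEL from config.yaml.
dashscope.api_key = "YOUR_KEY"

response = Generation.call(
    model="qwen-max",  # example model name
    messages=[{"role": "user", "content": "Reply with the single word: ok"}],
    result_format="message",
)
print(response.output.choices[0].message.content)
```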
- Go to Ollama and follow the instructions to serve an LLM model in your local environment. Below is a short example of how to configure Ollama; the exact steps may change as Ollama is updated.
```bash
## Install ollama on Linux & WSL2
curl https://ollama.ai/install.sh | sh

## Run the serving
ollama serve
```

Open another terminal and run:

```bash
ollama run YOUR_MODEL
```
Info: When serving LLMs via Ollama, it will by default start a server at `http://localhost:11434`, which will later be used as the API base in `config.yaml`.
- Add the following configuration to `config.yaml`:
```
{
    "API_TYPE": "Ollama",
    "API_BASE": "YOUR_ENDPOINT",
    "API_MODEL": "YOUR_MODEL"
}
```
NOTE: `API_BASE` is the URL of the Ollama LLM server, and `API_MODEL` is the name of the Ollama model; it should be the same as the one you served earlier. In addition, due to model limitations, you can use the lite version of the prompt to get a first taste of UFO, which can be configured in `config_dev.yaml`. Pay attention to the note at the top of this page.
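To check that the Ollama server is reachable and that the model name matches what you served, you can send one request to its HTTP API. This is a minimal sketch assuming the default address; replace `YOUR_MODEL` with the model you started via `ollama run`.

```python
import requests

# Assumes the default Ollama address; adjust if you changed API_BASE.
response = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "YOUR_MODEL", "prompt": "Reply with the single word: ok", "stream": False},
    timeout=120,
)
response.raise_for_status()
print(response.json()["response"])
```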
- Start a server with your model, which will later be used as the API base in `config.yaml`.
- Add the following configuration to `config.yaml`:
```
{
    "API_TYPE": "custom_model",
    "API_BASE": "YOUR_ENDPOINT",
    "API_KEY": "YOUR_KEY",
    "API_MODEL": "YOUR_MODEL"
}
```
NOTE: You should create a new Python script `<custom_model>.py` in the `ufo/llm` folder, following the format of the existing `.py` files there. It needs to inherit `BaseService` as its parent class and implement the `__init__` and `chat_completion` methods. You also need to add a dynamic import of your file in the `get_service` method of `BaseService`.
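As a rough illustration of the note above, the skeleton below shows what such a script might look like. It is only a sketch: the import path, class name, constructor arguments, and the exact `chat_completion` signature are assumptions, so mirror the existing service files in `ufo/llm` rather than this outline.

```python
# ufo/llm/custom_model.py -- illustrative skeleton only; names and signatures are assumptions.
from ufo.llm.base import BaseService  # assumed import path for BaseService


class CustomModelService(BaseService):
    """Placeholder service that forwards chat messages to your own model endpoint."""

    def __init__(self, config, agent_type: str):
        # Keep whatever configuration your endpoint needs (API_BASE, API_KEY, API_MODEL, ...).
        self.config = config
        self.agent_type = agent_type

    def chat_completion(self, messages, **kwargs):
        # Send `messages` to your endpoint and return the reply in the same
        # format that the other services in ufo/llm return.
        raise NotImplementedError("Implement the call to your model endpoint here.")
```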