recommend qwen-max in the readme
JianxinMa committed Jan 17, 2024
1 parent 141bdf3 commit 3bf984d
Showing 2 changed files with 4 additions and 5 deletions.
4 changes: 2 additions & 2 deletions README.md
@@ -47,7 +47,7 @@ pip install fastapi uvicorn "openai<1.0.0" "pydantic>=2.3.0" sse_starlette
# -c to specify any open-source model listed at https://huggingface.co/Qwen
# --server-name 0.0.0.0 allows other machines to access your service.
# --server-name 127.0.0.1 only allows the machine deploying the model to access the service.
-python openai_api.py --server-name 0.0.0.0 --server-port 7905 -c Qwen/Qwen-14B-Chat
+python openai_api.py --server-name 0.0.0.0 --server-port 7905 -c Qwen/Qwen-72B-Chat
```
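
Once the service above is running, it can be queried like any OpenAI-compatible endpoint. Below is a minimal sketch using the pre-1.0 `openai` client installed earlier; the `/v1` path, the placeholder API key, and the model name are illustrative assumptions rather than part of this diff:

```python
# Minimal sketch (not part of the commit): query the locally deployed
# OpenAI-compatible service started by the command above.
# Assumes the service listens on port 7905 and exposes the standard /v1 routes.
import openai  # requires "openai<1.0.0" as installed above

openai.api_base = "http://127.0.0.1:7905/v1"
openai.api_key = "none"  # placeholder; the local service does not validate the key

response = openai.ChatCompletion.create(
    model="Qwen",  # illustrative model name
    messages=[{"role": "user", "content": "Hello!"}],
    stream=False,
)
print(response["choices"][0]["message"]["content"])
```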

## Developing Your Own Agent
@@ -192,7 +192,7 @@ If you are using DashScope's model service, then please execute the following command:
# - qwen-7b/14b/72b-chat (the same as the open-sourced 7B/14B/72B-Chat model)
# - qwen-turbo, qwen-plus, qwen-max
# "YOUR_DASHSCOPE_API_KEY" is a placeholder. The user should replace it with their actual key.
-python run_server.py --api_key YOUR_DASHSCOPE_API_KEY --model_server dashscope --llm qwen-72b-chat --workstation_port 7864
+python run_server.py --api_key YOUR_DASHSCOPE_API_KEY --model_server dashscope --llm qwen-max --workstation_port 7864
```
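
Before launching run_server.py, you can optionally confirm that the DashScope key and qwen-max access work by calling the `dashscope` SDK directly. This is a hedged sketch, not part of the commit; it assumes `dashscope` is installed (`pip install dashscope`):

```python
# Optional sanity check (a sketch, not from this diff): verify that the
# DashScope API key works and that qwen-max is reachable.
import dashscope
from dashscope import Generation

dashscope.api_key = "YOUR_DASHSCOPE_API_KEY"  # placeholder, replace with your real key

rsp = Generation.call(model="qwen-max", prompt="Hello")
if rsp.status_code == 200:
    print(rsp.output)  # generated text lives in rsp.output
else:
    print(rsp.code, rsp.message)  # error details if the key or model is wrong
```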

If you are using your own model service instead of DashScope, then please execute the following command:
5 changes: 2 additions & 3 deletions README_CN.md
@@ -43,7 +43,7 @@ pip install fastapi uvicorn "openai<1.0.0" "pydantic>=2.3.0" sse_starlette
# - Use the -c argument to specify the model version; the open-source models listed at https://huggingface.co/Qwen are supported
# - Setting --server-name 0.0.0.0 allows other machines to access your model service
# - Setting --server-name 127.0.0.1 only allows the machine deploying the model to access the service
-python openai_api.py --server-name 0.0.0.0 --server-port 7905 -c Qwen/Qwen-14B-Chat
+python openai_api.py --server-name 0.0.0.0 --server-port 7905 -c Qwen/Qwen-72B-Chat
```

## Quick Development
@@ -179,8 +179,7 @@ while True:
# - qwen-7b/14b/72b-chat (the same models as the open-source Qwen-7B/14B/72B-Chat)
# - qwen-turbo, qwen-plus, qwen-max
# Replace YOUR_DASHSCOPE_API_KEY with your actual API key.
-export DASHSCOPE_API_KEY=YOUR_DASHSCOPE_API_KEY
-python run_server.py --model_server dashscope --llm qwen-7b-chat --workstation_port 7864
+python run_server.py --api_key YOUR_DASHSCOPE_API_KEY --model_server dashscope --llm qwen-max --workstation_port 7864
```

If you are not using DashScope but have deployed your own model service instead, please execute the following command:
