update the documentation
JianxinMa committed Apr 15, 2024
1 parent 2d79fd7 commit e84b103
Showing 4 changed files with 4 additions and 8 deletions.
2 changes: 0 additions & 2 deletions README.md
@@ -133,7 +133,5 @@ This project is currently under active development, and backward compatibility m

> [!Warning]
> <div align="center">
> <b>
> The code interpreter is not sandboxed, and it executes code in your own environment. Please do not ask Qwen to perform dangerous tasks, and do not directly use the code interpreter for production purposes.
> </b>
> </div>
2 changes: 0 additions & 2 deletions README_CN.md
@@ -125,7 +125,5 @@ BrowserQwen is a browser assistant application built on Qwen-Agent.

> [!Warning]
> <div align="center">
> <b>
> The code interpreter is not sandboxed and executes code in the deployment environment. Please avoid asking Qwen to perform dangerous tasks, and do not use the code interpreter directly for production purposes.
> </b>
> </div>
4 changes: 2 additions & 2 deletions browser_qwen.md
@@ -83,8 +83,8 @@ If you are using your own model service instead of DashScope, then please execut

```bash
# Specify the model service, and start the database service.
-# Example: Assuming Qwen/Qwen1.5-72B-Chat is deployed at http://localhost:8000 using vLLM, you can specify the model service as:
-# --llm Qwen/Qwen1.5-72B-Chat --model_server http://localhost:8000/v1 --api_key EMPTY
+# Example: Assuming Qwen1.5-72B-Chat is deployed at http://localhost:8000/v1 using vLLM, you can specify the model service as:
+# --llm Qwen1.5-72B-Chat --model_server http://localhost:8000/v1 --api_key EMPTY
python run_server.py --llm {MODEL} --model_server {API_BASE} --workstation_port 7864 --api_key {API_KEY}
```
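For reference, a minimal sketch of how the vLLM deployment assumed above might be started, using vLLM's OpenAI-compatible API server. The Hugging Face model path, served model name, and port are assumptions; adjust them to your setup:

```bash
# Launch vLLM's OpenAI-compatible API server for Qwen1.5-72B-Chat.
# The served model name must match the --llm value passed to run_server.py,
# and the port must match the base URL given via --model_server.
python -m vllm.entrypoints.openai.api_server \
    --model Qwen/Qwen1.5-72B-Chat \
    --served-model-name Qwen1.5-72B-Chat \
    --port 8000

# Sanity check: the endpoint should list Qwen1.5-72B-Chat among the served models.
curl http://localhost:8000/v1/models
```

With the server up, `--model_server http://localhost:8000/v1` points Qwen-Agent at this endpoint; `--api_key EMPTY` works because vLLM does not require an API key unless one is configured.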

4 changes: 2 additions & 2 deletions browser_qwen_cn.md
@@ -77,8 +77,8 @@ python run_server.py --llm qwen-max --model_server dashscope --workstation_port

```bash
# Specify the model service, and start the database service.
-# Example: Assuming Qwen/Qwen1.5-72B-Chat has been deployed at http://localhost:8000 via vLLM, you can specify the model service as:
-# --llm Qwen/Qwen1.5-72B-Chat --model_server http://localhost:8000/v1 --api_key EMPTY
+# Example: Assuming Qwen1.5-72B-Chat has been deployed at http://localhost:8000/v1 via vLLM, you can specify the model service as:
+# --llm Qwen1.5-72B-Chat --model_server http://localhost:8000/v1 --api_key EMPTY
python run_server.py --llm {MODEL} --model_server {API_BASE} --workstation_port 7864 --api_key {API_KEY}
```

