how-to/deploy: add cdc deployment through tiup (pingcap#2938)
* add cdc deployment through tiup

* fix

* fix ci

* fix ci again

* address comments

* add an anchor link

* address comments

Co-authored-by: yikeke <[email protected]>
lichunzhu and yikeke authored May 6, 2020
1 parent fa3bfa3 commit f54ddef
Showing 3 changed files with 213 additions and 22 deletions.
128 changes: 109 additions & 19 deletions how-to/deploy/orchestrated/tiup.md
@@ -300,6 +300,7 @@ category: how-to
- [Scenario 1: Single instance per machine](#场景-1单机单实例)
- [Scenario 2: Multiple instances per machine](#场景-2单机多实例)
- [Scenario 3: Replicate data to the downstream using TiDB Binlog](#场景-3通过-tidb-binlog-同步到下游)
- [Scenario 4: Replicate data to the downstream using TiCDC](#场景-4通过-ticdc-同步到下游)
### Scenario 1: Single instance per machine
@@ -525,6 +526,16 @@ tiflash_servers:
# syncer.to.port: 3306
# - host: 10.0.1.19
# cdc_servers:
# - host: 10.0.1.20
# ssh_port: 22
# port: 8300
# deploy_dir: "/tidb-deploy/cdc-8300"
# log_dir: "/tidb-deploy/cdc-8300/log"
# numa_node: "0,1"
# - host: 10.0.1.21
# - host: 10.0.1.22
monitoring_servers:
- host: 10.0.1.4
# ssh_port: 22
@@ -1081,6 +1092,80 @@ alertmanager_servers:
- host: 10.0.1.4
```
### Scenario 4: Replicate data to the downstream using TiCDC
#### Deployment requirements
Set the default deployment directory to `/tidb-deploy` and the default data directory to `/tidb-data`. TiCDC needs to be started; after the TiCDC cluster is deployed, you can [create a replication task through `cdc cli`](/reference/tools/ticdc/deploy.md#第-2-步创建同步任务).
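For reference, once the cluster is running, a replication task created through `cdc cli` might look like the following sketch. The PD address matches this topology, but the MySQL sink URI (host, port, and credentials) is a placeholder assumption, not a value from this document:

{{< copyable "shell-regular" >}}

```shell
# Create a changefeed that replicates to an assumed downstream MySQL at 10.0.1.11.
cdc cli changefeed create --pd=http://10.0.1.4:2379 --sink-uri="mysql://root:123456@10.0.1.11:3306/"
```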
#### Topology
| Instance | Count | Physical machine configuration | IP | Configuration |
| :-- | :-- | :-- | :-- | :-- |
| TiKV | 3 | 16 VCore 32 GB | 10.0.1.1 <br> 10.0.1.2 <br> 10.0.1.3 | Default port configuration |
| TiDB | 3 | 16 VCore 32 GB | 10.0.1.7 <br> 10.0.1.8 <br> 10.0.1.9 | Default port configuration |
| PD | 3 | 4 VCore 8 GB | 10.0.1.4 <br> 10.0.1.5 <br> 10.0.1.6 | Default port configuration |
| TiFlash | 1 | 32 VCore 64 GB | 10.0.1.10 | Default ports <br> Custom deployment directory; set the data_dir parameter to `/data1/tiflash/data,/data2/tiflash/data` for a [multi-disk deployment](/reference/tiflash/configuration.md#多盘部署) |
| CDC | 3 | 8 VCore 16 GB | 10.0.1.6 <br> 10.0.1.7 <br> 10.0.1.8 | Default port configuration |
#### Configuration file template: topology.yaml
> **Note:**
>
> - When using the configuration file template, you only need to modify the IPs if you do not need to customize ports or directories.
>
> - To [deploy TiFlash](/reference/tiflash/deploy.md), set `replication.enable-placement-rules` to `true` in the topology.yaml configuration file to enable PD's [Placement Rules](/how-to/configure/placement-rules.md) feature.
>
> - The instance-level `"-host"` configuration under tiflash_servers currently supports only IP addresses, not domain names.
>
> - For detailed descriptions of TiFlash parameters, see [TiFlash parameters](#tiflash-参数).
{{< copyable "shell-regular" >}}
```shell
cat topology.yaml
```
```yaml
# # Global variables are applied to all deployments and used as the default value of
# # the deployments if a specific deployment value is missing.
global:
user: "tidb"
ssh_port: 22
deploy_dir: "/tidb-deploy"
data_dir: "/tidb-data"
server_configs:
pd:
replication.enable-placement-rules: true
pd_servers:
- host: 10.0.1.4
- host: 10.0.1.5
- host: 10.0.1.6
tidb_servers:
- host: 10.0.1.7
- host: 10.0.1.8
- host: 10.0.1.9
tikv_servers:
- host: 10.0.1.1
- host: 10.0.1.2
- host: 10.0.1.3
tiflash_servers:
- host: 10.0.1.10
data_dir: /data1/tiflash/data,/data2/tiflash/data
cdc_servers:
- host: 10.0.1.6
- host: 10.0.1.7
- host: 10.0.1.8
monitoring_servers:
- host: 10.0.1.4
grafana_servers:
- host: 10.0.1.4
alertmanager_servers:
- host: 10.0.1.4
```
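Once this topology file is in place, deployment follows the same commands as the other scenarios. As a minimal sketch (the cluster name `tidb-test` and the SSH key path are assumptions):

{{< copyable "shell-regular" >}}

```shell
# Deploy the cluster from the topology above, then start it.
tiup cluster deploy tidb-test v4.0.0-rc.1 topology.yaml --user root -i ~/.ssh/id_rsa
tiup cluster start tidb-test
```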
## Run the deployment commands
### Deployment command overview
@@ -1418,6 +1503,7 @@ tiup cluster destroy tidb-test
| PD | peer_port | 2380 | Communication port between PD cluster nodes |
| Pump | port | 8250 | Pump communication port |
| Drainer | port | 8249 | Drainer communication port |
| CDC | port | 8300 | CDC communication port |
| Prometheus | port | 9090 | Prometheus service communication port |
| Node_exporter | node_exporter_port | 9100 | Communication port for reporting system information of every node in the TiDB cluster |
| Blackbox_exporter | blackbox_exporter_port | 9115 | Blackbox_exporter communication port, used for port monitoring in the TiDB cluster |
@@ -1564,6 +1650,7 @@ v4.0.0-beta 2020-03-13T12:43:55.508190493+08:00 linux/amd64,darwin/amd64
v4.0.0-beta.1 2020-03-13T12:30:08.913759828+08:00 linux/amd64,darwin/amd64
v4.0.0-beta.2 2020-03-18T22:52:00.830626492+08:00 linux/amd64,darwin/amd64
v4.0.0-rc YES 2020-04-17T01:22:03+08:00 linux/amd64,darwin/amd64
v4.0.0-rc.1 2020-04-29T01:03:31+08:00 darwin/amd64,linux/amd64,linux/arm64
nightly 2020-04-18T08:54:10+08:00 darwin/amd64,linux/amd64
```
@@ -1579,25 +1666,28 @@ tiup list
```log
Available components (Last Modified: 2020-02-27T15:20:35+08:00):
Name Installed Platforms Description
---- --------- --------- -----------
tidb YES(v4.0.0-rc) darwin/amd64,linux/amd64 TiDB is an open source distributed HTAP database compatible with the MySQL protocol
tikv YES(v4.0.0-rc) darwin/amd64,linux/amd64 Distributed transactional key-value database, originally created to complement TiDB
pd YES(v4.0.0-rc) darwin/amd64,linux/amd64 PD is the abbreviation for Placement Driver. It is used to manage and schedule the TiKV cluster
playground YES(v0.0.5) darwin/amd64,linux/amd64 Bootstrap a local TiDB cluster
client darwin/amd64,linux/amd64 A simple mysql client to connect TiDB
prometheus darwin/amd64,linux/amd64 The Prometheus monitoring system and time series database.
tpc darwin/amd64,linux/amd64 A toolbox to benchmark workloads in TPC
package darwin/amd64,linux/amd64 A toolbox to package tiup component
grafana linux/amd64,darwin/amd64 Grafana is the open source analytics & monitoring solution for every database
alertmanager darwin/amd64,linux/amd64 Prometheus alertmanager
blackbox_exporter darwin/amd64,linux/amd64 Blackbox prober exporter
node_exporter darwin/amd64,linux/amd64 Exporter for machine metrics
pushgateway darwin/amd64,linux/amd64 Push acceptor for ephemeral and batch jobs
tiflash linux/amd64 The TiFlash Columnar Storage Engine
drainer linux/amd64 The drainer componet of TiDB binlog service
pump linux/amd64 The pump componet of TiDB binlog service
cluster YES(v0.4.6) linux/amd64,darwin/amd64 Deploy a TiDB cluster for production
Name Installed Platforms Description
---- --------- --------- -----------
tidb darwin/amd64,linux/amd64,linux/arm64 TiDB is an open source distributed HTAP database compatible with the MySQL protocol
tikv darwin/amd64,linux/amd64,linux/arm64 Distributed transactional key-value database, originally created to complement TiDB
pd darwin/amd64,linux/amd64,linux/arm64 PD is the abbreviation for Placement Driver. It is used to manage and schedule the TiKV cluster
playground darwin/amd64,linux/amd64 Bootstrap a local TiDB cluster
client darwin/amd64,linux/amd64 A simple mysql client to connect TiDB
prometheus darwin/amd64,linux/amd64,linux/arm64 The Prometheus monitoring system and time series database.
package darwin/amd64,linux/amd64 A toolbox to package tiup component
grafana darwin/amd64,linux/amd64,linux/arm64 Grafana is the open source analytics & monitoring solution for every database
alertmanager darwin/amd64,linux/amd64,linux/arm64 Prometheus alertmanager
blackbox_exporter darwin/amd64,linux/amd64,linux/arm64 Blackbox prober exporter
node_exporter darwin/amd64,linux/amd64,linux/arm64 Exporter for machine metrics
pushgateway darwin/amd64,linux/amd64,linux/arm64 Push acceptor for ephemeral and batch jobs
drainer darwin/amd64,linux/amd64,linux/arm64 The drainer componet of TiDB binlog service
pump darwin/amd64,linux/amd64,linux/arm64 The pump componet of TiDB binlog service
cluster YES(v0.6.0) darwin/amd64,linux/amd64 Deploy a TiDB cluster for production
mirrors darwin/amd64,linux/amd64 Build a local mirrors and download all selected components
bench darwin/amd64,linux/amd64 Benchmark database with different workloads
doc darwin/amd64,linux/amd64 Online document for TiDB
ctl darwin/amd64,linux/amd64,linux/arm64
cdc darwin/amd64,linux/amd64,linux/arm64
```
### How to check whether the NTP service is working properly
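A minimal check on a deployment machine, assuming the `ntpstat` utility and an `ntpd` systemd service are present (both are assumptions about the target OS):

{{< copyable "shell-regular" >}}

```shell
# Check whether the local clock is synchronized to an NTP server.
ntpstat
# Check whether the ntpd service is running.
sudo systemctl status ntpd.service
```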
6 changes: 3 additions & 3 deletions how-to/scale/with-tiup.md
@@ -7,7 +7,7 @@ category: how-to

A TiDB cluster can be scaled out and scaled in without affecting online services.

This document describes how to use TiUP to scale out or scale in the TiDB, TiKV, PD, or TiFlash nodes of a cluster. If TiUP is not installed, you can follow [the steps in the upgrade document](/how-to/upgrade/using-tiup.md#2-在中控机器上安装-tiup) to import the cluster into the TiUP environment, and then scale it out or in.
This document describes how to use TiUP to scale out or scale in the TiDB, TiKV, PD, TiCDC, or TiFlash nodes of a cluster. If TiUP is not installed, you can follow [the steps in the upgrade document](/how-to/upgrade/using-tiup.md#2-在中控机器上安装-tiup) to import the cluster into the TiUP environment, and then scale it out or in.

You can view the list of current cluster names with `tiup cluster list`, for example:
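{{< copyable "shell-regular" >}}

```shell
tiup cluster list
```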

@@ -27,7 +27,7 @@ A TiDB cluster can be scaled out and scaled in without affecting online services.

> **Note:**
>
> The steps for adding TiKV and PD nodes are similar to those for adding TiDB nodes.
> The steps for adding TiKV, PD, and TiCDC nodes are similar to those for adding TiDB nodes.
### 1.1 Write the scale-out topology configuration

@@ -126,7 +126,7 @@ tiup cluster display <cluster-name>

> **Note:**
>
> The steps for removing TiDB and PD nodes are similar to those for removing TiKV nodes.
> The steps for removing TiDB, PD, and TiCDC nodes are similar to those for removing TiKV nodes.
### 3.1 View node ID information

101 changes: 101 additions & 0 deletions reference/tools/ticdc/deploy.md
@@ -9,6 +9,107 @@ category: reference

## Step 1: Deploy a TiCDC cluster

This section describes how to install and deploy TiCDC in different scenarios, including the following:

- [Deploy TiCDC on a new cluster using TiUP](#使用-tiup-全新部署-ticdc)
- [Add the TiCDC component to an existing TiDB cluster using TiUP](#使用-tiup-在原有-tidb-集群上新增-ticdc-组件)
- [Manually add the TiCDC component to an existing TiDB cluster](#手动在原有-tidb-集群上新增-ticdc-组件)

### Deploy TiCDC on a new cluster using TiUP

TiUP cluster is a deployment tool for TiDB 4.0 and later versions. To deploy and run TiCDC, you must use TiDB v4.0.0-rc.1 or later. The deployment process is as follows:

1. Install TiUP by referring to the [TiUP deployment document](/how-to/deploy/orchestrated/tiup.md).

2. Install the TiUP cluster component:

{{< copyable "shell-regular" >}}

```shell
tiup cluster
```

3. Write the topology configuration file and save it as `topology.yaml`.

You can refer to the [full configuration file template](https://github.com/pingcap-incubator/tiup-cluster/blob/master/examples/topology.example.yaml).

In addition to the configuration for deploying a TiDB cluster, you also need to configure the IP addresses of the CDC servers under `cdc_servers` (currently only IP addresses are supported, not domain names).

{{< copyable "" >}}

```yaml
pd_servers:
- host: 172.19.0.101
- host: 172.19.0.102
- host: 172.19.0.103
tidb_servers:
- host: 172.19.0.101
tikv_servers:
- host: 172.19.0.101
- host: 172.19.0.102
- host: 172.19.0.103
cdc_servers:
- host: 172.19.0.101
- host: 172.19.0.102
- host: 172.19.0.103
```

4. Follow the TiUP deployment process to complete the remaining steps of the cluster deployment, including:

Deploy the TiDB cluster, where test is the cluster name:

{{< copyable "shell-regular" >}}

```shell
tiup cluster deploy test v4.0.0-rc.1 topology.yaml -i ~/.ssh/id_rsa
```

Start the TiDB cluster:

{{< copyable "shell-regular" >}}

```shell
tiup cluster start test
```

5. Check the cluster status:

{{< copyable "shell-regular" >}}

```shell
tiup cluster display test
```

### Add the TiCDC component to an existing TiDB cluster using TiUP

1. First confirm that the current TiDB version supports TiCDC; otherwise, upgrade the TiDB cluster to v4.0.0-rc.1 or later first.

2. Deploy TiCDC by referring to the [Scale out a TiDB/TiKV/PD node](/how-to/scale/with-tiup.md#1-扩容-tidbtikvpd-节点) section.
An example scale-out configuration file:

```shell
vi scale-out.yaml
```

```yaml
cdc_servers:
- host: 10.0.1.5
- host: 10.0.1.6
- host: 10.0.1.7
```
Then run the scale-out command:
{{< copyable "shell-regular" >}}
```shell
tiup cluster scale-out <cluster-name> scale-out.yaml
```
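After scaling out, you can confirm that the new TiCDC nodes have joined the cluster, using the same `<cluster-name>` as above:

{{< copyable "shell-regular" >}}

```shell
tiup cluster display <cluster-name>
```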

### Manually add the TiCDC component to an existing TiDB cluster

Suppose the PD cluster has a PD node that can provide services (client URL: `10.0.10.25:2379`). To deploy three TiCDC nodes, start the cluster with the following commands. You only need to specify the same PD address, and the newly started nodes will automatically join the TiCDC cluster.

{{< copyable "shell-regular" >}}
