Commit f0a69d4

Authored Oct 21, 2024

pd: add note for member leader_priority (pingcap#19165)

1 parent 58fef76

4 files changed: +17 −3 lines
 

‎dr-multi-replica.md (+1 −1)

```diff
@@ -122,7 +122,7 @@ In this example, TiDB contains five replicas and three regions. Region 1 is the
 
 > **Note:**
 >
-> The greater the priority number, the higher the probability that this node becomes the leader.
+> In all available PD nodes, the node with the highest priority number becomes the leader.
 
 3. Create placement rules and fix the primary replica of the test table to region 1:
```

‎pd-control.md (+14 −0)

````diff
@@ -471,6 +471,20 @@ Success!
 ......
 ```
 
+Specify the priority of PD leader:
+
+```bash
+member leader_priority pd-1 4
+member leader_priority pd-2 3
+member leader_priority pd-3 2
+member leader_priority pd-4 1
+member leader_priority pd-5 0
+```
+
+> **Note:**
+>
+> In all available PD nodes, the node with the highest priority number becomes the leader.
+
 ### `operator [check | show | add | remove]`
 
 Use this command to view and control the scheduling operation.
````
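The note added above can be sketched as a toy election rule (a hypothetical illustration only, not PD's actual etcd-based leader election): among the PD nodes that are currently available, the one with the highest `leader_priority` number wins.

```python
# Toy model of the priority rule described in the note above: among the
# available PD nodes, the one with the highest leader_priority number
# becomes the leader. Illustration only; PD's real election differs.

def elect_leader(priorities, available):
    """Return the available node with the highest priority number, or None."""
    candidates = {node: p for node, p in priorities.items() if node in available}
    if not candidates:
        return None
    return max(candidates, key=candidates.get)

# Priorities as set by the `member leader_priority` commands in the diff.
priorities = {"pd-1": 4, "pd-2": 3, "pd-3": 2, "pd-4": 1, "pd-5": 0}

print(elect_leader(priorities, {"pd-1", "pd-2", "pd-3", "pd-4", "pd-5"}))  # pd-1
print(elect_leader(priorities, {"pd-3", "pd-4", "pd-5"}))                  # pd-3
```

If `pd-1` goes down, the leadership moves to the highest-priority node among the remaining available ones, which is the behavior the new note documents.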

‎placement-rules-in-sql.md (+1 −1)

````diff
@@ -409,7 +409,7 @@ You can specify a specific distribution of Leaders and Followers using constrain
 If you have specific requirements for the distribution of Raft Leaders among nodes, you can specify the placement policy using the following statement:
 
 ```sql
-CREATE PLACEMENT POLICY deploy221_primary_east1 LEADER_CONSTRAINTS="[+region=us-east-1]" FOLLOWER_CONSTRAINTS='{"+region=us-east-1": 1, "+region=us-east-2": 2, "+region=us-west-1: 1}';
+CREATE PLACEMENT POLICY deploy221_primary_east1 LEADER_CONSTRAINTS="[+region=us-east-1]" FOLLOWER_CONSTRAINTS='{"+region=us-east-1": 1, "+region=us-east-2": 2, "+region=us-west-1": 1}';
 ```
 
 After this placement policy is created and attached to the desired data, the Raft Leader replicas of the data will be placed in the `us-east-1` region specified by the `LEADER_CONSTRAINTS` option, while other replicas of the data will be placed in regions specified by the `FOLLOWER_CONSTRAINTS` option. Note that if the cluster fails, such as a node outage in the `us-east-1` region, a new Leader will still be elected from other regions, even if these regions are specified in `FOLLOWER_CONSTRAINTS`. In other words, ensuring service availability takes the highest priority.
````
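The one-character fix in this hunk matters because the dictionary form of `FOLLOWER_CONSTRAINTS` uses JSON-style key quoting, and the missing closing quote after `us-west-1` makes the value unparseable. A quick check, using Python's `json` module as a stand-in strict parser (an illustration; TiDB's actual constraint parser is its own):

```python
import json

# FOLLOWER_CONSTRAINTS value before and after the fix in the diff above.
# json.loads stands in for a strict parser; illustration only.
broken = '{"+region=us-east-1": 1, "+region=us-east-2": 2, "+region=us-west-1: 1}'
fixed = '{"+region=us-east-1": 1, "+region=us-east-2": 2, "+region=us-west-1": 1}'

try:
    json.loads(broken)
    print("broken string parsed (unexpected)")
except json.JSONDecodeError:
    # The string literal opened at "+region=us-west-1 never closes.
    print("broken string rejected: unterminated key")

print(json.loads(fixed))
```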

‎three-data-centers-in-two-cities-deployment.md (+1 −1)

````diff
@@ -197,7 +197,7 @@ In the deployment of three AZs in two regions, to optimize performance, you need
 >
 > Starting from TiDB v5.2, the `label-property` configuration is not supported by default. To set the replica policy, use the [placement rules](/configure-placement-rules.md).
 
-- Configure the priority of PD. To avoid the situation where the PD leader is in another region (AZ3), you can increase the priority of local PD (in Seattle) and decrease the priority of PD in another region (San Francisco). The larger the number, the higher the priority. In all available PD nodes, the node with the highest priority number becomes the leader.
+- Configure the priority of PD. To avoid the situation where the PD leader is in another region (AZ3), you can increase the priority of local PD (in Seattle) and decrease the priority of PD in another region (San Francisco). The larger the number, the higher the priority. In all available PD nodes, the node with the highest priority number becomes the leader.
 
 ```bash
 member leader_priority PD-10 5
````
