Added diagrams for 410_Scaling
clintongormley committed Aug 7, 2014
1 parent 4e22c49 commit 909879d
Showing 11 changed files with 1,346 additions and 0 deletions.
4 changes: 4 additions & 0 deletions 410_Scaling/15_Shard.asciidoc
@@ -36,6 +36,10 @@ _replica_ shards later on in <<replica-shards>>.
One glorious day, the Internet discovers us, and a single node just can't keep up with
the traffic. We decide to add a second node. What happens?

[[img-one-shard]]
.An index with one shard has no scale factor
image::images/410_15_one_shard.png["An index with one shard has no scale factor"]

The answer is: nothing. Because we have only one shard, there is nothing to
put on the second node. We can't increase the number of shards in the index,
because the number of shards is an important element in the algorithm used to
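The "algorithm" this hunk alludes to is the document-routing formula described in the Elasticsearch docs: `shard = hash(routing) % number_of_primary_shards` (Elasticsearch uses Murmur3 for the hash). A minimal Python sketch, with a toy hash standing in for Murmur3, shows why the shard count cannot change after index creation:

```python
def route_to_shard(routing_value: str, number_of_primary_shards: int) -> int:
    """Pick the primary shard for a document.

    Mirrors shard = hash(routing) % number_of_primary_shards.
    The hash below is an illustrative stand-in, not Murmur3.
    """
    h = sum(ord(c) * 31 ** i for i, c in enumerate(routing_value))
    return h % number_of_primary_shards

# With a single shard, every document lands on shard 0,
# so a second node has nothing to hold.
print(route_to_shard("doc-1", 1))   # always shard 0

# Changing the shard count changes where existing documents
# would route, which is why the number is fixed at creation time.
print(route_to_shard("doc-1", 5))
```

If the modulus changed, previously indexed documents would no longer be found on the shard the formula now points to; hence the fixed shard count.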
4 changes: 4 additions & 0 deletions 410_Scaling/20_Overallocation.asciidoc
@@ -21,6 +21,10 @@ point of view of our application, everything functions as it did before. The
application communicates with the index, not the shards, and there is still
only one index.

[[img-two-shard]]
.An index with two shards can take advantage of a second node
image::images/410_20_two_shards.png["An index with two shards can take advantage of a second node"]

This time, when we add a second node, Elasticsearch will automatically move
one shard from the first node to the second node and, once the relocation has
finished, each shard will have access to twice the computing power that it had
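The relocation described above can be sketched as a simple round-robin placement. This is a deliberate simplification of Elasticsearch's shard allocator (which also weighs disk usage, allocation filters, and so on), but it captures the core idea that shards spread evenly across available nodes:

```python
def spread_shards(num_shards: int, nodes: list[str]) -> dict[str, list[int]]:
    """Assign shards to nodes round-robin -- a simplified stand-in
    for Elasticsearch's rebalancing behavior."""
    placement: dict[str, list[int]] = {node: [] for node in nodes}
    for shard in range(num_shards):
        placement[nodes[shard % len(nodes)]].append(shard)
    return placement

# One node holds both shards; add a second node and one shard relocates.
print(spread_shards(2, ["node-1"]))            # {'node-1': [0, 1]}
print(spread_shards(2, ["node-1", "node-2"]))  # {'node-1': [0], 'node-2': [1]}
```

After the (simulated) relocation, each shard has a whole node to itself, which is the "twice the computing power" the text describes.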
8 changes: 8 additions & 0 deletions 410_Scaling/35_Replica_shards.asciidoc
@@ -31,6 +31,10 @@ POST /my_index/_settings
Having two primary shards, plus a replica of each primary, would give us a
total of four shards: one for each node.

[[img-four-nodes]]
.An index with two primary shards and one replica can scale out across four nodes
image::images/410_35_four_nodes.png["An index with two primary shards and one replica can scale out across four nodes"]

==== Balancing load with replicas

Search performance depends on the response times of the slowest node, so it is a good idea to try to balance out the load across all nodes. If we had
@@ -51,6 +55,10 @@ POST /my_index/_settings
As a bonus, we have also increased our availability. We can now afford to
lose two nodes and still have a copy of all of our data.

[[img-three-nodes]]
.Adjust the number of replicas to balance the load between nodes
image::images/410_35_three_nodes.png["Adjust the number of replicas to balance the load between nodes"]

NOTE: The fact that node 3 holds two replicas and no primaries is not
important. Replicas and primaries do the same amount of work; they just play
slightly different roles. There is no need to ensure that primaries are
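The shard counts in these hunks follow directly from one piece of arithmetic: each primary shard is copied `number_of_replicas` times, so an index holds `primaries * (1 + replicas)` shards in total. A small sketch of that calculation:

```python
def total_shards(primaries: int, replicas_per_primary: int) -> int:
    """Total shards in an index: each primary plus its replica copies."""
    return primaries * (1 + replicas_per_primary)

# Two primaries with one replica each -> four shards,
# one per node on a four-node cluster.
print(total_shards(2, 1))  # 4

# Raising number_of_replicas to 2 gives six shards, and the cluster
# can lose two nodes while still holding a copy of every shard.
print(total_shards(2, 2))  # 6
```

This is why bumping `number_of_replicas` (but never the primary count) is the adjustable lever when balancing load across nodes.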
Binary file added images/410_15_one_shard.png
Binary file added images/410_20_two_shards.png
Binary file added images/410_35_four_nodes.png
Binary file added images/410_35_three_nodes.png
182 changes: 182 additions & 0 deletions svg/410_15_one_shard.svg
