Merge pull request cnp3#21 from gauone/master
Fixed typos
obonaventure authored Dec 16, 2019
2 parents feb750f + 94d4bf3 commit be5765e
Showing 5 changed files with 11 additions and 11 deletions.
4 changes: 2 additions & 2 deletions principles/sharing.rst
@@ -23,7 +23,7 @@ The first and more important resource inside a network is the link bandwidth. Th

A Full mesh network

- The full-mesh is the most reliable and highest performing network to interconnect these five hosts. However, this network organization has two important drawbacks. First, if a network contains `n` hosts, then :math:`\frac{n\times(n-1)}{2}` links are required. If the network contains more than a few hosts, it becomes impossible to lay down the required physical links. Second, if the network contains `n` hosts, then each host must have :math:`n-1` interfaces to terminate :math:`n-1` links. This is beyond the capabilities of most hosts. Furthermore, if a new host is added to the network, new links have to be laid down and one interface has to be added to each participating host. However, full-mesh has the advantage of providing the lowest delay between the hosts and the best resiliency against link failures. In practice, full-mesh networks are rarely used expected when there are few network nodes and resiliency is key.
+ The full-mesh is the most reliable and highest performing network to interconnect these five hosts. However, this network organization has two important drawbacks. First, if a network contains `n` hosts, then :math:`\frac{n\times(n-1)}{2}` links are required. If the network contains more than a few hosts, it becomes impossible to lay down the required physical links. Second, if the network contains `n` hosts, then each host must have :math:`n-1` interfaces to terminate :math:`n-1` links. This is beyond the capabilities of most hosts. Furthermore, if a new host is added to the network, new links have to be laid down and one interface has to be added to each participating host. However, full-mesh has the advantage of providing the lowest delay between the hosts and the best resiliency against link failures. In practice, full-mesh networks are rarely used except when there are few network nodes and resiliency is key.
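The quadratic growth described in this paragraph is easy to check numerically. A minimal sketch (the function name is illustrative, not from the book's code):

```python
def full_mesh_cost(n):
    """Links and per-host interfaces needed for a full mesh of n hosts."""
    links = n * (n - 1) // 2          # every pair of hosts gets one link
    interfaces_per_host = n - 1       # each host terminates n-1 links
    return links, interfaces_per_host

# For the five hosts of the example: 10 links, 4 interfaces per host.
print(full_mesh_cost(5))    # (10, 4)
print(full_mesh_cost(100))  # (4950, 99): already impractical to cable
```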

The second possible physical organization, which is also used inside computers to connect different extension cards, is the bus. In a bus network, all hosts are attached to a shared medium, usually a cable through a single interface. When one host sends an electrical signal on the bus, the signal is received by all hosts attached to the bus. A drawback of bus-based networks is that if the bus is physically cut, then the network is split into two isolated networks. For this reason, bus-based networks are sometimes considered to be difficult to operate and maintain, especially when the cable is long and there are many places where it can break. Such a bus-based topology was used in early Ethernet networks.

@@ -1033,7 +1033,7 @@ Our last example is a window of four segments. These segments are sent at :math:

.. index:: congestion window

- From the above example, we can adjust the transmission rate by adjusting the sending window of a reliable transport protocol. A reliable transport protocol cannot send data faster than :math:`\frac{window}{rtt}` where :math:`window` is current sending window. To control the transmission rate, we introduce a `congestion window`. This congestion window limits the sending window. A any time, the sending window is restricted to :math:`\min(swin,cwin)`, where `swin` is the sending window and `cwin` the current `congestion window`. Of course, the window is further constrained by the receive window advertised by the remote peer. With the utilization of a congestion window, a simple reliable transport protocol that uses fixed size segments could implement `AIMD` as follows.
+ From the above example, we can adjust the transmission rate by adjusting the sending window of a reliable transport protocol. A reliable transport protocol cannot send data faster than :math:`\frac{window}{rtt}` where :math:`window` is the current sending window. To control the transmission rate, we introduce a `congestion window`. This congestion window limits the sending window. At any time, the sending window is restricted to :math:`\min(swin,cwin)`, where `swin` is the sending window and `cwin` the current `congestion window`. Of course, the window is further constrained by the receive window advertised by the remote peer. With the utilization of a congestion window, a simple reliable transport protocol that uses fixed size segments could implement `AIMD` as follows.

For the `Additive Increase` part our simple protocol would simply increase its `congestion window` by one segment every round-trip-time. The
`Multiplicative Decrease` part of `AIMD` could be implemented by halving the congestion window when congestion is detected. For simplicity, we assume that congestion is detected thanks to a binary feedback and that no segments are lost. We will discuss in more details how losses affect a real transport protocol like TCP.
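The AIMD rule sketched in these two paragraphs can be written in a few lines. This is a minimal sketch assuming fixed-size segments, one window update per round-trip-time, and a binary congestion signal (function names are illustrative):

```python
def aimd_update(cwin, congestion_detected, min_cwin=1):
    """Update a congestion window (in segments), once per round-trip-time."""
    if congestion_detected:
        # Multiplicative Decrease: halve the window, keep at least one segment
        return max(min_cwin, cwin // 2)
    # Additive Increase: grow by one segment per round-trip-time
    return cwin + 1

def sending_window(swin, cwin, rwin):
    # never send faster than min(swin, cwin), further limited by the
    # receive window advertised by the remote peer
    return min(swin, cwin, rwin)

# evolution of the congestion window over a few round-trip-times
cwin = 4
for congestion in [False, False, True, False]:
    cwin = aimd_update(cwin, congestion)
print(cwin)  # 4 -> 5 -> 6 -> 3 -> 4
```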
4 changes: 2 additions & 2 deletions protocols/dnssec.rst
@@ -107,7 +107,7 @@ her implementation. From a security viewpoint there is a clear benefit
since the attacker needs to guess both the 16 bits `Identifier` and the
16 bits `UDP source port` to inject a fake DNS response. To generate all
possible DNS responses, the attacker would need to generate almost
- :math:`2^32` different messages, which is excessive in today's networks.
+ :math:`2^{32}` different messages, which is excessive in today's networks.
Most DNS implementations use this second approach to prevent these cache
poisoning attacks.
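The search space the off-path attacker faces follows directly from the two randomized 16-bit fields; a quick sketch of the arithmetic:

```python
# Candidate responses an attacker must generate to spoof a DNS answer.
id_space = 2 ** 16    # 16-bit Identifier field
port_space = 2 ** 16  # 16-bit randomized UDP source port

print(id_space)               # 65536: trivial to brute-force on its own
print(id_space * port_space)  # 4294967296 = 2**32 with both randomized
```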

@@ -275,7 +275,7 @@ Current DNS deployments allow resolvers to cache those negative answers
to reduce the load on the entire DNS :rfc:`2308`.

The simplest way to allow a DNSSEC server to return signed negative responses
- would be for the serve to return a signed response that contains the
+ would be for the server to return a signed response that contains the
received query and some information indicating the error.
The client could then easily check the validity of the negative response.
Unfortunately, this would force the DNSSEC server to generate signatures
4 changes: 2 additions & 2 deletions protocols/http2.rst
@@ -129,7 +129,7 @@ Let us first examine how HTTP/2.0 structures the bytestream of the underlying co

The HTTP/2.0 Frame header

- The information exchanged over an HTTP/2.0 session is composed of frames. A frame starts with a 9 bytes-long header that carries several types of information. The HTTP/2.0 frames have a variable length. The `Length` field of the header contains the length of the frame payload in bytes. As this field is encoded as a 24 bytes field, an HTTP/2.0 frame cannot be longer than :math:`2^{24}` bytes. It should be noted that :rfc:`7540` assumes a maximum size of :math:`2^{14}` bytes, i.e. 16,384 bytes for the HTTP/2.0 frame payload unless a longer maximum frame length has been negotiated at the beginning of the session using the HTTP/2.0 `Settings` frame that will be described later. The next field of the frames header indicates the frame type. The first frame types are `Data` which contains data from web objects and `Headers` containing HTTP/2.0 headers. When a client retrieves a web object from a server, it always receives an HTTP/2.0 `Headers` frame followed by an HTTP/2.0 `Data` frame. The `Headers` frame information contains essentially the same HTTP headers as the ones supported by HTTP/1.1, but those are encoded by leveraging a data compression technique that minimizes the number of bytes required to transmit them.
+ The information exchanged over an HTTP/2.0 session is composed of frames. A frame starts with a 9 bytes-long header that carries several types of information. The HTTP/2.0 frames have a variable length. The `Length` field of the header contains the length of the frame payload in bytes. As this field is encoded as a 24 bits field, an HTTP/2.0 frame cannot be longer than :math:`2^{24} -1` bytes. It should be noted that :rfc:`7540` assumes a maximum size of :math:`2^{14}` bytes, i.e. 16,384 bytes for the HTTP/2.0 frame payload unless a longer maximum frame length has been negotiated at the beginning of the session using the HTTP/2.0 `Settings` frame that will be described later. The next field of the frames header indicates the frame type. The first frame types are `Data` which contains data from web objects and `Headers` containing HTTP/2.0 headers. When a client retrieves a web object from a server, it always receives an HTTP/2.0 `Headers` frame followed by an HTTP/2.0 `Data` frame. The `Headers` frame information contains essentially the same HTTP headers as the ones supported by HTTP/1.1, but those are encoded by leveraging a data compression technique that minimizes the number of bytes required to transmit them.

Other frame types are described later. The `Flags` are used for some frame types and the `R` bit must be set to zero. The last important field of the `HTTP/2.0 Frame` header is the `Stream Identifier`. With HTTP/2.0, the bytestream of the underlying transport connection is divided in independent streams that are identified by an integer. The odd (resp. even) stream identifiers are managed by the client (resp. server). This enables the server (or the client) to multiplex data corresponding to different frames over a single bytestream.
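The 9-byte header layout described above (24-bit `Length`, 8-bit type, 8-bit `Flags`, then the `R` bit followed by a 31-bit `Stream Identifier`) can be sketched as a small parser; this is an illustration of the layout, not production code:

```python
import struct

def parse_http2_frame_header(header: bytes):
    """Split a 9-byte HTTP/2 frame header into its fields."""
    if len(header) != 9:
        raise ValueError("an HTTP/2 frame header is exactly 9 bytes long")
    # 24-bit Length: prepend a zero byte and read a 32-bit big-endian integer
    length = struct.unpack("!I", b"\x00" + header[0:3])[0]
    frame_type, flags = header[3], header[4]
    stream_field = struct.unpack("!I", header[5:9])[0]
    r_bit = stream_field >> 31             # reserved, must be zero
    stream_id = stream_field & 0x7FFFFFFF  # 31-bit stream identifier
    return length, frame_type, flags, r_bit, stream_id

# A Headers frame (type 0x1, flags 0x4) of 20 bytes on client stream 1
hdr = b"\x00\x00\x14" + b"\x01" + b"\x04" + b"\x00\x00\x00\x01"
print(parse_http2_frame_header(hdr))  # (20, 1, 4, 0, 1)
```

Note how the odd stream identifier marks a client-initiated stream, as the paragraph above explains.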

@@ -200,7 +200,7 @@ The length of the HTTP/2.0 frames obviously affects how different web objects ca

The HTTP/2.0 streams can provide performance benefits, but they also increase the complexity of the implementations since an HTTP/2.0 receiver must be able to simultaneously process frames that correspond to different web objects. This complexity mainly resides on the client side. The HTTP/2.0 protocol includes several techniques that enable clients to manage the utilization of the HTTP/2.0 session.

- The first frame that a client sends over an HTTP/2.0 session is the `Settings` frame. This is a control frame that indicates some parameters that the client proposes for this session. Several of these parameters are defined in :rfc:`7540`. The most important ones are probably the `SETTINGS_MAX_FRAME_SIZE` that specifies the maximum length of the HTTP/2.0 frames that this implementation supports and the `SETTINGS_MAX_CONCURRENT_STREAMS` that specifies the maximum number of parallel streams that this implementation can manage. The `SETTINGS_MAX_FRAME_SIZE` must be at least :math:`2^{14}` bytes but can go up to :math:`2^{24}-1` bytes. There is no minimum value for `SETTINGS_MAX_CONCURRENT_STREAMS`, but :rfc:`7540` recommends to support at least 100 different stream identifiers.
+ The first frame that a client sends over an HTTP/2.0 session is the `Settings` frame. This is a control frame that indicates some parameters that the client proposes for this session. Several of these parameters are defined in :rfc:`7540`. The most important ones are probably the `SETTINGS_MAX_FRAME_SIZE` that specifies the maximum length of the HTTP/2.0 frames that this implementation supports and the `SETTINGS_MAX_CONCURRENT_STREAMS` that specifies the maximum number of parallel streams that this implementation can manage. The `SETTINGS_MAX_FRAME_SIZE` must be at least :math:`2^{14}` bytes but can go up to :math:`2^{24} -1` bytes. There is no minimum value for `SETTINGS_MAX_CONCURRENT_STREAMS`, but :rfc:`7540` recommends to support at least 100 different stream identifiers.
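The bounds that :rfc:`7540` places on `SETTINGS_MAX_FRAME_SIZE` can be captured in a one-line check; a minimal sketch:

```python
def valid_max_frame_size(value: int) -> bool:
    """RFC 7540: SETTINGS_MAX_FRAME_SIZE must lie in [2**14, 2**24 - 1]."""
    return 2 ** 14 <= value <= 2 ** 24 - 1

print(valid_max_frame_size(16384))    # True: the minimum (and default)
print(valid_max_frame_size(2 ** 24))  # False: one past the largest value
```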

By using multiple streams, the server can multiplex different web objects over the same underlying transport connection. However, these objects are only sent in response to requests from clients. There are some situations where the server might know in advance that the client will request a given object. It could speedup the transfer by sending it before having received a client request. This is the `push` feature of HTTP/2.0. A server can independently push web objects to a client without having received any request. This feature can only be used by the server if the client has enabled it by sending `SETTINGS_ENABLE_PUSH` in its `Settings` frame. A classical use case for this `push` feature is to enable a server to automatically send an object which cannot be cached by the client, such as a dynamic javascript code, when another web object that references it is requested. However, measurement studies indicate that very few web servers seem to have adopted this feature [ZWH2018]_.

8 changes: 4 additions & 4 deletions protocols/ssh.rst
@@ -49,11 +49,11 @@ similar protocols [Ylonen1996]_. :term:`ssh` became quickly popular and system
administrators encouraged its usage. The original version of :term:`ssh`
was freely available. After a few years, his author created a company
to distribute it commercially, but other programmers continued to
- develop an open-source version of :term`ssh` called
+ develop an open-source version of :term:`ssh` called
`OpenSSH <http://www.openssh.com>`_.
Over the years, :term:`ssh` evolved
and became a flexible applicable whose usage extends beyond remote
- login to support features such as file transfers, protocol tunneling, ..
+ login to support features such as file transfers, protocol tunneling, ...
In this section, we only discuss the basic features of :term:`ssh` and explain
how it differs from :term:`telnet`. Entire books have been written to describe
:term:`ssh` in details [BS2005]_. An overview of the protocol
@@ -238,7 +238,7 @@ that are encoded according to the Binary Packet Protocol defined in
the message counter and the cleartext message.
The message counter is not transmitted,
but the recipient can easily recover its value. The ``MAC`` is computed as
- :math:`mac = MAC(key, sequence_number || unencrypted_message)` where the
+ :math:`mac = MAC(key, sequence\_number || unencrypted\_message)` where the
key is the negotiated authentication key.
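The computation above can be sketched with HMAC, one of the MAC constructions that :term:`ssh` negotiates (the choice of SHA-1 here is illustrative):

```python
import hashlib
import hmac
import struct

def compute_ssh_mac(key: bytes, sequence_number: int, message: bytes) -> bytes:
    """mac = MAC(key, sequence_number || unencrypted_message).

    The 32-bit sequence number is never transmitted: both peers count
    packets, so the recipient can recover its value locally.
    """
    data = struct.pack("!I", sequence_number) + message
    return hmac.new(key, data, hashlib.sha1).digest()

mac = compute_ssh_mac(b"negotiated-key", 0, b"hello")
print(len(mac))  # 20: HMAC-SHA1 produces a 20-byte tag
```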

.. index:: HMAC
@@ -253,7 +253,7 @@ that are encoded according to the Binary Packet Protocol defined in
It works with any hash function (`H`) and a key (`K`). As an example, let
us consider HMAC with the SHA-1 hash function. SHA-1 uses 20 bytes
blocks and the block size will play an important role in the operation
- of HMAC. We first require the key to as long as the block size. Since this
+ of HMAC. We first require the key to be as long as the block size. Since this
key is the output of the key generation algorithm, this is one parameter
of this algorithm.
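The HMAC construction, H((K xor opad) || H((K xor ipad) || m)), is short enough to write out and compare against the standard library; this sketch queries hashlib for the hash function's internal block size and zero-pads the key to one full block:

```python
import hashlib
import hmac

def hmac_sha1(key: bytes, message: bytes) -> bytes:
    """HMAC: H((K ^ opad) || H((K ^ ipad) || message))."""
    block_size = hashlib.sha1().block_size   # the hash's internal block size
    if len(key) > block_size:
        key = hashlib.sha1(key).digest()     # overly long keys are hashed first
    key = key.ljust(block_size, b"\x00")     # zero-pad the key to one block
    ipad = bytes(b ^ 0x36 for b in key)
    opad = bytes(b ^ 0x5C for b in key)
    inner = hashlib.sha1(ipad + message).digest()
    return hashlib.sha1(opad + inner).digest()

# must match the standard-library implementation
assert hmac_sha1(b"k", b"m") == hmac.new(b"k", b"m", hashlib.sha1).digest()
```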

2 changes: 1 addition & 1 deletion protocols/tls.rst
@@ -410,7 +410,7 @@ To simplify both the design and the implementations, TLS 1.3 uses only a small n

By supporting only cipher suites that provide Perfect Forward Secrecy in TLS 1.3, the IETF aims at protecting the privacy of users against a wide range of attacks. However, this choice has resulted in intense debates in some enterprises. Some enterprises, notably in financial organizations, have deployed TLS, but wish to be able to decrypt TLS traffic for various security-related activities. These enterprises tried to lobby within the IETF to maintain RSA-based cipher suites that do not provide Perfect Forward Secrecy. Their arguments did not convince the IETF. Eventually, these enterprises moved to ETSI, another standardization body, and convinced them to adopt `entreprise TLS`, a variant of TLS 1.3 that does not provide Perfect Forward Secrecy [eTLS2018]_.

- The TLS 1.3 handshake differs from the TLS 1.2 handshake in several ways. First the TLS 1.3 handshake requires a single round-trip-time when the client connects for the first time to a server. To achieve this, the TLS designers look at the TLS 1.2 handshake in details and found that the first round-trip-time is mainly used to select the set of cryptographic algorithms and the cryptographic exchange scheme that will be used over the TLS session. TLS 1.3 drastically simplifies this negotiation by requiring to use the Diffie Hellman can exchange with a small set of possible parameters. This means that the client can guess the parameters used by the server (i.e. the modulus, p and the base g) and immediately start the Diffie Hellman exchange. A simplified version of the TLS 1.3 handshake is shown in the figure below.
+ The TLS 1.3 handshake differs from the TLS 1.2 handshake in several ways. First the TLS 1.3 handshake requires a single round-trip-time when the client connects for the first time to a server. To achieve this, the TLS designers look at the TLS 1.2 handshake in details and found that the first round-trip-time is mainly used to select the set of cryptographic algorithms and the cryptographic exchange scheme that will be used over the TLS session. TLS 1.3 drastically simplifies this negotiation by requiring to use the Diffie Hellman exchange with a small set of possible parameters. This means that the client can guess the parameters used by the server (i.e. the modulus, p and the base g) and immediately start the Diffie Hellman exchange. A simplified version of the TLS 1.3 handshake is shown in the figure below.
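Because the parameters (p, g) come from a small set of well-known groups, the client can send its key share in its first flight. The exchange itself can be sketched with toy numbers (real TLS 1.3 groups are 2048 bits or more; p=23 and g=5 are only for illustration):

```python
import secrets

# Toy finite-field Diffie-Hellman with fixed, publicly known parameters.
p, g = 23, 5

client_secret = secrets.randbelow(p - 2) + 1   # client's private exponent
server_secret = secrets.randbelow(p - 2) + 1   # server's private exponent

client_share = pow(g, client_secret, p)        # sent in the client's first flight
server_share = pow(g, server_secret, p)        # returned by the server

# both sides derive the same shared secret without negotiating parameters
assert pow(server_share, client_secret, p) == pow(client_share, server_secret, p)
```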


.. msc::
