diff --git a/docs/dev/table/connect.md b/docs/dev/table/connect.md
index 2255f1f0dfc57..c05102cf7a91f 100644
--- a/docs/dev/table/connect.md
+++ b/docs/dev/table/connect.md
@@ -34,7 +34,7 @@ This page describes how to declare built-in table sources and/or table sinks and
 Dependencies
 ------------
 
-The following table list all available connectors and formats. Their mutual compatibility is tagged in the corresponding sections for [table connectors](connect.html#table-connectors) and [table formats](connect.html#table-formats). The following table provides dependency information for both projects using a build automation tool (such as Maven or SBT) and SQL Client with SQL JAR bundles.
+The following tables list all available connectors and formats. Their mutual compatibility is tagged in the corresponding sections for [table connectors](connect.html#table-connectors) and [table formats](connect.html#table-formats). The following tables provide dependency information for both projects using a build automation tool (such as Maven or SBT) and SQL Client with SQL JAR bundles.
 
 {% if site.is_stable %}
 
@@ -60,7 +60,7 @@ The following table list all available connectors and formats. Their mutual comp
 
 {% else %}
 
-This table is only available for stable releases.
+These tables are only available for stable releases.
 
 {% endif %}
 
diff --git a/docs/dev/table/streaming/time_attributes.md b/docs/dev/table/streaming/time_attributes.md
index 27208fb768ddb..101bad68b8082 100644
--- a/docs/dev/table/streaming/time_attributes.md
+++ b/docs/dev/table/streaming/time_attributes.md
@@ -30,7 +30,7 @@ Flink is able to process streaming data based on different notions of *time*.
 
 For more information about time handling in Flink, see the introduction about [Event Time and Watermarks]({{ site.baseurl }}/dev/event_time.html).
 
-This pages explains how time attributes can be defined for time-based operations in Flink's Table API & SQL.
+This page explains how time attributes can be defined for time-based operations in Flink's Table API & SQL.
 
 * This will be replaced by the TOC
 {:toc}
diff --git a/docs/ops/deployment/yarn_setup.md b/docs/ops/deployment/yarn_setup.md
index a3342d154dbc4..3d13e2db9b371 100644
--- a/docs/ops/deployment/yarn_setup.md
+++ b/docs/ops/deployment/yarn_setup.md
@@ -324,9 +324,9 @@ This section briefly describes how Flink and YARN interact.
 
-The YARN client needs to access the Hadoop configuration to connect to the YARN resource manager and to HDFS. It determines the Hadoop configuration using the following strategy:
+The YARN client needs to access the Hadoop configuration to connect to the YARN resource manager and HDFS. It determines the Hadoop configuration using the following strategy:
 
-* Test if `YARN_CONF_DIR`, `HADOOP_CONF_DIR` or `HADOOP_CONF_PATH` are set (in that order). If one of these variables are set, they are used to read the configuration.
+* Test if `YARN_CONF_DIR`, `HADOOP_CONF_DIR` or `HADOOP_CONF_PATH` are set (in that order). If one of these variables is set, it is used to read the configuration.
 * If the above strategy fails (this should not be the case in a correct YARN setup), the client is using the `HADOOP_HOME` environment variable. If it is set, the client tries to access `$HADOOP_HOME/etc/hadoop` (Hadoop 2) and `$HADOOP_HOME/conf` (Hadoop 1).
 
 When starting a new Flink YARN session, the client first checks if the requested resources (containers and memory) are available. After that, it uploads a jar that contains Flink and the configuration to HDFS (step 1).