Update Documentation Feature Flags [1.3.0]
s1monw committed Jul 23, 2014
1 parent 3c9eac9 commit 1265b14
Showing 22 changed files with 40 additions and 40 deletions.
@@ -1,7 +1,7 @@
[[analysis-apostrophe-tokenfilter]]
=== Apostrophe Token Filter

-coming[1.3.0]
+added[1.3.0]

The `apostrophe` token filter strips all characters after an apostrophe,
including the apostrophe itself.
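
For illustration, a custom analyzer that runs this filter after tokenization might be declared as follows (a sketch; the analyzer name `turkish_text` is hypothetical):

[source,js]
--------------------------------------------------
{
    "settings": {
        "analysis": {
            "analyzer": {
                "turkish_text": {
                    "tokenizer": "standard",
                    "filter": ["lowercase", "apostrophe"]
                }
            }
        }
    }
}
--------------------------------------------------

The filter takes no parameters, so it can be referenced by name directly in the `filter` chain.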
@@ -1,7 +1,7 @@
[[analysis-classic-tokenfilter]]
=== Classic Token Filter

-coming[1.3.0]
+added[1.3.0]

The `classic` token filter does optional post-processing of
terms that are generated by the <<analysis-classic-tokenizer,`classic` tokenizer>>.
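
For illustration, a custom analyzer pairing the `classic` tokenizer with this filter might look like the following sketch (the analyzer name `classic_text` is hypothetical):

[source,js]
--------------------------------------------------
{
    "settings": {
        "analysis": {
            "analyzer": {
                "classic_text": {
                    "tokenizer": "classic",
                    "filter": ["classic", "lowercase"]
                }
            }
        }
    }
}
--------------------------------------------------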
@@ -4,7 +4,7 @@
A token filter of type `lowercase` that normalizes token text to lower
case.

-Lowercase token filter supports Greek, Irish coming[1.3.0], and Turkish lowercase token
+Lowercase token filter supports Greek, Irish added[1.3.0], and Turkish lowercase token
filters through the `language` parameter. Below is a usage example in a
custom analyzer
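
A sketch of such a configuration, selecting the Greek variant (the filter and analyzer names `greek_lowercase` and `greek_text` are hypothetical):

[source,js]
--------------------------------------------------
{
    "settings": {
        "analysis": {
            "filter": {
                "greek_lowercase": {
                    "type": "lowercase",
                    "language": "greek"
                }
            },
            "analyzer": {
                "greek_text": {
                    "tokenizer": "standard",
                    "filter": ["greek_lowercase"]
                }
            }
        }
    }
}
--------------------------------------------------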

@@ -11,26 +11,26 @@ http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/

German::

-http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/de/GermanNormalizationFilter.html[`german_normalization`] coming[1.3.0]
+http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/de/GermanNormalizationFilter.html[`german_normalization`] added[1.3.0]

Hindi::

-http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/hi/HindiNormalizer.html[`hindi_normalization`] coming[1.3.0]
+http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/hi/HindiNormalizer.html[`hindi_normalization`] added[1.3.0]

Indic::

-http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/in/IndicNormalizer.html[`indic_normalization`] coming[1.3.0]
+http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/in/IndicNormalizer.html[`indic_normalization`] added[1.3.0]

Kurdish (Sorani)::

-http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/ckb/SoraniNormalizer.html[`sorani_normalization`] coming[1.3.0]
+http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/ckb/SoraniNormalizer.html[`sorani_normalization`] added[1.3.0]

Persian::

http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/fa/PersianNormalizer.html[`persian_normalization`]

Scandinavian::

-http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/miscellaneous/ScandinavianNormalizationFilter.html[`scandinavian_normalization`] coming[1.3.0],
-http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/miscellaneous/ScandinavianFoldingFilter.html[`scandinavian_folding`] coming[1.3.0]
+http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/miscellaneous/ScandinavianNormalizationFilter.html[`scandinavian_normalization`] added[1.3.0],
+http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/miscellaneous/ScandinavianFoldingFilter.html[`scandinavian_folding`] added[1.3.0]

22 changes: 11 additions & 11 deletions docs/reference/analysis/tokenfilters/stemmer-tokenfilter.asciidoc
@@ -65,15 +65,15 @@ http://snowball.tartarus.org/algorithms/danish/stemmer.html[*`danish`*]
Dutch::

http://snowball.tartarus.org/algorithms/dutch/stemmer.html[*`dutch`*],
-http://snowball.tartarus.org/algorithms/kraaij_pohlmann/stemmer.html[`dutch_kp`] coming[1.3.0,Renamed from `kp`]
+http://snowball.tartarus.org/algorithms/kraaij_pohlmann/stemmer.html[`dutch_kp`] added[1.3.0,Renamed from `kp`]

English::

-http://snowball.tartarus.org/algorithms/porter/stemmer.html[*`english`*] coming[1.3.0,Returns the <<analysis-porterstem-tokenfilter,`porter_stem`>> instead of the <<analysis-snowball-tokenfilter,`english` Snowball token filter>>],
-http://ciir.cs.umass.edu/pubfiles/ir-35.pdf[`light_english`] coming[1.3.0,Returns the <<analysis-kstem-tokenfilter,`kstem` token filter>>],
+http://snowball.tartarus.org/algorithms/porter/stemmer.html[*`english`*] added[1.3.0,Returns the <<analysis-porterstem-tokenfilter,`porter_stem`>> instead of the <<analysis-snowball-tokenfilter,`english` Snowball token filter>>],
+http://ciir.cs.umass.edu/pubfiles/ir-35.pdf[`light_english`] added[1.3.0,Returns the <<analysis-kstem-tokenfilter,`kstem` token filter>>],
http://www.medialab.tfe.umu.se/courses/mdm0506a/material/fulltext_ID%3D10049387%26PLACEBO%3DIE.pdf[`minimal_english`],
http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/en/EnglishPossessiveFilter.html[`possessive_english`],
-http://snowball.tartarus.org/algorithms/english/stemmer.html[`porter2`] coming[1.3.0,Returns the <<analysis-snowball-tokenfilter,`english` Snowball token filter>> instead of the <<analysis-snowball-tokenfilter,`porter` Snowball token filter>>],
+http://snowball.tartarus.org/algorithms/english/stemmer.html[`porter2`] added[1.3.0,Returns the <<analysis-snowball-tokenfilter,`english` Snowball token filter>> instead of the <<analysis-snowball-tokenfilter,`porter` Snowball token filter>>],
http://snowball.tartarus.org/algorithms/lovins/stemmer.html[`lovins`]

Finnish::
@@ -89,8 +89,8 @@ http://dl.acm.org/citation.cfm?id=318984[`minimal_french`]

Galician::

-http://bvg.udc.es/recursos_lingua/stemming.jsp[*`galician`*] coming[1.3.0],
-http://bvg.udc.es/recursos_lingua/stemming.jsp[`minimal_galician`] (Plural step only) coming[1.3.0]
+http://bvg.udc.es/recursos_lingua/stemming.jsp[*`galician`*] added[1.3.0],
+http://bvg.udc.es/recursos_lingua/stemming.jsp[`minimal_galician`] (Plural step only) added[1.3.0]

German::

@@ -127,7 +127,7 @@ http://www.ercim.eu/publication/ws-proceedings/CLEF2/savoy.pdf[*`light_italian`*

Kurdish (Sorani)::

-http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/ckb/SoraniStemmer.html[*`sorani`*] coming[1.3.0]
+http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/ckb/SoraniStemmer.html[*`sorani`*] added[1.3.0]

Latvian::

@@ -136,20 +136,20 @@ http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/
Norwegian (Bokmål)::

http://snowball.tartarus.org/algorithms/norwegian/stemmer.html[*`norwegian`*],
-http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/no/NorwegianLightStemmer.html[*`light_norwegian`*] coming[1.3.0],
+http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/no/NorwegianLightStemmer.html[*`light_norwegian`*] added[1.3.0],
http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/no/NorwegianMinimalStemmer.html[`minimal_norwegian`]

Norwegian (Nynorsk)::

-http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/no/NorwegianLightStemmer.html[*`light_nynorsk`*] coming[1.3.0],
-http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/no/NorwegianMinimalStemmer.html[`minimal_nynorsk`] coming[1.3.0]
+http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/no/NorwegianLightStemmer.html[*`light_nynorsk`*] added[1.3.0],
+http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/no/NorwegianMinimalStemmer.html[`minimal_nynorsk`] added[1.3.0]

Portuguese::

http://snowball.tartarus.org/algorithms/portuguese/stemmer.html[`portuguese`],
http://dl.acm.org/citation.cfm?id=1141523&dl=ACM&coll=DL&CFID=179095584&CFTOKEN=80067181[*`light_portuguese`*],
http://www.inf.ufrgs.br/\~buriol/papers/Orengo_CLEF07.pdf[`minimal_portuguese`],
-http://www.inf.ufrgs.br/\~viviane/rslp/index.htm[`portuguese_rslp`] coming[1.3.0]
+http://www.inf.ufrgs.br/\~viviane/rslp/index.htm[`portuguese_rslp`] added[1.3.0]

Romanian::

@@ -1,7 +1,7 @@
[[analysis-classic-tokenizer]]
=== Classic Tokenizer

-coming[1.3.0]
+added[1.3.0]

A tokenizer of type `classic` providing a grammar-based tokenizer that is
well suited for English language documents. This tokenizer has
2 changes: 1 addition & 1 deletion docs/reference/analysis/tokenizers/thai-tokenizer.asciidoc
@@ -1,7 +1,7 @@
[[analysis-thai-tokenizer]]
=== Thai Tokenizer

-coming[1.3.0]
+added[1.3.0]

A tokenizer of type `thai` that segments Thai text into words. This tokenizer
uses the built-in Thai segmentation algorithm included with Java to divide
2 changes: 1 addition & 1 deletion docs/reference/index-modules/allocation.asciidoc
@@ -100,7 +100,7 @@ settings API.
[[disk]]
=== Disk-based Shard Allocation

-coming[1.3.0] disk based shard allocation is enabled from version 1.3.0 onward
+added[1.3.0] disk based shard allocation is enabled from version 1.3.0 onward

Elasticsearch can be configured to prevent shard
allocation on nodes depending on disk usage for the node. This
2 changes: 1 addition & 1 deletion docs/reference/index-modules/store.asciidoc
@@ -114,7 +114,7 @@ See <<vm-max-map-count>>

[[default_fs]]
[float]
-==== Hybrid MMap / NIO FS coming[1.3.0]
+==== Hybrid MMap / NIO FS added[1.3.0]

The `default` type stores the shard index on the file system depending on
the file type by mapping a file into memory (mmap) or using Java NIO. Currently
2 changes: 1 addition & 1 deletion docs/reference/mapping/fields/field-names-field.asciidoc
@@ -1,7 +1,7 @@
[[mapping-field-names-field]]
=== `_field_names`

-coming[1.3.0]
+added[1.3.0]

The `_field_names` field indexes the field names of a document, which can later
be used to search for documents based on the fields that they contain typically
2 changes: 1 addition & 1 deletion docs/reference/mapping/transform.asciidoc
@@ -1,6 +1,6 @@
[[mapping-transform]]
== Transform
-coming[1.3.0]
+added[1.3.0]

The document can be transformed before it is indexed by registering a
script in the `transform` element of the mapping. The result of the
2 changes: 1 addition & 1 deletion docs/reference/modules/discovery/zen.asciidoc
@@ -75,7 +75,7 @@ configure the election to handle cases of slow or congested networks
(higher values assure less chance of failure). Once a node joins, it
will send a join request to the master (`discovery.zen.join_timeout`)
with a timeout defaulting to 20 times the ping timeout.
-coming[1.3.0,Previously defaulted to 10 times the ping timeout].
+added[1.3.0,Previously defaulted to 10 times the ping timeout].

Nodes can be excluded from becoming a master by setting `node.master` to
`false`. Note, once a node is a client node (`node.client` set to
2 changes: 1 addition & 1 deletion docs/reference/modules/gateway.asciidoc
@@ -43,7 +43,7 @@ once all `gateway.recover_after...nodes` conditions are met.
The `gateway.expected_nodes` setting allows you to set how many data and master
eligible nodes are expected to be in the cluster, and once met, the
`gateway.recover_after_time` is ignored and recovery starts.
-Setting `gateway.expected_nodes` also defaults `gateway.recovery_after_time` to `5m` coming[1.3.0, before `expected_nodes`
+Setting `gateway.expected_nodes` also defaults `gateway.recovery_after_time` to `5m` added[1.3.0, before `expected_nodes`
required `recovery_after_time` to be set]. The `gateway.expected_data_nodes` and `gateway.expected_master_nodes`
settings are also supported. For example setting:

4 changes: 2 additions & 2 deletions docs/reference/modules/snapshots.asciidoc
@@ -189,7 +189,7 @@ should be restored as well as prevent global cluster state from being restored b
<<search-multi-index-type,multi index syntax>>. The `rename_pattern` and `rename_replacement` options can also be used to
rename indices on restore using a regular expression that supports referencing the original text as explained
http://docs.oracle.com/javase/6/docs/api/java/util/regex/Matcher.html#appendReplacement(java.lang.StringBuffer,%20java.lang.String)[here].
-Set `include_aliases` to `false` to prevent aliases from being restored together with associated indices coming[1.3.0].
+Set `include_aliases` to `false` to prevent aliases from being restored together with associated indices added[1.3.0].

[source,js]
-----------------------------------
@@ -211,7 +211,7 @@ persistent settings are added to the existing persistent settings.
[float]
=== Partial restore

-coming[1.3.0]
+added[1.3.0]

By default, the entire restore operation will fail if one or more indices participating in the operation don't have
snapshots of all shards available. This can occur, for example, if some shards failed to snapshot. It is still possible to
@@ -64,7 +64,7 @@ next to the given cell.
[float]
==== Caching

-coming[1.3.0]
+added[1.3.0]

The result of the filter is not cached by default. The
`_cache` parameter can be set to `true` to turn caching on.
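
For illustration, a `geohash_cell` filter with caching turned on might look like this sketch (the `pin` field is assumed to be mapped as `geo_point`, and the coordinates are placeholders):

[source,js]
--------------------------------------------------
{
    "filtered": {
        "query": {
            "match_all": {}
        },
        "filter": {
            "geohash_cell": {
                "pin": {
                    "lat": 13.4080,
                    "lon": 52.5186
                },
                "precision": 3,
                "neighbors": true,
                "_cache": true
            }
        }
    }
}
--------------------------------------------------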
2 changes: 1 addition & 1 deletion docs/reference/query-dsl/filters/has-child-filter.asciidoc
@@ -45,7 +45,7 @@ The `has_child` filter also accepts a filter instead of a query:
[float]
==== Min/Max Children

-coming[1.3.0]
+added[1.3.0]

The `has_child` filter allows you to specify that a minimum and/or maximum
number of children are required to match for the parent doc to be considered
2 changes: 1 addition & 1 deletion docs/reference/query-dsl/queries/has-child-query.asciidoc
@@ -56,7 +56,7 @@ inside the `has_child` query:
[float]
==== Min/Max Children

-coming[1.3.0]
+added[1.3.0]

The `has_child` query allows you to specify that a minimum and/or maximum
number of children are required to match for the parent doc to be considered
6 changes: 3 additions & 3 deletions docs/reference/query-dsl/queries/mlt-query.asciidoc
@@ -16,7 +16,7 @@ running it against one or more fields.
}
--------------------------------------------------

-coming[1.3.0,The ability to run the `mlt` query on multiple docs is only available from 1.3.0 onwards]
+added[1.3.0,The ability to run the `mlt` query on multiple docs is only available from 1.3.0 onwards]

Additionally, More Like This can find documents that are "like" a set of
chosen documents. The syntax to specify one or more documents is similar to
@@ -79,12 +79,12 @@ Defaults to the `_all` field.
|`like_text` |The text to find documents like it, *required* if `ids` or `docs` are
not specified.

-|`ids` or `docs` | coming[1.3.0] A list of documents following the same syntax as the
+|`ids` or `docs` | added[1.3.0] A list of documents following the same syntax as the
<<docs-multi-get,Multi GET API>>. This parameter is *required* if
`like_text` is not specified. The texts are fetched from `fields` unless
specified in each `doc`, and cannot be set to `_all`.

-|`include` | coming[1.3.0] When using `ids` or `docs`, specifies whether the documents should be
+|`include` | added[1.3.0] When using `ids` or `docs`, specifies whether the documents should be
included from the search. Defaults to `false`.

|`exclude` | deprecated[1.3.0,Replaced by `include`] When using `ids` or `docs`, specifies whether
@@ -322,7 +322,7 @@ http://docs.oracle.com/javase/7/docs/api/java/util/regex/Pattern.html#UNIX_LINES

==== Collect mode

-coming[1.3.0] Deferring calculation of child aggregations
+added[1.3.0] Deferring calculation of child aggregations

For fields with many unique terms and a small number of required results it can be more efficient to delay the calculation
of child aggregations until the top parent-level aggs have been pruned. Ordinarily, all branches of the aggregation tree
@@ -1,7 +1,7 @@
[[search-aggregations-metrics-geobounds-aggregation]]
=== Geo Bounds Aggregation

-coming[1.3.0]
+added[1.3.0]

A metric aggregation that computes the bounding box containing all geo_point values for a field.
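
For illustration, a request computing the bounding box of a `location` field might be sketched as follows (the field name and the aggregation name `viewport` are placeholders):

[source,js]
--------------------------------------------------
{
    "aggs": {
        "viewport": {
            "geo_bounds": {
                "field": "location",
                "wrap_longitude": true
            }
        }
    }
}
--------------------------------------------------

The `wrap_longitude` option controls whether the bounding box is allowed to overlap the international date line.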

@@ -1,7 +1,7 @@
[[search-aggregations-metrics-percentile-rank-aggregation]]
=== Percentile Ranks Aggregation

-coming[1.3.0]
+added[1.3.0]

A `multi-value` metrics aggregation that calculates one or more percentile ranks
over numeric values extracted from the aggregated documents. These values
@@ -1,7 +1,7 @@
[[search-aggregations-metrics-top-hits-aggregation]]
=== Top hits Aggregation

-coming[1.3.0]
+added[1.3.0]

A `top_hits` metric aggregator keeps track of the most relevant document being aggregated. This aggregator is intended
to be used as a sub aggregator, so that the top matching documents can be aggregated per bucket.
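
For illustration, a `terms` aggregation over a `tags` field that keeps only the most recent document per bucket might be sketched as follows (the field names and aggregation names are placeholders):

[source,js]
--------------------------------------------------
{
    "aggs": {
        "top_tags": {
            "terms": {
                "field": "tags",
                "size": 3
            },
            "aggs": {
                "latest_hit": {
                    "top_hits": {
                        "sort": [
                            { "date": { "order": "desc" } }
                        ],
                        "size": 1
                    }
                }
            }
        }
    }
}
--------------------------------------------------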
