Removed inline stylings from headings
Chris Pappas committed Oct 8, 2014
1 parent 9607d3e commit 32273f6
Showing 44 changed files with 71 additions and 71 deletions.
6 changes: 3 additions & 3 deletions 030_Data/05_Document.asciidoc
@@ -52,7 +52,7 @@ elements are:
`_type`:: The class of object that the document represents.
`_id`:: The unique identifier for the document.

- ==== `_index`
+ ==== _index

An _index_ is like a ``database'' in a relational database -- it's the place
we store and index related data.
@@ -68,7 +68,7 @@ but for now we will let Elasticsearch create the index for us. All we have to
do is to choose an index name. This name must be lower case, cannot begin with an
underscore and cannot contain commas. Let's use `website` as our index name.

- ==== `_type`
+ ==== _type

In applications, we use objects to represent ``things'' such as a user, a blog
post, a comment, or an email. Each object belongs to a _class_ which defines
@@ -93,7 +93,7 @@ automatically.
A `_type` name can be lower or upper case, but shouldn't begin with an
underscore, or contain commas. We shall use `blog` for our type name.

- ==== `_id`
+ ==== _id

The _id_ is a string that, when combined with the `_index` and `_type`,
uniquely identifies a document in Elasticsearch. When creating a new document,
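The three metadata elements above map directly onto a document's REST address. A quick sketch of the index-name rules quoted in this section (lower case, no leading underscore, no commas) and of the `(_index, _type, _id)` triple; `website`, `blog`, and `123` are the names the text chooses, the helper names are made up:

```python
# Index-name rules from the section above: lower case, no leading
# underscore, no commas. (Helper names are hypothetical.)
def is_valid_index_name(name):
    return name == name.lower() and not name.startswith("_") and "," not in name

# A document is uniquely identified by its (_index, _type, _id) triple,
# which maps onto the REST path /{index}/{type}/{id}.
def doc_path(index, doc_type, doc_id):
    assert is_valid_index_name(index)
    return "/{0}/{1}/{2}".format(index, doc_type, doc_id)

print(doc_path("website", "blog", "123"))  # /website/blog/123
```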
8 changes: 4 additions & 4 deletions 050_Search/05_Empty_search.asciidoc
@@ -46,7 +46,7 @@ The response (edited for brevity) looks something like this:
--------------------------------------------------


- ==== `hits`
+ ==== hits

The most important section of the response is `hits`, which contains the
`total` number of documents that matched our query, and a `hits` array
@@ -67,12 +67,12 @@ equally relevant, hence the neutral `_score` of `1` for all results.
The `max_score` value is the highest `_score` of any document that matches our
query.

- ==== `took`
+ ==== took

The `took` value tells us how many milliseconds the entire search request took
to execute.

- ==== `shards`
+ ==== shards

The `_shards` element tells us the `total` number of shards that were involved
in the query and, of them, how many were `successful` and how many `failed`.
@@ -82,7 +82,7 @@ of the same shard, there would be no copies of that shard available to respond
to search requests. In this case, Elasticsearch would report the shard as
`failed`, but continue to return results from the remaining shards.

- ==== `timeout`
+ ==== timeout

The `timed_out` value tells us whether the query timed out or not. By
default, search requests do not timeout. If low response times are more
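Taken together, the `hits`, `took`, `_shards`, and `timed_out` elements discussed above can be sketched as a pared-down response body (the values are illustrative, not from a real cluster):

```python
# A trimmed search response showing the elements discussed above.
response = {
    "took": 4,                                             # milliseconds
    "timed_out": False,
    "_shards": {"total": 5, "successful": 5, "failed": 0},
    "hits": {
        "total": 2,
        "max_score": 1.0,
        "hits": [{"_score": 1.0}, {"_score": 1.0}],
    },
}

# max_score is the highest _score among the returned hits.
assert response["hits"]["max_score"] == max(h["_score"] for h in response["hits"]["hits"])
# Every shard answered, so none are reported as failed.
assert response["_shards"]["failed"] == 0
```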
2 changes: 1 addition & 1 deletion 050_Search/20_Query_string.asciidoc
@@ -37,7 +37,7 @@ match. All conditions without a `+` or `-` are optional -- the more that match,
the more relevant the document.

[[all-field-intro]]
- ==== The `_all` field
+ ==== The _all field

This simple search returns all documents which contain the word `"mary"`:

18 changes: 9 additions & 9 deletions 054_Query_DSL/70_Important_clauses.asciidoc
@@ -5,7 +5,7 @@ just a few which you will use frequently. We will discuss them in much greater
detail in <<search-in-depth>> but below we give you a quick introduction to
the most important queries and filters.

- ==== `term` filter
+ ==== term filter

The `term` filter is used to filter by exact values, be they numbers, dates,
booleans, or `not_analyzed` exact value string fields:
@@ -19,7 +19,7 @@ booleans, or `not_analyzed` exact value string fields:
--------------------------------------------------
// SENSE: 054_Query_DSL/70_Term_filter.json

- ==== `terms` filter
+ ==== terms filter

The `terms` filter is the same as the `term` filter, but allows you
to specify multiple values to match. If the field contains any of
@@ -31,7 +31,7 @@ the specified values, then the document matches:
--------------------------------------------------
// SENSE: 054_Query_DSL/70_Terms_filter.json

- ==== `range` filter
+ ==== range filter

The `range` filter allows you to find numbers or dates which fall into
the specified range:
@@ -58,7 +58,7 @@ The operators that it accepts are:
`lte`:: less than or equal to


- ==== `exists` and `missing` filters
+ ==== exists and missing filters

The `exists` and `missing` filters are used to find documents where the
specified field either has one or more values (`exists`) or doesn't have any
@@ -78,7 +78,7 @@ IS_NULL` (`exists`) in SQL:
These filters are frequently used to apply a condition only if a field is
present, and to apply a different condition if it is missing.

- ==== `bool` filter
+ ==== bool filter

The `bool` filter is used to combine multiple filter clauses using
Boolean logic. It accepts three parameters:
@@ -107,7 +107,7 @@ of filter clauses:
// SENSE: 054_Query_DSL/70_Bool_filter.json


- ==== `match_all` query
+ ==== match_all query

The `match_all` query simply matches all documents. It is the default
query which is used if no query has been specified.
Expand All @@ -123,7 +123,7 @@ This query is frequently used in combination with a filter, for instance to
retrieve all emails in the inbox folder. All documents are considered to be
equally relevant, so they all receive a neutral `_score` of `1`.

- ==== `match` query
+ ==== match query

The `match` query should be the standard query that you reach for whenever
you want to query for a full text or exact value in almost any field.
@@ -160,7 +160,7 @@ looks for the words that are specified. This means that it is safe to expose
to your users via a search field -- you control what fields they can query and
it is not prone to throwing syntax errors.

- ==== `multi_match` query
+ ==== multi_match query

The `multi_match` query allows to run the same `match` query on multiple
fields:
@@ -176,7 +176,7 @@ fields:
--------------------------------------------------
// SENSE: 054_Query_DSL/70_Multi_match_query.json

- ==== `bool` query
+ ==== bool query

The `bool` query, like the `bool` filter, is used to combine multiple
query clauses. However, there are some differences. Remember that while
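The filter and query clauses introduced above are all plain JSON objects. A sketch of a `bool` filter combining `term`, `terms`, and `range` clauses might look like the following; the field names and values are made up for illustration:

```python
bool_filter = {
    "bool": {
        # must: the clause has to match
        "must": {"term": {"folder": "inbox"}},
        # must_not: the clause must not match
        "must_not": {"range": {"age": {"gt": 30}}},
        # should: optional, but boosts relevance when it matches
        "should": [
            {"terms": {"tag": ["search", "full_text"]}},
        ],
    }
}

# The range clause uses the gt/gte/lt/lte operators listed above.
print(sorted(bool_filter["bool"].keys()))  # ['must', 'must_not', 'should']
```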
8 changes: 4 additions & 4 deletions 060_Distributed_Search/15_Search_options.asciidoc
@@ -3,7 +3,7 @@
There are a few optional query-string parameters which can influence the
search process:

- ==== `preference`
+ ==== preference

The `preference` parameter allows you to control which shards or nodes are
used to handle the search request. It accepts values like: `_primary`,
@@ -34,7 +34,7 @@ like the user's session ID.
****

- ==== `timeout`
+ ==== timeout

By default, the coordinating node waits to receive a response from all shards.
If one node is having trouble, it could slow down the response to all search
@@ -66,7 +66,7 @@ hardware failure -- this will also be reflected in the `_shards` section of
the response.

[[search-routing]]
- ==== `routing`
+ ==== routing

In <<routing-value>> we explained how a custom `routing` parameter could be
provided at index time to ensure that all related documents, such as the
@@ -83,7 +83,7 @@ This technique comes in useful when designing very large search systems and we
discuss it in detail in <<scale>>.

[[search-type]]
- ==== `search_type`
+ ==== search_type

While `query_then_fetch` is the default search type, other search types can
be specified for particular purposes, as:
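The options above are all plain query-string parameters, so a search URL can be assembled like this; the session and routing values here are invented for illustration:

```python
from urllib.parse import urlencode

params = {
    "preference": "session_xyz",  # pin this user's searches to the same shards
    "timeout": "10ms",            # return whatever results have arrived after 10ms
    "routing": "user_1",          # query only the shard(s) holding user_1's docs
}
url = "/_search?" + urlencode(params)
print(url)  # /_search?preference=session_xyz&timeout=10ms&routing=user_1
```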
2 changes: 1 addition & 1 deletion 070_Index_Mgmt/31_Metadata_source.asciidoc
@@ -1,5 +1,5 @@
[[source-field]]
- ==== Metadata: `_source` field
+ ==== Metadata: _source field
By default, Elasticsearch stores the JSON string representing the
document body in the `_source` field. Like all stored fields, the `_source`
2 changes: 1 addition & 1 deletion 070_Index_Mgmt/32_Metadata_all.asciidoc
@@ -1,5 +1,5 @@
[[all-field]]
- ==== Metadata: `_all` field
+ ==== Metadata: _all field
In <<search-lite>> we introduced the `_all` field: a special field that
indexes the values from all other fields as one big string. The `query_string`
4 changes: 2 additions & 2 deletions 070_Index_Mgmt/40_Custom_Dynamic_Mapping.asciidoc
@@ -8,7 +8,7 @@ are settings that you can use to customise these rules to better
suit your data.

[[date-detection]]
- ==== `date_detection`
+ ==== date_detection

When Elasticsearch encounters a new string field, it checks to see if the
string contains a recognisable date, like `"2014-01-01"`. If it looks
@@ -64,7 +64,7 @@ with the {ref}/mapping-root-object-type.html#_dynamic_date_formats[`dynamic_date
====

[[dynamic-templates]]
- ==== `dynamic_templates`
+ ==== dynamic_templates

With `dynamic_templates`, you can take complete control over the
mapping that is generated for newly detected fields. You
2 changes: 1 addition & 1 deletion 075_Inside_a_shard/40_Near_real_time.asciidoc
@@ -34,7 +34,7 @@ image::images/elas_1105.png["The buffer contents have been written to a segment,


[[refresh-api]]
- ==== `refresh` API
+ ==== refresh API

In Elasticsearch, this lightweight process of writing and opening a new
segment is called a _refresh_. By default, every shard is refreshed
2 changes: 1 addition & 1 deletion 075_Inside_a_shard/50_Persistent_changes.asciidoc
@@ -78,7 +78,7 @@ the document, in real-time.


[[flush-api]]
- ==== `flush` API
+ ==== flush API

The action of performing a commit and truncating the translog is known in
Elasticsearch as a _flush_. Shards are flushed automatically every 30
2 changes: 1 addition & 1 deletion 075_Inside_a_shard/60_Segment_merging.asciidoc
@@ -52,7 +52,7 @@ TIP: See <<segments-and-merging>> for advice about tuning merging for your use
case.

[[optimize-api]]
- ==== `optimize` API
+ ==== optimize API

The `optimize` API is best described as the _forced merge_ API. It forces a
shard to be merged down to the number of segments specified in the
2 changes: 1 addition & 1 deletion 100_Full_Text_Search/05_Match_query.asciidoc
@@ -1,5 +1,5 @@
[[match-query]]
- === The `match` query
+ === The match query

The `match` query is the ``go-to'' query -- the first query that you should
reach for whenever you need to query any field. It is a high-level _full-text
2 changes: 1 addition & 1 deletion 100_Full_Text_Search/20_How_match_uses_bool.asciidoc
@@ -1,4 +1,4 @@
- === How `match` uses `bool`
+ === How match uses bool

By now, you have probably realized that <<match-multi-word,multi-word `match`
queries>> simply wrap the generated `term` queries in a `bool` query. With the
2 changes: 1 addition & 1 deletion 110_Multi_Field_Search/15_Best_field.asciidoc
@@ -90,7 +90,7 @@ give preference to a single field that contain *both* of the words we are
looking for, rather than the same word repeated in different fields.
[[dis-max-query]]
- ==== `dis_max` query
+ ==== dis_max query
Instead of the `bool` query, we can use the `dis_max` or _Disjunction Max
Query_. Disjunction means ``or'' (while conjunction means ``and'') so the
@@ -52,7 +52,7 @@ the `body` field to rank higher than documents that match on just one field,
but this isn't the case. Remember: the `dis_max` query simply uses the
`_score` from the *single* best matching clause.

- ==== `tie_breaker`
+ ==== tie_breaker

It is possible, however, to also take the `_score` from the other matching
clauses into account, by specifying the `tie_breaker` parameter.
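The scoring behaviour described above -- the single best matching clause wins, with `tie_breaker` letting runner-up clauses contribute a fraction of their scores -- can be modelled in a few lines. This is a rough model for intuition, not Elasticsearch's actual implementation, and the clause scores are illustrative:

```python
import math

def dis_max_score(clause_scores, tie_breaker=0.0):
    """Best matching clause, plus tie_breaker times each other clause's score."""
    best = max(clause_scores)
    rest = sum(clause_scores) - best
    return best + tie_breaker * rest

# With tie_breaker=0 (the default), only the best clause counts.
assert dis_max_score([0.9, 0.2]) == 0.9
# With a tie_breaker, the runner-up clause nudges the score upward.
assert math.isclose(dis_max_score([0.9, 0.2], tie_breaker=0.3), 0.96)
```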
2 changes: 1 addition & 1 deletion 110_Multi_Field_Search/25_Multi_match_query.asciidoc
@@ -1,5 +1,5 @@
[[multi-match-query]]
- === `multi_match` query
+ === multi_match query

The `multi_match` query provides us with a convenient shorthand way of running
the same query against multiple fields.
2 changes: 1 addition & 1 deletion 110_Multi_Field_Search/35_Entity_search.asciidoc
@@ -73,7 +73,7 @@ combine the scores of all matching fields:
}
--------------------------------------------------

- ==== Problems with the `most_fields` approach
+ ==== Problems with the most_fields approach

The `most_fields` approach to entity search has some problems which are not
immediately obvious:
2 changes: 1 addition & 1 deletion 110_Multi_Field_Search/45_Custom_all.asciidoc
@@ -1,5 +1,5 @@
[[custom-all]]
- === Custom `_all` fields
+ === Custom _all fields
In <<all-field>> we explained that the special `_all` field indexes the values
from all other fields as one big string. Having all fields indexed into one
2 changes: 1 addition & 1 deletion 130_Partial_Matching/10_Prefix_query.asciidoc
@@ -1,5 +1,5 @@
[[prefix-query]]
- === `prefix` query
+ === prefix query

To find all postcodes beginning with `W1` we could use a simple `prefix`
query:
2 changes: 1 addition & 1 deletion 130_Partial_Matching/15_WildcardRegexp.asciidoc
@@ -1,4 +1,4 @@
- === `wildcard` and `regexp` queries
+ === wildcard and regexp queries

The `wildcard` query is a low-level term-based query similar in nature to the
`prefix` query, but it allows you to specify a pattern instead of just a prefix.
2 changes: 1 addition & 1 deletion 170_Relevance/20_Query_time_boosting.asciidoc
@@ -78,7 +78,7 @@ GET /docs_2014_*/_search <1>
in `docs_2014_09` by `2`, and any other matching indices will have
a neutral boost of `1`.

- ==== `t.getBoost()`
+ ==== t.getBoost()

These boost values are represented in the <<practical-scoring-function>> by
the `t.getBoost()` element. Boosts are not applied at the level that they
2 changes: 1 addition & 1 deletion 170_Relevance/30_Not_quite_not.asciidoc
@@ -32,7 +32,7 @@ the company by excluding `tree` or `crumble`? Sometimes, `must_not` can be
too strict.

[[boosting-query]]
- ==== `boosting` query
+ ==== boosting query

The {ref}query-dsl-boosting-query.html[`boosting` query] solves this problem.
It allows us to still include results which appear to be about the fruit or
2 changes: 1 addition & 1 deletion 170_Relevance/35_Ignoring_TFIDF.asciidoc
@@ -38,7 +38,7 @@ the more the better. If a feature is present it should score `1`, and if it
isn't, `0`.

[[constant-score-query]]
- ==== `constant_score` query
+ ==== constant_score query

Enter the {ref}query-dsl-constant-score-query.html[`constant_score`] query.
This query can wrap either a query or a filter, and assigns a score of
2 changes: 1 addition & 1 deletion 170_Relevance/40_Function_score_query.asciidoc
@@ -1,5 +1,5 @@
[[function-score-query]]
- === `function_score` query
+ === function_score query

The {ref}query-dsl-function-score-query.html[`function_score` query] is the
ultimate tool for taking control of the scoring process. It allows you to
8 changes: 4 additions & 4 deletions 170_Relevance/45_Popularity.asciidoc
@@ -62,7 +62,7 @@ votes will reset the score to zero.
image::images/elas_1701.png[Linear popularity based on an original `_score` of `2.0`]


- ==== `modifier`
+ ==== modifier

A better way to incorporate popularity is to smooth out the `votes` value
with some `modifier`. In other words, we want the first few votes to count a
@@ -111,7 +111,7 @@ The available modifiers are: `none` (the default), `log`, `log1p`, `log2p`,
about them in the
{ref}query-dsl-function-score-query.html#_field_value_factor[`field_value_factor` documentation].

- ==== `factor`
+ ==== factor

The strength of the popularity effect can be increased or decreased by
multiplying the value in the `votes` field by some number, called the
@@ -152,7 +152,7 @@ decreases the effect:
image::images/elas_1703.png[Logarithmic popularity with different factors]


- ==== `boost_mode`
+ ==== boost_mode

Perhaps multiplying the full text score by the result of the
`field_value_factor` function still has too large an effect. We can control
@@ -202,7 +202,7 @@ The formula for the above request now looks like this:
image::images/elas_1704.png["Combining popularity with `sum`"]


- ==== `max_boost`
+ ==== max_boost

Finally, we can cap the maximimum effect that the function can have using the
`max_boost` parameter:
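The `modifier`, `factor`, `boost_mode`, and `max_boost` knobs described above can be combined in a back-of-the-envelope model. This is a sketch for intuition only, not Elasticsearch's exact formula; it assumes `log1p` means the common (base-10) logarithm of `1 + value`, and the vote counts and scores are illustrative:

```python
import math

def field_value_factor(votes, factor=1.0, modifier="none"):
    value = votes * factor
    if modifier == "log1p":
        return math.log10(1.0 + value)  # smooths out the first few votes
    return value

def final_score(query_score, votes, factor=1.0, modifier="none",
                boost_mode="multiply", max_boost=None):
    boost = field_value_factor(votes, factor, modifier)
    if max_boost is not None:
        boost = min(boost, max_boost)   # cap the function's effect
    return query_score * boost if boost_mode == "multiply" else query_score + boost

# With multiply, a zero-vote document is zeroed out entirely...
assert final_score(2.0, 0, modifier="log1p", boost_mode="multiply") == 0.0
# ...whereas sum keeps the full-text _score as a baseline.
assert final_score(2.0, 0, modifier="log1p", boost_mode="sum") == 2.0
```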