Elasticsearch provides a few built-in analyzers. Here’s a breakdown of each and where best to use it.
No Analyzer
Disabling analysis stores the field's value without generating any tokens, so it can only be found by an exact match against the full text of the value.
Best Use
Useful for single words and short phrases where full-text search is not needed, for instance values used to populate a dropdown.
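As a minimal sketch of one way to get this behavior, assuming Elasticsearch 7 or later at http://localhost:9200 and Python's requests library (the products index and color field are only examples), the field can be mapped with the keyword type so its value is indexed as a single un-analyzed term:

```python
# Sketch: store a field as one un-analyzed term, searchable by exact match only.
# Assumes Elasticsearch 7+ at http://localhost:9200; index and field names are examples.
import requests

mapping = {
    "mappings": {
        "properties": {
            # The whole value becomes a single term, e.g. a fixed dropdown choice.
            "color": {"type": "keyword"}
        }
    }
}
resp = requests.put("http://localhost:9200/products", json=mapping)
print(resp.json())
```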
Standard Analyzer
The standard analyzer divides text into terms on word boundaries, as defined by the Unicode Text Segmentation algorithm. It removes most punctuation, lowercases terms, and supports removing stop words.
Best Use
Useful for most applications where stop words and exact phrases are not required in searches.
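To see what the standard analyzer produces, a sample sentence can be run through the _analyze API. A small sketch using Python's requests library, assuming an Elasticsearch node at http://localhost:9200:

```python
# Sketch: run a sample sentence through the built-in standard analyzer.
# Assumes an Elasticsearch node at http://localhost:9200.
import requests

resp = requests.post(
    "http://localhost:9200/_analyze",
    json={
        "analyzer": "standard",
        "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone.",
    },
)
print([t["token"] for t in resp.json()["tokens"]])
# Expected terms: the, 2, quick, brown, foxes, jumped, over, the, lazy, dog's, bone
```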
Simple Analyzer
The simple analyzer divides text into terms whenever it encounters a character which is not a letter. It lowercases all terms.
Best Use
Useful for searches where numbers, punctuation, and other non-letter characters should be ignored.
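The same _analyze sketch (again assuming a node at http://localhost:9200) shows the difference: the simple analyzer drops everything that is not a letter.

```python
# Sketch: the simple analyzer splits on every non-letter character and lowercases.
# Assumes an Elasticsearch node at http://localhost:9200.
import requests

resp = requests.post(
    "http://localhost:9200/_analyze",
    json={
        "analyzer": "simple",
        "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone.",
    },
)
print([t["token"] for t in resp.json()["tokens"]])
# Expected: the "2" disappears and "dog's" splits into "dog" and "s".
```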
Whitespace Analyzer
The whitespace analyzer divides text into terms whenever it encounters any whitespace character. It does not lowercase terms.
Best Use
Useful when tokens already carry meaning in their punctuation or casing, for example tags, identifiers, or code fragments that should be matched exactly as written between spaces.
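A quick sketch of its output, assuming a node at http://localhost:9200:

```python
# Sketch: the whitespace analyzer only splits on whitespace; case and punctuation survive.
# Assumes an Elasticsearch node at http://localhost:9200.
import requests

resp = requests.post(
    "http://localhost:9200/_analyze",
    json={
        "analyzer": "whitespace",
        "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone.",
    },
)
print([t["token"] for t in resp.json()["tokens"]])
# Expected terms include "QUICK", "Brown-Foxes", and "dog's" exactly as written.
```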
Stop Analyzer
The stop analyzer is like the simple analyzer, but also supports removal of stop words. The English stop words are:
a, an, and, are, as, at, be, but, by, for, if, in, into, is, it, no, not, of, on, or, such, that, the, their, then, there, these, they, this, to, was, will, with.
Best Use
Useful for simple text searches where very common English words should not affect matching.
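A sketch of the effect, assuming a node at http://localhost:9200:

```python
# Sketch: the stop analyzer works like the simple analyzer, then removes English stop words.
# Assumes an Elasticsearch node at http://localhost:9200.
import requests

resp = requests.post(
    "http://localhost:9200/_analyze",
    json={
        "analyzer": "stop",
        "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone.",
    },
)
print([t["token"] for t in resp.json()["tokens"]])
# Expected: both occurrences of "the" are gone; the remaining terms match the simple analyzer.
```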
Keyword Analyzer
The keyword analyzer is a “noop” analyzer that accepts whatever text it is given and outputs the exact same text as a single term.
Best Use
Useful for structured values that should only ever match exactly, such as IDs, email addresses, status codes, or postal codes.
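A sketch, assuming a node at http://localhost:9200:

```python
# Sketch: the keyword analyzer emits the entire input as one unchanged term.
# Assumes an Elasticsearch node at http://localhost:9200.
import requests

resp = requests.post(
    "http://localhost:9200/_analyze",
    json={"analyzer": "keyword", "text": "New York City"},
)
print([t["token"] for t in resp.json()["tokens"]])
# Expected: ['New York City'] -- a single term, case and spaces intact.
```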
Pattern Analyzer
The pattern analyzer uses a regular expression to split the text into terms. It supports lower-casing and stop words.
Best Use
Useful for semi-structured text with a known delimiter, for example comma- or underscore-separated values, where the default word boundaries do not apply.
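A custom pattern analyzer is defined in the index settings. This sketch, assuming Elasticsearch 7+ at http://localhost:9200 (the csv_demo index and comma_split analyzer names are only examples), splits on commas:

```python
# Sketch: define a pattern analyzer that splits on commas, then test it with _analyze.
# Assumes Elasticsearch 7+ at http://localhost:9200; index and analyzer names are examples.
import requests

settings = {
    "settings": {
        "analysis": {
            "analyzer": {
                "comma_split": {
                    "type": "pattern",
                    "pattern": ",\\s*",   # split on a comma plus any following whitespace
                    "lowercase": True,
                }
            }
        }
    }
}
requests.put("http://localhost:9200/csv_demo", json=settings)

resp = requests.post(
    "http://localhost:9200/csv_demo/_analyze",
    json={"analyzer": "comma_split", "text": "Red, Green, Dark Blue"},
)
print([t["token"] for t in resp.json()["tokens"]])
# Expected: ['red', 'green', 'dark blue']
```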
Language Analyzers
Elasticsearch provides many language-specific analyzers like english or french.
Best Use
Useful for full-text search over content in a single known language, where stemming and language-specific stop words improve matching.
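For example, the english analyzer stems words and removes English stop words. A sketch, assuming a node at http://localhost:9200:

```python
# Sketch: the english analyzer stems terms ("Brown-Foxes" -> "fox") and drops stop words.
# Assumes an Elasticsearch node at http://localhost:9200.
import requests

resp = requests.post(
    "http://localhost:9200/_analyze",
    json={
        "analyzer": "english",
        "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone.",
    },
)
print([t["token"] for t in resp.json()["tokens"]])
# Expected: ['2', 'quick', 'brown', 'fox', 'jump', 'over', 'lazi', 'dog', 'bone']
```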
Fingerprint Analyzer
The fingerprint analyzer is a specialist analyzer that creates a fingerprint for duplicate detection: the input is lowercased, normalized to remove extended characters, sorted, de-duplicated, and concatenated into a single term.
Best Use
Useful for de-duplication and clustering, where near-identical documents should reduce to the same fingerprint term.
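A sketch of what the fingerprint looks like, assuming a node at http://localhost:9200:

```python
# Sketch: the fingerprint analyzer lowercases, normalizes, sorts, de-duplicates,
# and concatenates the terms into a single token.
# Assumes an Elasticsearch node at http://localhost:9200.
import requests

resp = requests.post(
    "http://localhost:9200/_analyze",
    json={
        "analyzer": "fingerprint",
        "text": "Yes yes, Gödel said this sentence is consistent and.",
    },
)
print([t["token"] for t in resp.json()["tokens"]])
# Expected: a single term, 'and consistent godel is said sentence this yes'
```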