Understanding segments in Elasticsearch

In Elasticsearch it helps to be clear about a few terms, such as segment, doc, term, token, shard, and index. In fact, segment, doc, term, and token are all Lucene concepts, and understanding them makes it easier to understand and use ES in depth.

document: the main carrier of data for indexing and searching; it corresponds to a single doc written into ES.

field: an individual field within a document.

term: the basic unit of search; it represents a single word from the text.

token: one occurrence of a term in a field, carrying the term's text, its start and end offsets, its type, and other information.

Internally, Lucene uses an inverted index data structure, which maps terms to the documents that contain them.

For example, take three documents whose text for a given field is as follows:

ElasticSearch Server (document 1)

Mastering ElasticSearch (document 2)

Apache solr 4 Cookbook (document 3)

The corresponding inverted index for this field looks like:

term            count   doc ids
4               1       3
Apache          1       3
Cookbook        1       3
ElasticSearch   2       1,2
Mastering       1       2
Server          1       1
solr            1       3
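
The mapping above can be reproduced with a minimal, self-contained sketch in plain Python (no Elasticsearch involved) that builds a toy inverted index over the three example documents. The whitespace-split tokenizer is an assumption kept deliberately simple so the output matches the table; a real Lucene analyzer does much more (lowercasing, stemming, and so on):

# Toy inverted index over the three example documents above.
# Tokenization is a plain whitespace split so the terms match the table.
from collections import defaultdict

docs = {
    1: "ElasticSearch Server",
    2: "Mastering ElasticSearch",
    3: "Apache solr 4 Cookbook",
}

inverted = defaultdict(list)              # term -> list of doc ids (postings)
for doc_id, text in docs.items():
    for term in text.split():
        inverted[term].append(doc_id)

for term in sorted(inverted):
    postings = inverted[term]
    print(f"{term:<15} count={len(postings)} docs={postings}")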


index: in ES, an index is roughly analogous to a database (db) in a relational database system.

shard: 

A "shard" is an instance of Lucene. It is a fully functional search engine in its own right. An "index" could consist of a single shard, but generally consists of several shards, to allow the index to grow and to be split over several machines.

A "primary shard" is the main home for a document. A "replica shard" is a copy of the primary shard that provides (1) failover in case the primary dies and (2) increased read throughput.
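
As an illustration of these settings, here is a hedged sketch using the official elasticsearch-py client (8.x-style keyword arguments); the index name "books" and the connection URL are assumptions made up for this example:

# Sketch: create an index with 3 primary shards, each with 1 replica.
# Each shard is a full Lucene index in its own right.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")   # assumed local single node

es.indices.create(
    index="books",                            # hypothetical index name
    settings={
        "number_of_shards": 3,                # primary shards
        "number_of_replicas": 1,              # one replica copy per primary
    },
)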

segment:

Each shard contains multiple "segments", where a segment is an inverted index. A search in a shard will search each segment in turn, then combine their results into the final results for that shard.

While you are indexing documents, Elasticsearch collects them in memory (and in the transaction log, for safety) then every second or so, writes a new small segment to disk, and "refreshes" the search.

This makes the data in the new segment visible to search (i.e. it is "searchable"), but the segment has not yet been fsync'ed to disk, so it is still at risk of data loss.
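
A hedged sketch of that behaviour with the elasticsearch-py client (the index name, document, and connection URL are assumptions); calling refresh explicitly here just avoids waiting for the automatic once-per-second refresh:

# Sketch: a newly indexed document becomes searchable after a refresh,
# even though its segment may not yet have been fsync'ed (committed) to disk.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

es.index(index="books", id="1", document={"title": "ElasticSearch Server"})

es.indices.refresh(index="books")             # make the new segment searchable now

resp = es.search(index="books", query={"match": {"title": "elasticsearch"}})
print(resp["hits"]["total"])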

Every so often, Elasticsearch will "flush", which means fsync'ing the segments, (they are now "committed") and clearing out the transaction log, which is no longer needed because we know that the new data has been written to disk.
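
A flush can also be requested explicitly; a minimal sketch, again assuming a local node and the "books" index from the earlier sketches, although in normal operation Elasticsearch schedules flushes itself:

# Sketch: an explicit flush fsyncs uncommitted segments to disk (a Lucene
# commit) so the corresponding transaction-log entries can be discarded.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

es.indices.flush(index="books")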

The more segments there are, the longer each search takes. So Elasticsearch will merge a number of segments of a similar size ("tier") into a single bigger segment, through a background merge process. Once the new bigger segment is written, the old segments are dropped. This process is repeated on the bigger segments when there are too many of the same size.
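
To see how many segments a shard currently has, and to request an explicit merge, something like the following sketch can be used; the index name and the single-segment target are assumptions, and force-merging is generally only advisable for indices that are no longer being written to:

# Sketch: list the segments of each shard, then merge them down to one.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

print(es.cat.segments(index="books", v=True))     # one row per segment

es.indices.forcemerge(index="books", max_num_segments=1)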

Segments are immutable. When a document is updated, it actually just marks the old document as deleted, and indexes a new document. The merge process also expunges these old deleted documents.
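
A hedged sketch of that lifecycle (the document id, title, and index name are assumptions): re-indexing under the same id marks the old version as deleted in its segment, and a merge with only_expunge_deletes asks Elasticsearch to reclaim that space:

# Sketch: updating a document really means indexing a new version and
# marking the old one as deleted inside its immutable segment.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

es.index(index="books", id="1", document={"title": "ElasticSearch Server, 2nd Edition"})

# Ask merging to focus on segments that contain deleted documents.
es.indices.forcemerge(index="books", only_expunge_deletes=True)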

References:

https://www.elastic.co/guide/en/elasticsearch/reference/current/glossary.html

http://stackoverflow.com/questions/15426441/understanding-segments-in-elasticsearch

Reposted from weitao1026.iteye.com/blog/2395809