Elasticsearch 5.4 Cluster (Part 1): Installation and Deployment

ES Version Upgrade

Production runs 1.7. Elasticsearch 5.x brings large performance improvements, so we plan to upgrade to 5.4 and validate it in an offline deployment first. Along the way we found that many configuration options had changed, and hit all kinds of errors.
Be sure to read the official documentation: https://www.elastic.co/guide/en/elasticsearch/reference/current/_installation.html. Much of what you find online is outdated configuration for older versions, either wrong or incomplete. Also pay attention to the breaking changes (incompatible changes between versions).

Separating Node Roles

Deploy master, data, and client nodes separately, and plan storage capacity, memory, and CPU accordingly (see the definitive guide: https://www.elastic.co/guide/en/elasticsearch/guide/current/hardware.html). Master nodes store no data and need little memory and CPU; we normally deploy three masters for reliability. Avoid one very large disk; use several SSDs instead so I/O can run in parallel. Nodes that are not client nodes can have the HTTP service disabled (http.enabled: false) and should not have monitoring plugins such as head, bigdesk, or marvel installed, so that data nodes only handle creating/updating/deleting/querying index data. A sketch of the role settings for each node type follows below.
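For quick reference, the role-related settings for each node type look roughly like this (a minimal sketch; the complete per-node elasticsearch.yml files appear later in this article):

# dedicated master: stores no data, HTTP disabled
node.master: true
node.data: false
http.enabled: false
# dedicated data node: HTTP disabled, no monitoring plugins
node.master: false
node.data: true
http.enabled: false
# client (coordinating-only) node: keeps HTTP for queries and plugins
node.master: false
node.data: false
http.enabled: true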

Memory Allocation

Keep the heap under 32 GB and use JDK 1.8 (the docs recommend installing Java version 1.8.0_131 or later). We give ES at most -Xms32766m -Xmx32766m, which stays just under the threshold where compressed ordinary object pointers stop working. We use G1 for GC and do not set -Xmn (see the GC tuning article: http://www.oracle.com/technetwork/articles/java/g1gc-1984535.html). On a physical machine, give only half the memory to ES and leave the other half to the OS cache for Lucene, otherwise performance suffers. A machine with 128 GB of RAM should run at most two ES instances.
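A quick way to confirm that compressed oops are still in effect at this heap size (a sketch, assuming JAVA_HOME points at the JDK that ES runs on):

$JAVA_HOME/bin/java -Xmx32766m -XX:+PrintFlagsFinal -version 2>/dev/null | grep UseCompressedOops
# expect something like "bool UseCompressedOops := true"; at -Xmx32g or above it flips to false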

System Configuration Changes

Disable swapping as far as possible (on some kernel versions swappiness=0 can trigger the OOM killer, so use 1). vim /etc/sysctl.conf
Set vm.swappiness=1 and vm.max_map_count=262144 (otherwise startup fails with: max virtual memory areas vm.max_map_count [65530] likely too low, increase to at least [262144])

vm.swappiness=1
vm.max_map_count=262144
To apply the changes immediately, run sysctl -p
Adjust the limits for the ES user. Elasticsearch cannot be started as root, so we use the tomcat user (the user must log in again for the change to take effect):
vim /etc/security/limits.conf
tomcat soft memlock unlimited
tomcat hard memlock unlimited
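A quick way to confirm the settings took effect (the limits.conf change only applies after the tomcat user logs in again):

su - tomcat -c 'ulimit -l'   # expect: unlimited
sysctl vm.max_map_count      # expect: 262144
sysctl vm.swappiness         # expect: 1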

Installing Elasticsearch 5.4.1

Follow the installation steps at https://www.elastic.co/downloads/elasticsearch. We install ES under /usr/local/:

cd /usr/local/
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.4.1.tar.gz
sha1sum elasticsearch-5.4.1.tar.gz 
tar -xzf elasticsearch-5.4.1.tar.gz
cd elasticsearch-5.4.1/
Create a symlink:
ln -s /usr/local/elasticsearch-5.4.1 /usr/local/elasticsearch
Update the startup script /usr/local/elasticsearch/bin/elasticsearch so that the jvm.options lookup order includes "$LEIDA_ES_HOME"/config/jvm.options first. This makes each instance's own JVM settings take precedence, since we run several ES instances on one physical machine:
if [ -z "$ES_JVM_OPTIONS" ]; then
    for jvm_options in "$LEIDA_ES_HOME"/config/jvm.options \
                       "$ES_HOME"/config/jvm.options \
                       /etc/elasticsearch/jvm.options; do
        if [ -r "$jvm_options" ]; then
            ES_JVM_OPTIONS=$jvm_options
            break
        fi
    done
fi
Change the heap size and switch to G1GC in the JVM options (vim /usr/local/elasticsearch/config/jvm.options):
-Xms32766m
-Xmx32766m

## GC configuration
#-XX:+UseConcMarkSweepGC
#-XX:CMSInitiatingOccupancyFraction=75
#-XX:+UseCMSInitiatingOccupancyOnly

## G1GC
-XX:+UseG1GC
-XX:MaxGCPauseMillis=800
-XX:ParallelGCThreads=15
-XX:ConcGCThreads=4
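Once a node is up (later in this article), you can confirm it picked up the per-instance heap and G1 flags via the nodes info API (a sketch; 9200 here stands for whatever HTTP port your instance exposes):

curl -s 'localhost:9200/_nodes/jvm?pretty' | grep -E 'Xms|Xmx|UseG1GC'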

mmseg Analyzer

Download elasticsearch-analysis-mmseg-5.4.1.zip
Download page: https://github.com/medcl/elasticsearch-analysis-mmseg/releases
(elasticsearch.yml no longer supports the path.plugins parameter. If you run multiple instances on one machine and want each instance to load its own plugins, e.g. master nodes not loading the head plugin, you have to patch the source where the plugin path is read and validated; by default plugins are read from the install directory /usr/local/elasticsearch/plugins/.)

cd /usr/local/elasticsearch/plugins/
unzip elasticsearch-analysis-mmseg-5.4.1.zip -d elasticsearch-analysis-mmseg-5.4.1
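To confirm the plugin is in place (the first command works before any node is started; the second needs a running node with HTTP enabled):

/usr/local/elasticsearch/bin/elasticsearch-plugin list
curl 'localhost:9200/_cat/plugins?v'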

Creating the Master Nodes

Each instance's configuration lives under /data/es/es_[tcpport]:

cd /data/es
mkdir -p es_9300/bin
mkdir -p es_9300/config
mkdir -p es_9300/data
mkdir -p es_9300/logs

cp /usr/local/elasticsearch/config/log4j2.properties /data/es/es_9300/config/log4j2.properties
cp /usr/local/elasticsearch/config/jvm.options /data/es/es_9300/config/jvm.options
Adjust the heap size: vim /data/es/es_9300/config/jvm.options. The default -Xms32766m in the install directory is meant for data nodes; the master heap should be much smaller.
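For example, a dedicated master that holds no data can run with a far smaller heap (the 4 GB below is purely illustrative; size it to your own cluster state):

-Xms4g
-Xmx4g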

cp /usr/local/elasticsearch/config/elasticsearch.yml /data/es/es_9300/config/elasticsearch.yml
Modify elasticsearch.yml as follows. Replace the IP, ports, and storage paths with your own; the other two masters are configured the same way. To avoid split brain, discovery.zen.minimum_master_nodes should be 2; in this non-production validation we prioritize availability and set it to 1.
bootstrap.memory_lock: true
bootstrap.system_call_filter: false

cluster.name: loganalysis
node.attr.box_type: master
node.name: master_1
node.master: true
node.data: false

discovery.zen.minimum_master_nodes: 1
discovery.zen.ping.unicast.hosts: ["master1ip:9300","master2ip:9300","master3ip:9300"]
discovery.zen.fd.ping_timeout: 50s
discovery.zen.fd.ping_retries: 6

cluster.routing.allocation.node_initial_primaries_recoveries: 6
cluster.routing.allocation.node_concurrent_recoveries: 6
cluster.routing.allocation.cluster_concurrent_rebalance: 4

network.host: master1ip
http.port: 9200
transport.tcp.port: 9300
path.conf: /data/es/es_9300/config
path.data: /data/es/es_9300/data
path.logs: /data/es/es_9300/logs

http.cors.enabled: true
http.cors.allow-origin: /.*/
http.cors.allow-credentials: true
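With three master-eligible nodes the safe quorum is 3/2 + 1 = 2. Once all three masters are up, the setting can be raised at runtime without a restart (a sketch using the first master's HTTP endpoint):

curl -XPUT 'master1ip:9200/_cluster/settings' -H 'Content-Type: application/json' -d '
{
  "persistent": {
    "discovery.zen.minimum_master_nodes": 2
  }
}'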

Startup Script

Only one copy of ES is installed per physical machine, but several instances run from it, so each instance needs its own config path. We copied an existing init script and adapted it into our own leidaES.sh. Start the master:

cd /data/es/es_9300/bin
./leidaES.sh start
elasticsearch's real path : /usr/local/elasticsearch-5.4.1
Starting elasticsearch es_9300:                            [  OK  ]


The script really just adds some validation and start/stop handling; stop with ./leidaES.sh stop
#!/bin/sh
# 
# init.d / servicectl compatibility (openSUSE)
#
if [ -f /etc/rc.status ]; then
    . /etc/rc.status
    rc_reset
fi

#
# Source function library.
#
if [ -f /etc/rc.d/init.d/functions ]; then
    . /etc/rc.d/init.d/functions
fi

MAX_OPEN_FILES=600000
# max locked memory   (kbytes, -l) 32G=1024*1024*32
MAX_LOCKED_MEMORY=33554432
# vim /etc/sysctl.conf add vm.max_map_count=262144
#MAX_MAP_COUNT=262144

ES_USER="tomcat"
ES_GROUP="tomcat"
JAVA_HOME=/usr/local/java/jdk1.8
ES_HOME="/usr/local/elasticsearch"

SCRIPT="$0"
# determine leida elasticsearch home
LEIDA_ES_HOME=`dirname "$SCRIPT"`/..
# make LEIDA ELASTICSEARCH_HOME absolute
LEIDA_ES_HOME=`cd "$LEIDA_ES_HOME"; pwd`
ES_NODE=`echo $LEIDA_ES_HOME|awk -F'/' '{print $NF}'`

CONF_DIR="${LEIDA_ES_HOME}/config"
WORK_DIR="/tmp/elasticsearch"
CONF_FILE="${CONF_DIR}/elasticsearch.yml"

#export ES_HOME
#export JAVA_HOME
export LEIDA_ES_HOME

exec="$ES_HOME/bin/elasticsearch"
if [ -f "$exec" ]; then
    chmod 755 "$exec"
fi
prog="elasticsearch"
pidfile="$LEIDA_ES_HOME/${prog}.pid"
lockfile=/var/lock/subsys/$prog

# backwards compatibility for old config sysconfig files, pre 0.90.1
if [ -n "$USER" ] && [ -z "$ES_USER" ] ; then
   ES_USER=$USER
fi

checkJava() {
    if [ -x "$JAVA_HOME/bin/java" ]; then
        JAVA="$JAVA_HOME/bin/java"
    else
        JAVA=`which java`
    fi

    if [ ! -x "$JAVA" ]; then
        echo "Could not find any executable java binary. Please install java in your PATH or set JAVA_HOME"
        exit 1
    fi
}

start() {
    checkJava
    [ -x "$exec" ] || { echo "$exec is not executable."; exit 5; }
    [ -f "$CONF_FILE" ] || { echo "$CONF_FILE does not exist."; exit 6; }
#    if [ -n "$MAX_LOCKED_MEMORY" -a -z "$ES_HEAP_SIZE" ]; then
#        echo "MAX_LOCKED_MEMORY is set - ES_HEAP_SIZE must also be set"
#        return 7
#    fi
    if [ -n "$MAX_OPEN_FILES" ]; then
        ulimit -n $MAX_OPEN_FILES
    fi
    if [ -n "$MAX_LOCKED_MEMORY" ]; then
        ulimit -l $MAX_LOCKED_MEMORY
    fi
    if [ -n "$MAX_MAP_COUNT" -a -f /proc/sys/vm/max_map_count ]; then
        sysctl -q -w vm.max_map_count=$MAX_MAP_COUNT
    fi
    if [ -n "$WORK_DIR" ]; then
        mkdir -p "$WORK_DIR"
        chown "$ES_USER":"$ES_GROUP" "$WORK_DIR"
    fi

    # Ensure that the LEIDA_ES_HOME exists (it is cleaned at OS startup time)
    if [ -n "$LEIDA_ES_HOME" ] && [ ! -e "$LEIDA_ES_HOME" ]; then
        mkdir -p "$LEIDA_ES_HOME" && chown "$ES_USER":"$ES_GROUP" "$LEIDA_ES_HOME"
    fi
    chown -R "$ES_USER":"$ES_GROUP" "$LEIDA_ES_HOME"
    chown -R "$ES_USER":"$ES_GROUP" "${ES_HOME}"
    if [ -h "$ES_HOME" ]; then
        eslink=`ls -ld "$ES_HOME"`
        # Drop everything prior to ->
        eslink=`expr "$eslink" : '.*-> \(.*\)$'`
        echo "elasticsearch's real path : $eslink"
        if expr "$eslink" : '/.*' > /dev/null; then
            chown -R "$ES_USER":"$ES_GROUP" "${eslink}"
        fi
    fi

    if [ -n "$pidfile" ] && [ ! -e "$pidfile" ]; then
        touch "$pidfile" && chown "$ES_USER":"$ES_GROUP" "$pidfile"
    fi

    echo -n $"Starting $prog ${ES_NODE}: "
    # if not running, start it up here, usually something like "daemon $exec" # -Djava.io.tmpdir=${WORK_DIR}
    daemon --user $ES_USER --pidfile $pidfile $exec -d -p $pidfile -Epath.conf=${CONF_DIR}
    retval=$?
    echo
    [ $retval -eq 0 ] && touch $lockfile
    return $retval
}

stop() {
    echo -n $"Stopping $prog ${ES_NODE}: "
    # stop it here, often "killproc $prog"
    killproc -p $pidfile -d 20 $prog
    retval=$?
    echo
    [ $retval -eq 0 ] && rm -f $lockfile
    return $retval
}

restart() {
    stop
    start
}

reload() {
    restart
}

force_reload() {
    restart
}

rh_status() {
    # run checks to determine if the service is running or use generic status
    status -p $pidfile $prog
}

rh_status_q() {
    rh_status >/dev/null 2>&1
}

case "$1" in
    start)
        rh_status_q && exit 0
        $1
        ;;
    stop)
        rh_status_q || exit 0
        $1
        ;;
    restart)
        $1
        ;;
    reload)
        rh_status_q || exit 7
        $1
        ;;
    force-reload)
        force_reload
        ;;
    status)
        rh_status
        ;;
    condrestart|try-restart)
        rh_status_q || exit 0
        restart
        ;;
    *)
        echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload}"
        exit 2
esac
exit $?

Creating a Data Node

The configuration is similar to the master node's; note node.master: false and node.data: true, and node.name must be unique within the cluster. In production we separate hot and cold data: ES 1.7 did this via node groups, while ES 5 uses node.attr.box_type (an allocation example follows after the config below). Ideally the data node and client node would also be separate, but in this non-production setup we merge them. cp /usr/local/elasticsearch/config/elasticsearch.yml /data/es/es_9301/config/elasticsearch.yml and modify elasticsearch.yml as follows:

bootstrap.memory_lock: true
bootstrap.system_call_filter: false

cluster.name: loganalysis
node.attr.box_type: hot
node.name: hot_1
node.master: false
node.data: true

discovery.zen.minimum_master_nodes: 1
discovery.zen.ping.unicast.hosts: ["master1ip:9300","master2ip:9300","master3ip:9300"]
discovery.zen.fd.ping_timeout: 50s
discovery.zen.fd.ping_retries: 6

cluster.routing.allocation.node_initial_primaries_recoveries: 6
cluster.routing.allocation.node_concurrent_recoveries: 6
cluster.routing.allocation.cluster_concurrent_rebalance: 4

network.host: datanode1ip    # replace with this machine's IP
http.port: 9201
transport.tcp.port: 9301
path.conf: /data/es/es_9301/config
path.data: /data/es/es_9301/data
path.logs: /data/es/es_9301/logs

http.cors.enabled: true
http.cors.allow-origin: /.*/
http.cors.allow-credentials: true
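The node.attr.box_type attribute becomes useful once indexes are pinned to hot or cold nodes via shard allocation filtering. A sketch, with hypothetical index names and the data node's HTTP endpoint (a "cold" box_type assumes you also run nodes tagged cold):

curl -XPUT 'datanode1ip:9201/logs-2017.06.15/_settings' -H 'Content-Type: application/json' -d '
{
  "index.routing.allocation.require.box_type": "hot"
}'
# later, migrate an aged index to the cold nodes
curl -XPUT 'datanode1ip:9201/logs-2017.06.01/_settings' -H 'Content-Type: application/json' -d '
{
  "index.routing.allocation.require.box_type": "cold"
}'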

Starting the Data Nodes

Start the three master nodes first, then start the configured data nodes; together they form the ES cluster.

cd /data/es/es_9301/bin
./leidaES.sh start
elasticsearch's real path : /usr/local/elasticsearch-5.4.1
Starting elasticsearch es_9301:                            [  OK  ]
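At this point you can check that every node has joined the cluster (query any node that has HTTP enabled):

curl 'master1ip:9200/_cat/nodes?v'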

Creating a Dynamic Template

Configure the mmseg analyzer, a default of one shard per index, and no replicas. Every doc has a time field and a msg field that stores the log line.

PUT /_template/default_template
{
   "template": "*",
   "settings": {
      "index.refresh_interval": "15s",
      "index.translog.flush_threshold_size": "1gb",
	  "index.translog.sync_interval": "1000000ms",
	  "index.codec": "best_compression",
      "index.number_of_replicas": "0",
      "index.number_of_shards": "1"
   },
   "mappings": {
      "_default_": {
         "_source": {
         },
         "dynamic_templates": [
            {
               "msg": {
                  "path_match": "msg",
                  "mapping": {
                     "analyzer": "mmseg_maxword",
                     "search_analyzer": "mmseg_maxword",
                     "omit_norms": true,
                     "type": "text"
                  }
               }
            },
            {
               "time": {
                  "path_match": "time",
                  "mapping": {
                     "type": "date"
                  }
               }
            },
            {
               "other": {
                  "mapping": {
                     "fielddata": {
                        "format": "doc_values"
                     },
                     "index": "not_analyzed",
                     "ignore_above":1000,
                     "omit_norms": true,
                     "doc_values": true
                  },
                  "match": "*"
               }
            }
         ],
         "_all": {
            "enabled": false
         }
      }
   },
   "aliases": {}
}
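To confirm the template is registered:

curl -XGET 'localhost:9200/_template/default_template?pretty'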
Test the Chinese analyzers `mmseg_maxword`, `mmseg_complex`, `mmseg_simple`. The query-parameter form of _analyze is deprecated in 5.x, so pass a JSON body:
curl -XGET 'localhost:9200/_analyze?pretty' -H 'Content-Type: application/json' -d '{"analyzer":"mmseg_maxword","text":"中华人民共和国12**3朝阳群众,www.elastic.co"}'

Testing the Cluster Status APIs

Check the cluster health:
curl 'localhost:9200/_cat/health?v'

Create an index (run the following in the Sense plugin):
PUT /test

Insert some docs:
POST /test/china/1
{"msg":"美国留给伊拉克的是个烂摊子吗"}
POST /test/china/2
{"msg":"公安部:各地校车将享最高路权"}
POST /test/china/3
{"msg":"中韩渔警冲突调查:韩警平均每天扣1艘中国渔船"}
POST /test/china/4
{"msg":"中国驻洛杉矶领事馆遭亚裔男子枪击 嫌犯已自首"}

Search the data; it should match 2 docs:
POST /test/china/_search
{"query":{"bool":{"must":[{"match_phrase_prefix":{"msg":{"query":"中国"}}}],"must_not":[]}},"from":0,"size":40}

At this point the cluster is running normally. Monitoring plugins still need to be installed, and configuration tuning will follow in a later post.
