ELK 7.8: installation, pulling incremental data from MySQL, and password configuration

1 Install Elasticsearch 7.8

Download page (all three packages can be downloaded here): https://www.elastic.co/cn/downloads/

Edit the config file config/elasticsearch.yml:

# ======================== Elasticsearch Configuration =========================
path.data: /disk_data/data_7.8
network.host: 127.0.0.1
# Note: the Logstash and Kibana examples later in this article connect to port 9200; keep the port consistent everywhere.
http.port: 9201
xpack.security.enabled: true
xpack.license.self_generated.type: basic
xpack.security.transport.ssl.enabled: true
bootstrap.memory_lock: true
indices.breaker.request.limit: 10%
# index.* settings are index-level and are rejected in elasticsearch.yml since 5.x;
# set merge.scheduler.max_thread_count per index via the index settings API instead.
# index.merge.scheduler.max_thread_count: 1
indices.queries.cache.size: 20%
indices.requests.cache.size: 2%
indices.fielddata.cache.size: 30%

Edit config/jvm.options (JVM parameters for the Elasticsearch process).

Only these two lines need to be changed:

-Xms4g
-Xmx4g

As a rule of thumb, set the heap to about half of the machine's total memory, and keep -Xms and -Xmx equal.

Elasticsearch must be started as a non-root user (any user other than root will do).

After Elasticsearch has started, run ./elasticsearch-setup-passwords interactive from the bin directory to create passwords for the built-in users (restart Elasticsearch once the passwords are set).
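
To quickly confirm that security is on and the password works, you can hit the cluster with basic auth. A minimal sketch, assuming the node is reachable on 9200 (adjust the port to match your http.port) and using the built-in elastic user:

  curl -u elastic 'http://127.0.0.1:9200/_cluster/health?pretty'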
Configure the IK Chinese analyzer
Unzip the IK analyzer into the plugins directory and rename the folder to analysis-ik. (Note: install IK only after the passwords have been configured, because the password setup itself stores its data in indices inside Elasticsearch.)
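
After installing the plugin and restarting the node, a quick sketch for checking that ik_max_word is actually available (port and user are assumptions, adjust to your setup):

  curl -u elastic -H 'Content-Type: application/json' \
    'http://127.0.0.1:9200/_analyze?pretty' \
    -d '{"analyzer": "ik_max_word", "text": "中华人民共和国"}'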

2 Configure Logstash

Edit config/logstash.yml and add the following:

xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: xxxxxxxx   # username
xpack.monitoring.elasticsearch.password: xxxxxxx    # password
xpack.monitoring.elasticsearch.hosts: ["http://127.0.0.1:9200"]

The pipeline file below connects to MySQL and pulls incremental data. It was written on Windows 7; to use it on Linux, just change the file paths accordingly.

input {
  stdin {
  }
  jdbc {
    jdbc_connection_string => "jdbc:mysql://192.168.1.186:3306/smalltarget_bak?characterEncoding=utf8"
    jdbc_user => "root"
    jdbc_password => "admin"
    jdbc_driver_library => "xxxxr"               # path to the MySQL connector jar
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    record_last_run => true
    use_column_value => true                     # track the last processed column value instead of a timestamp
    tracking_column => "userid"
    last_run_metadata_path => "xxxx"             # file that records the id of the last row pushed to ES
    clean_run => false
    # :sql_last_value is replaced with the recorded last id, so only new rows are pulled
    statement => "select * from user where userid > :sql_last_value"
    schedule => "* * * * *"
    type => "users"
  }
  jdbc {
    jdbc_connection_string => "jdbc:mysql://192.168.1.186:3306/smalltarget_bak?characterEncoding=utf8"
    jdbc_user => "root"
    jdbc_password => "admin"
    jdbc_driver_library => "path where the MySQL connector jar is placed"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    record_last_run => true
    use_column_value => true                     # track the last processed column value instead of a timestamp
    tracking_column => "signId"
    last_run_metadata_path => "path of the file that records the last processed id"
    clean_run => false
    lowercase_column_names => false              # keep the original column case so signId is preserved
    statement => "select * from sign where signId > :sql_last_value"
    schedule => "* * * * *"
    type => "signs"
  }

}

filter {
  json {
    source => "message"
    remove_field => ["message"]
  }
}

output {
  stdout {
    codec => json_lines
  }

  if [type] == "users" {
    elasticsearch {
      hosts => "127.0.0.1:9200"
      index => "users"
      document_id => "%{userid}"
      user => "xxxxx"
      password => "xxxxx"
    }
  }

  if [type] == "signs" {
    elasticsearch {
      hosts => "127.0.0.1:9200"
      index => "signs"
      document_id => "%{signId}"
      user => "xxxxx"
      password => "xxxxxx"
    }
  }

}

Put the pipeline config file and the last-id file under a config-mysql directory inside bin (create this directory yourself; the name just has to match the path used in the start command).
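
Before starting, it can help to let Logstash validate the pipeline syntax first; a sketch, run from the bin directory:

  logstash -f config-mysql/mysql.conf --config.test_and_exit
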
Start Logstash:

  logstash -f config-mysql/mysql.conf
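
Once Logstash is running on its one-minute schedule, you can check that documents are actually arriving; a sketch assuming Elasticsearch on 9200 and the credentials set up earlier:

  curl -u elastic 'http://127.0.0.1:9200/users/_count?pretty'
  curl -u elastic 'http://127.0.0.1:9200/signs/_count?pretty'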

3 Install Kibana

Add the following to config/kibana.yml:

elasticsearch.hosts: ["http://localhost:9200"]   # Elasticsearch address
elasticsearch.username: "xxxxxx"
elasticsearch.password: "xxxxxx"

Then restart Kibana.
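
To confirm Kibana came up and can reach Elasticsearch, one option is its status endpoint (a sketch, assuming the default port 5601):

  curl -s 'http://localhost:5601/api/status'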

The requests below create the indices in Elasticsearch (they can be run in Kibana Dev Tools).

Note: Elasticsearch 7.x removed mapping types, so the mappings block no longer contains a type name.

PUT users
{
  "settings": {
    "number_of_shards": "3",
    "number_of_replicas": "1"
  },
  "mappings": {
    "properties": {
      "username": {
        "type": "text",
        "analyzer": "ik_max_word"
      },
      "phone": {
        "type": "text",
        "analyzer": "ik_max_word"
      }
    }
  }
}

PUT signs
{
  "settings": {
    "number_of_shards": "3",
    "number_of_replicas": "1"
  },
  "mappings": {
    "properties": {
      "signId": {
        "type": "integer"
      },
      "content": {
        "type": "text",
        "analyzer": "ik_max_word"
      }
    }
  }
}
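
Once some rows have been synced, a quick sketch of a full-text search against one of the ik-analyzed fields (credentials and the search term are placeholders):

  curl -u elastic -H 'Content-Type: application/json' \
    'http://127.0.0.1:9200/users/_search?pretty' \
    -d '{"query": {"match": {"username": "张三"}}}'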

If there are any mistakes in the above, corrections are welcome.
