Flume reports an error at startup after recompiling from source

19/07/30 11:13:22 INFO util.RestClientHelper: es_username: null,  es_password:  null,  es_hostName:  172.16.10.10,172.16.10.11,  es_port: 9200
Exception in thread "SinkRunner-PollingRunner-DefaultSinkProcessor" java.lang.NoSuchFieldError: INSTANCE
    at org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager$InternalAddressResolver.<init>(PoolingNHttpClientConnectionManager.java:591)
    at org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager.<init>(PoolingNHttpClientConnectionManager.java:163)
    at org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager.<init>(PoolingNHttpClientConnectionManager.java:147)
    at org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager.<init>(PoolingNHttpClientConnectionManager.java:119)
    at org.apache.http.impl.nio.client.HttpAsyncClientBuilder.build(HttpAsyncClientBuilder.java:668)
    at org.elasticsearch.client.RestClientBuilder$1.run(RestClientBuilder.java:213)
    at org.elasticsearch.client.RestClientBuilder$1.run(RestClientBuilder.java:210)
    at java.security.AccessController.doPrivileged(Native Method)
    at org.elasticsearch.client.RestClientBuilder.createHttpClient(RestClientBuilder.java:210)
    at org.elasticsearch.client.RestClientBuilder.build(RestClientBuilder.java:184)
    at com.geo.cn.util.RestClientHelper.<init>(RestClientHelper.java:82)
    at com.geo.cn.util.RestClientHelper.getClient(RestClientHelper.java:114)
    at com.geo.cn.util.EsUtil.getRestHighLevelClient(EsUtil.java:43)
    at com.geo.cn.DASink.process(DASink.java:182)
    at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:67)
    at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:145)
    at java.lang.Thread.run(Thread.java:745)

Cause: Flume's lib directory (the classpath) contains multiple versions of the httpclient and httpcore jars: the older versions bundled with the Flume distribution, and the newer versions pulled in by the Maven-recompiled flume-ng-core. The JVM loads whichever copy comes first, so the async HTTP client ends up running against a class compiled for the other version and throws NoSuchFieldError: INSTANCE.


Fix: remove the older httpclient and httpcore jars from Flume's lib directory, keeping the newer versions, as sketched below.
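
One way to confirm and clean up the conflict (a sketch; FLUME_HOME and the exact jar versions are assumptions, adjust to your install):

# list every HttpComponents jar on Flume's classpath to confirm the duplicates
ls -l $FLUME_HOME/lib/httpclient-*.jar $FLUME_HOME/lib/httpcore-*.jar
# move the older copies out of lib rather than deleting them outright
mv $FLUME_HOME/lib/httpclient-<old-version>.jar /tmp/
mv $FLUME_HOME/lib/httpcore-<old-version>.jar /tmp/

After removing the old jars, restart the Flume agent so the cleaned classpath takes effect.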


With the Flume error fixed, test the data flow again:

Flume monitoring feature test:


1. cat the test data out and copy a portion of it into the Kafka producer:


(1) Create the topic:
kafka-topics.sh --create --zookeeper <ZooKeeper address, e.g. localhost:2181> --replication-factor 1 --partitions 1 --topic <topic name>
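
To confirm the topic exists before producing into it, the same script can describe it (same placeholder conventions as above):

kafka-topics.sh --describe --zookeeper <ZooKeeper address, e.g. localhost:2181> --topic <topic name>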


(2) Redirect the log into kafka-console-producer.sh via a here-document:
/usr/kafka_2.10-0.9.0.1/bin/kafka-console-producer.sh --topic <test topic created above> --broker-list <production Kafka broker list, ip:port[,ip:port...]> << EOF
`cat <full path to the Kafka log file>`
EOF

/usr/kafka_2.10-0.9.0.1/bin/kafka-console-producer.sh   --topic  dp_monitor_rh_out-20190717  --broker-list   hadoop26.geotmt.com:6667,hadoop206.geotmt.com:6667,hadoop207.geotmt.com:6667  << EOF
`cat da.log`
EOF

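Optionally, confirm the messages actually landed in the topic before checking Flume. A sketch using the 0.9 console consumer, which still reads via ZooKeeper (the ZooKeeper address is an assumption):

/usr/kafka_2.10-0.9.0.1/bin/kafka-console-consumer.sh --zookeeper <ZooKeeper ip:port> --topic dp_monitor_rh_out-20190717 --from-beginning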


2. After the data has been produced, check the Flume log to confirm that events are being inserted into Elasticsearch normally, then verify that Elasticsearch contains documents for the corresponding model ID.
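
For example, a quick check against the Elasticsearch host from the log above (the index name and the modelId field are assumptions; substitute whatever the sink actually writes):

# count documents in the target index
curl -XGET 'http://172.16.10.10:9200/<index>/_count?pretty'
# search for a specific model id
curl -XGET 'http://172.16.10.10:9200/<index>/_search?q=modelId:<model id>&pretty'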
