Today I'd like to share how to use Flume to collect logs, load them directly into the corresponding Hive table, and then analyze the logs with Hive.
The example below uses the Apache access log format.
Whether you use a Hive external table or a regular (managed) table is up to you.
I'll use a regular table here. First we create a Hive table (note: I copied this table definition straight from the Hive website and only changed the table name).
1. First, enter the Hive command line.
I created my own database:
create database hive_1208;
Then switch to that database:
use hive_1208;
Then run the CREATE TABLE statement:
CREATE TABLE td_log_analyze (
  host STRING,
  identity STRING,
  user STRING,
  time STRING,
  request STRING,
  status STRING,
  size STRING,
  referer STRING,
  agent STRING)
PARTITIONED BY (dt STRING)
ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
  "input.regex" = "([^ ]*) ([^ ]*) ([^ ]*) (-|\\[[^\\]]*\\]) ([^ \"]*|\"[^\"]*\") (-|[0-9]*) (-|[0-9]*)(?: ([^ \"]*|\"[^\"]*\") ([^ \"]*|\"[^\"]*\"))?",
  "output.format.string" = "%1$s %2$s %3$s %4$s %5$s %6$s %7$s %8$s %9$s"
)
STORED AS TEXTFILE;
2. You can then see the table's directory on HDFS.
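You can check it with the HDFS shell, for example (the warehouse path here is the one used later in the Flume sink config; yours may differ depending on hive.metastore.warehouse.dir):
hdfs dfs -ls /hive/warehouse/hive_1208.db/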
Now it's Flume's turn. My configuration is as follows:
# Define the agent's source, channel and sink names
logAnalyzeAG.sources = r1
logAnalyzeAG.channels = c1
logAnalyzeAG.sinks = k1

# Configure the source
logAnalyzeAG.sources.r1.type = spooldir
logAnalyzeAG.sources.r1.spoolDir = /usr/local/nginx_logs
logAnalyzeAG.sources.r1.deserializer.maxLineLength = 1048576
logAnalyzeAG.sources.r1.fileSuffix = .DONE
logAnalyzeAG.sources.r1.ignorePattern = access(_\d{4}\-\d{2}\-\d{2}_\d{2})?\.log(\.DONE)?
logAnalyzeAG.sources.r1.consumeOrder = oldest
logAnalyzeAG.sources.r1.deserializer = org.apache.flume.sink.solr.morphline.BlobDeserializer$Builder
logAnalyzeAG.sources.r1.batchSize = 5

# Optional interceptor that adds a timestamp header to each event
#logAnalyzeAG.sources.r1.interceptors = i1
#logAnalyzeAG.sources.r1.interceptors.i1.type = org.apache.flume.interceptor.TimestampInterceptor$Builder

# Configure the channel
logAnalyzeAG.channels.c1.type = memory
logAnalyzeAG.channels.c1.capacity = 10000
logAnalyzeAG.channels.c1.transactionCapacity = 100

# Configure the sink
logAnalyzeAG.sinks.k1.type = hdfs
# This path maps to the Hive table's directory. If you use an external table, point it
# at your table's LOCATION; for a regular table, point it at the table directory under
# the Hive warehouse, as done here.
logAnalyzeAG.sinks.k1.hdfs.path = hdfs://192.168.3.141:9000/hive/warehouse/hive_1208.db/td_log_analyze/%Y-%m-%d_%H
logAnalyzeAG.sinks.k1.hdfs.filePrefix = nginx-%Y-%m-%d_%H
logAnalyzeAG.sinks.k1.hdfs.fileSuffix = .log
logAnalyzeAG.sinks.k1.hdfs.fileType = DataStream
# Do not roll files based on event count
logAnalyzeAG.sinks.k1.hdfs.rollCount = 0
# Roll a new file once it reaches 2914560 bytes (about 2.8 MB)
logAnalyzeAG.sinks.k1.hdfs.rollSize = 2914560
# Roll a new file every 60 seconds (disabled here)
#logAnalyzeAG.sinks.k1.hdfs.rollInterval = 60
logAnalyzeAG.sinks.k1.hdfs.useLocalTimeStamp = true

# Wire the source and sink to the channel
logAnalyzeAG.sources.r1.channels = c1
logAnalyzeAG.sinks.k1.channel = c1
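Save the configuration and start the agent. A rough sketch of the start command (the config file name and Flume install paths below are just examples; adjust them to your installation):
flume-ng agent \
  --conf /usr/local/flume/conf \
  --conf-file /usr/local/flume/conf/log-analyze.conf \
  --name logAnalyzeAG \
  -Dflume.root.logger=INFO,console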
The Flume source here is of type spooldir and monitors /usr/local/nginx_logs.
Go into the monitored directory, create a file with vi assec_2015-12-08_12.log, and paste in the following sample data:
180.173.250.74 - - [08/Jan/2015:12:38:08 +0800] "GET /avatar/xxx.png HTTP/1.1" 200 968 "http://www.iteblog.com/archives/994" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.131 Safari/537.36"
180.173.250.74 - - [08/Jan/2015:12:38:08 +0800] "GET /avatar/xxx.png HTTP/1.1" 200 968 "http://www.iteblog.com/archives/994" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.131 Safari/537.36"
180.173.250.74 - - [08/Jan/2015:12:38:08 +0800] "GET /avatar/xxx.png HTTP/1.1" 200 968 "http://www.iteblog.com/archives/994" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.131 Safari/537.36"
Once the file is saved, Flume picks up the log and delivers it to the configured HDFS directory.
You should now see the file under the corresponding HDFS directory, which means the log was collected successfully.
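You can verify this with the HDFS shell. The hour-based subdirectory name comes from the sink's %Y-%m-%d_%H pattern and the local time at which Flume wrote the file; in this walkthrough that is 2015-12-08_12, so something like:
hdfs dfs -ls /hive/warehouse/hive_1208.db/td_log_analyze/2015-12-08_12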
Because the logs are stored in per-hour directories, you need to add the matching partition to the Hive table so it points at that directory:
ALTER TABLE td_log_analyze ADD IF NOT EXISTS PARTITION (dt='2015-12-08_12') LOCATION '/hive/warehouse/hive_1208.db/td_log_analyze/2015-12-08_12/';
Then, in the Hive CLI, check whether the partition was created successfully:
show partitions td_log_analyze;
Then query the Hive table:
select * from td_log_analyze;
OK, at this point everything works end to end; the actual analysis depends on your own business needs.
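As a simple illustration (my own example, not part of the original walkthrough), you could count requests per HTTP status code in the partition that was just loaded:
select status, count(*) as cnt
from td_log_analyze
where dt='2015-12-08_12'
group by status
order by cnt desc;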
Note: the manual partition-creation step above can be automated with a script, depending on your partitioning interval; I suggest simply creating the partitions one day in advance.
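A minimal sketch of such a script (my own example; the warehouse path is the one used above, and it assumes GNU date and the hive CLI on the PATH). It pre-creates tomorrow's hourly partitions to match the sink's %Y-%m-%d_%H directory layout, and could be run from a daily cron job:
#!/bin/bash
# Pre-create tomorrow's hourly partitions so they exist before Flume writes to them.
TABLE_DIR=/hive/warehouse/hive_1208.db/td_log_analyze
NEXT_DAY=$(date -d "+1 day" +%Y-%m-%d)
for HOUR in $(seq -w 0 23); do
  DT="${NEXT_DAY}_${HOUR}"
  hive -e "use hive_1208; ALTER TABLE td_log_analyze ADD IF NOT EXISTS PARTITION (dt='${DT}') LOCATION '${TABLE_DIR}/${DT}/';"
done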