Hadoop Audit Log Configuration

Original article: https://blog.csdn.net/u011281987/article/details/30034143


  • HDFS audit
    Configure in log4j.properties (these entries are present by default):
    hdfs.audit.logger=INFO,NullAppender
    hdfs.audit.log.maxfilesize=256MB
    hdfs.audit.log.maxbackupindex=20
    log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger}
    log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false   (controls whether audit entries are also written to the NameNode log)
    #log4j.logger.org.apache.hadoop.security=DEBUG,RFAAUDIT   (uncomment to also capture some authentication/authorization information)
    log4j.appender.RFAAUDIT=org.apache.log4j.RollingFileAppender
    log4j.appender.RFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log
    log4j.appender.RFAAUDIT.layout=org.apache.log4j.PatternLayout
    log4j.appender.RFAAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
    log4j.appender.RFAAUDIT.MaxFileSize=${hdfs.audit.log.maxfilesize}
    log4j.appender.RFAAUDIT.MaxBackupIndex=${hdfs.audit.log.maxbackupindex}
    

Then point hdfs.audit.logger at the RFAAUDIT appender via HADOOP_NAMENODE_OPTS in hadoop-env.sh:

export HADOOP_NAMENODE_OPTS=.... -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,RFAAUDIT} $HADOOP_NAMENODE_OPTS
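
To verify the setup, generate a couple of HDFS operations and watch the audit file. A minimal smoke test (illustrative; the path assumes the default log directory):

    hdfs dfs -mkdir /audit-test    # produces a cmd=mkdirs audit entry
    hdfs dfs -ls /                 # produces a cmd=listStatus audit entry
    tail -n 2 $HADOOP_HOME/logs/hdfs-audit.log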

The log is written to logs/hdfs-audit.log on the NameNode host, in the following format:

2014-04-30 10:19:13,173 INFO FSNamesystem.audit: allowed=true    ugi=cdh5 (auth:SIMPLE)    ip=/10.1.251.52    cmd=create    src=/a._COPYING_    dst=null    perm=cdh5:supergroup:rw-r--r--


ugi=<user>,<group>[,<group>]*
ip=<client ip address>
cmd=(open|create|delete|rename|mkdirs|listStatus|setReplication|setOwner|setPermission)
src=<path>
dst=(<path>|"null")
perm=(<user>:<group>:<permissions>|"null")
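
Every field is a key=value pair and, in stock Hadoop, the fields are tab-separated, so the file is easy to post-process. A hedged one-liner that counts commands per user (adjust -F if your field separator differs):

    awk -F'\t' '{u=c=""; for(i=1;i<=NF;i++){ if($i~/^ugi=/) u=$i; if($i~/^cmd=/) c=$i }; print u, c}' \
        logs/hdfs-audit.log | sort | uniq -c | sort -rn | head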

 

  • MapReduce audit
    log4j.properties does not contain log4j.logger.org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger=${mapred.audit.logger} by default; add the following by hand:
    mapred.audit.logger=INFO,MRAUDIT
    mapred.audit.log.maxfilesize=256MB
    mapred.audit.log.maxbackupindex=20
    log4j.logger.org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger=${mapred.audit.logger}
    log4j.logger.org.apache.hadoop.mapred.AuditLogger=${mapred.audit.logger}
    log4j.additivity.org.apache.hadoop.mapred.AuditLogger=false
    log4j.appender.MRAUDIT=org.apache.log4j.RollingFileAppender
    log4j.appender.MRAUDIT.File=${hadoop.log.dir}/mapred-audit.log
    log4j.appender.MRAUDIT.layout=org.apache.log4j.PatternLayout
    log4j.appender.MRAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
    log4j.appender.MRAUDIT.MaxFileSize=${mapred.audit.log.maxfilesize}
    log4j.appender.MRAUDIT.MaxBackupIndex=${mapred.audit.log.maxbackupindex}
    

The log is written to logs/mapred-audit.log on the ResourceManager host, in the following format:

2014-04-30 10:35:09,595 INFO resourcemanager.RMAuditLogger: USER=cdh5    IP=10.1.251.52    OPERATION=Submit Application Request    TARGET=ClientRMService    RESULT=SUCCESS    APPID=application_1398825288110_0001
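
The RESULT field distinguishes successful from failed operations, so failures are easy to isolate (illustrative):

    grep 'RESULT=FAILURE' logs/mapred-audit.log | tail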

 

  • Hive audit

To audit metastore queries and modifications, add the following to conf/hive-log4j.properties:

log4j.appender.HAUDIT=org.apache.log4j.DailyRollingFileAppender
log4j.appender.HAUDIT.File=${hive.log.dir}/hive_audit.log
log4j.appender.HAUDIT.DatePattern=.yyyy-MM-dd
log4j.appender.HAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.HAUDIT.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n
log4j.logger.org.apache.hadoop.hive.metastore.HiveMetaStore.audit=INFO,HAUDIT

 

Log file: logs/hive_audit.log

Log format:

2014-04-30 11:26:09,918 INFO  HiveMetaStore.audit (HiveMetaStore.java:logAuditEvent(242)) - ugi=cdh5    ip=unknown-ip-addr      cmd=get_database: default
2014-04-30 11:26:09,931 INFO  HiveMetaStore.audit (HiveMetaStore.java:logAuditEvent(242)) - ugi=cdh5    ip=unknown-ip-addr      cmd=get_tables: db=default pat=.*
2014-04-30 11:26:45,153 INFO  HiveMetaStore.audit (HiveMetaStore.java:logAuditEvent(242)) - ugi=cdh5    ip=unknown-ip-addr      cmd=get_table : db=default tbl=abc
2014-04-30 11:26:45,253 INFO  HiveMetaStore.audit (HiveMetaStore.java:logAuditEvent(242)) - ugi=cdh5    ip=unknown-ip-addr      cmd=get_table : db=default tbl=abc
2014-04-30 11:26:45,285 INFO  HiveMetaStore.audit (HiveMetaStore.java:logAuditEvent(242)) - ugi=cdh5    ip=unknown-ip-addr      cmd=get_table : db=default tbl=abc
2014-04-30 11:26:45,315 INFO  HiveMetaStore.audit (HiveMetaStore.java:logAuditEvent(242)) - ugi=cdh5    ip=unknown-ip-addr      cmd=drop_table : db=default tbl=abc
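
Each entry carries a cmd= field, so destructive metastore operations can be flagged with a simple filter. An illustrative pattern, assuming command names of the form drop_*/alter_* as in the drop_table entry above:

    grep -E 'cmd=(drop|alter)_' logs/hive_audit.log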

 

  • HBase security

1) Enable HBase security
Add the following to hbase-site.xml on every node and restart. The RPC engine is set to SecureRpcEngine because that engine propagates the user credentials (such as the username) supplied by the remote client.

    
    <property>
        <name>hbase.rpc.engine</name>
        <value>org.apache.hadoop.hbase.ipc.SecureRpcEngine</value>
    </property>
    <property>
        <name>hbase.coprocessor.master.classes</name>
        <value>org.apache.hadoop.hbase.security.access.AccessController</value>
    </property>
    <property>
        <name>hbase.coprocessor.region.classes</name>
        <value>org.apache.hadoop.hbase.security.token.TokenProvider,org.apache.hadoop.hbase.security.access.AccessController</value>
    </property>
    <property>
        <name>hbase.superuser</name>
        <value>cdh5</value>   <!-- the superuser account -->
    </property>
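
With the AccessController coprocessor active, table permissions are managed from the HBase shell. An illustrative grant (user and table names are placeholders):

    echo "grant 'someuser', 'RW', 'sometable'" | hbase shell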


 

2) Configure log4j.properties and enable the security audit appender (present by default):

hbase.security.log.file=SecurityAuth.audit
hbase.security.log.maxfilesize=256MB
hbase.security.log.maxbackupindex=20
log4j.appender.RFAS=org.apache.log4j.RollingFileAppender
log4j.appender.RFAS.File=${hbase.log.dir}/${hbase.security.log.file}
log4j.appender.RFAS.MaxFileSize=${hbase.security.log.maxfilesize}
log4j.appender.RFAS.MaxBackupIndex=${hbase.security.log.maxbackupindex}
log4j.appender.RFAS.layout=org.apache.log4j.PatternLayout
log4j.appender.RFAS.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
log4j.category.SecurityLogger=${hbase.security.logger}
log4j.additivity.SecurityLogger=true
log4j.logger.SecurityLogger.org.apache.hadoop.hbase.security.access.AccessController=TRACE
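
Note that log4j.category.SecurityLogger resolves ${hbase.security.logger}, which still has to be set to something like INFO,RFAS at startup. In many HBase versions the bin/hbase script forwards the HBASE_SECURITY_LOGGER environment variable as -Dhbase.security.logger (verify against your version); if so, one way to set it is in hbase-env.sh:

    export HBASE_SECURITY_LOGGER="INFO,RFAS"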

 

The log format looks like this:

2014-06-10 16:09:53,319 TRACE SecurityLogger.org.apache.hadoop.hbase.security.access.AccessController: Access allowed for user cdh5; reason: Table permission granted; remote address: /10.1.251.152; request: deleteTable; context: (user=cdh5, scope=yqhtt, family=, action=ADMIN)
2014-06-10 16:09:53,356 TRACE SecurityLogger.org.apache.hadoop.hbase.security.access.AccessController: Access allowed for user cdh5; reason: All users allowed; remote address: /10.1.251.152; request: getClosestRowBefore; context: (user=cdh5, scope=hbase:meta, family=info, action=READ)
2014-06-10 16:09:53,403 TRACE SecurityLogger.org.apache.hadoop.hbase.security.access.AccessController: Access allowed for user cdh5; reason: Table permission granted; remote address: /10.1.251.152; request: delete; context: (user=cdh5, scope=hbase:meta, family=info:, action=WRITE)
2014-06-10 16:09:53,444 TRACE SecurityLogger.org.apache.hadoop.hbase.security.access.AccessController: Access allowed for user cdh5; reason: Table permission granted; remote address: /10.1.251.152; request: delete; context: (user=cdh5, scope=hbase:acl, family=l:, action=WRITE)
2014-06-10 16:09:53,471 TRACE SecurityLogger.org.apache.hadoop.hbase.security.access.AccessController: Access allowed for user cdh5; reason: All users allowed; remote address: /10.1.251.152; request: getClosestRowBefore; context: (user=cdh5, scope=hbase:meta, family=info, action=READ)
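
Lines containing "Access denied" mark rejected requests, so a quick check for violations is (illustrative):

    grep 'Access denied' logs/SecurityAuth.audit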
