create 'WLSSERVER', 'LOG'
put 'WLSSERVER', 'row1', 'LOG:TIME_STAMP', 'Apr-8-2014-7:06:16-PM-PDT'
put 'WLSSERVER', 'row1', 'LOG:CATEGORY', 'Notice'
put 'WLSSERVER', 'row1', 'LOG:TYPE', 'WebLogicServer'
put 'WLSSERVER', 'row1', 'LOG:SERVERNAME', 'AdminServer'
put 'WLSSERVER', 'row1', 'LOG:CODE', 'BEA-000365'
put 'WLSSERVER', 'row1', 'LOG:MSG', 'Server state changed to STANDBY'
put 'WLSSERVER', 'row2', 'LOG:TIME_STAMP', 'Apr-8-2014-7:06:17-PM-PDT'
put 'WLSSERVER', 'row2', 'LOG:CATEGORY', 'Notice'
put 'WLSSERVER', 'row2', 'LOG:TYPE', 'WebLogicServer'
put 'WLSSERVER', 'row2', 'LOG:SERVERNAME', 'AdminServer'
put 'WLSSERVER', 'row2', 'LOG:CODE', 'BEA-000365'
put 'WLSSERVER', 'row2', 'LOG:MSG', 'Server state changed to STARTING'
put 'WLSSERVER', 'row3', 'LOG:TIME_STAMP', 'Apr-8-2014-7:06:18-PM-PDT'
put 'WLSSERVER', 'row3', 'LOG:CATEGORY', 'Notice'
put 'WLSSERVER', 'row3', 'LOG:TYPE', 'WebLogicServer'
put 'WLSSERVER', 'row3', 'LOG:SERVERNAME', 'AdminServer'
put 'WLSSERVER', 'row3', 'LOG:CODE', 'BEA-000365'
put 'WLSSERVER', 'row3', 'LOG:MSG', 'Server state changed to ADMIN'
put 'WLSSERVER', 'row4', 'LOG:TIME_STAMP', 'Apr-8-2014-7:06:19-PM-PDT'
put 'WLSSERVER', 'row4', 'LOG:CATEGORY', 'Notice'
put 'WLSSERVER', 'row4', 'LOG:TYPE', 'WebLogicServer'
put 'WLSSERVER', 'row4', 'LOG:SERVERNAME', 'AdminServer'
put 'WLSSERVER', 'row4', 'LOG:CODE', 'BEA-000365'
put 'WLSSERVER', 'row4', 'LOG:MSG', 'Server state changed to RESUMING'
put 'WLSSERVER', 'row5', 'LOG:TIME_STAMP', 'Apr-8-2014-7:06:20-PM-PDT'
put 'WLSSERVER', 'row5', 'LOG:CATEGORY', 'Notice'
put 'WLSSERVER', 'row5', 'LOG:TYPE', 'WebLogicServer'
put 'WLSSERVER', 'row5', 'LOG:SERVERNAME', 'AdminServer'
put 'WLSSERVER', 'row5', 'LOG:CODE', 'BEA-000331'
put 'WLSSERVER', 'row5', 'LOG:MSG', 'Started WebLogic AdminServer'
put 'WLSSERVER', 'row6', 'LOG:TIME_STAMP', 'Apr-8-2014-7:06:21-PM-PDT'
put 'WLSSERVER', 'row6', 'LOG:CATEGORY', 'Notice'
put 'WLSSERVER', 'row6', 'LOG:TYPE', 'WebLogicServer'
put 'WLSSERVER', 'row6', 'LOG:SERVERNAME', 'AdminServer'
put 'WLSSERVER', 'row6', 'LOG:CODE', 'BEA-000365'
put 'WLSSERVER', 'row6', 'LOG:MSG', 'Server state changed to RUNNING'
put 'WLSSERVER', 'row7', 'LOG:TIME_STAMP', 'Apr-8-2014-7:06:22-PM-PDT'
put 'WLSSERVER', 'row7', 'LOG:CATEGORY', 'Notice'
put 'WLSSERVER', 'row7', 'LOG:TYPE', 'WebLogicServer'
put 'WLSSERVER', 'row7', 'LOG:SERVERNAME', 'AdminServer'
put 'WLSSERVER', 'row7', 'LOG:CODE', 'BEA-000360'
put 'WLSSERVER', 'row7', 'LOG:MSG', 'Server started in RUNNING mode'
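Before running the loader, the rows just added can be spot-checked from the same HBase shell session. The commands below are only a minimal sketch using standard HBase shell verbs (scan, count, get); the LIMIT option and the row key 'row1' are illustrative, and the shell output is omitted here.

# list the first two rows with all LOG: columns (LIMIT is a standard scan option)
scan 'WLSSERVER', {LIMIT => 2}
# the row count should be 7 for the puts above
count 'WLSSERVER'
# fetch a single row by its row key
get 'WLSSERVER', 'row1'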
Oracle Loader for Hadoop Release 3.5.0 - Production
Copyright (c) 2011, 2015, Oracle and/or its affiliates. All rights reserved.
15/12/21 10:37:22 INFO loader.OraLoader: Oracle Loader for Hadoop Release 3.5.0 - Production
Copyright (c) 2011, 2015, Oracle and/or its affiliates. All rights reserved.
15/12/21 10:37:22 INFO loader.OraLoader: Built-Against: hadoop-2.2.0 hive-0.13.0 avro-1.7.6 jackson-1.8.8
15/12/21 10:37:50 INFO loader.OraLoader: oracle.hadoop.loader.loadByPartition is disabled because table: WLSSERVER is not partitioned
15/12/21 10:37:50 INFO loader.OraLoader: oracle.hadoop.loader.enableSorting disabled, no sorting key provided
15/12/21 10:37:50 INFO loader.OraLoader: Reduce tasks set to 0 because of no partitioning or sorting. Loading will be done in the map phase.
15/12/21 10:37:50 INFO output.DBOutputFormat: Setting map tasks speculative execution to false for : oracle.hadoop.loader.lib.output.JDBCOutputFormat
15/12/21 10:37:52 WARN loader.OraLoader: Sampler is disabled because the number of reduce tasks is less than two. Job will continue without sampled information.
15/12/21 10:37:52 INFO loader.OraLoader: Submitting OraLoader job OraLoader
15/12/21 10:37:52 INFO client.RMProxy: Connecting to ResourceManager at server1/192.168.56.101:8032
15/12/21 10:37:57 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
15/12/21 10:37:57 INFO metastore.ObjectStore: ObjectStore, initialize called
15/12/21 10:37:57 INFO DataNucleus.Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
15/12/21 10:37:57 INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored
15/12/21 10:38:00 INFO metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
15/12/21 10:38:03 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
15/12/21 10:38:03 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
15/12/21 10:38:03 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
15/12/21 10:38:03 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
15/12/21 10:38:03 INFO DataNucleus.Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
15/12/21 10:38:04 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL
15/12/21 10:38:04 INFO metastore.ObjectStore: Initialized ObjectStore
15/12/21 10:38:04 INFO metastore.HiveMetaStore: Added admin role in metastore
15/12/21 10:38:04 INFO metastore.HiveMetaStore: Added public role in metastore
15/12/21 10:38:04 INFO metastore.HiveMetaStore: No user is added in admin role, since config is empty
15/12/21 10:38:05 INFO metastore.HiveMetaStore: 0: get_table : db=default tbl=wlsserver_hbase
15/12/21 10:38:05 INFO HiveMetaStore.audit: ugi=hadoop ip=unknown-ip-addr cmd=get_table : db=default tbl=wlsserver_hbase
15/12/21 10:38:06 INFO input.HiveToAvroInputFormat: Row filter: null
15/12/21 10:38:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x62a54948 connecting to ZooKeeper ensemble=localhost:2181
15/12/21 10:38:07 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
15/12/21 10:38:07 INFO zookeeper.ZooKeeper: Client environment:host.name=server1
15/12/21 10:38:07 INFO zookeeper.ZooKeeper: Client environment:java.version=1.8.0_65
15/12/21 10:38:07 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
15/12/21 10:38:07 INFO zookeeper.ZooKeeper: Client environment:java.home=/home/hadoop/jdk1.8.0_65/jre
15/12/21 10:38:07 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/home/hadoop/hadoop-2.6.2/lib/native
15/12/21 10:38:07 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
15/12/21 10:38:07 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
15/12/21 10:38:07 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
15/12/21 10:38:07 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
15/12/21 10:38:07 INFO zookeeper.ZooKeeper: Client environment:os.version=3.8.13-55.1.6.el7uek.x86_64
15/12/21 10:38:07 INFO zookeeper.ZooKeeper: Client environment:user.name=hadoop
15/12/21 10:38:07 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/hadoop
15/12/21 10:38:07 INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/hadoop
15/12/21 10:38:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=90000 watcher=hconnection-0x62a549480x0, quorum=localhost:2181, baseZNode=/hbase
15/12/21 10:38:07 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
15/12/21 10:38:07 INFO zookeeper.ClientCnxn: Socket connection established to localhost/127.0.0.1:2181, initiating session
15/12/21 10:38:07 INFO zookeeper.ClientCnxn: Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x151c51c160b000d, negotiated timeout = 90000
15/12/21 10:38:07 WARN mapreduce.TableInputFormatBase: You are using an HTable instance that relies on an HBase-managed Connection. This is usually due to directly creating an HTable, which is deprecated. Instead, you should create a Connection object and then request a Table instance from it. If you don't need the Table instance for your own use, you should instead use the TableInputFormatBase.initalizeTable method directly.
15/12/21 10:38:07 INFO mapreduce.TableInputFormatBase: Creating an additional unmanaged connection because user provided one can't be used for administrative actions. We'll close it when we close out the table.
15/12/21 10:38:07 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x5b1efaaf connecting to ZooKeeper ensemble=localhost:2181
15/12/21 10:38:07 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=90000 watcher=hconnection-0x5b1efaaf0x0, quorum=localhost:2181, baseZNode=/hbase
15/12/21 10:38:07 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
15/12/21 10:38:07 INFO zookeeper.ClientCnxn: Socket connection established to localhost/127.0.0.1:2181, initiating session
15/12/21 10:38:07 INFO zookeeper.ClientCnxn: Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x151c51c160b000e, negotiated timeout = 90000
15/12/21 10:38:07 INFO util.RegionSizeCalculator: Calculating region sizes for table "WLSSERVER".
15/12/21 10:38:09 INFO metastore.HiveMetaStore: 0: Shutting down the object store...
15/12/21 10:38:09 INFO HiveMetaStore.audit: ugi=hadoop ip=unknown-ip-addr cmd=Shutting down the object store...
15/12/21 10:38:09 INFO metastore.HiveMetaStore: 0: Metastore shutdown complete.
15/12/21 10:38:09 INFO HiveMetaStore.audit: ugi=hadoop ip=unknown-ip-addr cmd=Metastore shutdown complete.
15/12/21 10:38:09 INFO mapreduce.JobSubmitter: number of splits:1
15/12/21 10:38:09 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1450710904519_0002
15/12/21 10:38:10 INFO impl.YarnClientImpl: Submitted application application_1450710904519_0002
15/12/21 10:38:10 INFO mapreduce.Job: The url to track the job: http://server1:8088/proxy/application_1450710904519_0002/
15/12/21 10:38:28 INFO loader.OraLoader: map 0% reduce 0%
15/12/21 10:39:12 INFO loader.OraLoader: map 100% reduce 0%
15/12/21 10:39:17 INFO loader.OraLoader: Job complete: OraLoader (job_1450710904519_0002)
15/12/21 10:39:17 INFO loader.OraLoader: Counters: 30
File System Counters
FILE: Number of bytes read=0
FILE: Number of bytes written=125961
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=16889
HDFS: Number of bytes written=22053
HDFS: Number of read operations=5
HDFS: Number of large read operations=0
HDFS: Number of write operations=4
Job Counters
Launched map tasks=1
Data-local map tasks=1
Total time spent by all maps in occupied slots (ms)=45057
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=45057
Total vcore-seconds taken by all map tasks=45057
Total megabyte-seconds taken by all map tasks=46138368
Map-Reduce Framework
Map input records=7
Map output records=7
Input split bytes=16889
Spilled Records=0
Failed Shuffles=0
Merged Map outputs=0
GC time elapsed (ms)=400
CPU time spent (ms)=8080
Physical memory (bytes) snapshot=196808704
Virtual memory (bytes) snapshot=2101088256
Total committed heap usage (bytes)=123797504
File Input Format Counters
Bytes Read=0
File Output Format Counters
Bytes Written=22053
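After the OraLoader job completes, the load can be confirmed from the Oracle side. As a minimal sketch, assuming the connected schema owns the target table WLSSERVER named in the loader output above, a row count from SQL*Plus or any SQL client should match the Map output records counter:

-- expect 7 rows, matching Map output records=7 in the counters above
SELECT COUNT(*) FROM WLSSERVER;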