2018-01-07 Hadoop Platform and Application Framework -- Lesson 2 Exploration of the services working in Cloudera VM

Cloudera QuickStart VM Requirements

Software

Windows 7+, Mac OS X 10.10+, Ubuntu 14.04+, or CentOS 6+; VirtualBox 5+, VMware Workstation 9+, or VMware Fusion 7+

Hardware

Quad Core Processor (VT-x or AMD-V support recommended)

8 GB RAM

20 GB free disk space

Internet Connection

4 GB Cloudera QuickStart VM download:

https://downloads.cloudera.com/demo_vm/virtualbox/cloudera-quickstart-vm-5.4.2-0-virtualbox.zip


Sqoop: import MySQL DB data into HDFS

The command below launches MapReduce jobs to pull data out of the MySQL database and write the imported files to HDFS in Avro format (an optimized Apache file format).

It also creates an Avro schema file (*.avsc) in the current local directory, one per imported table / *.avro file. This is the schema-on-read style: the schema files are read by Impala/Hive at query time.


import:

[cloudera@quickstart ~]$ sqoop import-all-tables  -m 1  --connect jdbc:mysql://quickstart:3306/retail_db  --username=retail_dba  --password=cloudera  --compression-codec=snappy  --as-avrodatafile  --warehouse-dir=/user/hive/warehouse
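
For reference, each generated *.avsc file is plain JSON describing one table's columns. A minimal sketch for the categories table (field names guessed from the retail_db schema; the file Sqoop actually writes carries extra per-field metadata):

{
  "type" : "record",
  "name" : "categories",
  "doc" : "Sqoop import of categories",
  "fields" : [
    { "name" : "category_id",            "type" : [ "null", "int" ] },
    { "name" : "category_department_id", "type" : [ "null", "int" ] },
    { "name" : "category_name",          "type" : [ "null", "string" ] }
  ]
}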


verification:

[cloudera@quickstart ~]$ hadoop fs -ls /user/hive/warehouse

[cloudera@quickstart ~]$ hadoop fs -ls /user/hive/warehouse/categories/            --> it lists the .avro file.


move the schema files to a convenient location for later use:

[cloudera@quickstart ~]$ sudo -u hdfs hadoop fs -mkdir /user/examples

[cloudera@quickstart ~]$ sudo -u hdfs hadoop fs -chmod +rw /user/examples

[cloudera@quickstart ~]$ hadoop fs -copyFromLocal ~/*.avsc /user/examples/
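
verify the copy:

[cloudera@quickstart ~]$ hadoop fs -ls /user/examples/            --> it should list the *.avsc files.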


Impala: create table metadata over the files in HDFS and then query them

Hue provides a web-based interface on port 8888 of the master node. Select "Impala" and run the code below.


create tables:

CREATE EXTERNAL TABLE categories STORED AS AVRO

LOCATION 'hdfs:///user/hive/warehouse/categories'

TBLPROPERTIES('avro.schema.url'='hdfs://quickstart/user/examples/sqoop_import_categories.avsc');

......   --> repeat for each of the other tables.


show tables:

show tables;


query tables to get business reports:

case 1 : 

[Screenshot: Impala query and result for case 1]
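
The original screenshot is not reproduced here. In the Cloudera QuickStart tutorial this lesson follows, case 1 ranks the most popular product categories; a sketch, assuming the retail_db tables imported above (categories, products, order_items):

select c.category_name, count(order_item_quantity) as count
from order_items oi
inner join products p on oi.order_item_product_id = p.product_id
inner join categories c on c.category_id = p.product_category_id
group by c.category_name
order by count desc
limit 10;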

case 2 :

[Screenshot: Impala query and result for case 2]
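
Likewise, case 2 in the tutorial reports the top ten revenue-generating products; a sketch under the same schema assumptions:

select p.product_id, p.product_name, r.revenue
from products p
inner join (
    select oi.order_item_product_id, sum(cast(oi.order_item_subtotal as float)) as revenue
    from order_items oi
    inner join orders o on oi.order_item_order_id = o.order_id
    where o.order_status <> 'CANCELED' and o.order_status <> 'SUSPECTED_FRAUD'
    group by order_item_product_id
) r on p.product_id = r.order_item_product_id
order by r.revenue desc
limit 10;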

Hive: create tables from raw data (like logs)

Beeline is a command-line JDBC client. Here we use it to create tables from log files in HDFS.
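
This assumes the raw access log is already in HDFS. If it is not, a sketch of the copy step, with paths taken from the QuickStart tutorial (adjust them to wherever your sample log lives):

[cloudera@quickstart ~]$ sudo -u hdfs hadoop fs -mkdir /user/hive/warehouse/original_access_logs

[cloudera@quickstart ~]$ sudo -u hdfs hadoop fs -copyFromLocal /opt/examples/log_files/access.log.2 /user/hive/warehouse/original_access_logs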

invoke Beeline:

[cloudera@quickstart ~]$ beeline -u jdbc:hive2://quickstart:10000/default  -n admin  -d org.apache.hive.jdbc.HiveDriver


Use Hive's SerDes to parse the logs into individual fields using a regular expression:

SerDes = serializers/deserializers

[Screenshot: Hive DDL creating the log table with a RegexSerDe]
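
The DDL in the screenshot is not reproduced here; a sketch in the spirit of the QuickStart tutorial (column names are illustrative except url, which the later query relies on; the regex matches a common Apache access-log layout and must be adjusted to yours):

CREATE EXTERNAL TABLE intermediate_access_logs (
    ip STRING, log_date STRING, method STRING, url STRING, http_version STRING,
    status STRING, bytes_sent STRING, referrer STRING, browser STRING)
ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
    'input.regex' = '([^ ]*) - - \\[([^\\]]*)\\] "([^ ]*) ([^ ]*) ([^ ]*)" (\\d*) (\\d*) "([^"]*)" "([^"]*)"',
    'output.format.string' = '%1$s %2$s %3$s %4$s %5$s %6$s %7$s %8$s %9$s')
LOCATION '/user/hive/warehouse/original_access_logs';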

Transfer the data from the table above into one that does not require any special SerDe, so that Impala can query it quickly:

[Screenshot: Hive statements transferring the parsed logs into a plain table]
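
A sketch of that transfer, again following the tutorial (tokenized_access_logs matches the query below; the ADD JAR line makes the RegexSerDe from hive-contrib available to the INSERT job):

ADD JAR /usr/lib/hive/lib/hive-contrib.jar;

CREATE EXTERNAL TABLE tokenized_access_logs (
    ip STRING, log_date STRING, method STRING, url STRING, http_version STRING,
    status STRING, bytes_sent STRING, referrer STRING, browser STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/user/hive/warehouse/tokenized_access_logs';

INSERT OVERWRITE TABLE tokenized_access_logs SELECT * FROM intermediate_access_logs;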

Inform Impala that the metadata catalog has been updated, then run a new query against the log content:

in Hue -> Impala, run:

        invalidate metadata;

query:

        select count(*), url from tokenized_access_logs

        where url like '%\/product\/%'

        group by url order by count(*) desc;

[Screenshot: query results]

Lesson 2 Slides
