Hadoop Operations Guide

【Basics】

Connecting to AWS

In the AWS EC2 console: Actions -> Start, then Connect.

Open a terminal, cd to the directory containing the key-pair .pem file, and run: ssh -i "xxxxx.pem" [email protected]

✅ You are now on the server's Linux system

vi file.txt   (edit a text file)
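A minimal sketch of the whole connection step, reusing the key file and public DNS placeholders above:

chmod 400 "xxxxx.pem"    # ssh refuses to use a key that is world-readable
ssh -i "xxxxx.pem" [email protected]
vi file.txt              # once logged in, edit a file on the server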

Starting Hadoop

In the Linux root directory: sh runstart.sh

✅ Hadoop is now running; you can perform Hadoop operations
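A quick way to confirm the daemons actually started (a sketch, assuming the JDK's jps tool is on the PATH):

jps                # should list daemons such as NameNode and DataNode
hadoop fs -ls /    # HDFS answers only if the NameNode is up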

hadoop fs -ls /   (list the files under the HDFS root directory)

hadoop fs -cat /user/myfilm/part-m-00000 | head -5   (view the first five lines of the file)

hadoop fs -cat   (view the contents of a file)

hadoop fs -get file1 file2   (copy file1 from HDFS to file2 on Linux)

hadoop fs -put product.txt /userdata   (copy product.txt from Linux into /userdata on HDFS)

hadoop fs -rm -r   (delete a directory, including all of its subdirectories and files)
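Putting the commands above together in one hypothetical round trip (product.txt and /userdata are example names taken from above):

hadoop fs -put product.txt /userdata               # upload from Linux to HDFS
hadoop fs -ls /userdata                            # confirm the file arrived
hadoop fs -cat /userdata/product.txt | head -5     # peek at the first five lines
hadoop fs -get /userdata/product.txt copy.txt      # download back to Linux as copy.txt
hadoop fs -rm -r /userdata                         # recursive delete, use with care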

Entering MySQL

From any Linux directory: mysql -u ubuntu -p, then enter the password

✅ You are now in MySQL

List databases: show databases;

Switch to a database: use (database);

List the tables in the current database: show tables;
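For example, a short interactive session against the sakila database used in the Sqoop section below (a sketch; the table name comes from sakila):

mysql -u ubuntu -p
show databases;
use sakila;
show tables;
select count(*) from actor;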


【Sqoop】

What Sqoop does: transfers data in both directions between MySQL and HDFS

Importing from MySQL into HDFS

For more parameters, see: https://blog.csdn.net/w1992wishes/article/details/92027765

Import the actor table from the MySQL sakila database into HDFS, with /userdata as the parent (warehouse) directory:

sqoop import \
  --connect jdbc:mysql://localhost/sakila \
  --username ubuntu --password training \
  --warehouse-dir /userdata \
  --table actor
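With --warehouse-dir, Sqoop writes the data into a subdirectory named after the table, so the result of the command above can be checked like this:

hadoop fs -ls /userdata/actor
hadoop fs -cat /userdata/actor/part-m-00000 | head -5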

Import the film table from the sakila database into the HDFS directory /user/myfilms:

sqoop import \
  --connect jdbc:mysql://localhost/sakila \
  --username ubuntu --password training \
  --target-dir /user/myfilms \
  --table film

Import only the two columns 'city_id, city' of the city table from the sakila database into HDFS, with /userdata as the parent (warehouse) directory:

sqoop import \
  --connect jdbc:mysql://localhost/sakila \
  --username ubuntu --password training \
  --warehouse-dir /userdata \
  --table city \
  --columns 'city_id, city'

Import the rows of the rental table that satisfy 'inventory_id <= 10' from the sakila database into HDFS, with /userdata as the parent (warehouse) directory:

sqoop import \
  --connect jdbc:mysql://localhost/sakila \
  --username ubuntu --password training \
  --warehouse-dir /userdata \
  --table rental \
  --where 'inventory_id <= 10'

Incrementally append to the imported table, using rental_id as the check column:

sqoop import \
  --connect jdbc:mysql://localhost/sakila \
  --username ubuntu --password training \
  --warehouse-dir /userdata \
  --table rental \
  --where 'inventory_id > 10 and inventory_id < 20' \
  --incremental append \
  --check-column rental_id
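On later runs, an append import is usually driven with --last-value, so that only rows whose rental_id is larger than the stored value are pulled. A sketch of such a follow-up run (16049 is just an illustrative placeholder):

sqoop import \
  --connect jdbc:mysql://localhost/sakila \
  --username ubuntu --password training \
  --warehouse-dir /userdata \
  --table rental \
  --incremental append \
  --check-column rental_id \
  --last-value 16049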

Exporting from HDFS into MySQL

mysql> CREATE TABLE new_rental SELECT * FROM rental LIMIT 0;

$ sqoop export \
  --connect jdbc:mysql://localhost/sakila \
  --username ubuntu --password training \
  --export-dir /userdata/rental \
  --table new_rental
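A quick check that the export landed, run from the Linux shell with the same credentials as above:

mysql -u ubuntu -p -e "SELECT COUNT(*) FROM sakila.new_rental;"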


【Pig】

What Pig does: processes data stored in HDFS

Using Pig interactively

Type pig at the Linux command line; the "grunt" prompt appears (the Pig shell).
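Inside grunt you can also run HDFS commands directly and leave the shell when done, for example:

grunt> fs -ls /user
grunt> quit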

Example: film.pig

Line #1: Load (read) data from HDFS /user/myfilm into the 'data' relation

data = LOAD '/user/myfilm' USING PigStorage(',')
    as (film_id:int, title:chararray, rental_rate:float);

Line #4: Filter data by rental_rate greater than or equal to $3.99

data = FILTER data BY rental_rate >= 3.99;

Line #6: Return the data to the screen (dump)

DUMP data;

Line #8: Also, store the data into a new HDFS folder called “top_films”

STORE data INTO '/user/top_films' USING PigStorage('|');
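To run the script non-interactively and look at the stored result (assuming it was saved locally as film.pig):

pig film.pig
hadoop fs -cat /user/top_films/part-* | head -5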

Example: realestate.pig

Load the "realestate.txt" data into the "listings" object (notice the file path):

listings = LOAD '/mydata/class2/realestate.txt' USING PigStorage(',')
    as (listing_id:int, date_listed:chararray, list_price:float,
        sq_feet:int, address:chararray);

Convert date (string) to datetime format:

listings = FOREACH listings GENERATE listing_id, ToDate(date_listed, 'yyyy-MM-dd') AS date_listed, list_price, sq_feet, address;

--DUMP listings;

Filter data:

bighomes = FILTER listings BY sq_feet >= 2000;

Select columns (same as before):

bighomes_dateprice = FOREACH bighomes GENERATE

listing_id, date_listed, list_price;

DUMP bighomes_dateprice;

Store data in HDFS:

STORE bighomes_dateprice INTO '/mydata/class2/homedata';
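Same pattern for this script; note that STORE without a USING clause writes tab-delimited output by default (a sketch, assuming the script was saved as realestate.pig):

pig realestate.pig
hadoop fs -cat /mydata/class2/homedata/part-* | head -5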
