【Oozie】Defining Workflows in Oozie and Configuring and Running the Various Actions

1. Defining a Workflow in Oozie

An Oozie workflow consists of control-flow nodes and action nodes. It is defined in a workflow.xml file and constrained by a schema.
The control-flow nodes of a workflow are: start, decision, fork, join, kill, and end.
An action is a task that performs work or computation, such as a MapReduce job, a Pig job, or a shell command. One running MapReduce job is one MapReduce action. Every action node has two transitions: ok and error.
The action nodes of a workflow include: the MapReduce action, Pig action, Fs (HDFS) action, Ssh action, Sub-workflow action, and Java action. Inside an Oozie workflow you can run MapReduce, Hive, Sqoop, or shell scripts.
The action extensions include: the Email action, Shell action, Hive action, Hive 2 action, Sqoop action, Ssh action, and DistCp action, plus Writing a Custom Action Executor.
The workflow definition language is XML-based and is called hPDL (Hadoop Process Definition Language). Node names must match the pattern [a-zA-Z][\-_a-zA-Z0-9]* and be at most 20 characters long.
A workflow application directory contains:
job.properties: points to the HDFS location of the workflow.xml file.
workflow.xml: contains the start, action, kill, and end nodes.
lib directory: holds the dependent jar files.
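
Every example below follows the same hPDL skeleton, sketched here with placeholder names (my-wf and my-action are hypothetical), assuming the workflow 0.5 schema available in Oozie 4.0:

<workflow-app xmlns="uri:oozie:workflow:0.5" name="my-wf">
    <start to="my-action"/>
    <action name="my-action">
        <!-- exactly one action element goes here, e.g. <map-reduce>...</map-reduce> -->
        <ok to="end"/>
        <error to="fail"/>
    </action>
    <kill name="fail">
        <message>Workflow failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="end"/>
</workflow-app>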

2. MapReduce Action

Goal: use Oozie to schedule a MapReduce program.
Approach: move what the Driver part of the original Java MapReduce program configured into the configuration element of workflow.xml.
Example: run the wordcount program using the examples shipped with the Oozie distribution.
(1) Create the wordcount input directory on HDFS and upload the file to be counted into it.
hadoop-2.5.0-cdh5.3.6]$ bin/hdfs dfs -mkdir -p mapreduce/wordcount/input
hadoop-2.5.0-cdh5.3.6]$ bin/hdfs dfs -put /opt/datas/wc.input mapreduce/wordcount/input
(2) Run the wordcount jar, specifying the input and output paths.
hadoop-2.5.0-cdh5.3.6]$ bin/yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.0-cdh5.3.6.jar wordcount mapreduce/wordcount/input mapreduce/wordcount/output
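
Before wiring the job into Oozie, it may help to confirm the standalone run produced output (the part-r-00000 file name assumes a single reducer):

hadoop-2.5.0-cdh5.3.6]$ bin/hdfs dfs -cat mapreduce/wordcount/output/part-r-00000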
(3) Copy the map-reduce example that ships with Oozie into a newly created directory, to serve as the MapReduce action example.
oozie-4.0.0-cdh5.3.6]$ mkdir oozie-apps
oozie-4.0.0-cdh5.3.6]$ cd oozie-apps/
oozie-apps]$ cp -r ../examples/apps/map-reduce/ .
oozie-apps]$ mv map-reduce/ mr-wordcount-wf
(4) Edit the configuration file /opt/cdh-5.3.6/oozie-4.0.0-cdh5.3.6/oozie-apps/mr-wordcount-wf/workflow.xml.

<workflow-app xmlns="uri:oozie:workflow:0.5" name="mr-wordcount-wf">
    <start to="mr-node-wordcount"/>
    <action name="mr-node-wordcount">
        <map-reduce>
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <prepare>
                <delete path="${nameNode}/${oozieDataRoot}/${outputDir}"/>
            </prepare>
            <configuration>
                <property>
                    <name>mapred.mapper.new-api</name>
                    <value>true</value>
                </property>
                <property>
                    <name>mapred.reducer.new-api</name>
                    <value>true</value>
                </property>
                <property>
                    <name>mapreduce.job.queuename</name>
                    <value>${queueName}</value>
                </property>
                <property>
                    <name>mapreduce.job.map.class</name>
                    <value>com.ibeifeng.hadoop.senior.mapreduce.WordCount$WordCountMapper</value>
                </property>
                <property>
                    <name>mapreduce.job.reduce.class</name>
                    <value>com.ibeifeng.hadoop.senior.mapreduce.WordCount$WordCountReducer</value>
                </property>
                <property>
                    <name>mapreduce.map.output.key.class</name>
                    <value>org.apache.hadoop.io.Text</value>
                </property>
                <property>
                    <name>mapreduce.map.output.value.class</name>
                    <value>org.apache.hadoop.io.IntWritable</value>
                </property>
                <property>
                    <name>mapreduce.job.output.key.class</name>
                    <value>org.apache.hadoop.io.Text</value>
                </property>
                <property>
                    <name>mapreduce.job.output.value.class</name>
                    <value>org.apache.hadoop.io.IntWritable</value>
                </property>
                <property>
                    <name>mapreduce.input.fileinputformat.inputdir</name>
                    <value>${nameNode}/${oozieDataRoot}/${inputDir}</value>
                </property>
                <property>
                    <name>mapreduce.output.fileoutputformat.outputdir</name>
                    <value>${nameNode}/${oozieDataRoot}/${outputDir}</value>
                </property>
            </configuration>
        </map-reduce>
        <ok to="end"/>
        <error to="fail"/>
    </action>
    <kill name="fail">
        <message>Map/Reduce failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="end"/>
</workflow-app>

(5) Edit the configuration file /opt/cdh-5.3.6/oozie-4.0.0-cdh5.3.6/oozie-apps/mr-wordcount-wf/job.properties.

nameNode=hdfs://hadoop-senior.ibeifeng.com:8020
jobTracker=hadoop-senior.ibeifeng.com:8032
queueName=default
oozieAppsRoot=user/beifeng/oozie-apps
oozieDataRoot=user/beifeng/oozie/datas

oozie.wf.application.path=${nameNode}/${oozieAppsRoot}/mr-wordcount-wf/workflow.xml
inputDir=mr-wordcount-wf/input
outputDir=mr-wordcount-wf/output

(6) Put the previously built mr-wc.jar into the lib directory of the MapReduce action example.
$ cd /opt/modules/hadoop-2.5.0/jars/
jars]$ cp mr-wc.jar /opt/cdh-5.3.6/oozie-4.0.0-cdh5.3.6/oozie-apps/mr-wordcount-wf/lib
mr-wordcount-wf]$ cd lib
lib]$ ls mr-wc.jar
(7) Upload the MapReduce action application directory to HDFS.
oozie-4.0.0-cdh5.3.6]$ /opt/cdh-5.3.6/hadoop-2.5.0-cdh5.3.6/bin/hdfs dfs -put oozie-apps/ oozie-apps
(8) Create the input directory for the wordcount workflow on HDFS.
hadoop-2.5.0-cdh5.3.6]$ bin/hdfs dfs -mkdir -p oozie/datas/mr-wordcount-wf/input
(9) Upload the file that wordcount will process into the HDFS input directory.
hadoop-2.5.0-cdh5.3.6]$ bin/hdfs dfs -put /opt/datas/wc.input oozie/datas/mr-wordcount-wf/input
(10) If something was uploaded by mistake, delete it from HDFS.
hadoop-2.5.0-cdh5.3.6]$ bin/hdfs dfs -rm -r oozie/datas/input
(11) Temporarily set the environment variable to the Oozie URL (the same address used to reach the Oozie web console in a browser).
oozie-4.0.0-cdh5.3.6]$ export OOZIE_URL=http://hadoop-senior.ibeifeng.com:11000/oozie/
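
As a usage note (not part of the original steps), the same URL can instead be passed per invocation with the CLI's -oozie option, which avoids relying on the environment variable:

oozie-4.0.0-cdh5.3.6]$ bin/oozie job -oozie http://hadoop-senior.ibeifeng.com:11000/oozie -config oozie-apps/mr-wordcount-wf/job.properties -run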
(12) Run the Oozie workflow to execute the MapReduce action.
oozie-4.0.0-cdh5.3.6]$ bin/oozie job -config oozie-apps/mr-wordcount-wf/job.properties -run

3. Hive Action

Official documentation: http://archive.cloudera.com/cdh5/cdh/5/oozie-4.0.0-cdh5.3.6/DG_HiveActionExtension.html
(1) Edit the configuration file /opt/cdh-5.3.6/oozie-4.0.0-cdh5.3.6/oozie-apps/hive-select/job.properties.

nameNode=hdfs://hadoop-senior.ibeifeng.com:8020
jobTracker=hadoop-senior.ibeifeng.com:8032
queueName=default
oozieAppsRoot=user/beifeng/oozie-apps
oozieDataRoot=user/beifeng/oozie/datas

oozie.use.system.libpath=true

oozie.wf.application.path=${nameNode}/${oozieAppsRoot}/hive-select

outputDir=hive-select/output

(2) Edit the SQL script /opt/cdh-5.3.6/oozie-4.0.0-cdh5.3.6/oozie-apps/hive-select/select-student.sql. Running it is similar to executing the shell command: bin/hive -f select-student.sql

insert overwrite directory '${OUTPUT}'
select count(1) cnt from default.student;

(3) Edit the configuration file /opt/cdh-5.3.6/oozie-4.0.0-cdh5.3.6/oozie-apps/hive-select/workflow.xml.

<workflow-app xmlns="uri:oozie:workflow:0.5" name="wf-hive-select">
    <start to="hive-node"/>

    <action name="hive-node">
        <hive xmlns="uri:oozie:hive-action:0.5">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <prepare>
                <delete path="${nameNode}/${oozieDataRoot}/${outputDir}"/>
            </prepare>
            <job-xml>${nameNode}/${oozieAppsRoot}/hive-select/hive-site.xml</job-xml>
            <configuration>
                <property>
                    <name>mapred.job.queue.name</name>
                    <value>${queueName}</value>
                </property>
            </configuration>
            <script>select-student.sql</script>
            <param>OUTPUT=${nameNode}/${oozieDataRoot}/${outputDir}</param>
        </hive>
        <ok to="end"/>
        <error to="fail"/>
    </action>

    <kill name="fail">
        <message>Hive failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="end"/>
</workflow-app>

(4) Upload the hive action application to HDFS.
oozie-apps]$ /opt/cdh-5.3.6/hadoop-2.5.0-cdh5.3.6/bin/hdfs dfs -put hive-select/ /user/beifeng/oozie-apps
(5) Set the environment variable.
export OOZIE_URL=http://hadoop-senior.ibeifeng.com:11000/oozie/
(6) Run the Oozie workflow to schedule the hive action.
oozie-4.0.0-cdh5.3.6]$ bin/oozie job -config oozie-apps/hive-select/job.properties -run
(7) Copy the configuration file /opt/cdh-5.3.6/hive-0.13.1-cdh5.3.6/conf/hive-site.xml into the hive action directory.
conf]$ cp hive-site.xml /opt/cdh-5.3.6/oozie-4.0.0-cdh5.3.6/oozie-apps/hive-select/
(8) The earlier upload was wrong, so delete the stale file from HDFS.
hive-select]$ /opt/cdh-5.3.6/hadoop-2.5.0-cdh5.3.6/bin/hdfs dfs -rm /user/beifeng/oozie-apps/hive-select/workflow.xml
(9) Upload workflow.xml and hive-site.xml to the target directory on HDFS.
hive-select]$ /opt/cdh-5.3.6/hadoop-2.5.0-cdh5.3.6/bin/hdfs dfs -put workflow.xml hive-site.xml /user/beifeng/oozie-apps/hive-select/
(10) Put the MySQL JDBC driver jar into the hive action's lib directory.
hive-select]$ mkdir lib
hive-select]$ cp /opt/cdh-5.3.6/hive-0.13.1-cdh5.3.6/lib/mysql-connector-java-5.1.27-bin.jar ./lib/
(11) Upload the lib directory with the MySQL jar to HDFS.
hive-select]$ /opt/cdh-5.3.6/hadoop-2.5.0-cdh5.3.6/bin/hdfs dfs -put lib/ /user/beifeng/oozie-apps/hive-select/
(12) Run the Oozie workflow to schedule the hive action again.
oozie-4.0.0-cdh5.3.6]$ bin/oozie job -config oozie-apps/hive-select/job.properties -run
(13) Kill a running Oozie job.
oozie-4.0.0-cdh5.3.6]$ bin/oozie job -kill 0000001-180717120019494-oozie-beif-W
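
Besides -kill, the job subcommand can also report on a run; -info and -log (standard Oozie CLI options, shown here with the job id from above) are handy when an action fails:

oozie-4.0.0-cdh5.3.6]$ bin/oozie job -info 0000001-180717120019494-oozie-beif-W
oozie-4.0.0-cdh5.3.6]$ bin/oozie job -log 0000001-180717120019494-oozie-beif-W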

4. Sqoop Action

Official documentation: http://archive.cloudera.com/cdh5/cdh/5/oozie-4.0.0-cdh5.3.6/DG_SqoopActionExtension.html
(1) Go to the /opt/cdh-5.3.6/oozie-4.0.0-cdh5.3.6/examples/apps directory and copy the sqoop example into your own directory.
apps]$ cp -r sqoop ../../oozie-apps/
(2) Check whether the MySQL service is running.
$ su
Password:
[root@hadoop-senior apps]# service mysql status
MySQL running (1717) [ OK ]
[root@hadoop-senior apps]# exit
$ mysql -uroot -p123456
(3) Write the first sqoop action. Edit the configuration file /opt/cdh-5.3.6/oozie-4.0.0-cdh5.3.6/oozie-apps/sqoop-import-user/workflow.xml.

<workflow-app xmlns="uri:oozie:workflow:0.5" name="wf-sqoop-import-user">
    <start to="sqoop-node"/>

    <action name="sqoop-node">
        <sqoop xmlns="uri:oozie:sqoop-action:0.3">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <prepare>
                <delete path="${nameNode}/${oozieDataRoot}/${outputDir}"/>
            </prepare>
            <configuration>
                <property>
                    <name>mapred.job.queue.name</name>
                    <value>${queueName}</value>
                </property>
            </configuration>
            <command>import --connect jdbc:mysql://hadoop-senior.ibeifeng.com:3306/test --username root --password 123456 --table my_user --target-dir /user/beifeng/oozie/datas/sqoop-import-user/output --fields-terminated-by "$$$" --num-mappers 1</command>
        </sqoop>
        <ok to="end"/>
        <error to="fail"/>
    </action>

    <kill name="fail">
        <message>Sqoop failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="end"/>
</workflow-app>

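Note that the body of the command element is exactly what would follow bin/sqoop on the command line, so the import can be tested outside Oozie first (the sqoop home in the prompt below is an assumed path on this CDH layout):

sqoop-1.4.5-cdh5.3.6]$ bin/sqoop import --connect jdbc:mysql://hadoop-senior.ibeifeng.com:3306/test --username root --password 123456 --table my_user --target-dir /user/beifeng/oozie/datas/sqoop-import-user/output --fields-terminated-by "$$$" --num-mappers 1
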
(4) Still for the first sqoop action, edit the configuration file /opt/cdh-5.3.6/oozie-4.0.0-cdh5.3.6/oozie-apps/sqoop-import-user/job.properties.

nameNode=hdfs://hadoop-senior.ibeifeng.com:8020
jobTracker=hadoop-senior.ibeifeng.com:8032
queueName=default
oozieAppsRoot=user/beifeng/oozie-apps
oozieDataRoot=user/beifeng/oozie/datas

oozie.use.system.libpath=true

oozie.wf.application.path=${nameNode}/${oozieAppsRoot}/sqoop-import-user

outputDir=sqoop-import-user/output

(5) Copy the MySQL jar into the sqoop action's lib directory.
(6) Upload the sqoop action application to HDFS.
oozie-apps]$ /opt/cdh-5.3.6/hadoop-2.5.0-cdh5.3.6/bin/hdfs dfs -put sqoop-import-user/ /user/beifeng/oozie-apps
(7) Run the Oozie workflow to schedule the sqoop action.
oozie-4.0.0-cdh5.3.6]$ bin/oozie job -config oozie-apps/sqoop-import-user/job.properties -run
(8) Write the second sqoop action. Edit the configuration file /opt/cdh-5.3.6/oozie-4.0.0-cdh5.3.6/oozie-apps/sqoop-import-user2/workflow.xml.

<workflow-app xmlns="uri:oozie:workflow:0.5" name="wf-sqoop-import-user2">
    <start to="sqoop-node"/>

    <action name="sqoop-node">
        <sqoop xmlns="uri:oozie:sqoop-action:0.3">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <prepare>
                <delete path="${nameNode}/${oozieDataRoot}/${outputDir}"/>
            </prepare>
            <configuration>
                <property>
                    <name>mapred.job.queue.name</name>
                    <value>${queueName}</value>
                </property>
            </configuration>
            <command>import --options-file ${imp-user}</command>
        </sqoop>
        <ok to="end"/>
        <error to="fail"/>
    </action>

    <kill name="fail">
        <message>Sqoop failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="end"/>
</workflow-app>

(9) Edit the configuration file /opt/cdh-5.3.6/oozie-4.0.0-cdh5.3.6/oozie-apps/sqoop-import-user2/job.properties.

nameNode=hdfs://hadoop-senior.ibeifeng.com:8020
jobTracker=hadoop-senior.ibeifeng.com:8032
queueName=default
oozieAppsRoot=user/beifeng/oozie-apps
oozieDataRoot=user/beifeng/oozie/datas

oozie.use.system.libpath=true

oozie.wf.application.path=${nameNode}/${oozieAppsRoot}/sqoop-import-user2

outputDir=sqoop-import-user/output

imp-user=${nameNode}/${oozieAppsRoot}/sqoop-import-user2/imp-user.sql

(10) Edit the options file /opt/cdh-5.3.6/oozie-4.0.0-cdh5.3.6/oozie-apps/sqoop-import-user2/imp-user.sql. Running it is similar to executing the shell command: bin/sqoop import --options-file imp-user.sql

--connect
jdbc:mysql://hadoop-senior.ibeifeng.com:3306/test 
--username 
root 
--password 
123456 
--table 
my_user 
--target-dir 
/user/beifeng/oozie/datas/sqoop-import-user/output 
--fields-terminated-by 
"\t" 
--num-mappers 
1

(11) Copy the MySQL jar into the second sqoop action's lib directory.
(12) Upload the second sqoop action example.
oozie-apps]$ /opt/cdh-5.3.6/hadoop-2.5.0-cdh5.3.6/bin/hdfs dfs -put sqoop-import-user2 oozie-apps/
(13) Run the Oozie workflow to schedule the second sqoop action.
oozie-4.0.0-cdh5.3.6]$ bin/oozie job -config oozie-apps/sqoop-import-user2/job.properties -run

5. Shell Action

Official documentation: http://archive.cloudera.com/cdh5/cdh/5/oozie-4.0.0-cdh5.3.6/DG_ShellActionExtension.html
(1) Go to the /opt/cdh-5.3.6/oozie-4.0.0-cdh5.3.6/examples/apps directory, copy the shell example into your own directory, and rename it as needed.
apps]$ cp -r shell/ ../../oozie-apps/
oozie-apps]$ mv shell/ shell-hive-select
(2) Edit the configuration file /opt/cdh-5.3.6/oozie-4.0.0-cdh5.3.6/oozie-apps/shell-hive-select/job.properties.

nameNode=hdfs://hadoop-senior.ibeifeng.com:8020
jobTracker=hadoop-senior.ibeifeng.com:8032
queueName=default
oozieAppsRoot=user/beifeng/oozie-apps
oozieDataRoot=user/beifeng/oozie/datas

oozie.wf.application.path=${nameNode}/${oozieAppsRoot}/shell-hive-select

exec=student-select.sh
script=student-select.sql

(3) Edit the configuration file /opt/cdh-5.3.6/oozie-4.0.0-cdh5.3.6/oozie-apps/shell-hive-select/workflow.xml.

<workflow-app xmlns="uri:oozie:workflow:0.5" name="wf-shell-hive-select">
    <start to="shell-node"/>
    <action name="shell-node">
        <shell xmlns="uri:oozie:shell-action:0.2">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <configuration>
                <property>
                    <name>mapred.job.queue.name</name>
                    <value>${queueName}</value>
                </property>
            </configuration>
            <exec>${exec}</exec>
            <file>${nameNode}/${oozieAppsRoot}/shell-hive-select/${exec}#${exec}</file>
            <file>${nameNode}/${oozieAppsRoot}/shell-hive-select/${script}#${script}</file>
            <capture-output/>
        </shell>
        <ok to="end"/>
        <error to="fail"/>
    </action>
    <kill name="fail">
        <message>Shell action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="end"/>
</workflow-app>

(4) Edit the shell script /opt/cdh-5.3.6/oozie-4.0.0-cdh5.3.6/oozie-apps/shell-hive-select/student-select.sh.

#!/usr/bin/env bash

## student select
/opt/cdh-5.3.6/hive-0.13.1-cdh5.3.6/bin/hive -f student-select.sql

(5) Edit the SQL script /opt/cdh-5.3.6/oozie-4.0.0-cdh5.3.6/oozie-apps/shell-hive-select/student-select.sql.

insert overwrite directory '/user/beifeng/oozie/datas/shell-hive-select/output'
select id, name from default.student;

(6) Upload the shell action application to HDFS.
oozie-apps]$ /opt/cdh-5.3.6/hadoop-2.5.0-cdh5.3.6/bin/hdfs dfs -put shell-hive-select/ oozie-apps/
(7) Run the Oozie workflow to schedule the shell action.
oozie-4.0.0-cdh5.3.6]$ bin/oozie job -config oozie-apps/shell-hive-select/job.properties -run

6. Coordinating Multiple Actions

In workflow.xml, configure the following chain of nodes (a sketch follows this list):

start node
hive action: analyzes the input data and stores the result on HDFS
sqoop action: exports the result from HDFS into MySQL, where a front end can display it
kill node
end node
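
A minimal sketch of such a chained workflow, assuming an application directory named hive-sqoop-wf and reusing the hive and sqoop elements from the earlier examples (the node names, analyze.sql script, and my_result table are placeholders, and the sqoop flags are illustrative; in practice the field delimiter of the Hive output must match what the export expects):

<workflow-app xmlns="uri:oozie:workflow:0.5" name="wf-hive-then-sqoop">
    <start to="hive-node"/>

    <action name="hive-node">
        <hive xmlns="uri:oozie:hive-action:0.5">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <job-xml>${nameNode}/${oozieAppsRoot}/hive-sqoop-wf/hive-site.xml</job-xml>
            <script>analyze.sql</script>
            <param>OUTPUT=${nameNode}/${oozieDataRoot}/${outputDir}</param>
        </hive>
        <!-- on success, hand off to the sqoop export instead of ending -->
        <ok to="sqoop-node"/>
        <error to="fail"/>
    </action>

    <action name="sqoop-node">
        <sqoop xmlns="uri:oozie:sqoop-action:0.3">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <command>export --connect jdbc:mysql://hadoop-senior.ibeifeng.com:3306/test --username root --password 123456 --table my_result --export-dir ${nameNode}/${oozieDataRoot}/${outputDir} --num-mappers 1</command>
        </sqoop>
        <ok to="end"/>
        <error to="fail"/>
    </action>

    <kill name="fail">
        <message>Workflow failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="end"/>
</workflow-app>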
