Oozie

Oozie installation and job scheduling:

Introduction

The word "oozie" means elephant tamer (mahout). Oozie is an open-source framework built around a workflow engine, contributed to Apache by Cloudera, that provides scheduling and coordination for Hadoop MapReduce and Pig jobs. Oozie must be deployed in a Java Servlet container. It is mainly used for scheduled task execution, and multiple tasks can be scheduled in their logical execution order.

Features

Oozie is a workflow scheduling system for managing Hadoop jobs.
An Oozie workflow is a directed acyclic graph (DAG) of actions.
An Oozie coordinator job triggers a workflow based on time (frequency) and data availability.
Oozie is an open-source workflow engine developed by Yahoo for Apache Hadoop. It manages and coordinates jobs running on the Hadoop platform (HDFS, Pig, and MapReduce), and was designed for Yahoo's large-scale, complex workflows and data pipelines.
Oozie is built around two core concepts: the workflow, which defines the topology and execution logic of tasks, and the coordinator, which handles workflow dependencies and triggering.

Modules

  1. Workflow: executes flow nodes in sequence; supports fork (branch into multiple nodes) and join (merge multiple nodes back into one)

  2. Coordinator: triggers workflows on a schedule

  3. Bundle Job: bundles multiple Coordinators together (a minimal sketch follows this list)
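The examples in this post only use workflows and coordinators, so for reference, here is a rough sketch of what a bundle definition looks like. The bundle name and coordinator app paths below are made up for illustration:

<bundle-app name="my-bundle" xmlns="uri:oozie:bundle:0.2">
    <!-- Each <coordinator> points at the HDFS directory of a deployed coordinator app -->
    <coordinator name="coord-1">
        <app-path>${nameNode}/user/${user.name}/oozie-apps/coord-1</app-path>
    </coordinator>
    <coordinator name="coord-2">
        <app-path>${nameNode}/user/${user.name}/oozie-apps/coord-2</app-path>
    </coordinator>
</bundle-app>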

Common node types

  1. Control flow nodes (Control Flow Nodes): nodes usually defined at the start or end of a workflow, such as start, end, and kill, plus nodes that control the execution path, such as decision, fork, and join (a fork/join sketch follows this list).

  2. Action nodes (Action Nodes): nodes that perform concrete work, e.g., copying files or running a shell script.
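The workflow examples later in this post are linear, so here is a rough sketch of fork and join. The node names and HDFS paths are made up; the two fs actions run in parallel and the join waits for both:

<workflow-app xmlns="uri:oozie:workflow:0.4" name="fork-join-wf">
    <start to="forking"/>
    <fork name="forking">
        <path start="mkdir-a"/>
        <path start="mkdir-b"/>
    </fork>
    <!-- Two fs actions that execute concurrently -->
    <action name="mkdir-a">
        <fs><mkdir path="${nameNode}/tmp/branch-a"/></fs>
        <ok to="joining"/>
        <error to="fail"/>
    </action>
    <action name="mkdir-b">
        <fs><mkdir path="${nameNode}/tmp/branch-b"/></fs>
        <ok to="joining"/>
        <error to="fail"/>
    </action>
    <join name="joining" to="end"/>
    <kill name="fail">
        <message>Fork/join failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="end"/>
</workflow-app>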

Deployment

Software download link: https://pan.baidu.com/s/18_iOFGL06g7_Ye-mZZRwag (extraction code: qlbu)

Deploying Hadoop

Hadoop installation is not covered in detail here; see a dedicated Hadoop guide. This setup uses Cloudera's CDH distribution of Hadoop.

Modify the configuration

core-site.xml

[hadoop@datanode1 hadoop]$ vim core-site.xml
<configuration>

    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://datanode1:9000</value>
    </property>

    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/module/cdh/hadoop-2.5.0-cdh5.3.6/data</value>
    </property>

    <property>
        <name>fs.trash.interval</name>
        <value>60</value>
    </property>

    <property>
        <name>hadoop.proxyuser.hadoop.hosts</name>
        <value>*</value>
    </property>

    <property>
        <name>hadoop.proxyuser.hadoop.groups</name>
        <value>*</value>
    </property>

</configuration>

In the hadoop.proxyuser.hadoop.hosts and hadoop.proxyuser.hadoop.groups properties, replace the middle "hadoop" segment with your own Hadoop user name; in this setup the user name happens to be hadoop.

yarn-site.xml

[hadoop@datanode1 hadoop]$ vim yarn-site.xml
<configuration>

    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>

    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>datanode2</value>
    </property>

    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>

    <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>86400</value>
    </property>

    <property>
        <name>yarn.log.server.url</name>
        <value>http://datanode1:19888/jobhistory/logs/</value>
    </property>

</configuration>

mapred-site.xml

<configuration>

    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>

    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>datanode1:10020</value>
    </property>

    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>datanode1:19888</value>
    </property>

</configuration>

Don't forget to sync these configuration files to the other nodes in the cluster, then run hdfs namenode -format to initialize the NameNode.
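A minimal sketch of those two steps, assuming the CDH install path used above and a second node named datanode2 (repeat the scp for each remaining node):

cd /opt/module/cdh/hadoop-2.5.0-cdh5.3.6
scp etc/hadoop/core-site.xml etc/hadoop/yarn-site.xml etc/hadoop/mapred-site.xml \
    hadoop@datanode2:/opt/module/cdh/hadoop-2.5.0-cdh5.3.6/etc/hadoop/
# Format the NameNode (wipes HDFS metadata; only do this on a fresh cluster)
bin/hdfs namenode -format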

Deploying Oozie

Unpack the hadooplibs tarball in the Oozie root directory:

tar -zxf oozie-hadooplibs-4.0.0-cdh5.3.6.tar.gz -C ../

Create a libext directory in the Oozie root directory:

mkdir libext/

Copy in the dependency jars:

cp -ra hadooplibs/hadooplib-2.5.0-cdh5.3.6.oozie-4.0.0-cdh5.3.6/* libext/

Upload the MySQL JDBC driver jar into the libext directory.

Copy ext-2.2.zip into the libext directory as well.
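For example (the connector jar name below is an assumption; use whatever version you downloaded):

# Assumed jar name; adjust to your MySQL connector version
cp mysql-connector-java-5.1.27-bin.jar libext/
cp ext-2.2.zip libext/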

Modify oozie-site.xml

Property: oozie.service.JPAService.jdbc.driver
Value: com.mysql.jdbc.Driver
Meaning: the JDBC driver class

Property: oozie.service.JPAService.jdbc.url
Value: jdbc:mysql://datanode1:3306/oozie
Meaning: the address of the database Oozie uses

Property: oozie.service.JPAService.jdbc.username
Value: root
Meaning: the database user name

Property: oozie.service.JPAService.jdbc.password
Value: 123456
Meaning: the database password

Property: oozie.service.HadoopAccessorService.hadoop.configurations
Value: *=/opt/module/cdh/hadoop-2.5.0-cdh5.3.6/etc/hadoop
Meaning: points Oozie at Hadoop's configuration files
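In XML form, the five properties above go inside the <configuration> element of oozie-site.xml like this:

<property>
    <name>oozie.service.JPAService.jdbc.driver</name>
    <value>com.mysql.jdbc.Driver</value>
</property>
<property>
    <name>oozie.service.JPAService.jdbc.url</name>
    <value>jdbc:mysql://datanode1:3306/oozie</value>
</property>
<property>
    <name>oozie.service.JPAService.jdbc.username</name>
    <value>root</value>
</property>
<property>
    <name>oozie.service.JPAService.jdbc.password</name>
    <value>123456</value>
</property>
<property>
    <name>oozie.service.HadoopAccessorService.hadoop.configurations</name>
    <value>*=/opt/module/cdh/hadoop-2.5.0-cdh5.3.6/etc/hadoop</value>
</property>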

Create the Oozie database in MySQL:

mysql -uroot -p123456
mysql> create database oozie;

Initialize Oozie

bin/oozie-setup.sh sharelib create -fs hdfs://datanode1:9000 -locallib oozie-sharelib-4.0.0-cdh5.3.6-yarn.tar.gz
Create the oozie.sql file:
bin/oozie-setup.sh db create -run -sqlfile oozie.sql
Package the project into a war file:
bin/oozie-setup.sh prepare-war

The prepare-war step requires the zip command, which may be missing on a minimal OS install (on CentOS: yum install -y zip).

The Oozie service

bin/oozied.sh start
# To shut the Oozie service down cleanly, use:
bin/oozied.sh stop
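To verify that the server came up, the admin status call should report NORMAL (using the same host and port as the rest of this post):

bin/oozie admin -oozie http://datanode1:11000/oozie -status
# Expected output: System mode: NORMAL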

Web UI

(Screenshot: the Oozie web console, served at http://datanode1:11000/oozie)

Oozie jobs

Scheduling a shell script

1. Unpack the official examples:

tar -zxf oozie-examples.tar.gz

2. Create a working directory:

mkdir oozie-apps/

3. Copy the job template:

cp -r examples/apps/shell/ oozie-apps/

4. The shell script (p1.sh):

#!/bin/bash
# Write 100 timestamped lines into /home/hadoop/oozie-test1/logs.log
mkdir -p /home/hadoop/oozie-test1    # -p: don't fail if the dir already exists
cd /home/hadoop/oozie-test1
for (( i=1; i<=100; i++ ))
do
    d=$(date '+%Y-%m-%d %H:%M:%S')
    echo "data:$d $i" >> /home/hadoop/oozie-test1/logs.log
done

5. job.properties:

nameNode=hdfs://datanode1:9000
jobTracker=datanode2:8032
queueName=shell
examplesRoot=oozie-apps

oozie.wf.application.path=${nameNode}/user/${user.name}/${examplesRoot}/shell
EXEC=p1.sh

6. workflow.xml:

<workflow-app xmlns="uri:oozie:workflow:0.4" name="shell-wf">
    <start to="shell-node"/>
    <action name="shell-node">
        <shell xmlns="uri:oozie:shell-action:0.2">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <configuration>
                <property>
                    <name>mapred.job.queue.name</name>
                    <value>${queueName}</value>
                </property>
            </configuration>
            <exec>${EXEC}</exec>
            <file>/user/hadoop/oozie-apps/shell/${EXEC}#${EXEC}</file>
            <capture-output/>
        </shell>
        <ok to="end"/>
        <error to="fail"/>
    </action>
    <!-- With <ok to="end"/> above, this decision node is never reached;
         the stock example routes <ok> to "check-output" instead -->
    <decision name="check-output">
        <switch>
            <case to="end">
                ${wf:actionData('shell-node')['my_output'] eq 'Hello Oozie'}
            </case>
            <default to="fail-output"/>
        </switch>
    </decision>
    <kill name="fail">
        <message>Shell action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <kill name="fail-output">
        <message>Incorrect output, expected [Hello Oozie] but was [${wf:actionData('shell-node')['my_output']}]</message>
    </kill>
    <end name="end"/>
</workflow-app>

7. Upload the job configuration to HDFS:

/opt/module/cdh/hadoop-2.5.0-cdh5.3.6/bin/hdfs dfs -put  -f  oozie-apps/ /user/hadoop

8. Run the job:

bin/oozie job -oozie http://datanode1:11000/oozie -config oozie-apps/shell/job.properties -run

9. Kill a job:

bin/oozie job -oozie http://datanode1:11000/oozie -kill 0000004-170425105153692-oozie-z-W
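You can also inspect a job's state and per-action status by ID (the same job ID as above, shown purely as an example):

bin/oozie job -oozie http://datanode1:11000/oozie -info 0000004-170425105153692-oozie-z-W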

(Screenshot: the finished shell workflow in the Oozie web console)

Scheduling shell scripts with dependencies

This builds on the previous example with a few modifications.

1. job.properties:

nameNode=hdfs://datanode1:9000
jobTracker=datanode2:8032
queueName=shell
examplesRoot=oozie-apps

oozie.wf.application.path=${nameNode}/user/${user.name}/${examplesRoot}/shell
EXEC1=p1.sh
EXEC2=p2.sh

2. Script p1.sh:

#!/bin/bash
# First step: write 100 timestamped lines into Oozie2_p1.log
mkdir -p /home/hadoop/Oozie2_test_p1
cd /home/hadoop/Oozie2_test_p1
for (( i=1; i<=100; i++ ))
do
    d=$(date '+%Y-%m-%d %H:%M:%S')
    echo "data:$d $i" >> /home/hadoop/Oozie2_test_p1/Oozie2_p1.log
done

3. Script p2.sh:

#!/bin/bash
# Second step: same pattern, writing to its own directory and log
mkdir -p /home/hadoop/Oozie2_test_p2
cd /home/hadoop/Oozie2_test_p2
for (( i=1; i<=100; i++ ))
do
    d=$(date '+%Y-%m-%d %H:%M:%S')
    echo "data:$d $i" >> /home/hadoop/Oozie2_test_p2/Oozie2_p2.log
done

4. workflow.xml:

<workflow-app xmlns="uri:oozie:workflow:0.4" name="shell-wf">
    <start to="shell-node"/>
    <action name="shell-node">
        <shell xmlns="uri:oozie:shell-action:0.2">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <configuration>
                <property>
                    <name>mapred.job.queue.name</name>
                    <value>${queueName}</value>
                </property>
            </configuration>
            <exec>${EXEC1}</exec>
            <file>/user/hadoop/oozie-apps/shell/${EXEC1}#${EXEC1}</file>
            <capture-output/>
        </shell>
        <ok to="p2-shell-node"/>
        <error to="fail"/>
    </action>

    <action name="p2-shell-node">
        <shell xmlns="uri:oozie:shell-action:0.2">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <configuration>
                <property>
                    <name>mapred.job.queue.name</name>
                    <value>${queueName}</value>
                </property>
            </configuration>
            <exec>${EXEC2}</exec>
            <file>/user/hadoop/oozie-apps/shell/${EXEC2}#${EXEC2}</file>
            <capture-output/>
        </shell>
        <ok to="end"/>
        <error to="fail"/>
    </action>

    <decision name="check-output">
        <switch>
            <case to="end">
                ${wf:actionData('shell-node')['my_output'] eq 'Hello Oozie'}
            </case>
            <default to="fail-output"/>
        </switch>
    </decision>
    <kill name="fail">
        <message>Shell action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <kill name="fail-output">
        <message>Incorrect output, expected [Hello Oozie] but was [${wf:actionData('shell-node')['my_output']}]</message>
    </kill>
    <end name="end"/>
</workflow-app>

Scheduling a MapReduce job

Prerequisite: make sure YARN is up and working.

1. Copy the official template into oozie-apps:

cp -r examples/apps/map-reduce/ oozie-apps/

2. Configure job.properties:

nameNode=hdfs://datanode1:9000
jobTracker=datanode2:8032
queueName=map-reduce
examplesRoot=oozie-apps

oozie.wf.application.path=${nameNode}/user/${user.name}/${examplesRoot}/map-reduce/workflow.xml
outputDir=/output

3. workflow.xml:

<workflow-app xmlns="uri:oozie:workflow:0.2" name="map-reduce-wf">
    <start to="mr-node"/>
    <action name="mr-node">
        <map-reduce>
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <prepare>
                <!-- Delete the output dir so reruns don't fail;
                     must match mapred.output.dir below -->
                <delete path="/_output"/>
            </prepare>
            <configuration>
                <property>
                    <name>mapred.job.queue.name</name>
                    <value>${queueName}</value>
                </property>
                <!-- Use the new MapReduce API -->
                <property>
                    <name>mapred.mapper.new-api</name>
                    <value>true</value>
                </property>
                <property>
                    <name>mapred.reducer.new-api</name>
                    <value>true</value>
                </property>
                <property>
                    <name>mapreduce.job.output.key.class</name>
                    <value>org.apache.hadoop.io.Text</value>
                </property>
                <property>
                    <name>mapreduce.job.output.value.class</name>
                    <value>org.apache.hadoop.io.IntWritable</value>
                </property>
                <property>
                    <name>mapreduce.job.map.class</name>
                    <value>org.apache.hadoop.examples.WordCount$TokenizerMapper</value>
                </property>
                <property>
                    <name>mapreduce.job.reduce.class</name>
                    <value>org.apache.hadoop.examples.WordCount$IntSumReducer</value>
                </property>
                <property>
                    <name>mapred.map.tasks</name>
                    <value>1</value>
                </property>
                <property>
                    <name>mapred.input.dir</name>
                    <value>/input</value>
                </property>
                <property>
                    <name>mapred.output.dir</name>
                    <value>/_output</value>
                </property>
            </configuration>
        </map-reduce>
        <ok to="end"/>
        <error to="fail"/>
    </action>
    <kill name="fail">
        <message>Map/Reduce failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="end"/>
</workflow-app>

4. Copy the examples jar into the application's lib directory:

[hadoop@datanode1 lib]$ cp /opt/module/cdh/hadoop-2.5.0-cdh5.3.6/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.0-cdh5.3.6.jar ./

5. Upload the job configuration:

/opt/module/cdh/hadoop-2.5.0-cdh5.3.6/bin/hdfs dfs -put -f oozie-apps /user/hadoop/oozie-apps

6. Run the job:

[hadoop@datanode1 oozie-4.0.0-cdh5.3.6]$  bin/oozie job -oozie http://datanode1:11000/oozie -config oozie-apps/map-reduce/job.properties -run

7. Check the results:

[hadoop@datanode1 module]$ /opt/module/cdh/hadoop-2.5.0-cdh5.3.6/bin/hdfs dfs -cat /input/*.txt
19/01/10 19:13:37 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
I
Love
Hadoop
and
Sopark
I
Love
BigData
and
AI
[hadoop@datanode1 module]$ /opt/module/cdh/hadoop-2.5.0-cdh5.3.6/bin/hdfs dfs -cat /_output/p*
19/01/10 19:13:08 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
AI 1
BigData 1
Hadoop 1
I 2
Love 2
Sopark 1
and 2

Scheduling timed / recurring jobs

Prerequisites:

## Check the system's current time zone:
date -R
## If the offset shown is not +0800, remove /etc/localtime and link in the correct zone:
rm -rf /etc/localtime
ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime

NTP configuration

vim /etc/ntp.conf

Master configuration:
(Screenshot: ntp.conf on the master node)
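The screenshot's contents are not reproduced here; a typical master-side ntp.conf for a cluster like this serves its own clock to the local subnet. The subnet address below is an assumption:

# Allow clients on the cluster subnet to sync from this host (assumed subnet)
restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
# Fall back to the local clock if upstream servers are unreachable
server 127.127.1.0
fudge 127.127.1.0 stratum 10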

Slave configuration:
(Screenshot: ntp.conf on the slave nodes)
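On the slaves, the usual change (again an assumption, since the screenshot content is lost) is to point the server entries at the master:

# Comment out the default pool servers and sync from the master instead
server datanode1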

Sync time on the slave nodes:

service ntpd restart
chkconfig ntpd on                         # start ntpd at boot
ntpdate -u datanode1
crontab -e                                # note: create this entry as root
0 */1 * * * /usr/sbin/ntpdate datanode1   # sync once an hour

1. Configure oozie-site.xml:

Property: oozie.processing.timezone
Value: GMT+0800
Meaning: sets Oozie's processing time zone to UTC+8

2. Modify the web console's JS code:

vi /opt/module/cdh/oozie-4.0.0-cdh5.3.6/oozie-server/webapps/oozie/oozie-console.js
Change the getTimeZone function as follows:
function getTimeZone() {
    Ext.state.Manager.setProvider(new Ext.state.CookieProvider());
    return Ext.state.Manager.get("TimezoneId", "GMT+0800");
}

3. Restart the Oozie service and restart the browser (be sure to clear its cache):

bin/oozied.sh stop
bin/oozied.sh start

4. Copy the official cron template:

cp -r examples/apps/cron/ oozie-apps/

5. Modify job.properties:

nameNode=hdfs://datanode1:9000
jobTracker=datanode2:8032
queueName=cronTask
examplesRoot=oozie-apps

oozie.coord.application.path=${nameNode}/user/${user.name}/${examplesRoot}/cron
start=2019-01-10T21:40+0800
end=2019-01-10T22:00+0800
workflowAppUri=${nameNode}/user/${user.name}/${examplesRoot}/cron

EXEC3=p3.sh

6. Modify coordinator.xml. Note that the 5 in ${coord:minutes(5)} is the minimum allowed value; the frequency cannot be smaller than 5 minutes.

<coordinator-app name="cron-coord" frequency="${coord:minutes(5)}" start="${start}" end="${end}" timezone="GMT+0800"
                 xmlns="uri:oozie:coordinator:0.2">
    <action>
        <workflow>
            <app-path>${workflowAppUri}</app-path>
            <configuration>
                <property>
                    <name>jobTracker</name>
                    <value>${jobTracker}</value>
                </property>
                <property>
                    <name>nameNode</name>
                    <value>${nameNode}</value>
                </property>
                <property>
                    <name>queueName</name>
                    <value>${queueName}</value>
                </property>
            </configuration>
        </workflow>
    </action>
</coordinator-app>

7. Create the script (p3.sh):

#!/bin/bash
# Append one timestamped line per run
d=$(date '+%Y-%m-%d %H:%M:%S')
echo "data:$d" >> /home/hadoop/Oozie3_p3.log

8. Modify workflow.xml:

<workflow-app xmlns="uri:oozie:workflow:0.5" name="one-op-wf">
    <start to="p3-shell-node"/>
    <action name="p3-shell-node">
        <shell xmlns="uri:oozie:shell-action:0.2">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <configuration>
                <property>
                    <name>mapred.job.queue.name</name>
                    <value>${queueName}</value>
                </property>
            </configuration>
            <exec>${EXEC3}</exec>
            <file>/user/hadoop/oozie-apps/cron/${EXEC3}#${EXEC3}</file>
            <capture-output/>
        </shell>
        <ok to="end"/>
        <error to="fail"/>
    </action>
    <kill name="fail">
        <message>Shell action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <kill name="fail-output">
        <message>Incorrect output, expected [Hello Oozie] but was [${wf:actionData('shell-node')['my_output']}]</message>
    </kill>
    <end name="end"/>
</workflow-app>

9. Upload the configuration:

/opt/module/cdh/hadoop-2.5.0-cdh5.3.6/bin/hdfs dfs -put oozie-apps/cron/ /user/hadoop/oozie-apps

10. Submit the job:

bin/oozie job -oozie http://datanode1:11000/oozie -config oozie-apps/cron/job.properties -run
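Once submitted, the coordinator appears in the jobs list; a quick way to check it from the shell (same host and port as above):

bin/oozie jobs -oozie http://datanode1:11000/oozie -jobtype coordinator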

(Screenshot: the cron coordinator job running in the Oozie web console)
