Simple Oozie examples

1. The fs action:
================ job.properties contents:
nameNode=hdfs://hadoop007:9000
jobTracker=hadoop007:8032
queueName=default
examplesRoot=fs


oozie.wf.application.path=${nameNode}/user/${user.name}/workflows/${examplesRoot}/workflow.xml


====================== workflow.xml contents:

<workflow-app xmlns="uri:oozie:workflow:0.4" name="fs-wf">
  <start to="fs-node"/>

  <action name="fs-node">
    <fs>
      <!-- the delete/mkdir targets below are placeholders; point them at your own paths -->
      <delete path="${nameNode}/out/oozie/fs"/>
      <mkdir path="${nameNode}/out/oozie/fs"/>
    </fs>
    <ok to="end"/>
    <error to="fail"/>
  </action>

  <kill name="fail">
    <message>script failed, error message:${wf:errorMessage(wf:lastErrorNode())}</message>
  </kill>

  <end name="end"/>
</workflow-app>

Submit and run the job: oozie job -oozie http://hadoop007:11000/oozie -config ./job.properties -run
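Note that workflow.xml (and any files it references) must already be in HDFS at the path given by oozie.wf.application.path before the job is submitted. A minimal upload sketch, assuming the submitting Linux user is hadoop so that ${user.name} resolves to hadoop:

hdfs dfs -mkdir -p /user/hadoop/workflows/fs
hdfs dfs -put -f workflow.xml /user/hadoop/workflows/fs/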


2. The decision action (using the fs EL function):
================ job.properties contents:
nameNode=hdfs://hadoop007:9000
jobTracker=hadoop007:8032
queueName=default
examplesRoot=fs2


oozie.wf.application.path=${nameNode}/user/${user.name}/workflows/${examplesRoot}/workflow.xml


====================== workflow.xml contents:

<workflow-app xmlns="uri:oozie:workflow:0.4" name="fs-decision-wf">
  <start to="check-node"/>

  <decision name="check-node">
    <switch>
      <case to="exists-node">${fs:exists("/beicai/123")}</case>
      <default to="fail"/>
    </switch>
  </decision>

  <action name="exists-node">
    <fs>
      <!-- placeholder operation taken when /beicai/123 exists -->
      <delete path="${nameNode}/beicai/123"/>
    </fs>
    <ok to="end"/>
    <error to="fail"/>
  </action>

  <kill name="fail">
    <message>script failed, error message:${wf:errorMessage(wf:lastErrorNode())}</message>
  </kill>

  <end name="end"/>
</workflow-app>

Submit and run the job: oozie job -oozie http://hadoop007:11000/oozie -config ./job.properties -run
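The -run option prints a workflow job ID; its state (RUNNING, SUCCEEDED, KILLED, ...) can then be checked from the same CLI. The job ID below is only a hypothetical example:

oozie job -oozie http://hadoop007:11000/oozie -info 0000000-170314223000000-oozie-hado-W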




3. The shell action:
================ job.properties contents:
nameNode=hdfs://hadoop007:9000
jobTracker=hadoop007:8032
queueName=default
examplesRoot=shell
exec=script.sh


oozie.wf.application.path=${nameNode}/user/${user.name}/workflows/${examplesRoot}/workflow.xml


====================== workflow.xml contents:

<workflow-app xmlns="uri:oozie:workflow:0.4" name="shell-wf">
  <start to="shell-node"/>

  <action name="shell-node">
    <shell xmlns="uri:oozie:shell-action:0.2">
      <job-tracker>${jobTracker}</job-tracker>
      <name-node>${nameNode}</name-node>
      <exec>${exec}</exec>
      <!-- ship script.sh with the action and expose it under the same name -->
      <file>${exec}#${exec}</file>
    </shell>
    <ok to="end"/>
    <error to="fail"/>
  </action>

  <kill name="fail">
    <message>script failed, error message:${wf:errorMessage(wf:lastErrorNode())}</message>
  </kill>

  <end name="end"/>
</workflow-app>

Submit and run the job: oozie job -oozie http://hadoop007:11000/oozie -config ./job.properties -run
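The script named by exec=script.sh must be uploaded next to workflow.xml; its contents are not shown above, so the following is only a minimal placeholder script:

#!/bin/bash
# write a marker line so the run can be verified from the launcher logs
echo "oozie shell action ran on $(hostname) at $(date)"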


4. The hive action:
================ job.properties contents:
nameNode=hdfs://hadoop007:9000
jobTracker=hadoop007:8032
queueName=default
examplesRoot=hive


oozie.use.system.libpath=true
oozie.wf.application.path=${nameNode}/user/${user.name}/workflows/${examplesRoot}/workflow.xml


====================== workflow.xml contents:

<workflow-app xmlns="uri:oozie:workflow:0.4" name="hive-wf">
  <start to="hive-node"/>

  <action name="hive-node">
    <hive xmlns="uri:oozie:hive-action:0.2">
      <job-tracker>${jobTracker}</job-tracker>
      <name-node>${nameNode}</name-node>
      <configuration>
        <property>
          <name>oozie.hive.defaults</name>
          <value>hive-site.xml</value>
        </property>
        <property>
          <name>hive.metastore.local</name>
          <value>false</value>
        </property>
        <property>
          <name>hive.metastore.uris</name>
          <value>thrift://192.168.216.7:9083</value>
        </property>
        <property>
          <name>hive.metastore.warehouse.dir</name>
          <value>/user/hive/warehouse</value>
        </property>
      </configuration>
      <script>myfirst.sql</script>
    </hive>
    <ok to="end"/>
    <error to="fail"/>
  </action>

  <kill name="fail">
    <message>script failed, error message:${wf:errorMessage(wf:lastErrorNode())}</message>
  </kill>

  <end name="end"/>
</workflow-app>

myfirst.sql contents:
load data inpath "/data" into table ooziehivetable;
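The LOAD DATA statement requires the target table to exist beforehand. Its definition is not given above, so the columns in this sketch are only assumed:

-- hypothetical schema; replace the columns with the real ooziehivetable definition
CREATE TABLE IF NOT EXISTS ooziehivetable (
  line STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';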




Submit and run the job: oozie job -oozie http://hadoop007:11000/oozie -config ./job.properties -run


5. The map-reduce action (the jar containing the MR classes must be placed under lib/ in the application directory):
================ job.properties contents:
nameNode=hdfs://hadoop007:9000
jobTracker=hadoop007:8032
queueName=default
examplesRoot=map-reduce


oozie.wf.application.path=${nameNode}/user/${user.name}/workflows/${examplesRoot}/workflow.xml




====================== workflow.xml contents:

<workflow-app xmlns="uri:oozie:workflow:0.4" name="map-reduce-wf">
  <start to="grep-node"/>

  <action name="grep-node">
    <map-reduce>
      <job-tracker>${jobTracker}</job-tracker>
      <name-node>${nameNode}</name-node>
      <prepare>
        <!-- clearing the output directory beforehand is assumed -->
        <delete path="${nameNode}/out/oozie/00"/>
      </prepare>
      <configuration>
        <property>
          <name>mapred.mapper.new-api</name>
          <value>true</value>
        </property>
        <property>
          <name>mapred.reducer.new-api</name>
          <value>true</value>
        </property>
        <property>
          <name>mapreduce.job.map.class</name>
          <value>edu.beicai.mr.test.Grep$MyMapper</value>
        </property>
        <property>
          <name>mapreduce.input.fileinputformat.inputdir</name>
          <value>/user/hadoop/workflows/map-reduce/data</value>
        </property>
        <property>
          <name>mapreduce.job.output.key.class</name>
          <value>org.apache.hadoop.io.Text</value>
        </property>
        <property>
          <name>mapreduce.job.output.value.class</name>
          <value>org.apache.hadoop.io.Text</value>
        </property>
        <property>
          <name>mapreduce.output.fileoutputformat.outputdir</name>
          <value>/out/oozie/00</value>
        </property>
        <property>
          <name>mapreduce.job.acl-view-job</name>
          <value>*</value>
        </property>
        <property>
          <name>oozie.launcher.mapreduce.job.acl-view-job</name>
          <value>*</value>
        </property>
      </configuration>
    </map-reduce>
    <ok to="wordcount-node"/>
    <error to="fail"/>
  </action>

  <action name="wordcount-node">
    <map-reduce>
      <job-tracker>${jobTracker}</job-tracker>
      <name-node>${nameNode}</name-node>
      <prepare>
        <!-- likewise assumed -->
        <delete path="${nameNode}/out/oozie/01"/>
      </prepare>
      <configuration>
        <property>
          <name>mapred.mapper.new-api</name>
          <value>true</value>
        </property>
        <property>
          <name>mapred.reducer.new-api</name>
          <value>true</value>
        </property>
        <property>
          <name>mapreduce.job.map.class</name>
          <value>edu.beicai.mr.test.WordCount$MyMapper</value>
        </property>
        <property>
          <name>mapreduce.job.reduce.class</name>
          <value>edu.beicai.mr.test.WordCount$MyReducer</value>
        </property>
        <property>
          <name>mapreduce.input.fileinputformat.inputdir</name>
          <value>/out/oozie/00</value>
        </property>
        <property>
          <name>mapreduce.job.output.key.class</name>
          <value>org.apache.hadoop.io.Text</value>
        </property>
        <property>
          <name>mapreduce.job.output.value.class</name>
          <value>org.apache.hadoop.io.Text</value>
        </property>
        <property>
          <name>mapreduce.output.fileoutputformat.outputdir</name>
          <value>/out/oozie/01</value>
        </property>
        <property>
          <name>mapreduce.job.acl-view-job</name>
          <value>*</value>
        </property>
        <property>
          <name>oozie.launcher.mapreduce.job.acl-view-job</name>
          <value>*</value>
        </property>
      </configuration>
    </map-reduce>
    <ok to="end"/>
    <error to="fail"/>
  </action>

  <kill name="fail">
    <message>script failed, error message:${wf:errorMessage(wf:lastErrorNode())}</message>
  </kill>

  <end name="end"/>
</workflow-app>

Submit and run the job: oozie job -oozie http://hadoop007:11000/oozie -config ./job.properties -run
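The mapreduce.job.map.class values such as edu.beicai.mr.test.WordCount$MyMapper refer to static nested classes packaged in the jar under lib/, written against the new (org.apache.hadoop.mapreduce) API, which is why mapred.mapper.new-api / mapred.reducer.new-api are set to true. The real classes are not shown in the post; the sketch below only illustrates the expected shape of such a mapper (its body is assumed):

// Hypothetical sketch of a class referenced as edu.beicai.mr.test.WordCount$MyMapper.
package edu.beicai.mr.test;

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WordCount {
  // Static nested class, hence the WordCount$MyMapper name used in the workflow.
  public static class MyMapper extends Mapper<LongWritable, Text, Text, Text> {
    @Override
    protected void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      // emit each word with a count of "1" as Text, matching the Text/Text output classes
      for (String word : value.toString().split("\\s+")) {
        if (!word.isEmpty()) {
          context.write(new Text(word), new Text("1"));
        }
      }
    }
  }
}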


6. The coordinator job:
================ job.properties contents:
nameNode=hdfs://hadoop007:9000
jobTracker=hadoop007:8032
queueName=default
examplesRoot=coordinator


oozie.service.coord.check.maximum.frequency=false
oozie.coord.application.path=${nameNode}/user/${user.name}/workflows/${examplesRoot}
start=2017-03-14T22:30+0800
end=2017-03-14T22:40+0800
workflowAppUri=${oozie.coord.application.path}


====================== workflow.xml contents (an empty workflow):

<workflow-app xmlns="uri:oozie:workflow:0.4" name="empty-wf">
  <start to="end"/>
  <end name="end"/>
</workflow-app>
====================== coordinator.xml contents:

<!-- the frequency and timezone values are assumed; the properties below are passed through to the workflow -->
<coordinator-app name="coord-demo" frequency="${coord:minutes(1)}"
                 start="${start}" end="${end}" timezone="GMT+0800"
                 xmlns="uri:oozie:coordinator:0.2">
  <action>
    <workflow>
      <app-path>${workflowAppUri}</app-path>
      <configuration>
        <property>
          <name>jobTracker</name>
          <value>${jobTracker}</value>
        </property>
        <property>
          <name>nameNode</name>
          <value>${nameNode}</value>
        </property>
        <property>
          <name>queueName</name>
          <value>${queueName}</value>
        </property>
      </configuration>
    </workflow>
  </action>
</coordinator-app>
Submit and run the job: oozie job -oozie http://hadoop007:11000/oozie -config ./job.properties -run
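A coordinator job gets an ID ending in -C; running -info on it lists every materialized action together with the workflow it launched. The job ID below is only a hypothetical example:

oozie job -oozie http://hadoop007:11000/oozie -info 0000001-170314223000000-oozie-hado-C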
