Running the Oozie demos

Last night I got Oozie installed and running, with MySQL configured as its backing database. Today it was time to run the demos that ship with Oozie, and sure enough, the very first run blew up with errors. There were too many to list one by one, so I'll just describe the fix I finally landed on.

oozie job -oozie http://localhost:11000/oozie -config examples/apps/map-reduce/job.properties -run

This command has to be run from inside the Oozie directory. After digging through a lot of material online I finally got things working; three configuration files need to be changed.

Before getting to the configuration files, there are a few steps I skipped over, so let me fill them in first. Unpack oozie-examples.tar.gz, oozie-client-3.3.2.tar.gz, and oozie-sharelib-3.3.2.tar.gz from the Oozie directory, then upload the examples and share directories to HDFS:

hadoop fs -put examples examples

hadoop fs -put share share
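
To double-check that both directories made it into your HDFS home directory (a quick sanity check; this assumes the default /user/&lt;your-user&gt; home):

hadoop fs -ls examples
hadoop fs -ls share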

Then configure the oozie-client environment variables in /etc/profile.
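
Roughly like this; the install path below is just my assumption, so point it at wherever you unpacked oozie-client-3.3.2. OOZIE_URL is optional, but the oozie CLI picks it up as the default server, letting you drop the -oozie flag:

export OOZIE_CLIENT_HOME=/home/cenyuhai/oozie-client-3.3.2   # assumed unpack location
export PATH=$PATH:$OOZIE_CLIENT_HOME/bin
export OOZIE_URL=http://localhost:11000/oozie                # optional: default target for the oozie CLI

Reload it with source /etc/profile afterwards.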

Now for how I actually fixed Oozie.

1. Edit oozie-site.xml under Oozie's conf directory and add the following:

<property>
    <name>oozie.services</name>
    <value>
        org.apache.oozie.service.SchedulerService,
        org.apache.oozie.service.InstrumentationService,
        org.apache.oozie.service.CallableQueueService,
        org.apache.oozie.service.UUIDService,
        org.apache.oozie.service.ELService,
        org.apache.oozie.service.AuthorizationService,
        org.apache.oozie.service.MemoryLocksService,
        org.apache.oozie.service.DagXLogInfoService,
        org.apache.oozie.service.SchemaService,
        org.apache.oozie.service.LiteWorkflowAppService,
        org.apache.oozie.service.JPAService,
        org.apache.oozie.service.StoreService,
        org.apache.oozie.service.CoordinatorStoreService,
        org.apache.oozie.service.SLAStoreService,
        org.apache.oozie.service.DBLiteWorkflowStoreService,
        org.apache.oozie.service.CallbackService,
        org.apache.oozie.service.ActionService,
        org.apache.oozie.service.ActionCheckerService,
        org.apache.oozie.service.RecoveryService,
        org.apache.oozie.service.PurgeService,
        org.apache.oozie.service.CoordinatorEngineService,
        org.apache.oozie.service.BundleEngineService,
        org.apache.oozie.service.DagEngineService,
        org.apache.oozie.service.CoordMaterializeTriggerService,
        org.apache.oozie.service.StatusTransitService,
        org.apache.oozie.service.PauseTransitService,
        org.apache.oozie.service.HadoopAccessorService
    </value>
    <description>
        All services to be created and managed by Oozie Services singleton.
        Class names must be separated by commas.
    </description>
</property>



<property>
    <name>oozie.service.ProxyUserService.proxyuser.cenyuhai.hosts</name>
    <value>*</value>
    <description>
        List of hosts the '#USER#' user is allowed to perform 'doAs'
        operations.

        The '#USER#' must be replaced with the username of the user who is
        allowed to perform 'doAs' operations.

        The value can be the '*' wildcard or a list of hostnames.

        For multiple users copy this property and replace the user name
        in the property name.
    </description>
</property>



<property>
    <name>oozie.service.ProxyUserService.proxyuser.cenyuhai.groups</name>
    <value>*</value>
    <description>
        List of groups the '#USER#' user is allowed to impersonate users
        from to perform 'doAs' operations.

        The '#USER#' must be replaced with the username of the user who is
        allowed to perform 'doAs' operations.

        The value can be the '*' wildcard or a list of groups.

        For multiple users copy this property and replace the user name
        in the property name.
    </description>
</property>

 

2. Edit oozie-env.sh and add the following:

export OOZIE_CONF=${OOZIE_HOME}/conf
export OOZIE_DATA=${OOZIE_HOME}/data
export OOZIE_LOG=${OOZIE_HOME}/logs
export CATALINA_BASE=${OOZIE_HOME}/oozie-server
export CATALINA_TMPDIR=${OOZIE_HOME}/oozie-server/temp
export CATALINA_OUT=${OOZIE_LOG}/catalina.out

 

3. Edit Hadoop's core-site.xml on every node:

<property>
    <name>hadoop.proxyuser.cenyuhai.hosts</name>
    <value>hadoop.Master</value>
</property>
<property>
    <name>hadoop.proxyuser.cenyuhai.groups</name>
    <value>cenyuhai</value>
</property>

Restart everything after that and the job runs. cenyuhai in the snippets above is my local account; replace it with your own wherever it appears.
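
For reference, a restart sequence along these lines should cover both sides (a sketch assuming Hadoop 1.x's stop-all.sh/start-all.sh are on the PATH and that the Oozie scripts are run from the Oozie home directory):

stop-all.sh && start-all.sh   # restart Hadoop so the core-site.xml proxyuser settings take effect
bin/oozie-stop.sh             # restart Oozie to pick up oozie-site.xml and oozie-env.sh
bin/oozie-start.sh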

 

Addendum: with all of the above in place, jobs could be submitted, but after submitting the MR job I checked it in the web UI and hit another error:

 JA006: Call to localhost/127.0.0.1:9001 failed on connection exception: java.net.ConnectException: Connection refused
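
If you hit the same error, one quick diagnostic (a suggestion on my part, not something from the original troubleshooting) is to check which address the JobTracker is actually listening on; if it is bound to 192.168.1.133:9001 rather than 127.0.0.1:9001, a localhost address in job.properties will be refused:

netstat -nltp | grep 9001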

I spent ages troubleshooting this and got nowhere. What finally worked was editing job.properties and changing jobTracker from localhost:9001 to the fully qualified address below. This is probably tied to how my Hadoop jobTracker is configured, so if you run into the same thing, give it a try:

nameNode=hdfs://192.168.1.133:9000
jobTracker=http://192.168.1.133:9001
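
For context, everything else in my job.properties stayed at the values the example ships with; the whole file would look roughly like this (queueName, examplesRoot, and outputDir are the stock defaults, not something I changed):

nameNode=hdfs://192.168.1.133:9000
jobTracker=http://192.168.1.133:9001
queueName=default
examplesRoot=examples
outputDir=map-reduce
oozie.wf.application.path=${nameNode}/user/${user.name}/${examplesRoot}/apps/map-reduce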

Next, the Hive demo. Before running it, remember to change the Hive demo's job.properties the same way as above.

Then I submitted it. The submission succeeded, but the web UI showed the status as KILLED. It got axed...

Error code: JA018; error message: org/apache/hadoop/hive/cli/CliDriver

That message smelled like a jar problem (a Hive class that could not be loaded), so I deleted all the jars in the hive directory under the share directory and copied in every jar from the Hive installation on my own machine.
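
In shell terms it was roughly this (share/lib/hive is where the 3.3.2 sharelib keeps its Hive jars, as far as I can tell; adjust HIVE_HOME to your own install):

rm share/lib/hive/*.jar                    # drop the bundled hive jars
cp $HIVE_HOME/lib/*.jar share/lib/hive/    # copy in the jars from the local hive install
hadoop fs -rmr share                       # remove the old copy on HDFS first, otherwise the re-upload ends up nested inside it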

Then upload it back to the shared directory:

hadoop fs -put share share

Submit it again, and this time you can see a successful status!

oozie job -oozie http://localhost:11000/oozie -config examples/apps/hive/job.properties -run
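
To watch the job from the command line instead of the web UI, the oozie CLI can report its status; substitute the job ID that the submit command printed:

oozie job -oozie http://localhost:11000/oozie -info <job-id>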

But here's the annoying part: the thing was actually inserting the data into Derby... I was speechless. The run showed success, but it was useless to us, since we configured an external MySQL database. So what to do?

You need to edit workflow.xml and change its configuration section to look like this:

<configuration>
    <property>
        <name>mapred.job.queue.name</name>
        <value>${queueName}</value>
    </property>
    <property>
        <name>hive.metastore.local</name>
        <value>true</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://192.168.1.133:3306/hive?createDatabaseIfNotExist=true</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.jdbc.Driver</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>hive</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>mysql</value>
    </property>
    <property>
        <name>hive.metastore.warehouse.dir</name>
        <value>/user/hive/warehouse</value>
    </property>
</configuration>
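For orientation, here is a rough sketch of where that configuration block sits inside the Hive example's workflow.xml; the action name, schema version, and script name are from memory and may differ slightly in your copy:

<action name="hive-node">
    <hive xmlns="uri:oozie:hive-action:0.2">
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <configuration>
            <!-- the properties shown above go here -->
        </configuration>
        <script>script.q</script>
    </hive>
    <ok to="end"/>
    <error to="fail"/>
</action>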

Submit it after that, and you can see the table it created from inside Hive. Oh yeah!
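
A quick way to confirm from the Hive CLI (the exact table name depends on the demo's script):

hive -e "show tables;"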

 
