
How to Compile Oozie-4.1.0 Against hadoop-2.7.1


This article walks through compiling Oozie-4.1.0 against hadoop-2.7.1. The build trips up a lot of people in practice, so the steps below are condensed into a simple, repeatable procedure. Follow along and try it yourself.

1. Environment

maven-3.3.0

hadoop-2.7.1

2. Compiling

[root@hftclclw0001 opt]# pwd
/opt
[root@hftclclw0001 opt]# wget http://apache.mirrors.pair.com/oozie/4.1.0/oozie-4.1.0.tar.gz
[root@hftclclw0001 opt]# tar -zxvf oozie-4.1.0.tar.gz
[root@hftclclw0001 opt]# cd oozie-4.1.0

# Defaults in pom.xml:
# sqoop.version=1.4.3
# hive.version=0.13.1     => changing this breaks the build
# hbase.version=0.94.2    => changing this breaks the build
# pig.version=0.12.1
# hadoop.version=2.3.0    => the pom defaults to 2.3.0, but 2.7.1 is supported
# tomcat.version=6.0.43

[root@hftclclw0001 opt]# ./bin/mkdistro.sh -DskipTests -Phadoop-2 -Dsqoop.version=1.4.6
...
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 07:25 min
[INFO] Finished at: 2016-06-19T12:46:07+00:00
[INFO] Final Memory: 128M/1178M
[INFO] ------------------------------------------------------------------------

Oozie distro created, DATE[2016.06.19-12:38:39GMT] VC-REV[unavailable], available at [/opt/oozie-4.1.0/distro/target]
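mkdistro.sh forwards -D properties straight through to Maven, so the Hadoop dependency can also be pinned explicitly rather than left at the hadoop-2 profile default. A minimal sketch, assuming hadoop.version is the property name used by the Oozie 4.1.0 pom (verify against your checkout before relying on it):

# Hypothetical variant: pin the Hadoop artifacts to 2.7.1 explicitly
./bin/mkdistro.sh -DskipTests -Phadoop-2 -Dhadoop.version=2.7.1 -Dsqoop.version=1.4.6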

3. Configuration

[root@hftclclw0001 opt]# pwd
/opt
[root@hftclclw0001 opt]# mkdir Oozie
[root@hftclclw0001 opt]# cd Oozie
[root@hftclclw0001 Oozie]# pwd
/opt/Oozie
[root@hftclclw0001 Oozie]# cp ../oozie-4.1.0/distro/target/oozie-4.1.0-distro.tar.gz ./
[root@hftclclw0001 Oozie]# tar -zxvf oozie-4.1.0-distro.tar.gz
[root@hftclclw0001 Oozie]# cd oozie-4.1.0
[root@hftclclw0001 oozie-4.1.0]# pwd
/opt/Oozie/oozie-4.1.0
[root@hftclclw0001 oozie-4.1.0]# mkdir libext
[root@hftclclw0001 oozie-4.1.0]# cp /opt/oozie-4.1.0/hadooplibs/hadoop-2/target/hadooplibs/hadooplib-2.3.0.oozie-4.1.0/* ./libext
[root@hftclclw0001 oozie-4.1.0]# cd libext

# ext-2.2.zip provides the ExtJS library used by the Oozie web console
[root@hftclclw0001 libext]# curl -O http://archive.cloudera.com/gplextras/misc/ext-2.2.zip

Also download the MySQL JDBC driver into libext, since MySQL is used here as the metadata database (the default is Derby):

[root@hftclclw0001 libext]# ll
total 26452
...
-rw------- 1 root root  848401 Jun 19 13:41 mysql-connector-java-5.1.25-bin.jar
...
[root@hftclclw0001 libext]# cd ..
[root@hftclclw0001 oozie-4.1.0]# pwd
/opt/Oozie/oozie-4.1.0
[root@hftclclw0001 oozie-4.1.0]# ./bin/oozie-setup.sh prepare-war
[root@hftclclw0001 oozie-4.1.0]# ./bin/oozie-setup.sh sharelib create -fs hdfs://localhost:9000

Create the Oozie database:

[root@hftclclw0001 oozie-4.1.0]# mysql -uroot -p
mysql> CREATE DATABASE OOZIEDB;
mysql> GRANT ALL PRIVILEGES ON OOZIEDB.* TO oozie IDENTIFIED BY "oozie";
mysql> FLUSH PRIVILEGES;

Set the following properties in conf/oozie-site.xml:

oozie.service.JPAService.jdbc.driver
oozie.service.JPAService.jdbc.url
oozie.service.JPAService.jdbc.username
oozie.service.JPAService.jdbc.password

Then create the Oozie schema:

[root@hftclclw0001 oozie-4.1.0]# ./bin/ooziedb.sh create db -run

Configure the Oozie proxy user in Hadoop's etc/hadoop/core-site.xml:

<property>
    <name>hadoop.proxyuser.$USER.hosts</name>
    <value>*</value>
</property>
<property>
    <name>hadoop.proxyuser.$USER.groups</name>
    <value>*</value>
</property>

Replace $USER with the user the Oozie service runs as (e.g. oozie, or root). Finally, start the server:

[root@hftclclw0001 oozie-4.1.0]# ./bin/oozied.sh start
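For reference, here is a minimal sketch of the four JPAService properties filled in for the MySQL database created above; the hostname and port are assumptions, so adjust them to your environment:

<!-- conf/oozie-site.xml (sketch; host/port are assumptions) -->
<property>
    <name>oozie.service.JPAService.jdbc.driver</name>
    <value>com.mysql.jdbc.Driver</value>
</property>
<property>
    <name>oozie.service.JPAService.jdbc.url</name>
    <value>jdbc:mysql://localhost:3306/OOZIEDB</value>
</property>
<property>
    <name>oozie.service.JPAService.jdbc.username</name>
    <value>oozie</value>
</property>
<property>
    <name>oozie.service.JPAService.jdbc.password</name>
    <value>oozie</value>
</property>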

4. Examples

job.properties

nameNode=hdfs://nameservice1
#nameNode=hdfs://nameservice1        ==> HA
#nameNode=hdfs://${namenode}:8020    ==> single NameNode
jobTracker=dapdevhmn001.qa.webex.com:8032
#jobTracker=rm1,rm2                  ==> HA
#jobTracker (yarn.resourcemanager.address) = 10.224.243.124:8032
queueName=default
examplesRoot=examples
#oozie.use.system.libpath=true
oozie.wf.application.path=${nameNode}/user/${user.name}/${examplesRoot}/apps/map-reduce
outputDir=map-reduce

workflow.xml

<workflow-app xmlns="uri:oozie:workflow:0.2" name="map-reduce-wf">
    <start to="mr-node"/>
    <action name="mr-node">
        <map-reduce>
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <configuration>
                <property>
                    <name>mapred.job.queue.name</name>
                    <value>${queueName}</value>
                </property>
                <property>
                    <name>mapred.mapper.class</name>
                    <value>org.apache.oozie.example.SampleMapper</value>
                </property>
                <property>
                    <name>mapred.reducer.class</name>
                    <value>org.apache.oozie.example.SampleReducer</value>
                </property>
                <property>
                    <name>mapred.map.tasks</name>
                    <value>1</value>
                </property>
                <property>
                    <name>mapred.input.dir</name>
                    <value>/user/${wf:user()}/${examplesRoot}/input-data/text</value>
                </property>
                <property>
                    <name>mapred.output.dir</name>
                    <value>/user/${wf:user()}/${examplesRoot}/output-data/${outputDir}</value>
                </property>
            </configuration>
        </map-reduce>
        <ok to="end"/>
        <error to="fail"/>
    </action>
    <fail name="fail">
        <message>Map/Reduce failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </fail>
    <end name="end"/>
</workflow-app>
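Before uploading, the workflow definition can be schema-checked locally with the Oozie CLI's validate sub-command (assuming the oozie client script is on the PATH):

# Validate workflow.xml against the bundled Oozie workflow schemas
oozie validate workflow.xml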

lib/oozie-examples-4.1.0.jar

hadoop fs -mkdir -p /user/root/examples/apps/map-reduce

hadoop fs -put ./job.properties /user/root/examples/apps/map-reduce

hadoop fs -put ./workflow.xml /user/root/examples/apps/map-reduce

hadoop fs -put ./lib/oozie-examples-4.1.0.jar /user/root/examples/apps/map-reduce
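Note that the workflow's mapred.input.dir points at /user/${wf:user()}/${examplesRoot}/input-data/text, which none of the commands above create. A hedged prep step, assuming you use the sample input shipped in the examples tarball (examples/input-data/text/data.txt):

hadoop fs -mkdir -p /user/root/examples/input-data/text
hadoop fs -put ./input-data/text/data.txt /user/root/examples/input-data/text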

job.properties ==> must exist locally as well as on HDFS: the -config option of the oozie CLI points at the local copy.
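The commands below reference ${OOZIE_URL}, which has not been set yet. Oozie's default HTTP port is 11000; the hostname here is an assumption for a local deployment:

# Point the CLI at the Oozie server (default web port is 11000)
export OOZIE_URL=http://localhost:11000/oozie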

oozie job -oozie ${OOZIE_URL} -config job.properties -run

oozie job -oozie ${OOZIE_URL} -info ${oozie_id}

#oozie job -oozie ${OOZIE_URL} -info 0000001-170206083712434-oozie-oozi-W

oozie job -oozie ${OOZIE_URL} -log ${oozie_id}

#oozie job -oozie ${OOZIE_URL} -log 0000001-170206083712434-oozie-oozi-W
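"oozie job -run" prints the new workflow id on a line of the form "job: <id>", so submission and follow-up can be chained; a small sketch (the awk split assumes exactly that output format):

# Submit, capture the id from the "job: <id>" line, then poll it
JOB_ID=$(oozie job -oozie ${OOZIE_URL} -config job.properties -run | awk '{print $2}')
oozie job -oozie ${OOZIE_URL} -info ${JOB_ID}
oozie job -oozie ${OOZIE_URL} -log ${JOB_ID}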

5. distcp

job.properties

nameNode=hdfs://${sourceNameNode}:8020
destNameNode=hdfs://${destNameNode}:8020
jobTracker=${RM}:8032
#yarn.resourcemanager.address=${RM}:8032
queueName=default
examplesRoot=examples
oozie.use.system.libpath=true
oozie.wf.application.path=${nameNode}/user/${user.name}/${examplesRoot}/apps/distcp_2
outputDir=distcp
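The distcp action resolves its jars from the Oozie sharelib, which is why oozie.use.system.libpath=true is uncommented here. It is worth confirming that the sharelib created during configuration actually landed on HDFS; a quick check, assuming it was created as user root:

# oozie-setup.sh sharelib create puts the jars under /user/<user>/share/lib
hadoop fs -ls /user/root/share/lib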

workflow.xml

<workflow-app xmlns="uri:oozie:workflow:0.4" name="distcp-wf">
    <start to="distcp-node"/>
    <action name="distcp-node">
        <distcp xmlns="uri:oozie:distcp-action:0.1">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <configuration>
                <property>
                    <name>mapred.job.queue.name</name>
                    <value>${queueName}</value>
                </property>
            </configuration>
            <arg>${nameNode}/user/${wf:user()}/${examplesRoot}/input-data/text/data.txt</arg>
            <arg>${destNameNode}/tmp/data.txt</arg>
        </distcp>
        <ok to="end"/>
        <error to="fail"/>
    </action>
    <fail name="fail">
        <message>DistCP failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </fail>
    <end name="end"/>
</workflow-app>
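Deploying and running this example mirrors the map-reduce one; per the oozie.wf.application.path above, the app directory is apps/distcp_2:

hadoop fs -mkdir -p /user/root/examples/apps/distcp_2
hadoop fs -put ./job.properties ./workflow.xml /user/root/examples/apps/distcp_2
oozie job -oozie ${OOZIE_URL} -config job.properties -run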

That wraps up compiling Oozie-4.1.0 against hadoop-2.7.1; hopefully it resolved any doubts. Theory sticks best when paired with practice, so go try it out, and keep an eye on this site for more articles like this one.
