
Sample Semi-Automated Scripts for Fast Deployment of a Multi-Node Hadoop Test Environment


In this article I'd like to share the sample code of two semi-automated scripts for quickly deploying a multi-node Hadoop test environment. I hope you get something out of it; let's dig in!

This semi-automated deployment consists of two scripts: hdp_ini.sh (environment initialization) and hdp_bld.sh (Hadoop setup).
After running the first script, check the results and make any manual adjustments needed.
Then set up passwordless SSH between the nodes as described in the comments of the second script (a sketch follows below), and run the second script. That is basically all it takes to bring up the test environment quickly.
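For reference, here is a minimal sketch of that passwordless SSH setup, expanded from the comments inside hdp_bld.sh. It assumes the hdpadm user already exists on all three nodes; the hostnames are the ones used throughout this post:

# Run as the hdpadm user on each node:
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa        # passphrase-less key pair
for node in hdp01.dbinterest.local hdp02.dbinterest.local hdp03.dbinterest.local
do
    ssh-copy-id -i ~/.ssh/id_rsa.pub hdpadm@$node   # push the public key to every node
done
ssh hdpadm@hdp02.dbinterest.local hostname      # should run without a password prompt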

hdp_ini.sh - Run on all the nodes in your cluster env!

#!/bin/bash
# Name: hdp_ini.sh
# Purpose: For fast INITIALIZATION of the testing env, to save time for life :)
# Author: Stone@dbinterest.com + Many guys on the internet
# Time: 03/09/2012
# User: root
# Reminder:  1) Remember to add the executable permission to the script before
#               starting it, eg: chmod a+x hdp_ini.sh OR chmod 744 hdp_ini.sh
#            2) Need to change the $HOSTNAME2BE and $HOST_IP variables on each
#               node accordingly.
# Attention: 1) You might need to run this script TWICE for the server IP
#               address to take effect if the ifconfig output shows a device
#               other than "eth0"!!!
#            2) Please execute the script from the console inside the node, not
#               from an ssh tool like Xshell, SecureCRT, etc., to avoid losing
#               the connection.

############################################################################
### Different env variables setting. Please change this part according to
### your specific env ...
############################################################################
export HOST01='192.168.1.121 hdp01.dbinterest.local hdp01'
export HOST02='192.168.1.122 hdp02.dbinterest.local hdp02'
export HOST03='192.168.1.123 hdp03.dbinterest.local hdp03'
export BAKDIR='/root/bak_dir'
export HOSTSFILE='/etc/hosts'
export NETWORKFILE='/etc/sysconfig/network'
export CURRENT_HOSTNAME=`hostname`
### Modify hostname according to your node in the cluster
#export HOSTNAME2BE='hdp01.dbinterest.local'
#export HOSTNAME2BE='hdp02.dbinterest.local'
export HOSTNAME2BE='hdp03.dbinterest.local'
export ETH0STRING=`ifconfig | grep eth0`
export HWADDRSS=`ifconfig | grep HWaddr | awk '{print $5}'`
export IFCFG_FILE='/etc/sysconfig/network-scripts/ifcfg-eth0'
### Modify host IP address according to your node in the cluster
#export HOST_IP='192.168.1.121'
#export HOST_IP='192.168.1.122'
export HOST_IP='192.168.1.123'
export GATEWAYIP='192.168.1.1'
export DNSIP01='8.8.8.8'
export DNSIP02='8.8.4.4'
export FILE70='/etc/udev/rules.d/70-persistent-net.rules'
export DATETIME=`date +%Y%m%d%H%M%S`
export FQDN=`hostname -f`

### Make the backup directory for the different files
if [ -d $BAKDIR ]; then
    echo "The backup directory $BAKDIR exists!"
else
    echo "Making the backup directory $BAKDIR..."
    mkdir $BAKDIR
fi

############################################################################
### Config the hosts file "/etc/hosts"
############################################################################
if [ -f $HOSTSFILE ]; then
    cp $HOSTSFILE $BAKDIR/hosts_$DATETIME.bak
    echo '127.0.0.1   localhost localhost.localdomain' > $HOSTSFILE
    echo '::1         localhost6 localhost6.localdomain6' >> $HOSTSFILE
    echo "$HOST01" >> $HOSTSFILE
    echo "$HOST02" >> $HOSTSFILE
    echo "$HOST03" >> $HOSTSFILE
else
    echo "File $HOSTSFILE does not exist"
fi

############################################################################
### Config the network file "/etc/sysconfig/network"
############################################################################
if [ -f $NETWORKFILE ]; then
    cp $NETWORKFILE $BAKDIR/network_$DATETIME.bak
    echo 'NETWORKING=yes' > $NETWORKFILE
    echo "HOSTNAME=$HOSTNAME2BE" >> $NETWORKFILE
else
    echo "File $NETWORKFILE does not exist"
fi

############################################################################
### Config the ifcfg-eth0 file "/etc/sysconfig/network-scripts/ifcfg-eth0"
############################################################################
if [ -f $IFCFG_FILE ]; then
    cp $IFCFG_FILE $BAKDIR/ifcfg_file_$DATETIME.bak
    echo 'DEVICE=eth0' > $IFCFG_FILE
    echo 'BOOTPROTO=static' >> $IFCFG_FILE
    echo "HWADDR=$HWADDRSS" >> $IFCFG_FILE
    echo "IPADDR=$HOST_IP" >> $IFCFG_FILE
    echo 'NETMASK=255.255.255.0' >> $IFCFG_FILE
    echo "GATEWAY=$GATEWAYIP" >> $IFCFG_FILE
    echo "DNS1=$DNSIP01" >> $IFCFG_FILE
    echo "DNS2=$DNSIP02" >> $IFCFG_FILE
    echo 'ONBOOT=yes' >> $IFCFG_FILE
fi

echo ''
echo "DEFAULT hostname is $CURRENT_HOSTNAME."
echo "Hostname is going to be changed to $HOSTNAME2BE..."
if [ "$CURRENT_HOSTNAME" != "$HOSTNAME2BE" ]; then
    hostname $HOSTNAME2BE
else
    echo "The hostname is already configured correctly!"
fi

############################################################################
### Check the current config setting for the different files
############################################################################
echo ''
echo -e "Current fully qualified domain name is: \n $FQDN"
echo "Current config setting for $HOSTSFILE, $NETWORKFILE and $IFCFG_FILE"
echo ''
echo $HOSTSFILE
cat $HOSTSFILE
echo ''
echo $NETWORKFILE
cat $NETWORKFILE
echo ''
echo $IFCFG_FILE
cat $IFCFG_FILE

############################################################################
### Stop iptables and SELinux. The reboot will make those take effect!
############################################################################
echo ''
echo "Stopping iptables and SELinux ..."
service iptables stop
chkconfig iptables off
sed -i.bak 's/=enforcing/=disabled/g' /etc/selinux/config

############################################################################
### Restarting the network ...
############################################################################
echo ''
echo "Restarting network ..."
service network restart

############################################################################
### For machines copied/cloned in the VMware env, the network device is
### renamed from "eth0" to "eth2" after the 1st copy, then "eth3" after the
### 2nd, then "eth4". For a consistent test env, all of them are changed
### back to "eth0" ...
############################################################################
if [ -z "$ETH0STRING" ]; then
    echo "Network device eth0 does NOT exist!!!"
    if [ -f $FILE70 ]; then
        echo "Now deleting the file $FILE70... and rebooting..."
        cp $FILE70 $BAKDIR/file70_$DATETIME.bak
        rm /etc/udev/rules.d/70-persistent-net.rules
        reboot
    fi
else
    echo "Network device eth0 exists."
fi
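Once a node comes back up, a quick sanity check along these lines confirms the script did its job (a sketch; run it on each node, pinging any of the three hostnames):

hostname -f                          # should print the FQDN set in $HOSTNAME2BE
ifconfig eth0 | grep 'inet addr'     # should show the static IP from $HOST_IP
ping -c 1 hdp01.dbinterest.local     # names from /etc/hosts should resolve
service iptables status              # iptables should be stopped
getenforce                           # should report Disabled after the reboot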

hdp_bld.sh - Just run on the Master node.

#!/bin/bash
# Name: hdp_bld.sh
# Purpose: For fast Hadoop installation of the testing env, to save time for life :)
# Author: Stone@dbinterest.com + Many guys on the internet
# Time: 04/09/2012
# User: hadoop user "hdpadm"
# Attention: Passwordless SSH access needs to be set up first between the
#            different nodes for the script to take effect!

############################################################################
### Different env variables setting. Please change this part according to
### your specific env ...
############################################################################
export HDPPKG='hadoop-1.0.3.tar.gz'
export HDPLOC='/home/hdpadm/hadoop'
export HDPADMHM='/home/hdpadm'
export MASTER01='hdp01.dbinterest.local'
export SLAVE01='hdp02.dbinterest.local'
export SLAVE02='hdp03.dbinterest.local'
export HDPLINK='http://archive.apache.org/dist/hadoop/core/stable/hadoop-1.0.3.tar.gz'
export SLAVES="hdp02.dbinterest.local hdp03.dbinterest.local"
export USER='hdpadm'

############################################################################
### For the script to run, the hadoop user "hdpadm" should be set up first!
### And passwordless SSH should be set up for all the nodes ...
############################################################################
#/usr/sbin/groupadd hdpadm
#/usr/sbin/useradd hdpadm -g hdpadm
# Run as the new user "hdpadm" on each node:
# ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
# eg: ssh-copy-id -i ~/.ssh/id_rsa.pub hdp01.dbinterest.local
# Syntax: ssh-copy-id [-i [identity_file]] [user@]machine

############################################################################
### Get the Hadoop packages and prepare the installation
############################################################################
if [ ! -d $HDPLOC ]; then
    mkdir $HDPLOC
fi
cd $HDPLOC
if [ ! -f "$HDPLOC/$HDPPKG" ]; then
    echo "Getting the Hadoop packages and preparing the installation..."
    wget $HDPLINK -O $HDPLOC/$HDPPKG
    tar xvzf $HDPLOC/$HDPPKG -C $HDPLOC
    rm -f $HDPLOC/$HDPPKG
fi

############################################################################
### Hadoop config step by step
############################################################################
# Config the profile
echo "Configuring the profile..."
if [ $(getconf LONG_BIT) == 64 ]; then
    echo '' >> $HDPADMHM/.bash_profile
    echo "# Added configurations for Hadoop" >> $HDPADMHM/.bash_profile
    echo "export JAVA_HOME=/usr/jdk64/jdk1.6.0_31" >> $HDPADMHM/.bash_profile
else
    echo "export JAVA_HOME=/usr/jdk32/jdk1.6.0_31" >> $HDPADMHM/.bash_profile
fi
echo "export HADOOP_HOME=/home/hdpadm/hadoop/hadoop-1.0.3" >> $HDPADMHM/.bash_profile
echo "export PATH=\$PATH:\$HADOOP_HOME/bin:\$JAVA_HOME/bin" >> $HDPADMHM/.bash_profile
echo "export HADOOP_HOME_WARN_SUPPRESS=1" >> $HDPADMHM/.bash_profile

# hadoop core-site.xml
echo "Configuring the core-site.xml file..."
echo '<?xml version="1.0"?>' > $HDPLOC/hadoop-1.0.3/conf/core-site.xml
echo '<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>' >> $HDPLOC/hadoop-1.0.3/conf/core-site.xml
echo "<configuration>" >> $HDPLOC/hadoop-1.0.3/conf/core-site.xml
echo "  <property>" >> $HDPLOC/hadoop-1.0.3/conf/core-site.xml
echo "    <name>fs.default.name</name>" >> $HDPLOC/hadoop-1.0.3/conf/core-site.xml
echo "    <value>hdfs://$MASTER01:9000</value>" >> $HDPLOC/hadoop-1.0.3/conf/core-site.xml
echo "  </property>" >> $HDPLOC/hadoop-1.0.3/conf/core-site.xml
echo "  <property>" >> $HDPLOC/hadoop-1.0.3/conf/core-site.xml
echo "    <name>hadoop.tmp.dir</name>" >> $HDPLOC/hadoop-1.0.3/conf/core-site.xml
echo "    <value>$HDPLOC/tmp</value>" >> $HDPLOC/hadoop-1.0.3/conf/core-site.xml
echo "  </property>" >> $HDPLOC/hadoop-1.0.3/conf/core-site.xml
echo "</configuration>" >> $HDPLOC/hadoop-1.0.3/conf/core-site.xml

# hadoop hdfs-site.xml
echo "Configuring the hdfs-site.xml..."
echo '<?xml version="1.0"?>' > $HDPLOC/hadoop-1.0.3/conf/hdfs-site.xml
echo '<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>' >> $HDPLOC/hadoop-1.0.3/conf/hdfs-site.xml
echo "<configuration>" >> $HDPLOC/hadoop-1.0.3/conf/hdfs-site.xml
echo "  <property>" >> $HDPLOC/hadoop-1.0.3/conf/hdfs-site.xml
echo "    <name>dfs.name.dir</name>" >> $HDPLOC/hadoop-1.0.3/conf/hdfs-site.xml
echo "    <value>$HDPLOC/name</value>" >> $HDPLOC/hadoop-1.0.3/conf/hdfs-site.xml
echo "  </property>" >> $HDPLOC/hadoop-1.0.3/conf/hdfs-site.xml
echo "  <property>" >> $HDPLOC/hadoop-1.0.3/conf/hdfs-site.xml
echo "    <name>dfs.data.dir</name>" >> $HDPLOC/hadoop-1.0.3/conf/hdfs-site.xml
echo "    <value>$HDPLOC/data</value>" >> $HDPLOC/hadoop-1.0.3/conf/hdfs-site.xml
echo "  </property>" >> $HDPLOC/hadoop-1.0.3/conf/hdfs-site.xml
echo "  <property>" >> $HDPLOC/hadoop-1.0.3/conf/hdfs-site.xml
echo "    <name>dfs.replication</name>" >> $HDPLOC/hadoop-1.0.3/conf/hdfs-site.xml
echo "    <value>2</value>" >> $HDPLOC/hadoop-1.0.3/conf/hdfs-site.xml
echo "  </property>" >> $HDPLOC/hadoop-1.0.3/conf/hdfs-site.xml
echo "</configuration>" >> $HDPLOC/hadoop-1.0.3/conf/hdfs-site.xml

# hadoop mapred-site.xml
echo "Configuring mapred-site.xml file..."
echo '<?xml version="1.0"?>' > $HDPLOC/hadoop-1.0.3/conf/mapred-site.xml
echo '<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>' >> $HDPLOC/hadoop-1.0.3/conf/mapred-site.xml
echo "<configuration>" >> $HDPLOC/hadoop-1.0.3/conf/mapred-site.xml
echo "  <property>" >> $HDPLOC/hadoop-1.0.3/conf/mapred-site.xml
echo "    <name>mapred.job.tracker</name>" >> $HDPLOC/hadoop-1.0.3/conf/mapred-site.xml
echo "    <value>$MASTER01:9001</value>" >> $HDPLOC/hadoop-1.0.3/conf/mapred-site.xml
echo "  </property>" >> $HDPLOC/hadoop-1.0.3/conf/mapred-site.xml
echo "</configuration>" >> $HDPLOC/hadoop-1.0.3/conf/mapred-site.xml

echo "Configuring the masters, slaves and hadoop-env.sh files..."
# hadoop "masters" config
echo "$MASTER01" > $HDPLOC/hadoop-1.0.3/conf/masters
# hadoop "slaves" config
echo "$SLAVE01" > $HDPLOC/hadoop-1.0.3/conf/slaves
echo "$SLAVE02" >> $HDPLOC/hadoop-1.0.3/conf/slaves
# hadoop "hadoop-env.sh" config
echo "export JAVA_HOME=/usr/jdk64/jdk1.6.0_31" > $HDPLOC/hadoop-1.0.3/conf/hadoop-env.sh
echo 'export HADOOP_NAMENODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_NAMENODE_OPTS"' >> $HDPLOC/hadoop-1.0.3/conf/hadoop-env.sh
echo 'export HADOOP_SECONDARYNAMENODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_SECONDARYNAMENODE_OPTS"' >> $HDPLOC/hadoop-1.0.3/conf/hadoop-env.sh
echo 'export HADOOP_DATANODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_DATANODE_OPTS"' >> $HDPLOC/hadoop-1.0.3/conf/hadoop-env.sh
echo 'export HADOOP_BALANCER_OPTS="-Dcom.sun.management.jmxremote $HADOOP_BALANCER_OPTS"' >> $HDPLOC/hadoop-1.0.3/conf/hadoop-env.sh
echo 'export HADOOP_JOBTRACKER_OPTS="-Dcom.sun.management.jmxremote $HADOOP_JOBTRACKER_OPTS"' >> $HDPLOC/hadoop-1.0.3/conf/hadoop-env.sh

# Copy the config files and hadoop folder from "master" to all "slaves"
for slave in $SLAVES
do
    echo "------Copying profile and hadoop directory-------"
    scp $HDPADMHM/.bash_profile $USER@$slave:$HDPADMHM/.bash_profile
    ssh $USER@$slave source $HDPADMHM/.bash_profile
    scp -r $HDPLOC $USER@$slave:/home/hdpadm
done
source $HDPADMHM/.bash_profile
#hadoop namenode -format
#$HADOOP_HOME/bin/start-all.sh
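The last two commented-out lines are the actual launch. Once the script has finished on the master, a typical first run looks like this (standard Hadoop 1.x commands, executed as hdpadm on the master):

source ~/.bash_profile           # pick up HADOOP_HOME and JAVA_HOME
hadoop namenode -format          # one-time HDFS format - this wipes dfs.name.dir
$HADOOP_HOME/bin/start-all.sh    # NameNode/JobTracker here, DataNodes/TaskTrackers on the slaves
jps                              # expect NameNode, SecondaryNameNode and JobTracker on the master
hadoop dfsadmin -report          # both slaves should show up as live datanodes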

Having read this article, you should now have a solid understanding of these semi-automated scripts for quickly deploying a multi-node Hadoop test environment. To learn more, feel free to follow the industry news channel. Thanks for reading!
