
How to Install Spark 1.0.0 in Pseudo-Distributed Mode


This article walks through a pseudo-distributed installation of Spark 1.0.0. It is intended as a practical reference; hopefully you will get something useful out of it.

I. Downloads

Software required:

spark-1.0.0-bin-hadoop1.tgz  download: Spark 1.0.0

scala-2.10.4.tgz  download: Scala 2.10.4

hadoop-1.2.1-bin.tar.gz  download: hadoop-1.2.1-bin.tar.gz

jdk-7u60-linux-i586.tar.gz  download: get it from the Oracle website; any JDK 1.7.x release works
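The original download links above no longer resolve. As a hedged sketch (the archive URLs below are an assumption, not taken from the original article, and should be verified before use), the legacy packages can usually still be fetched like this:

# Assumed archive mirror locations; verify each URL before relying on it.
wget https://archive.apache.org/dist/spark/spark-1.0.0/spark-1.0.0-bin-hadoop1.tgz
wget https://archive.apache.org/dist/hadoop/common/hadoop-1.2.1/hadoop-1.2.1-bin.tar.gz
wget https://www.scala-lang.org/files/archive/scala-2.10.4.tgz
# jdk-7u60-linux-i586.tar.gz requires a manual download from the Oracle website.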

II. Installation Steps

For the hadoop-1.2.1 installation steps, see: http://my.oschina.net/dataRunner/blog/292584

1. Extract the archives:

tar -zxvf scala-2.10.4.tgz
mv scala-2.10.4 scala
tar -zxvf spark-1.0.0-bin-hadoop1.tgz
mv spark-1.0.0-bin-hadoop1 spark

2. Configure environment variables:

vim /etc/profile   (append the following at the end)
export HADOOP_HOME_WARN_SUPPRESS=1
export JAVA_HOME=/home/big_data/jdk
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export HADOOP_HOME=/home/big_data/hadoop
export HIVE_HOME=/home/big_data/hive
export SCALA_HOME=/home/big_data/scala
export SPARK_HOME=/home/big_data/spark
export PATH=.:$SPARK_HOME/bin:$SCALA_HOME/bin:$HIVE_HOME/bin:$HADOOP_HOME/bin:$JAVA_HOME/bin:$PATH
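One step the article skips: the new variables only take effect in a fresh login shell, or after the profile is re-sourced. A quick sanity check (assuming the paths above) might look like:

source /etc/profile        # reload the profile in the current shell
echo $SPARK_HOME           # should print /home/big_data/spark
scala -version             # should report Scala 2.10.4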

3. Edit Spark's spark-env.sh file

cd spark/conf
cp spark-env.sh.template spark-env.sh
vim spark-env.sh   (append the following at the end)
export JAVA_HOME=/home/big_data/jdk
export SCALA_HOME=/home/big_data/scala
export SPARK_MASTER_IP=192.168.80.100
export SPARK_WORKER_MEMORY=200m
export HADOOP_CONF_DIR=/home/big_data/hadoop/conf
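The article does not touch conf/slaves. For a single-node, pseudo-distributed setup the shipped default (a single worker on localhost) is usually fine; if you prefer the worker to be started on the hostname master instead, a minimal sketch, assuming the standalone scripts read conf/slaves, would be:

cd /home/big_data/spark/conf
echo master > slaves       # one worker host per line; the shipped default is typically localhost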

That completes the configuration! (It really is that simple; plenty of people know this, but very few write it up.)

III. Test Steps

For the hadoop-1.2.1 test steps, see: http://my.oschina.net/dataRunner/blog/292584

1. Verify Scala

[root@master ~]# scala -version
Scala code runner version 2.10.4 -- Copyright 2002-2013, LAMP/EPFL
[root@master ~]#
[root@master big_data]# scala
Welcome to Scala version 2.10.4 (Java HotSpot(TM) Client VM, Java 1.7.0_60).
Type in expressions to have them evaluated.
Type :help for more information.

scala> 1+1
res0: Int = 2

scala> :q

2. Verify Spark (start HDFS first with start-dfs.sh)

[root@master big_data]# cd spark
[root@master spark]# sbin/start-all.sh

(You can also start the daemons separately:
[root@master spark]$ sbin/start-master.sh
  the master web UI is then available at http://master:8080/
[root@master spark]$ sbin/start-slaves.sh spark://master:7077
  the worker web UI is at http://master:8081/ )

[root@master spark]# jps
4629 NameNode            (Hadoop)
5007 Master              (Spark)
6150 Jps
4832 SecondaryNameNode   (Hadoop)
5107 Worker              (Spark)
4734 DataNode            (Hadoop)

The master web UI is available at http://192.168.80.100:8080/

[root@master big_data]# spark-shell
Spark assembly has been built with Hive, including Datanucleus jars on classpath
14/07/20 21:41:04 INFO spark.SecurityManager: Changing view acls to: root
14/07/20 21:41:04 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root)
14/07/20 21:41:04 INFO spark.HttpServer: Starting HTTP Server
14/07/20 21:41:05 INFO server.Server: jetty-8.y.z-SNAPSHOT
14/07/20 21:41:05 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:43343
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.0.0
      /_/

Using Scala version 2.10.4 (Java HotSpot(TM) Client VM, Java 1.7.0_60)
...
scala>

The application web UI is available at http://192.168.80.100:4040/

(First upload a text file containing some English words to HDFS.)

scala> val file=sc.textFile("hdfs://master:9000/input")
14/07/20 21:51:05 INFO storage.MemoryStore: ensureFreeSpace(608) called with curMem=31527, maxMem=311387750
14/07/20 21:51:05 INFO storage.MemoryStore: Block broadcast_1 stored as values to memory (estimated size 608.0 B, free 296.9 MB)
file: org.apache.spark.rdd.RDD[String] = MappedRDD[5] at textFile at :12

scala> val count=file.flatMap(line=>line.split(" ")).map(word=>(word,1)).reduceByKey(_+_)
14/07/20 21:51:14 INFO mapred.FileInputFormat: Total input paths to process : 1
count: org.apache.spark.rdd.RDD[(String, Int)] = MapPartitionsRDD[10] at reduceByKey at :14

scala> count.collect()
14/07/20 21:51:48 INFO spark.SparkContext: Job finished: collect at :17, took 2.482381535 s
res0: Array[(String, Int)] = Array((previously-registered,1), (this,3), (Spark,1), (it,3), (original,1), (than,1), (its,1), (previously,1), (have,2), (upon,1), (order,2), (whenever,1), (it's,1), (could,3), (Configuration,1), (Master's,1), (SPARK_DAEMON_JAVA_OPTS,1), (This,2), (which,2), (applications,2), (register,,1), (doing,1), (for,3), (just,2), (used,1), (any,1), (go,1), ((equivalent,1), (Master,4), (killing,1), (time,1), (availability,,1), (stop-master.sh,1), (process.,1), (Future,1), (node,1), (the,9), (Workers,1), (however,,1), (up,2), (Details,1), (not,3), (recovered,1), (process,1), (enable,3), (spark-env,1), (enough,1), (can,4), (if,3), (While,2), (provided,1), (be,5), (mode.,1), (minute,1), (When,1), (all,2), (written,1), (store,1), (enter,1), (then,1), (as,1), (officially,1)...

scala> count.saveAsTextFile("hdfs://master:9000/output")     (saves the result to the /output directory on HDFS)

scala> :q
Stopping spark context.

[root@master ~]# hadoop fs -ls /
Found 3 items
drwxr-xr-x   - root supergroup          0 2014-07-18 21:10 /home
-rw-r--r--   1 root supergroup       1722 2014-07-18 06:18 /input
drwxr-xr-x   - root supergroup          0 2014-07-20 21:53 /output

[root@master ~]# hadoop fs -cat /output/p*
...
(mount,1)
(production-level,1)
(recovery).,1)
(Workers/applications,1)
(perspective.,1)
(so,2)
(and,1)
(ZooKeeper,2)
(System,1)
(needs,1)
(property       Meaning,1)
(solution,1)
(seems,1)
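As an extra sanity check that is not part of the original walkthrough, the bundled examples can also be run directly; the invocation below assumes the Spark 1.0.x run-example script and its default handling of the MASTER variable:

cd /home/big_data/spark
./bin/run-example SparkPi 10                               # local run, prints an approximation of pi
MASTER=spark://master:7077 ./bin/run-example SparkPi 10    # assumed way to target the standalone master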

Thanks for reading; hopefully this walkthrough of installing Spark 1.0.0 in pseudo-distributed mode was helpful.
