
How to Use SparkContext to Convert Data into RDDs


This article explains how to use SparkContext to convert data into RDDs. Many people run into questions about this in day-to-day work, so the editor has gone through the available material and distilled it into a simple, practical walkthrough. Hopefully it helps clear up your doubts about how to use SparkContext to convert data into RDDs. Follow along and learn!

一. Background

Among Spark's RDD transformation operators, join and cogroup are easy to confuse, and the difference is worth spelling out: for each key, join emits one output record per matching pair of values, while cogroup emits a single record whose value collects each side's values into an Iterable. The examples below make this concrete.

二. Examples

1. Build the sample List data

// Imports used across the snippets below
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

import scala.Tuple2;

List<Tuple2<Integer, String>> studentsList = Arrays.asList(
        new Tuple2<Integer, String>(1, "xufengnian"),
        new Tuple2<Integer, String>(2, "xuyao"),
        new Tuple2<Integer, String>(2, "wangchudong"),
        new Tuple2<Integer, String>(3, "laohuang"));

List<Tuple2<Integer, Integer>> scoresList = Arrays.asList(
        new Tuple2<Integer, Integer>(1, 100),
        new Tuple2<Integer, Integer>(2, 90),
        new Tuple2<Integer, Integer>(3, 80),
        new Tuple2<Integer, Integer>(1, 101),
        new Tuple2<Integer, Integer>(2, 91),
        new Tuple2<Integer, Integer>(3, 81),
        new Tuple2<Integer, Integer>(3, 71));

2. Use SparkContext to convert the Lists to RDDs

import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.function.VoidFunction;

JavaPairRDD<Integer, String> studentsRDD = sc.parallelizePairs(studentsList);
JavaPairRDD<Integer, Integer> scoresRDD = sc.parallelizePairs(scoresList);

// studentsRDD now holds: (1,xufengnian) (2,xuyao) (2,wangchudong) (3,laohuang)
// Print it to verify:
studentsRDD.foreach(new VoidFunction<Tuple2<Integer, String>>() {
    public void call(Tuple2<Integer, String> tuple) {
        System.out.println(tuple._1); // 1 2 2 3
        System.out.println(tuple._2); // xufengnian xuyao wangchudong laohuang
    }
});
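Note that the snippets assume an already-created JavaSparkContext named sc, which the article never shows. A minimal setup sketch, with the app name and local master URL as illustrative assumptions:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

// Hypothetical setup: the app name and local[*] master are illustrative
// choices, not from the article.
SparkConf conf = new SparkConf()
        .setAppName("JoinCogroupDemo")
        .setMaster("local[*]"); // run locally with all available cores
JavaSparkContext sc = new JavaSparkContext(conf);

parallelizePairs distributes a local List of Tuple2 elements across the cluster as a JavaPairRDD keyed on each tuple's first field.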

3. Perform a join

/*
Input data:
(1,xufengnian) (2,xuyao) (2,wangchudong) (3,laohuang)
(1,100) (2,90) (3,80) (1,101) (2,91) (3,81) (3,71)

After join:
(1,(xufengnian,100)) (1,(xufengnian,101))
(3,(laohuang,80)) (3,(laohuang,81)) (3,(laohuang,71))
(2,(xuyao,90)) (2,(xuyao,91)) (2,(wangchudong,90)) (2,(wangchudong,91))
*/
// join matches records with equal keys; the key stays the same and the
// value becomes a (String, Integer) pair
JavaPairRDD<Integer, Tuple2<String, Integer>> studentScores = studentsRDD.join(scoresRDD);

studentScores.foreach(new VoidFunction<Tuple2<Integer, Tuple2<String, Integer>>>() {
    private static final long serialVersionUID = 1L;

    @Override
    public void call(Tuple2<Integer, Tuple2<String, Integer>> student) throws Exception {
        System.out.println("student id: " + student._1);       // 1 1 3 ...
        System.out.println("student name: " + student._2._1);  // xufengnian xufengnian laohuang ...
        System.out.println("student score: " + student._2._2); // 100 101 80 ...
        System.out.println("===================================");
    }
});
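The anonymous VoidFunction classes above are Java 7 style. On Java 8 or later, VoidFunction is a functional interface, so the same traversal can be written as a lambda; a sketch assuming the studentsRDD and scoresRDD defined earlier:

// Equivalent to the join traversal above, written with a Java 8 lambda
studentsRDD.join(scoresRDD).foreach(student -> {
    System.out.println("student id: " + student._1);
    System.out.println("student name: " + student._2._1);
    System.out.println("student score: " + student._2._2);
});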

4. Perform a cogroup

/*
Input data:
(1,xufengnian) (2,xuyao) (2,wangchudong) (3,laohuang)
(1,100) (2,90) (3,80) (1,101) (2,91) (3,81) (3,71)

After cogroup:
(1,([xufengnian],[100,101]))
(3,([laohuang],[80,81,71]))
(2,([xuyao,wangchudong],[90,91]))
*/
// cogroup emits one record per key, with each side's values collected into an Iterable
JavaPairRDD<Integer, Tuple2<Iterable<String>, Iterable<Integer>>> studentScores2 =
        studentsRDD.cogroup(scoresRDD);

studentScores2.foreach(new VoidFunction<Tuple2<Integer, Tuple2<Iterable<String>, Iterable<Integer>>>>() {
    @Override
    public void call(Tuple2<Integer, Tuple2<Iterable<String>, Iterable<Integer>>> stu) throws Exception {
        System.out.println("stu id:" + stu._1);       // 1 3 2
        System.out.println("stu name:" + stu._2._1);  // [xufengnian] [laohuang] [xuyao, wangchudong]
        System.out.println("stu score:" + stu._2._2); // [100, 101] [80, 81, 71] [90, 91]
        Iterable<Integer> integers = stu._2._2;
        for (Iterator<Integer> iter = integers.iterator(); iter.hasNext();) {
            Integer score = iter.next();
            System.out.println(score); // 100 101 80 81 71 90 91
        }
        System.out.println("===================================");
    }
});
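Seen side by side, join produced nine (id,(name,score)) records, one per matching name/score pair, while cogroup produced exactly three, one per key. The grouped iterables make per-key aggregation easy; as a sketch (the variable name totals is an assumption, not from the article), summing each student id's scores from the cogroup output:

// Builds on studentScores2 above: sum the Iterable<Integer> of scores for each key
JavaPairRDD<Integer, Integer> totals = studentScores2.mapValues(pair -> {
    int sum = 0;
    for (Integer score : pair._2) { // pair._2 is the Iterable<Integer> of scores
        sum += score;
    }
    return sum;
});
totals.foreach(t -> System.out.println(t._1 + " -> " + t._2)); // 1 -> 201, 2 -> 181, 3 -> 232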

This concludes the walkthrough of "How to Use SparkContext to Convert Data into RDDs". Hopefully it has cleared up your doubts. Pairing theory with hands-on practice is the best way to learn, so go try it yourself! For more practical articles like this one, keep following the site; the editor will keep working to bring you more.
