An Example Analysis of Flink DataSet Programming in Apache Flink


This article introduces DataSet programming in Apache Flink in some detail; interested readers can use it as a reference, and hopefully it will be helpful.

DataSet programming in Flink is quite conventional: you simply implement transformations on data sets (for example filtering, mapping, joining, grouping). A data set is initially created from a data source (for example by reading a file or loading a local collection), and the results of the transformations are returned through a sink to a local (or distributed) file system or to the terminal. Flink programs can run in a variety of environments, standalone or embedded in other programs, and execution can take place in a local JVM or on a cluster.

Source ===> Flink (transformation) ===> Sink
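To make the Source ===> transformation ===> Sink flow concrete, here is a minimal end-to-end Scala sketch. It reads the hello.txt file used later in this article, splits lines into words, and writes the result out; the output path E:/test/output is just a placeholder for illustration:

import org.apache.flink.api.scala._

object MinimalDataSetApp {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment

    // Source: read a text file line by line
    val lines = env.readTextFile("E:/test/input/hello.txt")

    // Transformation: split each line into words and drop empty strings
    val words = lines
      .flatMap(_.split("\\s+"))
      .filter(_.nonEmpty)

    // Sink: write the result to the file system
    // (calling print() instead would also work for local debugging)
    words.writeAsText("E:/test/output")
    env.execute("Minimal DataSet job")
  }
}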

File-based

  • readTextFile(path) / TextInputFormat - Reads files line wise and returns them as Strings.

  • readTextFileWithValue(path) / TextValueInputFormat - Reads files line wise and returns them as StringValues. StringValues are mutable strings.

  • readCsvFile(path) / CsvInputFormat - Parses files of comma (or another char) delimited fields. Returns a DataSet of tuples or POJOs. Supports the basic java types and their Value counterparts as field types.

  • readFileOfPrimitives(path, Class) / PrimitiveInputFormat - Parses files of new-line (or another char sequence) delimited primitive data types such as String or Integer.

  • readFileOfPrimitives(path, delimiter, Class) / PrimitiveInputFormat - Parses files of new-line (or another char sequence) delimited primitive data types such as String or Integer using the given delimiter (a short sketch of this source follows the list).
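readTextFile and readCsvFile are demonstrated later in this article; as a quick illustration of readFileOfPrimitives, here is a minimal Scala sketch. It assumes a file E:/test/input/numbers.txt containing integers separated by commas; both the file and the delimiter are invented for this example:

import org.apache.flink.api.scala._

object PrimitivesSourceApp {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment

    // Parse "1,2,3,..." into a DataSet[Int], using "," as the delimiter
    env.readFileOfPrimitives[Int]("E:/test/input/numbers.txt", ",").print()
  }
}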

Collection-based

  • fromCollection(Collection)

  • fromCollection(Iterator, Class)

  • fromElements(T ...)

  • fromParallelCollection(SplittableIterator, Class)

  • generateSequence(from, to)

Creating a DataSet from a simple collection

Collection-based data sources are typically used in development or while learning, because they make it easy to fabricate whatever data is needed. Below, a collection is used as the data source in both Java and Scala; the data is simply the numbers 1 to 10.

Java

import org.apache.flink.api.java.ExecutionEnvironment;

import java.util.ArrayList;
import java.util.List;

public class JavaDataSetSourceApp {

    public static void main(String[] args) throws Exception {
        ExecutionEnvironment executionEnvironment = ExecutionEnvironment.getExecutionEnvironment();
        fromCollection(executionEnvironment);
    }

    public static void fromCollection(ExecutionEnvironment env) throws Exception {
        List<Integer> list = new ArrayList<>();
        for (int i = 1; i <= 10; i++) {
            list.add(i);
        }
        env.fromCollection(list).print();
    }
}

Scala

import org.apache.flink.api.scala.ExecutionEnvironment

object DataSetSourceApp {

  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment
    fromCollection(env)
  }

  def fromCollection(env: ExecutionEnvironment): Unit = {
    import org.apache.flink.api.scala._
    val data = 1 to 10
    env.fromCollection(data).print()
  }
}
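The remaining collection-based sources follow the same pattern. As a brief sketch, fromElements and generateSequence can be used like this (the method would live in the same object as above):

  def otherCollectionSources(env: ExecutionEnvironment): Unit = {
    import org.apache.flink.api.scala._

    // A DataSet built from explicitly listed elements
    env.fromElements("hello", "world", "welcome").print()

    // A DataSet containing the sequence 1, 2, ..., 10
    env.generateSequence(1, 10).print()
  }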

Creating a DataSet by reading a file or a directory

In the local directory E:\test\input there is a file hello.txt with the following content:

hello    world   welcome
hello   welcome

Scala

  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment
    //fromCollection(env)
    textFile(env)
  }

  def textFile(env: ExecutionEnvironment): Unit = {
    val filePathFilter = "E:/test/input/hello.txt"
    env.readTextFile(filePathFilter).print()
  }

The readTextFile method takes two parameters: the first is the file path (either a local path or one of the form hdfs://host:port/file/path), and the second is the character set, which defaults to UTF-8 when omitted.
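For example, to read a file with a non-UTF-8 encoding, the character set can be passed explicitly (GBK is just an assumed encoding here for illustration):

  env.readTextFile("E:/test/input/hello.txt", "GBK").print()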

Can a directory be specified instead of a file?

Let's pass the directory path directly:

  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment
    //fromCollection(env)
    textFile(env)
  }

  def textFile(env: ExecutionEnvironment): Unit = {
    //val filePathFilter = "E:/test/input/hello.txt"
    val filePathFilter = "E:/test/input"
    env.readTextFile(filePathFilter).print()
  }

The job runs and prints the expected output, so readTextFile also accepts a directory: it reads every file directly under that directory. (By default it does not descend into nested subdirectories; see the section on recursive reading below.)

Java

    public static void main(String[] args) throws Exception {
        ExecutionEnvironment executionEnvironment = ExecutionEnvironment.getExecutionEnvironment();
        // fromCollection(executionEnvironment);
        textFile(executionEnvironment);
    }

    public static void textFile(ExecutionEnvironment env) throws Exception {
        String filePath = "E:/test/input/hello.txt";
        // String filePath = "E:/test/input";
        env.readTextFile(filePath).print();
    }

The same applies in Java: either a file or a directory can be specified, and when a directory is given, every file under it is read.

Creating a DataSet by reading a CSV file

Create a CSV file with the following content:

name,age,job
Tom,26,cat
Jerry,24,mouse
sophia,30,developer

Scala

CSV files are read with the readCsvFile method, which takes the following parameters:

      filePath: String,
      lineDelimiter: String = "\n",
      fieldDelimiter: String = ",",         // field delimiter
      quoteCharacter: Character = null,
      ignoreFirstLine: Boolean = false,     // whether to skip the first line
      ignoreComments: String = null,
      lenient: Boolean = false,
      includedFields: Array[Int] = null,    // which columns of the file to read
      pojoFields: Array[String] = null

The code to read the CSV file is as follows:

  def csvFile(env: ExecutionEnvironment): Unit = {
    import org.apache.flink.api.scala._
    val filePath = "E:/test/input/people.csv"
    env.readCsvFile[(String, Int, String)](filePath, ignoreFirstLine = true).print()
  }

To read only the first two columns, includedFields needs to be specified:

env.readCsvFile[(String, Int)](filePath, ignoreFirstLine = true, includedFields = Array(0, 1)).print()

The examples above use a Tuple to specify the field types; how can a custom case class be used instead?

  def csvFile(env: ExecutionEnvironment): Unit = {
    import org.apache.flink.api.scala._
    val filePath = "E:/test/input/people.csv"
    //    env.readCsvFile[(String, Int, String)](filePath, ignoreFirstLine = true).print()
    //    env.readCsvFile[(String, Int)](filePath, ignoreFirstLine = true, includedFields = Array(0, 1)).print()
    env.readCsvFile[MyCaseClass](filePath, ignoreFirstLine = true, includedFields = Array(0, 1)).print()
  }

  case class MyCaseClass(name: String, age: Int)

How can a POJO be specified?

Create a new POJO class, People:

public class People {

    private String name;
    private int age;
    private String job;

    @Override
    public String toString() {
        return "People{" +
                "name='" + name + '\'' +
                ", age=" + age +
                ", job='" + job + '\'' +
                '}';
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public int getAge() {
        return age;
    }

    public void setAge(int age) {
        this.age = age;
    }

    public String getJob() {
        return job;
    }

    public void setJob(String job) {
        this.job = job;
    }
}
      env.readCsvFile[People](filePath, ignoreFirstLine = true, pojoFields = Array("name", "age", "job")).print()

Java

    public static void csvFile(ExecutionEnvironment env) throws Exception {
        String filePath = "E:/test/input/people.csv";
        DataSource<Tuple2<String, Integer>> types = env.readCsvFile(filePath)
                .ignoreFirstLine()
                .includeFields("11")
                .types(String.class, Integer.class);
        types.print();
    }

Only the first and second columns are read. The string passed to includeFields is a field mask: each character corresponds to one column of the CSV file, with 1 meaning include and 0 meaning skip, so "101", for example, would select the first and third columns.

Reading the data into the POJO:

        env.readCsvFile(filePath).ignoreFirstLine().pojoType(People.class, "name", "age", "job").print();

Creating a DataSet by reading a directory recursively

Scala

  import org.apache.flink.configuration.Configuration

  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment
    //fromCollection(env)
    //textFile(env)
    //csvFile(env)
    readRecursiveFiles(env)
  }

  def readRecursiveFiles(env: ExecutionEnvironment): Unit = {
    val filePath = "E:/test/nested"
    val parameter = new Configuration()
    // Enable enumeration of files in nested subdirectories
    parameter.setBoolean("recursive.file.enumeration", true)
    env.readTextFile(filePath).withParameters(parameter).print()
  }

Creating a DataSet from a compressed file

Scala

  def readCompressionFiles(env: ExecutionEnvironment): Unit = {
    val filePath = "E:/test/my.tar.gz"
    env.readTextFile(filePath).print()
  }

Compressed files can be read directly. Compression improves space utilization, but it also increases CPU load, since the data has to be decompressed while it is read (and compressed files generally cannot be split, so each file is processed by a single task). This is a trade-off that needs tuning: choose the approach that fits the situation, because not every optimization delivers the result you want. If the cluster is already under heavy CPU pressure, reading compressed files is probably not a good idea.

That concludes this example analysis of Flink DataSet programming in Apache Flink. Hopefully the content above has been of some help and taught you something new. If you think the article is good, feel free to share it so more people can see it.
