Writing Real-Time Data to MatrixOne Using Flink

Overview

Apache Flink is a powerful framework and distributed processing engine for stateful computation over unbounded and bounded data streams. Flink runs efficiently in all common cluster environments, performs computation at in-memory speed, and scales to data of any size.

Application Scenarios

  • Event-driven applications

    Event-driven applications are typically stateful; they ingest data from one or more event streams and trigger computations, state updates, or other external actions as events arrive. Typical event-driven applications include fraud detection, anomaly detection, rule-based alerting, and business process monitoring.

  • Data analytics applications

    The main goal of a data analytics task is to extract valuable information and metrics from raw data. Flink supports both streaming and batch analytics, covering scenarios such as telecom network quality monitoring, analysis of product updates and experiment evaluation in mobile applications, ad-hoc analysis of live data in consumer technology, and large-scale graph analysis.

  • Data pipeline applications

    Extract-transform-load (ETL) is a common approach for converting and moving data between storage systems. Data pipelines and ETL jobs are similar: both can transform and enrich data and then move it from one storage system to another. The difference is that a data pipeline runs in continuous streaming mode instead of being triggered periodically. Typical data pipeline applications include real-time search index building in e-commerce and continuous ETL.

This document walks through two examples: using the Flink compute engine to migrate data from MySQL to MatrixOne, and using Flink to write streaming data from Kafka into a MatrixOne database.

Prerequisites

Hardware Environment

The hardware requirements for this walkthrough are as follows:

| Server Name | Server IP      | Software Installed | Operating System |
|-------------|----------------|--------------------|------------------|
| node1       | 192.168.146.10 | MatrixOne          | Debian 11.1 x86  |
| node2       | 192.168.146.12 | Kafka              | CentOS 7.9       |
| node3       | 192.168.146.11 | IDEA, MySQL        | Windows 10       |

Software Environment

This walkthrough requires the following software to be installed and deployed:

Example 1: Migrating Data from MySQL to MatrixOne

Step 1: Initialize the Project

  1. Open IDEA, click File > New > Project, select Spring Initializr, and fill in the following configuration:

    • Name: matrixone-flink-demo
    • Location: ~\Desktop
    • Language: Java
    • Type: Maven
    • Group: com.example
    • Artifact: matrixone-flink-demo
    • Package name: com.matrixone.flink.demo
    • JDK: 1.8

    The configuration is shown in the figure below:

    (Figure 1: project configuration)

  2. Add the project dependencies: edit the pom.xml file in the project root and add the following content:

  <?xml version="1.0" encoding="UTF-8"?>
  <project xmlns="http://maven.apache.org/POM/4.0.0"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
      <modelVersion>4.0.0</modelVersion>

      <groupId>com.matrixone.flink</groupId>
      <artifactId>matrixone-flink-demo</artifactId>
      <version>1.0-SNAPSHOT</version>

      <properties>
          <scala.binary.version>2.12</scala.binary.version>
          <java.version>1.8</java.version>
          <flink.version>1.17.0</flink.version>
          <scope.mode>compile</scope.mode>
      </properties>

      <dependencies>
          <!-- Flink dependencies -->
          <dependency>
              <groupId>org.apache.flink</groupId>
              <artifactId>flink-connector-hive_2.12</artifactId>
              <version>${flink.version}</version>
          </dependency>
          <dependency>
              <groupId>org.apache.flink</groupId>
              <artifactId>flink-java</artifactId>
              <version>${flink.version}</version>
          </dependency>
          <dependency>
              <groupId>org.apache.flink</groupId>
              <artifactId>flink-streaming-java</artifactId>
              <version>${flink.version}</version>
          </dependency>
          <dependency>
              <groupId>org.apache.flink</groupId>
              <artifactId>flink-clients</artifactId>
              <version>${flink.version}</version>
          </dependency>
          <dependency>
              <groupId>org.apache.flink</groupId>
              <artifactId>flink-table-api-java-bridge</artifactId>
              <version>${flink.version}</version>
          </dependency>
          <dependency>
              <groupId>org.apache.flink</groupId>
              <artifactId>flink-table-planner_2.12</artifactId>
              <version>${flink.version}</version>
          </dependency>

          <!-- JDBC dependencies -->
          <dependency>
              <groupId>org.apache.flink</groupId>
              <artifactId>flink-connector-jdbc</artifactId>
              <version>1.15.4</version>
          </dependency>
          <dependency>
              <groupId>mysql</groupId>
              <artifactId>mysql-connector-java</artifactId>
              <version>8.0.33</version>
          </dependency>

          <!-- Kafka dependencies -->
          <dependency>
              <groupId>org.apache.kafka</groupId>
              <artifactId>kafka_2.13</artifactId>
              <version>3.5.0</version>
          </dependency>
          <dependency>
              <groupId>org.apache.flink</groupId>
              <artifactId>flink-connector-kafka</artifactId>
              <version>3.0.0-1.17</version>
          </dependency>

          <!-- JSON -->
          <dependency>
              <groupId>com.alibaba.fastjson2</groupId>
              <artifactId>fastjson2</artifactId>
              <version>2.0.34</version>
          </dependency>
      </dependencies>

      <build>
          <plugins>
              <plugin>
                  <groupId>org.apache.maven.plugins</groupId>
                  <artifactId>maven-compiler-plugin</artifactId>
                  <version>3.8.0</version>
                  <configuration>
                      <source>${java.version}</source>
                      <target>${java.version}</target>
                      <encoding>UTF-8</encoding>
                  </configuration>
              </plugin>
              <plugin>
                  <artifactId>maven-assembly-plugin</artifactId>
                  <version>2.6</version>
                  <configuration>
                      <descriptorRefs>
                          <descriptorRef>jar-with-dependencies</descriptorRef>
                      </descriptorRefs>
                  </configuration>
                  <executions>
                      <execution>
                          <id>make-assembly</id>
                          <phase>package</phase>
                          <goals>
                              <goal>single</goal>
                          </goals>
                      </execution>
                  </executions>
              </plugin>
          </plugins>
      </build>
  </project>

Step 2: Read Data from MatrixOne

After connecting to MatrixOne with a MySQL client, create the database and table required by the demo.

  1. Create the database and table in MatrixOne, and load the data:

    CREATE DATABASE test;
    USE test;
    CREATE TABLE `person` (`id` INT DEFAULT NULL, `name` VARCHAR(255) DEFAULT NULL, `birthday` DATE DEFAULT NULL);
    INSERT INTO test.person (id, name, birthday) VALUES(1, 'zhangsan', '2023-07-09'),(2, 'lisi', '2023-07-08'),(3, 'wangwu', '2023-07-12');

  2. Create a MoRead.java class in IDEA to read MatrixOne data with Flink:

    package com.matrixone.flink.demo;

    import org.apache.flink.api.common.functions.MapFunction;
    import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
    import org.apache.flink.api.java.ExecutionEnvironment;
    import org.apache.flink.api.java.operators.DataSource;
    import org.apache.flink.api.java.operators.MapOperator;
    import org.apache.flink.api.java.typeutils.RowTypeInfo;
    import org.apache.flink.connector.jdbc.JdbcInputFormat;
    import org.apache.flink.types.Row;

    import java.text.SimpleDateFormat;

    /**
     * @author MatrixOne
     * @description
     */
    public class MoRead {

        private static String srcHost = "192.168.146.10";
        private static Integer srcPort = 6001;
        private static String srcUserName = "root";
        private static String srcPassword = "111";
        private static String srcDataBase = "test";

        public static void main(String[] args) throws Exception {
            ExecutionEnvironment environment = ExecutionEnvironment.getExecutionEnvironment();
            // Set the parallelism
            environment.setParallelism(1);
            SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd");
            // Set the field types for the query
            RowTypeInfo rowTypeInfo = new RowTypeInfo(
                    new BasicTypeInfo[]{
                            BasicTypeInfo.INT_TYPE_INFO,
                            BasicTypeInfo.STRING_TYPE_INFO,
                            BasicTypeInfo.DATE_TYPE_INFO
                    },
                    new String[]{
                            "id",
                            "name",
                            "birthday"
                    }
            );
            DataSource<Row> dataSource = environment.createInput(JdbcInputFormat.buildJdbcInputFormat()
                    .setDrivername("com.mysql.cj.jdbc.Driver")
                    .setDBUrl("jdbc:mysql://" + srcHost + ":" + srcPort + "/" + srcDataBase)
                    .setUsername(srcUserName)
                    .setPassword(srcPassword)
                    .setQuery("select * from person")
                    .setRowTypeInfo(rowTypeInfo)
                    .finish());
            // Convert dates such as "Wed Jul 12 00:00:00 CST 2023" to "2023-07-12"
            MapOperator<Row, Row> mapOperator = dataSource.map((MapFunction<Row, Row>) row -> {
                row.setField("birthday", sdf.format(row.getField("birthday")));
                return row;
            });
            mapOperator.print();
        }
    }
  3. Run the main() method of MoRead in IDEA. The result is as follows:

    (Figure: MoRead execution result)
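The map step above relies on SimpleDateFormat to turn the java.util.Date produced by the JDBC source (printed as something like "Wed Jul 12 00:00:00 CST 2023") into the yyyy-MM-dd form. A minimal standalone sketch of that conversion (the class and method names here are illustrative, not part of the demo project):

```java
import java.text.SimpleDateFormat;
import java.util.Calendar;

public class BirthdayFormatSketch {

    // Format a java.util.Date the same way MoRead's map() step does.
    static String toIsoDate(java.util.Date date) {
        return new SimpleDateFormat("yyyy-MM-dd").format(date);
    }

    public static void main(String[] args) {
        Calendar cal = Calendar.getInstance();
        cal.clear();
        cal.set(2023, Calendar.JULY, 12); // local midnight, Jul 12 2023
        System.out.println(toIsoDate(cal.getTime())); // prints 2023-07-12
    }
}
```

Note that SimpleDateFormat is not thread-safe; MoRead creates a single instance with parallelism set to 1, so at higher parallelism each task should create its own formatter.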

Step 3: Write MySQL Data to MatrixOne

Now you can use Flink to migrate the MySQL data to MatrixOne.

  1. Prepare the MySQL data: on node3, connect to the local MySQL with a MySQL client, then create the database and table and insert the data:

    mysql -h127.0.0.1 -P3306 -uroot -proot
    mysql> CREATE DATABASE motest;
    mysql> USE motest;
    mysql> CREATE TABLE `person` (`id` int DEFAULT NULL, `name` varchar(255) DEFAULT NULL, `birthday` date DEFAULT NULL);
    mysql> INSERT INTO motest.person (id, name, birthday) VALUES(2, 'lisi', '2023-07-09'),(3, 'wangwu', '2023-07-13'),(4, 'zhaoliu', '2023-08-08');

  2. Empty the MatrixOne table:

    On node3, connect to MatrixOne on node1 with a MySQL client. Since this example keeps using the test database from the previous example, first empty the person table.

    -- On node3, connect to MatrixOne on node1 with a MySQL client
    mysql -h192.168.146.10 -P6001 -uroot -p111
    mysql> TRUNCATE TABLE test.person;

  3. Write the code in IDEA:

    Create the Person.java and Mysql2Mo.java classes, use Flink to read the MySQL data, perform a simple ETL step (converting Row to a Person object), and finally write the data into MatrixOne.

  package com.matrixone.flink.demo.entity;

  import java.util.Date;

  public class Person {

      private int id;
      private String name;
      private Date birthday;

      public int getId() {
          return id;
      }

      public void setId(int id) {
          this.id = id;
      }

      public String getName() {
          return name;
      }

      public void setName(String name) {
          this.name = name;
      }

      public Date getBirthday() {
          return birthday;
      }

      public void setBirthday(Date birthday) {
          this.birthday = birthday;
      }
  }
  package com.matrixone.flink.demo;

  import com.matrixone.flink.demo.entity.Person;
  import org.apache.flink.api.common.functions.MapFunction;
  import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
  import org.apache.flink.api.java.typeutils.RowTypeInfo;
  import org.apache.flink.connector.jdbc.*;
  import org.apache.flink.streaming.api.datastream.DataStreamSource;
  import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
  import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
  import org.apache.flink.types.Row;

  import java.sql.Date;

  /**
   * @author MatrixOne
   * @description
   */
  public class Mysql2Mo {

      private static String srcHost = "127.0.0.1";
      private static Integer srcPort = 3306;
      private static String srcUserName = "root";
      private static String srcPassword = "root";
      private static String srcDataBase = "motest";

      private static String destHost = "192.168.146.10";
      private static Integer destPort = 6001;
      private static String destUserName = "root";
      private static String destPassword = "111";
      private static String destDataBase = "test";
      private static String destTable = "person";

      public static void main(String[] args) throws Exception {
          StreamExecutionEnvironment environment = StreamExecutionEnvironment.getExecutionEnvironment();
          // Set the parallelism
          environment.setParallelism(1);
          // Set the field types for the query
          RowTypeInfo rowTypeInfo = new RowTypeInfo(
                  new BasicTypeInfo[]{
                          BasicTypeInfo.INT_TYPE_INFO,
                          BasicTypeInfo.STRING_TYPE_INFO,
                          BasicTypeInfo.DATE_TYPE_INFO
                  },
                  new String[]{
                          "id",
                          "name",
                          "birthday"
                  }
          );
          // Add the source
          DataStreamSource<Row> dataSource = environment.createInput(JdbcInputFormat.buildJdbcInputFormat()
                  .setDrivername("com.mysql.cj.jdbc.Driver")
                  .setDBUrl("jdbc:mysql://" + srcHost + ":" + srcPort + "/" + srcDataBase)
                  .setUsername(srcUserName)
                  .setPassword(srcPassword)
                  .setQuery("select * from person")
                  .setRowTypeInfo(rowTypeInfo)
                  .finish());
          // Perform the ETL step (Row -> Person)
          SingleOutputStreamOperator<Person> mapOperator = dataSource.map((MapFunction<Row, Person>) row -> {
              Person person = new Person();
              person.setId((Integer) row.getField("id"));
              person.setName((String) row.getField("name"));
              person.setBirthday((java.util.Date) row.getField("birthday"));
              return person;
          });
          // Configure the MatrixOne sink
          mapOperator.addSink(
                  JdbcSink.sink(
                          "insert into " + destTable + " values(?,?,?)",
                          (ps, t) -> {
                              ps.setInt(1, t.getId());
                              ps.setString(2, t.getName());
                              ps.setDate(3, new Date(t.getBirthday().getTime()));
                          },
                          new JdbcConnectionOptions.JdbcConnectionOptionsBuilder()
                                  .withDriverName("com.mysql.cj.jdbc.Driver")
                                  .withUrl("jdbc:mysql://" + destHost + ":" + destPort + "/" + destDataBase)
                                  .withUsername(destUserName)
                                  .withPassword(destPassword)
                                  .build()
                  )
          );
          environment.execute();
      }
  }
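In the sink lambda above, ps.setDate(3, ...) requires a java.sql.Date, while the Person entity stores a java.util.Date, hence the conversion through the epoch millisecond value. The same idea in isolation (class and method names here are illustrative):

```java
import java.util.Calendar;

public class SqlDateSketch {

    // Convert a java.util.Date to java.sql.Date via its epoch milliseconds,
    // as done when binding the birthday parameter in the JDBC sink.
    static java.sql.Date toSqlDate(java.util.Date utilDate) {
        return new java.sql.Date(utilDate.getTime());
    }

    public static void main(String[] args) {
        Calendar cal = Calendar.getInstance();
        cal.clear();
        cal.set(2023, Calendar.AUGUST, 8); // local midnight, Aug 8 2023
        System.out.println(toSqlDate(cal.getTime())); // prints 2023-08-08
    }
}
```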

Step 4: Check the Results

Run the following SQL in MatrixOne to view the result:

    mysql> select * from test.person;
    +------+---------+------------+
    | id   | name    | birthday   |
    +------+---------+------------+
    |    2 | lisi    | 2023-07-09 |
    |    3 | wangwu  | 2023-07-13 |
    |    4 | zhaoliu | 2023-08-08 |
    +------+---------+------------+
    3 rows in set (0.01 sec)

Example 2: Writing Kafka Data to MatrixOne

Step 1: Start the Kafka Service

Kafka cluster coordination and metadata management can be handled by either KRaft or ZooKeeper. Here we use Kafka 3.5.0, which does not require a standalone ZooKeeper deployment; instead it uses the built-in KRaft for metadata management. Configure the file config/kraft/server.properties, located under the Kafka root directory, as described below.

The configuration file content is as follows:

  # Licensed to the Apache Software Foundation (ASF) under one or more
  # contributor license agreements. See the NOTICE file distributed with
  # this work for additional information regarding copyright ownership.
  # The ASF licenses this file to You under the Apache License, Version 2.0
  # (the "License"); you may not use this file except in compliance with
  # the License. You may obtain a copy of the License at
  #
  # http://www.apache.org/licenses/LICENSE-2.0
  #
  # Unless required by applicable law or agreed to in writing, software
  # distributed under the License is distributed on an "AS IS" BASIS,
  # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  # See the License for the specific language governing permissions and
  # limitations under the License.
  #
  # This configuration file is intended for use in KRaft mode, where
  # Apache ZooKeeper is not present. See config/kraft/README.md for details.
  #

  ############################# Server Basics #############################

  # The role of this server. Setting this puts us in KRaft mode
  process.roles=broker,controller

  # The node id associated with this instance's roles
  node.id=1

  # The connect string for the controller quorum
  controller.quorum.voters=1@192.168.146.12:9093

  ############################# Socket Server Settings #############################

  # The address the socket server listens on.
  # Combined nodes (i.e. those with `process.roles=broker,controller`) must list the controller listener here at a minimum.
  # If the broker listener is not defined, the default listener will use a host name that is equal to the value of java.net.InetAddress.getCanonicalHostName(),
  # with PLAINTEXT listener name, and port 9092.
  # FORMAT:
  # listeners = listener_name://host_name:port
  # EXAMPLE:
  # listeners = PLAINTEXT://your.host.name:9092
  #listeners=PLAINTEXT://:9092,CONTROLLER://:9093
  listeners=PLAINTEXT://192.168.146.12:9092,CONTROLLER://192.168.146.12:9093

  # Name of listener used for communication between brokers.
  inter.broker.listener.name=PLAINTEXT

  # Listener name, hostname and port the broker will advertise to clients.
  # If not set, it uses the value for "listeners".
  #advertised.listeners=PLAINTEXT://localhost:9092

  # A comma-separated list of the names of the listeners used by the controller.
  # If no explicit mapping set in `listener.security.protocol.map`, default will be using PLAINTEXT protocol
  # This is required if running in KRaft mode.
  controller.listener.names=CONTROLLER

  # Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
  listener.security.protocol.map=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

  # The number of threads that the server uses for receiving requests from the network and sending responses to the network
  num.network.threads=3

  # The number of threads that the server uses for processing requests, which may include disk I/O
  num.io.threads=8

  # The send buffer (SO_SNDBUF) used by the socket server
  socket.send.buffer.bytes=102400

  # The receive buffer (SO_RCVBUF) used by the socket server
  socket.receive.buffer.bytes=102400

  # The maximum size of a request that the socket server will accept (protection against OOM)
  socket.request.max.bytes=104857600

  ############################# Log Basics #############################

  # A comma separated list of directories under which to store log files
  log.dirs=/home/software/kafka_2.13-3.5.0/kraft-combined-logs

  # The default number of log partitions per topic. More partitions allow greater
  # parallelism for consumption, but this will also result in more files across
  # the brokers.
  num.partitions=1

  # The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
  # This value is recommended to be increased for installations with data dirs located in RAID array.
  num.recovery.threads.per.data.dir=1

  ############################# Internal Topic Settings #############################

  # The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
  # For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
  offsets.topic.replication.factor=1
  transaction.state.log.replication.factor=1
  transaction.state.log.min.isr=1

  ############################# Log Flush Policy #############################

  # Messages are immediately written to the filesystem but by default we only fsync() to sync
  # the OS cache lazily. The following configurations control the flush of data to disk.
  # There are a few important trade-offs here:
  # 1. Durability: Unflushed data may be lost if you are not using replication.
  # 2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
  # 3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
  # The settings below allow one to configure the flush policy to flush data after a period of time or
  # every N messages (or both). This can be done globally and overridden on a per-topic basis.

  # The number of messages to accept before forcing a flush of data to disk
  #log.flush.interval.messages=10000

  # The maximum amount of time a message can sit in a log before we force a flush
  #log.flush.interval.ms=1000

  ############################# Log Retention Policy #############################

  # The following configurations control the disposal of log segments. The policy can
  # be set to delete segments after a period of time, or after a given size has accumulated.
  # A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
  # from the end of the log.

  # The minimum age of a log file to be eligible for deletion due to age
  log.retention.hours=72

  # A size-based retention policy for logs. Segments are pruned from the log unless the remaining
  # segments drop below log.retention.bytes. Functions independently of log.retention.hours.
  #log.retention.bytes=1073741824

  # The maximum size of a log segment file. When this size is reached a new log segment will be created.
  log.segment.bytes=1073741824

  # The interval at which log segments are checked to see if they can be deleted according
  # to the retention policies
  log.retention.check.interval.ms=300000

Once the file is configured, run the following commands to start the Kafka service:

    # Generate a cluster ID
    $ KAFKA_CLUSTER_ID="$(bin/kafka-storage.sh random-uuid)"
    # Format the log directories
    $ bin/kafka-storage.sh format -t $KAFKA_CLUSTER_ID -c config/kraft/server.properties
    # Start the Kafka service
    $ bin/kafka-server-start.sh config/kraft/server.properties

Step 2: Create a Kafka Topic

For Flink to read data from Kafka and write it to MatrixOne, we first need to create a Kafka topic named "matrixone". In the command below, the --bootstrap-server parameter specifies the Kafka listener address, 192.168.146.12:9092:

    $ bin/kafka-topics.sh --create --topic matrixone --bootstrap-server 192.168.146.12:9092

Step 3: Create the MatrixOne Table and Write the Flink Code

After connecting to the MatrixOne database, perform the following steps:

  1. Create the target table in MatrixOne (in the test database used earlier):

    CREATE TABLE `users` (
        `id` INT DEFAULT NULL,
        `name` VARCHAR(255) DEFAULT NULL,
        `age` INT DEFAULT NULL
    );

  2. Write the code in IDEA:

    In IDEA, create two classes, User.java and Kafka2Mo.java. They use Flink to read data from Kafka and write it into the MatrixOne database.

  package com.matrixone.flink.demo.entity;

  public class User {

      private int id;
      private String name;
      private int age;

      public int getId() {
          return id;
      }

      public void setId(int id) {
          this.id = id;
      }

      public String getName() {
          return name;
      }

      public void setName(String name) {
          this.name = name;
      }

      public int getAge() {
          return age;
      }

      public void setAge(int age) {
          this.age = age;
      }
  }
  package com.matrixone.flink.demo;

  import com.alibaba.fastjson2.JSON;
  import com.matrixone.flink.demo.entity.User;
  import org.apache.flink.api.common.eventtime.WatermarkStrategy;
  import org.apache.flink.api.common.serialization.AbstractDeserializationSchema;
  import org.apache.flink.connector.jdbc.JdbcExecutionOptions;
  import org.apache.flink.connector.jdbc.JdbcSink;
  import org.apache.flink.connector.jdbc.JdbcStatementBuilder;
  import org.apache.flink.connector.jdbc.internal.options.JdbcConnectorOptions;
  import org.apache.flink.connector.kafka.source.KafkaSource;
  import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
  import org.apache.flink.streaming.api.datastream.DataStreamSource;
  import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
  import org.apache.kafka.clients.consumer.OffsetResetStrategy;

  import java.nio.charset.StandardCharsets;

  /**
   * @author MatrixOne
   * @desc
   */
  public class Kafka2Mo {

      private static String srcServer = "192.168.146.12:9092";
      private static String srcTopic = "matrixone";
      private static String consumerGroup = "matrixone_group";

      private static String destHost = "192.168.146.10";
      private static Integer destPort = 6001;
      private static String destUserName = "root";
      private static String destPassword = "111";
      private static String destDataBase = "test";
      private static String destTable = "users";

      public static void main(String[] args) throws Exception {
          // Initialize the environment
          StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
          // Set the parallelism
          env.setParallelism(1);
          // Configure the Kafka source
          KafkaSource<User> source = KafkaSource.<User>builder()
                  // Kafka brokers
                  .setBootstrapServers(srcServer)
                  // Topic
                  .setTopics(srcTopic)
                  // Consumer group
                  .setGroupId(consumerGroup)
                  // Start from committed offsets; fall back to the latest offset when none exist
                  .setStartingOffsets(OffsetsInitializer.committedOffsets(OffsetResetStrategy.LATEST))
                  // Custom deserialization of the message payload
                  .setValueOnlyDeserializer(new AbstractDeserializationSchema<User>() {
                      @Override
                      public User deserialize(byte[] message) {
                          return JSON.parseObject(new String(message, StandardCharsets.UTF_8), User.class);
                      }
                  })
                  .build();
          DataStreamSource<User> kafkaSource = env.fromSource(source, WatermarkStrategy.noWatermarks(), "kafka_matrixone");
          //kafkaSource.print();
          // Configure the MatrixOne sink
          kafkaSource.addSink(JdbcSink.sink(
                  "insert into users (id,name,age) values(?,?,?)",
                  (JdbcStatementBuilder<User>) (preparedStatement, user) -> {
                      preparedStatement.setInt(1, user.getId());
                      preparedStatement.setString(2, user.getName());
                      preparedStatement.setInt(3, user.getAge());
                  },
                  JdbcExecutionOptions.builder()
                          // Default: 5000
                          .withBatchSize(1000)
                          // Default: 0
                          .withBatchIntervalMs(200)
                          // Maximum number of retries
                          .withMaxRetries(5)
                          .build(),
                  JdbcConnectorOptions.builder()
                          .setDBUrl("jdbc:mysql://" + destHost + ":" + destPort + "/" + destDataBase)
                          .setUsername(destUserName)
                          .setPassword(destPassword)
                          .setDriverName("com.mysql.cj.jdbc.Driver")
                          .setTableName(destTable)
                          .build()
          ));
          env.execute();
      }
  }
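The deserializer in Kafka2Mo delegates to fastjson2's JSON.parseObject to map each message to a User. As a self-contained illustration of the bytes-to-object contract, the sketch below extracts one field using only the JDK, with a naive regex standing in for fastjson2 (illustrative names; not suitable for production JSON parsing):

```java
import java.nio.charset.StandardCharsets;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class UserParseSketch {

    // Extract the id field from a flat JSON message such as
    // {"id": 10, "name": "xiaowang", "age": 22}.
    static int parseId(byte[] message) {
        String json = new String(message, StandardCharsets.UTF_8);
        Matcher m = Pattern.compile("\"id\"\\s*:\\s*(\\d+)").matcher(json);
        if (!m.find()) {
            throw new IllegalArgumentException("no id field: " + json);
        }
        return Integer.parseInt(m.group(1));
    }

    public static void main(String[] args) {
        byte[] msg = "{\"id\": 10, \"name\": \"xiaowang\", \"age\": 22}"
                .getBytes(StandardCharsets.UTF_8);
        System.out.println(parseId(msg)); // prints 10
    }
}
```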

Once the code is written, you can run the Flink job: in IDEA, open Kafka2Mo.java and run its main() method.

Step 4: Generate Data

Using the console producer tool that ships with Kafka, you can add data to the "matrixone" topic. In the command below, --topic specifies the target topic and --bootstrap-server the listener address of the Kafka service.

    bin/kafka-console-producer.sh --topic matrixone --bootstrap-server 192.168.146.12:9092

After running the command, the console waits for input. Simply type the message values (value), one message per line (separated by newlines), as shown below:

    {"id": 10, "name": "xiaowang", "age": 22}
    {"id": 20, "name": "xiaozhang", "age": 24}
    {"id": 30, "name": "xiaogao", "age": 18}
    {"id": 40, "name": "xiaowu", "age": 20}
    {"id": 50, "name": "xiaoli", "age": 42}
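Each of the lines above is a standalone JSON object whose keys match the User fields. If you prefer to generate test lines programmatically instead of typing them, a simple sketch (the class and helper names are illustrative):

```java
public class UserJsonSketch {

    // Build one console-producer line in the same flat JSON shape
    // as the demo messages.
    static String toJsonLine(int id, String name, int age) {
        return String.format("{\"id\": %d, \"name\": \"%s\", \"age\": %d}", id, name, age);
    }

    public static void main(String[] args) {
        System.out.println(toJsonLine(10, "xiaowang", 22));
        // prints {"id": 10, "name": "xiaowang", "age": 22}
    }
}
```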

(Figure 3: console producer input)

Step 5: Check the Results

Run the following SQL in MatrixOne to view the result:

    mysql> select * from test.users;
    +------+-----------+------+
    | id   | name      | age  |
    +------+-----------+------+
    |   10 | xiaowang  |   22 |
    |   20 | xiaozhang |   24 |
    |   30 | xiaogao   |   18 |
    |   40 | xiaowu    |   20 |
    |   50 | xiaoli    |   42 |
    +------+-----------+------+
    5 rows in set (0.01 sec)