Task Submission and Execution via the JDBC API

The first way is to depend on the JDBC module in your pom:

  <dependency>
    <groupId>org.apache.linkis</groupId>
    <artifactId>linkis-jdbc-driver</artifactId>
    <version>${linkis.version}</version>
  </dependency>

Note: The module has not been deployed to the central Maven repository, so you need to run mvn install -Dmaven.test.skip=true in the linkis-computation-governance/linkis-jdbc-driver directory to install it locally.

The second way is to package and compile the module yourself:

  1. Enter the linkis-jdbc-driver directory in the Linkis project and run mvn assembly:assembly -Dmaven.test.skip=true in the terminal. This command skips running the unit tests and compiling the test code, and packages the dependencies required by the JDBC module into the Jar.
  2. After packaging completes, two Jars are generated in the module's target directory; the one with dependencies in its name is the Jar we need.

Create a Java test class LinkisJDBCTest; the purpose of each call is explained in the comments:

  package org.apache.linkis.jdbc.test;

  import java.sql.*;

  public class LinkisJDBCTest {
      public static void main(String[] args) throws SQLException, ClassNotFoundException {
          // 1. Load the driver: org.apache.linkis.ujes.jdbc.UJESSQLDriver
          Class.forName("org.apache.linkis.ujes.jdbc.UJESSQLDriver");
          // 2. Get a Connection: jdbc:linkis://gatewayIP:gatewayPort/dbName?EngineType=hive&creator=test, plus username and password
          Connection connection = DriverManager.getConnection(
                  "jdbc:linkis://127.0.0.1:9001/default?EngineType=hive&creator=test", "hadoop", "hadoop");
          // 3. Create a Statement and submit the query
          Statement st = connection.createStatement();
          ResultSet rs = st.executeQuery("show tables");
          // 4. Get the result: print column name, column type and value for every row
          while (rs.next()) {
              ResultSetMetaData metaData = rs.getMetaData();
              for (int i = 1; i <= metaData.getColumnCount(); i++) {
                  System.out.print(metaData.getColumnName(i) + ":" + metaData.getColumnTypeName(i) + ": " + rs.getObject(i) + " ");
              }
              System.out.println();
          }
          // 5. Close resources
          rs.close();
          st.close();
          connection.close();
      }
  }
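
Because Connection, Statement, and ResultSet all implement AutoCloseable, the three explicit close() calls can be replaced with a try-with-resources block, which releases the resources automatically even if the query throws. A minimal sketch of this variant (the class name is illustrative; the gateway address and credentials are the same placeholders as above):

  import java.sql.*;

  public class LinkisJDBCTryWithResources {
      public static void main(String[] args) throws SQLException, ClassNotFoundException {
          Class.forName("org.apache.linkis.ujes.jdbc.UJESSQLDriver");
          // Resources declared here are closed automatically in reverse order of declaration
          try (Connection connection = DriverManager.getConnection(
                       "jdbc:linkis://127.0.0.1:9001/default?EngineType=hive&creator=test", "hadoop", "hadoop");
               Statement st = connection.createStatement();
               ResultSet rs = st.executeQuery("show tables")) {
              while (rs.next()) {
                  System.out.println(rs.getObject(1));
              }
          }
      }
  }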
  1. Here EngineType specifies the engine type to use: Spark, Hive, Presto, Shell, etc. are supported.
  2. creator specifies the application type, and is used for resource isolation between applications (both parameters are illustrated in the sketch below).
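
For instance, to submit the same query to Spark on behalf of a hypothetical application named myApp, only the URL parameters change. This is an assumption-laden sketch: the lowercase spelling of the EngineType value follows the hive example above, and the gateway address and credentials remain placeholders:

  // Hypothetical variant: Spark engine, application name "myApp"
  Connection connection = DriverManager.getConnection(
          "jdbc:linkis://127.0.0.1:9001/default?EngineType=spark&creator=myApp", "hadoop", "hadoop");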