5.2. Black Hole Connector

The Black Hole connector is designed primarily for high performance testing of other components. It works like the /dev/null device on Unix-like operating systems for data writing, and like /dev/null or /dev/zero for data reading. However, it also has some other features that allow testing Presto in a more controlled manner. Metadata for any tables created via this connector is kept in memory on the coordinator and discarded when Presto restarts. Created tables are by default always empty; any data written to them is ignored, and data reads return no rows.

During table creation, a desired row count can be specified. In that case, writes will behave in the same way, but reads will always return the specified number of constant rows. You should not rely on the content of such rows.

Warning

This connector will not work properly with multiple coordinators, since each coordinator will have different metadata.

Configuration

To configure the Black Hole connector, create a catalog properties file etc/catalog/blackhole.properties with the following contents:

    connector.name=blackhole
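After restarting Presto, a quick way to confirm the catalog is registered (the schemas listed may vary by version):

    SHOW SCHEMAS FROM blackhole;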

Examples

Create a table using the blackhole connector:

    CREATE TABLE blackhole.test.nation AS
    SELECT * FROM tpch.tiny.nation;

Insert data into a table in the blackhole connector:

    INSERT INTO blackhole.test.nation
    SELECT * FROM tpch.tiny.nation;

Select from the blackhole connector:

    SELECT count(*) FROM blackhole.test.nation;

The above query will always return zero.

Create a table with a constant number of rows (500 * 1000 * 2000):

    CREATE TABLE blackhole.test.nation (
      nationkey bigint,
      name varchar
    )
    WITH (
      split_count = 500,
      pages_per_split = 1000,
      rows_per_page = 2000
    );

Now query it:

    SELECT count(*) FROM blackhole.test.nation;

The above query will return 1,000,000,000 (500 splits * 1000 pages per split * 2000 rows per page).

The length of variable-length columns can be controlled using the field_length table property (default value: 16):

    CREATE TABLE blackhole.test.nation (
      nationkey bigint,
      name varchar
    )
    WITH (
      split_count = 500,
      pages_per_split = 1000,
      rows_per_page = 2000,
      field_length = 100
    );
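To see the property take effect, you can inspect the length of a generated value. This is a minimal sketch; it assumes the connector pads generated variable-length values to the configured field_length:

    -- expected to report 100, matching the field_length above
    SELECT length(name) FROM blackhole.test.nation LIMIT 1;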

The consuming and producing rate can be slowed down using the page_processing_delay table property. Setting this property to 5s will lead to a 5 second delay before consuming or producing a new page:

    CREATE TABLE blackhole.test.delay (
      dummy bigint
    )
    WITH (
      split_count = 1,
      pages_per_split = 1,
      rows_per_page = 1,
      page_processing_delay = '5s'
    );
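Reading from this table should then take at least the configured delay. A rough check (actual timing varies by environment):

    -- should pause for roughly 5 seconds before returning 1
    SELECT count(*) FROM blackhole.test.delay;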