State Processor API

Apache Flink’s State Processor API provides powerful functionality for reading, writing, and modifying savepoints and checkpoints using Flink’s batch DataSet API. This is useful for tasks such as analyzing state for interesting patterns, troubleshooting or auditing jobs by checking for discrepancies, and bootstrapping state for new applications.

Abstraction

To understand how to best interact with savepoints in a batch context it is important to have a clear mental model of how the data in Flink state relates to a traditional relational database.

A database can be thought of as one or more namespaces, each containing a collection of tables. Those tables in turn contain columns whose values have some intrinsic relationship between them, such as being scoped under the same key.

A savepoint represents the state of a Flink job at a particular point in time, and a job is made up of many operators. Those operators contain various kinds of state, both partitioned or keyed state and non-partitioned or operator state.

  MapStateDescriptor<Integer, Double> CURRENCY_RATES = new MapStateDescriptor<>("rates", Types.INT, Types.DOUBLE);

  class CurrencyConverter extends BroadcastProcessFunction<Transaction, CurrencyRate, Transaction> {

    public void processElement(
          Transaction value,
          ReadOnlyContext ctx,
          Collector<Transaction> out) throws Exception {

      Double rate = ctx.getBroadcastState(CURRENCY_RATES).get(value.currencyId);
      if (rate != null) {
        value.amount *= rate;
      }
      out.collect(value);
    }

    public void processBroadcastElement(
          CurrencyRate value,
          Context ctx,
          Collector<Transaction> out) throws Exception {
      ctx.getBroadcastState(CURRENCY_RATES).put(value.currencyId, value.rate);
    }
  }

  class Summarize extends RichFlatMapFunction<Transaction, Summary> {
    transient ValueState<Double> totalState;
    transient ValueState<Integer> countState;

    public void open(Configuration configuration) throws Exception {
      totalState = getRuntimeContext().getState(new ValueStateDescriptor<>("total", Types.DOUBLE));
      countState = getRuntimeContext().getState(new ValueStateDescriptor<>("count", Types.INT));
    }

    public void flatMap(Transaction value, Collector<Summary> out) throws Exception {
      Summary summary = new Summary();
      summary.total = value.amount;
      summary.count = 1;

      Double currentTotal = totalState.value();
      if (currentTotal != null) {
        summary.total += currentTotal;
      }

      Integer currentCount = countState.value();
      if (currentCount != null) {
        summary.count += currentCount;
      }

      // Persist both running aggregates before emitting the summary.
      totalState.update(summary.total);
      countState.update(summary.count);

      out.collect(summary);
    }
  }

  DataStream<Transaction> transactions = ...
  BroadcastStream<CurrencyRate> rates = ...

  transactions
    .connect(rates)
    .process(new CurrencyConverter())
    .uid("currency_converter")
    .keyBy(transaction -> transaction.accountId)
    .flatMap(new Summarize())
    .uid("summarize")

This job contains multiple operators along with various kinds of state. When analyzing that state we can first scope data by its operator, named by setting its uid. Within each operator we can look at the registered states. CurrencyConverter has a broadcast state, which is a type of non-partitioned operator state. In general, there is no relationship between any two elements in an operator state and so we can look at each value as being its own row. Contrast this with Summarize, which contains two keyed states. Because both states are scoped under the same key we can safely assume there exists some relationship between the two values. Therefore, keyed state is best understood as a single table per operator containing one key column along with n value columns, one for each registered state. All of this means that the state for this job could be described using the following pseudo-SQL commands.

  CREATE NAMESPACE currency_converter;

  CREATE TABLE currency_converter.rates (
      value Tuple2<Integer, Double>
  );

  CREATE NAMESPACE summarize;

  CREATE TABLE summarize.keyed_state (
      key   INTEGER PRIMARY KEY,
      total DOUBLE,
      count INTEGER
  );

In general, the savepoint ↔ database relationship can be summarized as:

  * A savepoint is a database
  * An operator is a namespace named by its uid
  * Each operator state represents a single table
  * Each element in an operator state represents a single row in that table
  * Each operator containing keyed state has a single keyed_state table
  * Each keyed_state table has one key column mapping the key value of the operator
  * Each registered state represents a single column in the table
  * Each row in the table maps to a single key

Reading State

Reading state begins by specifying the path to a valid savepoint or checkpoint along with the StateBackend that should be used to restore the data. The compatibility guarantees for restoring state are identical to those when restoring a DataStream application.

Java:

  ExecutionEnvironment bEnv = ExecutionEnvironment.getExecutionEnvironment();
  ExistingSavepoint savepoint = Savepoint.load(bEnv, "hdfs://path/", new RocksDBStateBackend());

Scala:

  val bEnv = ExecutionEnvironment.getExecutionEnvironment()
  val savepoint = Savepoint.load(bEnv, "hdfs://path/", new RocksDBStateBackend())

When reading operator state, simply specify the operator uid, state name, and type information.

Java:

  DataSet<Integer> listState = savepoint.readListState(
      "my-uid",
      "list-state",
      Types.INT);

  DataSet<Integer> unionState = savepoint.readUnionState(
      "my-uid",
      "union-state",
      Types.INT);

  DataSet<Tuple2<Integer, Integer>> broadcastState = savepoint.readBroadcastState(
      "my-uid",
      "broadcast-state",
      Types.INT,
      Types.INT);

Scala:

  val listState = savepoint.readListState(
      "my-uid",
      "list-state",
      Types.INT)

  val unionState = savepoint.readUnionState(
      "my-uid",
      "union-state",
      Types.INT)

  val broadcastState = savepoint.readBroadcastState(
      "my-uid",
      "broadcast-state",
      Types.INT,
      Types.INT)

A custom TypeSerializer may also be specified if one was used in the StateDescriptor for the state.

Java:

  DataSet<Integer> listState = savepoint.readListState(
      "uid",
      "list-state",
      Types.INT,
      new MyCustomIntSerializer());

Scala:

  val listState = savepoint.readListState(
      "uid",
      "list-state",
      Types.INT,
      new MyCustomIntSerializer())

When reading keyed state, users specify a KeyedStateReaderFunction to allow reading arbitrary columns and complex state types such as ListState, MapState, and AggregatingState. This means if an operator contains a stateful process function such as:

Java:

  public class StatefulFunctionWithTime extends KeyedProcessFunction<Integer, Integer, Void> {

    ValueState<Integer> state;

    @Override
    public void open(Configuration parameters) {
      ValueStateDescriptor<Integer> stateDescriptor = new ValueStateDescriptor<>("state", Types.INT);
      state = getRuntimeContext().getState(stateDescriptor);
    }

    @Override
    public void processElement(Integer value, Context ctx, Collector<Void> out) throws Exception {
      state.update(value + 1);
    }
  }

Scala:

  class StatefulFunctionWithTime extends KeyedProcessFunction[Integer, Integer, Void] {

    var state: ValueState[Integer] = _

    override def open(parameters: Configuration): Unit = {
      val stateDescriptor = new ValueStateDescriptor("state", Types.INT)
      state = getRuntimeContext().getState(stateDescriptor)
    }

    override def processElement(
        value: Integer,
        ctx: KeyedProcessFunction[Integer, Integer, Void]#Context,
        out: Collector[Void]): Unit = {
      state.update(value + 1)
    }
  }

Then it can be read by defining an output type and a corresponding KeyedStateReaderFunction.

Java:

  class KeyedState {
    Integer key;
    Integer value;
  }

  class ReaderFunction extends KeyedStateReaderFunction<Integer, KeyedState> {

    ValueState<Integer> state;

    @Override
    public void open(Configuration parameters) {
      ValueStateDescriptor<Integer> stateDescriptor = new ValueStateDescriptor<>("state", Types.INT);
      state = getRuntimeContext().getState(stateDescriptor);
    }

    @Override
    public void readKey(
        Integer key,
        Context ctx,
        Collector<KeyedState> out) throws Exception {

      KeyedState data = new KeyedState();
      data.key = key;
      data.value = state.value();
      out.collect(data);
    }
  }

  DataSet<KeyedState> keyedState = savepoint.readKeyedState("my-uid", new ReaderFunction());

Scala:

  case class KeyedState(key: Int, value: Int)

  class ReaderFunction extends KeyedStateReaderFunction[Integer, KeyedState] {

    var state: ValueState[Integer] = _

    override def open(parameters: Configuration): Unit = {
      val stateDescriptor = new ValueStateDescriptor("state", Types.INT)
      state = getRuntimeContext().getState(stateDescriptor)
    }

    override def readKey(
        key: Integer,
        ctx: Context,
        out: Collector[KeyedState]): Unit = {

      val data = KeyedState(key, state.value())
      out.collect(data)
    }
  }

  val keyedState = savepoint.readKeyedState("my-uid", new ReaderFunction())

Note: When using a KeyedStateReaderFunction all state descriptors must be registered eagerly inside of open. Any attempt to call RuntimeContext#getState, RuntimeContext#getListState, or RuntimeContext#getMapState outside of open will result in a RuntimeException.
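
The reader function is not limited to ValueState; the complex state types mentioned above can be read the same way by registering their descriptors in open. Below is a minimal sketch, assuming the operator had also registered a ListState named "items"; that state name and the output class are hypothetical.

  class KeyedStateWithList {
    Integer key;
    Integer value;
    List<Integer> items;
  }

  class ListReaderFunction extends KeyedStateReaderFunction<Integer, KeyedStateWithList> {

    ValueState<Integer> state;
    ListState<Integer> itemsState;

    @Override
    public void open(Configuration parameters) {
      // All descriptors are registered eagerly here, as required by the note above.
      state = getRuntimeContext().getState(new ValueStateDescriptor<>("state", Types.INT));
      itemsState = getRuntimeContext().getListState(new ListStateDescriptor<>("items", Types.INT));
    }

    @Override
    public void readKey(
        Integer key,
        Context ctx,
        Collector<KeyedStateWithList> out) throws Exception {

      KeyedStateWithList data = new KeyedStateWithList();
      data.key = key;
      data.value = state.value();

      // Copy the list entries for this key into the output row.
      data.items = new ArrayList<>();
      for (Integer item : itemsState.get()) {
        data.items.add(item);
      }

      out.collect(data);
    }
  }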

Writing New Savepoints

State writers are based around the abstraction of Savepoint, where one Savepoint may have many operators and the state for any particular operator is created using a BootstrapTransformation.

A BootstrapTransformation starts with a DataSet containing the values that are to be written into state. The transformation may optionally be keyed, depending on whether you are writing keyed or operator state. Finally, a bootstrap function is applied to the transformation; Flink supplies KeyedStateBootstrapFunction for writing keyed state, StateBootstrapFunction for writing non-keyed operator state, and BroadcastStateBootstrapFunction for writing broadcast state.

Java:

  public class Account {
    public int id;
    public double amount;
    public long timestamp;
  }

  public class AccountBootstrapper extends KeyedStateBootstrapFunction<Integer, Account> {

    ValueState<Double> state;

    @Override
    public void open(Configuration parameters) {
      ValueStateDescriptor<Double> descriptor = new ValueStateDescriptor<>("total", Types.DOUBLE);
      state = getRuntimeContext().getState(descriptor);
    }

    @Override
    public void processElement(Account value, Context ctx) throws Exception {
      state.update(value.amount);
    }
  }

  ExecutionEnvironment bEnv = ExecutionEnvironment.getExecutionEnvironment();

  DataSet<Account> accountDataSet = bEnv.fromCollection(accounts);

  BootstrapTransformation<Account> transformation = OperatorTransformation
    .bootstrapWith(accountDataSet)
    .keyBy(acc -> acc.id)
    .transform(new AccountBootstrapper());

Scala:

  case class Account(id: Int, amount: Double, timestamp: Long)

  class AccountBootstrapper extends KeyedStateBootstrapFunction[Integer, Account] {

    var state: ValueState[Double] = _

    override def open(parameters: Configuration): Unit = {
      val descriptor = new ValueStateDescriptor[Double]("total", Types.DOUBLE)
      state = getRuntimeContext().getState(descriptor)
    }

    @throws[Exception]
    override def processElement(value: Account, ctx: Context): Unit = {
      state.update(value.amount)
    }
  }

  val bEnv = ExecutionEnvironment.getExecutionEnvironment()

  val accountDataSet = bEnv.fromCollection(accounts)

  val transformation = OperatorTransformation
    .bootstrapWith(accountDataSet)
    .keyBy(acc => acc.id)
    .transform(new AccountBootstrapper())
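
Non-keyed operator state and broadcast state are written the same way, just without the keyBy step. As a rough sketch, the broadcast "rates" state used by the CurrencyConverter above might be bootstrapped along these lines; the currencyRates collection and the use of a transform overload accepting a BroadcastStateBootstrapFunction are assumptions here.

  public class CurrencyBootstrapper extends BroadcastStateBootstrapFunction<CurrencyRate> {

    @Override
    public void processElement(CurrencyRate value, Context ctx) throws Exception {
      // Write each rate into the broadcast state registered under the CURRENCY_RATES descriptor.
      ctx.getBroadcastState(CURRENCY_RATES).put(value.currencyId, value.rate);
    }
  }

  // currencyRates is an assumed collection of CurrencyRate records.
  DataSet<CurrencyRate> currencyDataSet = bEnv.fromCollection(currencyRates);

  BootstrapTransformation<CurrencyRate> broadcastTransformation = OperatorTransformation
    .bootstrapWith(currencyDataSet)
    .transform(new CurrencyBootstrapper());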

The KeyedStateBootstrapFunction supports setting event time and processing time timers. The timers will not fire inside the bootstrap function and only become active once restored within a DataStream application. If a processing time timer is set but the state is not restored until after that time has passed, the timer will fire immediately upon start.
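
As a minimal sketch, registering such a timer from within the bootstrap function might look like the following; the one-day offset is purely illustrative, and the availability of a timer service on the bootstrap Context is assumed from the statement above.

  public class AccountBootstrapperWithTimers extends KeyedStateBootstrapFunction<Integer, Account> {

    ValueState<Double> state;

    @Override
    public void open(Configuration parameters) {
      ValueStateDescriptor<Double> descriptor = new ValueStateDescriptor<>("total", Types.DOUBLE);
      state = getRuntimeContext().getState(descriptor);
    }

    @Override
    public void processElement(Account value, Context ctx) throws Exception {
      state.update(value.amount);

      // Register an event time timer relative to the record's timestamp.
      // It only becomes active once the savepoint is restored by a DataStream job.
      long oneDayMillis = 24 * 60 * 60 * 1000L;
      ctx.timerService().registerEventTimeTimer(value.timestamp + oneDayMillis);
    }
  }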

Once one or more transformations have been created they may be combined into a single Savepoint. Savepoints are created using a state backend and max parallelism; they may contain any number of operators.

Java:

  Savepoint
    .create(backend, 128)
    .withOperator("uid1", transformation1)
    .withOperator("uid2", transformation2)
    .write(savepointPath);

Scala:

  Savepoint
    .create(backend, 128)
    .withOperator("uid1", transformation1)
    .withOperator("uid2", transformation2)
    .write(savepointPath)

Besides creating a savepoint from scratch, you can base one off an existing savepoint, such as when bootstrapping a single new operator for an existing job.

Java:

  Savepoint
    .load(backend, oldPath)
    .withOperator("uid", transformation)
    .write(newPath);

Scala:

  Savepoint
    .load(backend, oldPath)
    .withOperator("uid", transformation)
    .write(newPath)

Note: When basing a new savepoint on existing state, the State Processor API makes a shallow copy of the pointers to the existing operators. This means that both savepoints share state, and one cannot be deleted without corrupting the other!