Feature Column Support in SQLFlow XGBoost models

Overview

SQLFlow extends the SQL grammar to support data pre-processing with COLUMN clauses. For example, we can use CATEGORY_HASH to hash a string column into an integer column, a common pre-processing operation in NLP tasks.

```sql
SELECT string_column1, int_column2, class FROM xgboost.gbtree
TO TRAIN XGBoostModel
COLUMN INDICATOR(CATEGORY_HASH(string_column1, 10)), int_column2
LABEL class
INTO sqlflow_xgboost_model.my_model;
```

Currently, COLUMN clauses are supported in SQLFlow TensorFlow models: the clauses are transformed into TensorFlow feature column API calls inside the SQLFlow codegen implementation.

However, XGBoost has no feature column APIs similar to TensorFlow's. Currently, XGBoost models only support simple column names like c1, c2, c3 in COLUMN clauses, and no data pre-processing is supported. As a result, we cannot use XGBoost to train models that accept string columns as input.

This design explains how SQLFlow supports feature columns in XGBoost models.

Supported feature columns in TensorFlow models

SQLFlow COLUMN clauses support the following feature columns, which are implemented by TensorFlow APIs.

| SQLFlow keyword | TensorFlow API | Description |
|-----------------|----------------|-------------|
| DENSE | tf.feature_column.numeric_column | Raw numeric feature column without any pre-processing |
| BUCKET | tf.feature_column.bucketized_column | Transforms an input integer into a bucket id according to the given boundaries |
| CATEGORY_ID | tf.feature_column.categorical_column_with_identity | Identity mapping of an integer feature column |
| CATEGORY_HASH | tf.feature_column.categorical_column_with_hash_bucket | Uses a hash algorithm to map a string or integer to a category id |
| SEQ_CATEGORY_ID | tf.feature_column.sequence_categorical_column_with_identity | Sequence-data version of CATEGORY_ID |
| CROSS | tf.feature_column.crossed_column | Combines multiple categorical features using a hash algorithm |
| INDICATOR | tf.feature_column.indicator_column | Transforms a category id into a multi-hot representation |
| EMBEDDING | tf.feature_column.embedding_column | Transforms a category id into an embedding representation |
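For illustration, here is a minimal sketch of what the COLUMN clause from the earlier example roughly corresponds to in TensorFlow feature column API calls; this is not the exact SQLFlow codegen output:

```python
import tensorflow as tf

# COLUMN INDICATOR(CATEGORY_HASH(string_column1, 10)), int_column2
feature_columns = [
    tf.feature_column.indicator_column(
        tf.feature_column.categorical_column_with_hash_bucket(
            "string_column1", hash_bucket_size=10)),
    tf.feature_column.numeric_column("int_column2"),
]
```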

Feature column design in XGBoost models

The training process of an XGBoost model inside SQLFlow is as follows (a simplified sketch follows the list):

  • Step 1: Read data from the database. Call the db.db_generator() method to get a Python generator that yields each row in the database.
  • Step 2: Dump an SVM file. Call the dump_dmatrix() method to write the raw data into an SVM (LibSVM-format) file. This file is ready to be loaded as an XGBoost DMatrix.
  • Step 3: Train. Load the dumped file as an XGBoost DMatrix and start training. The training process is performed by calling the xgboost.train API.
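The following sketch illustrates this flow. It is simplified: dump_dmatrix here is a stand-in for the real SQLFlow runtime helper, row_generator stands for the output of db.db_generator(), and the DMatrix text loading assumes a LibSVM-compatible XGBoost version.

```python
import xgboost as xgb

def dump_dmatrix(path, row_generator):
    # Write (features, label) rows into a LibSVM-format text file.
    with open(path, "w") as f:
        for features, label in row_generator:
            items = " ".join("%d:%s" % (i, v) for i, v in enumerate(features))
            f.write("%s %s\n" % (label, items))

def train(row_generator, booster_params, num_boost_round):
    dump_dmatrix("train.svm", row_generator)   # Step 2: dump the raw data
    dtrain = xgb.DMatrix("train.svm")          # Step 3: load as DMatrix
    return xgb.train(booster_params, dtrain, num_boost_round=num_boost_round)
```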

As discussed in COLUMN clause for XGBoost, there are three candidate ways to support feature columns in XGBoost models:

  • Method 1. Perform the feature column transformation during step 1 and step 2, so that data pre-processing is done before dumping to the SVM file. This method is suitable for offline training (both standalone and distributed), prediction, and evaluation, since the same transformation Python code can be generated for all three. But it is not suitable for online serving, because online serving usually uses other libraries or languages (like C++/Java) that cannot run the transformation code SQLFlow generates in Python.

  • Method 2. Modify the training iteration of XGBoost and insert transformation code into each iteration. However, modifying the training iteration of XGBoost is not easy. Moreover, this method is also unsuitable for online serving, for the same reason as Method 1.

  • Method 3. Combine data pre-processing and model training into a sklearn pipeline (see the sketch after this list). Since a sklearn pipeline can be saved as a PMML file by sklearn2pmml or Nyoka, this method is suitable for standalone training, offline prediction, offline evaluation, and online serving. Distributed training of a sklearn pipeline can be performed using Dask. However, a distributed training pipeline cannot be saved as a PMML file directly, because Dask does not use the native sklearn APIs but mocks them to build a pipeline, and these mocked APIs cannot be saved. Another problem is that sklearn provides only a few data pre-processing transformers; for example, hashing a string to a single integer is not supported in sklearn. Of course, we could add more data pre-processing transformers, but these newly added transformers could not be saved into a PMML file.
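A minimal sketch of Method 3, assuming sklearn2pmml is installed; the toy data and file name are illustrative only. A custom string-hashing transformer inserted before the model in such a pipeline could not be exported to PMML, which is the limitation described above.

```python
import numpy as np
from sklearn2pmml import PMMLPipeline, sklearn2pmml
from xgboost import XGBClassifier

X = np.random.rand(100, 4)        # toy numeric features
y = np.random.randint(0, 2, 100)  # toy binary labels

# Pre-processing steps would go before the model step in this pipeline.
pipeline = PMMLPipeline([("model", XGBClassifier(n_estimators=10))])
pipeline.fit(X, y)
sklearn2pmml(pipeline, "model.pmml")  # export the whole pipeline as PMML
```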

In summary, the most critical issue is how data pre-processing transformers can be saved for online serving. After investigating the online serving platforms in the company (Arks, etc.), we found that data pre-processing steps are usually not saved in PMML or Treelite files; the online serving platform provides plugins for users to choose their data pre-processing steps instead of loading them from PMML or Treelite files. Therefore, we prefer Method 1 to implement feature columns in XGBoost models.

The feature column transformers in Python can be implemented as:

```python
class BaseFeatureColumnTransformer(object):
    def __call__(self, inputs):
        raise NotImplementedError()

    def set_column_names(self, column_names):
        self.column_names = column_names


class NumericColumnTransformer(BaseFeatureColumnTransformer):
    # `key` is the column name inside the `SELECT` statement
    def __init__(self, key):
        self.key = key

    def set_column_names(self, column_names):
        BaseFeatureColumnTransformer.set_column_names(self, column_names)
        self.index = self.column_names.index(self.key)

    # `inputs` contains all raw column data;
    # NumericColumnTransformer only takes the column indicated by `index`
    def __call__(self, inputs):
        return inputs[self.index]


# CategoricalColumnTransformer is the base class of all category columns.
# This base class is designed to do some checks. For example, `INDICATOR`
# only accepts a category column as its input.
class CategoricalColumnTransformer(BaseFeatureColumnTransformer):
    pass


class BucketizedColumnTransformer(CategoricalColumnTransformer):
    def __init__(self, key, boundaries, default_value=None):
        self.key = key
        self.boundaries = boundaries
        self.default_value = default_value

    def set_column_names(self, column_names):
        BaseFeatureColumnTransformer.set_column_names(self, column_names)
        self.index = self.column_names.index(self.key)

    def __call__(self, inputs):
        value = inputs[self.index]
        # Return the id of the bucket that `value` falls into, where the
        # buckets are (-inf, b0), [b0, b1), ..., [b_{n-1}, +inf).
        for idx, boundary in enumerate(self.boundaries):
            if value < boundary:
                return idx
        return len(self.boundaries)


class CrossedColumnTransformer(BaseFeatureColumnTransformer):
    def __init__(self, keys, hash_bucket_size):
        self.keys = keys
        self.hash_bucket_size = hash_bucket_size

    def set_column_names(self, column_names):
        BaseFeatureColumnTransformer.set_column_names(self, column_names)
        self.column_indices = [self.column_names.index(key) for key in self.keys]

    def _cross(self, transformed_inputs): ...

    def __call__(self, inputs):
        selected_inputs = [inputs[idx] for idx in self.column_indices]
        return self._cross(selected_inputs)


class ComposedFeatureColumnTransformer(BaseFeatureColumnTransformer):
    def __init__(self, *transformers):
        self.transformers = transformers

    def set_column_names(self, column_names):
        BaseFeatureColumnTransformer.set_column_names(self, column_names)
        for t in self.transformers:
            t.set_column_names(column_names)

    def __call__(self, inputs):
        return [t(inputs) for t in self.transformers]
```

For example, the column clause COLUMN INDICATOR(CATEGORY_HASH(string_column1, 10)), int_column2 would finally be transformed into the following Python calls:

```python
transform_fn = ComposedFeatureColumnTransformer(
    IndicatorColumnTransformer(
        CategoryColumnWithHashBucketTransformer(key="string_column1",
                                                hash_bucket_size=10)),
    NumericColumnTransformer(key="int_column2"))
```

Then we pass transform_fn to the runtime.xgboost.train method. Inside runtime.xgboost.train, we transform the raw data from db.db_generator(...) by calling the transform_fn.__call__ method. The set_column_names method is called once, when the table schema is obtained at runtime, so that the index of each key can be inferred in the Python runtime. The transformed data is then written into an SVM file, which can be loaded in the subsequent training step.
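A minimal sketch of how runtime.xgboost.train could apply transform_fn; column_names and row_generator are placeholders for values provided by the real runtime, and the flattening of transformer outputs into SVM columns is simplified here:

```python
def transform_and_dump(transform_fn, column_names, row_generator, svm_path):
    # set_column_names is called once, after the table schema is known.
    transform_fn.set_column_names(column_names)
    with open(svm_path, "w") as f:
        for features, label in row_generator:
            transformed = transform_fn(features)  # apply all transformers
            items = " ".join(
                "%d:%s" % (i, v) for i, v in enumerate(transformed))
            f.write("%s %s\n" % (label, items))
```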

Another concern is that we should perform the same data pre-processing in the prediction/evaluation stage. Therefore, we should save the feature columns used in training so that they can be loaded in the prediction/evaluation stage. Besides, the codegen for the prediction/evaluation stage should generate the same transformation code as the training stage.

Note that EMBEDDING is not supported in this design. This is because the EMBEDDING feature column may contain trainable parameters, and these parameters cannot be updated during the XGBoost training process.

Export the XGBoost model to PMML/Treelite files

XGBoost provides two kinds of APIs to train a model:

  • xgboost.train. We use this API in our current implementation. The returned Booster can be saved in a format that can be loaded by Treelite APIs, but not by PMML APIs.

  • xgboost.XGBClassifier/XGBRegressor/XGBRanker. sklearn2pmml and Nyoka can only export models built by these APIs to the PMML format. But these APIs are not easy to use, because:

    • We must know beforehand whether the model is a classifier, regressor, or ranker.
    • The constructors of xgboost.XGBClassifier/XGBRegressor/XGBRanker mix the Booster parameters and training parameters together. For example, booster is one of the Booster parameters and n_estimators is one of the training parameters, but both of them appear in the constructors. This makes it hard to distinguish model parameters from training parameters in the SQLFlow code.
    • Some parameter names in xgboost.XGBClassifier/XGBRegressor/XGBRanker differ from those in xgboost.train. For example, n_estimators in xgboost.XGBClassifier/XGBRegressor/XGBRanker corresponds to num_boost_round in xgboost.train, as the snippet below illustrates.
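For illustration, both of the following snippets train 100 boosting rounds; the toy data is only for demonstration:

```python
import numpy as np
import xgboost as xgb

X = np.random.rand(50, 3)        # toy features
y = np.random.randint(0, 2, 50)  # toy binary labels
dtrain = xgb.DMatrix(X, label=y)

# Low-level API: the number of rounds is `num_boost_round`.
booster = xgb.train({"objective": "binary:logistic"}, dtrain,
                    num_boost_round=100)

# sklearn-style API: the same setting is called `n_estimators`.
clf = xgb.XGBClassifier(n_estimators=100).fit(X, y)
```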

Therefore, we prefer the xgboost.train API for training in this design, and export PMML/Treelite files in the following ways:

  • A PMML file can be exported as follows (see the sketch after this list):
    • Call Booster.load_model() to load the trained model.
    • Check the Booster objective to decide which of xgboost.XGBClassifier/XGBRegressor/XGBRanker to build.
    • Call xgboost.XGBClassifier/XGBRegressor/XGBRanker.load_model() to load the trained model again.
    • Build a sklearn pipeline using the pre-built xgboost.XGBClassifier/XGBRegressor/XGBRanker.
    • Save the pipeline as a PMML file using sklearn2pmml or Nyoka.
  • A Treelite file can be exported by Model.from_xgboost using the Booster saved by Booster.save_model().
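A hedged sketch of this export flow, assuming XGBoost >= 1.0 (for Booster.save_config()) and sklearn2pmml; the file names and the objective-to-estimator mapping are illustrative assumptions, not the final SQLFlow implementation:

```python
import json
import treelite
import xgboost as xgb
from sklearn2pmml import PMMLPipeline, sklearn2pmml

booster = xgb.Booster()
booster.load_model("xgb.model")  # load the trained model

# Decide which sklearn-style wrapper to build from the Booster objective.
config = json.loads(booster.save_config())
objective = config["learner"]["objective"]["name"]  # e.g. "binary:logistic"
if objective.startswith(("binary:", "multi:")):
    estimator = xgb.XGBClassifier()
elif objective.startswith("rank:"):
    estimator = xgb.XGBRanker()
else:
    estimator = xgb.XGBRegressor()
estimator.load_model("xgb.model")  # load the same trained model again

# Wrap the estimator in a sklearn pipeline and export it as PMML.
pipeline = PMMLPipeline([("model", estimator)])
sklearn2pmml(pipeline, "model.pmml")

# Export for Treelite from the same Booster.
treelite_model = treelite.Model.from_xgboost(booster)
```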