Training Logistic Regression with FTRL on Spark on Angel

FTRL (Follow-the-Regularized-Leader) is an optimization algorithm widely used in online learning. Employing FTRL in Spark-on-Angel is easy, and you can train a model with billions, or even tens of billions, of dimensions once you have enough machines.

If you are not familiar with programming on Spark-on-Angel, please first refer to the Programming Guide for Spark-on-Angel.

FTRL Optimizer

The FTRL algorithm combines the advantages of the FOBOS and RDA algorithms: it retains much of the precision of FOBOS while producing better sparsity at the cost of some precision. The update formula for the feature weights (Reference 1) is:

$$w^{(t+1)} = \arg\min_{w} \left\{ G^{(1:t)} \cdot w + \lambda_1 \|w\|_1 + \frac{\lambda_2}{2} \|w\|_2^2 + \frac{1}{2} \sum_{s=1}^{t} \sigma^{(s)} \|w - w^{(s)}\|_2^2 \right\}$$

where $G^{(1:t)} = \sum_{s=1}^{t} G^{(s)}$ represents the accumulated gradient of the loss function.

This minimization can be decomposed into N independent scalar minimization problems, one for each dimension of the feature weight vector.

$$w_i^{(t+1)} = \begin{cases} 0, & \text{if } |z_i^{(t)}| \le \lambda_1 \\ -\left(\lambda_2 + \dfrac{\beta + \sqrt{n_i^{(t)}}}{\alpha}\right)^{-1} \left(z_i^{(t)} - \mathrm{sgn}(z_i^{(t)})\,\lambda_1\right), & \text{otherwise} \end{cases}$$

$$\frac{1}{\eta_i^{(t)}} = \frac{\beta + \sqrt{n_i^{(t)}}}{\alpha}$$

where $z_i$ and $n_i$ are updated as follows:

$${z_i}^{(t)} = {z_i}^{(t-1)} + {g_i}^{(t)} - \left(\frac{1}{{\eta_i}^{(t)}} - \frac{1}{{\eta_i}^{(t-1)}}\right) {w_i}^{(t)}$$

$${n_i}^{(t)} = {n_i}^{(t-1)} + ({g_i}^{(t)})^2$$
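The per-coordinate update above can be sketched in plain Scala. This is a minimal single-machine illustration of the formulas, not the distributed implementation, and all names here are hypothetical:

```scala
// Per-coordinate FTRL state: z accumulates shifted gradients, n accumulates
// squared gradients. alpha, beta, lambda1, lambda2 are the same
// hyper-parameters the FTRL optimizer takes.
case class FtrlState(z: Array[Double], n: Array[Double])

// Closed-form weight for one coordinate: zero when |z_i| <= lambda1
// (this is where the sparsity comes from), otherwise the scaled,
// soft-thresholded z_i.
def weight(zi: Double, ni: Double,
           alpha: Double, beta: Double,
           lambda1: Double, lambda2: Double): Double =
  if (math.abs(zi) <= lambda1) 0.0
  else -(zi - math.signum(zi) * lambda1) /
        ((beta + math.sqrt(ni)) / alpha + lambda2)

// Apply one gradient g_i to coordinate i:
//   sigma = 1/eta^(t) - 1/eta^(t-1) = (sqrt(n + g^2) - sqrt(n)) / alpha
//   z_i  += g_i - sigma * w_i
//   n_i  += g_i^2
def update(state: FtrlState, i: Int, gi: Double,
           alpha: Double, beta: Double,
           lambda1: Double, lambda2: Double): Unit = {
  val wi = weight(state.z(i), state.n(i), alpha, beta, lambda1, lambda2)
  val sigma = (math.sqrt(state.n(i) + gi * gi) - math.sqrt(state.n(i))) / alpha
  state.z(i) += gi - sigma * wi
  state.n(i) += gi * gi
}
```

Note how a coordinate whose accumulated |z_i| never exceeds lambda1 keeps a weight of exactly zero, which is the sparsity property mentioned above.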

Using the FTRL Optimizer

```scala
import com.tencent.angel.ml.math2.utils.RowType
import com.tencent.angel.spark.ml.online_learning.FTRL

// allocate a FTRL optimizer with (lambda1, lambda2, alpha, beta)
val optim = new FTRL(lambda1, lambda2, alpha, beta)
// initialize the model
optim.init(dim)
```

There are four hyper-parameters for the FTRL optimizer: lambda1, lambda2, alpha and beta. We allocate a FTRL optimizer with these four hyper-parameters. The next step is to initialize the FTRL model. FTRL maintains three vectors: z, n and w. In the code above, we allocate a sparse distributed matrix with 3 rows and dim columns.

setting the dimension

In online learning scenarios, feature indices are usually generated by a hash function and can range over (Long.MinValue, Long.MaxValue). In Spark-on-Angel, you can set dim = -1 when your feature indices span (Long.MinValue, Long.MaxValue) and the rowType is sparse. If the feature indices range over [0, n), you can set dim = n.
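For illustration, here is one hypothetical way such full-range Long indices can arise: hashing arbitrary feature names into 64-bit indices. The `featureIndex` helper below is not part of the Angel API, just a sketch of the hashing trick:

```scala
import scala.util.hashing.MurmurHash3

// Hash an arbitrary feature name (e.g. "user:42" or "site^ad" crosses)
// into a 64-bit index. Two 32-bit murmur hashes with different seeds are
// combined into one Long; the result can be negative, which is why the
// dimension cannot be a dense [0, n) range and dim = -1 is used instead.
def featureIndex(name: String): Long = {
  val hi = MurmurHash3.stringHash(name, 0x9747b28c).toLong
  val lo = MurmurHash3.stringHash(name, 0x12345678).toLong & 0xffffffffL
  (hi << 32) | lo
}
```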

Training with Spark

loading data

Use the RDD interface to load the data and parse it into labeled vectors.

```scala
val data = sc.textFile(input).repartition(partNum)
  .map(s => (DataLoader.parseLongDouble(s, dim), DataLoader.parseLabel(s, false)))
  .map { f =>
    // attach the label to the feature vector
    f._1.setY(f._2)
    f._1
  }
```
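DataLoader is Angel's own utility; conceptually, it parses libsvm-style lines into a label and sparse (index, value) pairs with Long indices. The `parseLine` function below is a hypothetical sketch of that idea, not the actual DataLoader API:

```scala
// Parse a libsvm-style line "label idx:value idx:value ..." into a label
// and an array of sparse (index, value) pairs with Long indices.
def parseLine(line: String): (Double, Array[(Long, Double)]) = {
  val parts = line.trim.split("\\s+")
  val label = parts.head.toDouble
  val feats = parts.tail.map { kv =>
    val sp = kv.split(":")
    (sp(0).toLong, sp(1).toDouble)
  }
  (label, feats)
}
```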

training model

```scala
val size = data.count()
for (epoch <- 1 to numEpoch) {
  val totalLoss = data.mapPartitions { iterator =>
    // for each partition: cut it into mini-batches and optimize batch by batch
    val loss = iterator
      .sliding(batchSize, batchSize)
      .map(f => optim.optimize(f.toArray)).sum
    Iterator.single(loss)
  }.sum()
  println(s"epoch=$epoch loss=${totalLoss / size}")
}
```
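The mapPartitions body relies on `sliding(batchSize, batchSize)` from the Scala standard library to cut each partition's iterator into non-overlapping mini-batches; a small local example of that behavior:

```scala
// sliding(size, step) with step == size yields non-overlapping groups;
// the final group may be smaller than batchSize.
val batchSize = 3
val batches = (1 to 7).iterator
  .sliding(batchSize, batchSize)
  .map(_.toList)
  .toList
// batches == List(List(1, 2, 3), List(4, 5, 6), List(7))
```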

saving model

```scala
val output = "hdfs://xxx"
// compute the weight vector w from z and n so that it can be saved
optim.weight
optim.saveWeight(output)
// also save the optimizer state (z and n) for incremental training
optim.save(output + "/back")
```

The example code can be found at https://github.com/Angel-ML/angel/blob/master/spark-on-angel/examples/src/main/scala/com/tencent/angel/spark/examples/cluster/FTRLExample.scala

References

  1. H. Brendan McMahan, Gary Holt, D. Sculley, Michael Young, et al. Ad Click Prediction: a View from the Trenches. KDD '13, August 11-14, 2013.