Selective Block Minimization for Faster Convergence of Limited Memory Large-scale Linear Models


Authors: K.-W. Chang and D. Roth

Abstract:

As the size of datasets used to build classifiers steadily increases, training a linear model efficiently with limited memory becomes essential. Several techniques deal with this problem by loading blocks of data from disk one at a time, but they usually take a considerable number of iterations to converge to a reasonable model. Even the best block minimization techniques [1] require many block loads, since they treat all training examples uniformly. As disk I/O is expensive, reducing the amount of disk access can dramatically decrease the training time.


This paper introduces a selective block minimization (SBM) algorithm, a block minimization method that makes use of selective sampling. At each step, SBM updates the model using data consisting of two parts: (1) new data loaded from disk and (2) a set of informative samples already in memory from previous steps. We prove that, by updating the linear model in the dual form, the proposed method fully utilizes the data in memory and converges to a globally optimal solution on the entire dataset. Experiments show that the SBM algorithm dramatically reduces the number of blocks loaded from disk and consequently obtains an accurate and stable model quickly on both binary and multi-class classification tasks.

Note that we include proofs and additional experiments in the appendices of this paper.
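The two-part update described in the abstract can be sketched in a few lines. The code below is an illustrative reconstruction, not the paper's exact implementation: the hyperparameter names (`block_size`, `cache_size`, `outer_iters`) and the cache policy (keep the examples with the largest dual variables) are assumptions. The inner solver is standard dual coordinate descent for an L1-loss linear SVM, which matches the dual-form update the abstract refers to.

```python
import numpy as np

def sbm_train(X, y, block_size=100, cache_size=50, C=1.0, outer_iters=10):
    """Sketch of selective block minimization (SBM) for a linear SVM.

    Each step optimizes over a working set = (new block) + (cached
    informative samples), using dual coordinate descent on the hinge loss.
    Hyperparameters and the cache-selection rule are illustrative.
    """
    n, d = X.shape
    alpha = np.zeros(n)            # dual variables, one per example
    w = np.zeros(d)                # primal weights: w = sum_i alpha_i y_i x_i
    cache = np.array([], dtype=int)
    qii = (X * X).sum(axis=1)      # precomputed squared norms ||x_i||^2
    for _ in range(outer_iters):
        for start in range(0, n, block_size):
            block = np.arange(start, min(start + block_size, n))
            work = np.union1d(block, cache)
            # Dual coordinate descent sweep over the working set.
            for i in np.random.permutation(work):
                if qii[i] == 0.0:
                    continue
                g = y[i] * w.dot(X[i]) - 1.0      # gradient of the dual objective
                new_a = min(max(alpha[i] - g / qii[i], 0.0), C)
                w += (new_a - alpha[i]) * y[i] * X[i]
                alpha[i] = new_a
            # Retain "informative" samples: examples with the largest
            # nonzero dual variables (i.e., support vectors near the margin).
            sv = work[alpha[work] > 0]
            cache = sv[np.argsort(-alpha[sv])][:cache_size]
    return w
```

Because each block is optimized jointly with the cached support vectors, information from earlier blocks is not discarded between loads, which is what lets the method make progress with far fewer passes over the disk.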

Citation:

K.-W. Chang and D. Roth, Selective Block Minimization for Faster Convergence of Limited Memory Large-scale Linear Models. KDD (2011)

Bibitem:

@inproceedings{ChangRo11,
  author = {K.-W. Chang and D. Roth},
  title = {Selective Block Minimization for Faster Convergence of Limited Memory Large-scale Linear Models},
  booktitle = {KDD},
  year = {2011},
  acceptance = {17.5\%},
  url = {http://cogcomp.cs.illinois.edu/papers/ChangRo11(4).pdf},
  funding = {BL, MR},
}