The SNoW (Sparse Network of Winnows) learning architecture is a multi-class classifier that is specifically tailored for large scale learning tasks and for domains in which the potential number of features taking part in decisions is very large, but may be unknown a priori. It learns a sparse network of linear functions in which the target concepts (class labels) are represented as linear functions over a common feature space.
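To make the representation concrete, the following is a minimal sketch (not SNoW's actual API; all names here are hypothetical) of target concepts as sparse linear functions over a common feature space, where only the features active in an example contribute to each target's score:

```python
from collections import defaultdict

class SparseLinearTargets:
    """Illustrative sketch: one sparse weight vector per target concept."""

    def __init__(self, targets):
        # one sparse weight map per target concept (class label)
        self.weights = {t: defaultdict(float) for t in targets}

    def score(self, target, active_features):
        # linear function evaluated over the active features only
        w = self.weights[target]
        return sum(w[f] for f in active_features if f in w)

    def predict(self, active_features):
        # choose the target whose linear function scores highest
        return max(self.weights, key=lambda t: self.score(t, active_features))

# Hypothetical usage with hand-set weights:
net = SparseLinearTargets(["noun", "verb"])
net.weights["noun"].update({"suffix=ing": 0.2, "prev=the": 1.5})
net.weights["verb"].update({"suffix=ing": 1.0, "prev=to": 2.0})
print(net.predict({"suffix=ing", "prev=the"}))  # "noun" (1.7 vs 1.0)
```

Because each target touches only the features that have actually co-occurred with it, the network stays sparse even when the full feature space is enormous or unknown in advance.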
Several update rules (Winnow, Perceptron, and naive Bayes) can be used within SNoW, and the architecture inherits its generalization properties from the update rule being used. Thus, when using Winnow, it is a feature-efficient learning algorithm: its complexity scales linearly with the number of relevant features and only logarithmically with the total number of features in the domain.
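As a rough illustration of the Winnow rule at a single target node, here is a hedged sketch of its mistake-driven, multiplicative update over active features. The promotion/demotion factors `alpha` and `beta`, the threshold `theta`, and the initial weight of 1.0 are illustrative defaults, not SNoW's actual parameters:

```python
def winnow_update(weights, active_features, label,
                  alpha=1.5, beta=0.8, theta=1.0):
    """One Winnow-style step for a single target node (illustrative).

    weights: dict mapping feature -> positive weight (updated in place)
    active_features: set of features active in this example
    label: True if the example belongs to this target concept
    """
    # lazily initialize weights for newly seen active features
    for f in active_features:
        weights.setdefault(f, 1.0)

    # the node's prediction is a thresholded linear function
    score = sum(weights[f] for f in active_features)
    predicted = score >= theta

    if predicted == label:
        return weights  # mistake-driven: no change on a correct prediction

    # promote active features on a miss, demote them on a false alarm;
    # weights of inactive features are never touched
    factor = alpha if label else beta
    for f in active_features:
        weights[f] *= factor
    return weights

# Hypothetical usage: a false alarm demotes the active features
w = {}
winnow_update(w, {"f1", "f2"}, label=False)  # score 2.0 >= 1.0, label False
```

The multiplicative (rather than additive) updates are what give Winnow its attribute efficiency: weights of irrelevant features shrink geometrically, so mistakes depend only weakly on the total number of features.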
However, there are a few differences worth mentioning relative to simply using the basic update rule, which we briefly describe here in the context of Winnow, the update rule that has proved most successful in applications.
SNoW has been used successfully on a variety of large scale learning tasks in the natural language domain and, more recently, in the visual processing domain. It can learn and generalize from a small number of examples and thus adapts well to new environments. Further details about the architecture, its interpretation as a relational system, the sparse update rules it incorporates (Winnow, Perceptron, naive Bayes), their theoretical justification, its relation to Valiant's neuroidal model, and other computational properties are described in the following papers.