Random forest

From Wikipedia, the free encyclopedia
Feature selection with regularized random forest

Although random forest produces an importance score for each variable, it does not by itself provide a feature subset. A more recent method, regularized random forest (RRF), can be used for feature subset selection. When splitting the current node, RRF penalizes the use of a variable that is similar to the variables already selected at previous tree nodes. RRF naturally handles numerical and categorical features, interactions, and nonlinearities. It is invariant to attribute scales (units) and insensitive to outliers, and thus requires little data preprocessing such as normalization. It requires building only one tree ensemble and is therefore computationally efficient. (Deng, H.; Runger, G. (2012). "Gene Selection with Regularized Random Forest". Technical report. http://enpub.fulton.asu.edu/hdeng3/GeneSelectionRRF.pdf)
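The regularization idea can be sketched as follows (an assumed, simplified form of the split criterion, not code from the cited report; all names here are illustrative): the gain of a variable not yet in the selected set is shrunk by a penalty coefficient, so a new variable is chosen only when it is substantially better than the variables already used.

```python
# Simplified sketch of a regularized split score. Assumption: a penalty
# coefficient lam < 1 is applied to variables not yet selected anywhere in
# the ensemble; "gain" would be e.g. the Gini gain at the current node.
def regularized_gain(gain, variable, selected, lam=0.8):
    """Return the penalized gain used to rank candidate split variables."""
    return gain if variable in selected else lam * gain

def choose_split_variable(gains, selected, lam=0.8):
    """Pick the variable with the highest regularized gain; if it is new,
    add it to the selected set (the feature subset RRF reports)."""
    best = max(gains, key=lambda v: regularized_gain(gains[v], v, selected, lam))
    selected.add(best)
    return best
```

With `lam=0.8`, a new variable with raw gain 1.0 (penalized to 0.8) loses to an already-selected variable with gain 0.9, which is the mechanism that keeps the reported feature subset small.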



Revision as of 08:37, 31 January 2012

Random forest (or random forests) is an ensemble classifier that consists of many decision trees and outputs the class that is the mode of the classes output by the individual trees. The algorithm for inducing a random forest was developed by Leo Breiman[1] and Adele Cutler, and "Random Forests" is their trademark. The term came from random decision forests, first proposed by Tin Kam Ho of Bell Labs in 1995. The method combines Breiman's "bagging" idea with the random selection of features, introduced independently by Ho[2][3] and by Amit and Geman,[4] in order to construct a collection of decision trees with controlled variation.

The selection of a random subset of features is an example of the random subspace method, which, in Ho's formulation, is a way to implement stochastic discrimination[5] proposed by Eugene Kleinberg.

Learning algorithm

Each tree is constructed using the following algorithm:

  1. Let the number of training cases be N, and the number of variables in the classifier be M.
  2. A number m of input variables is specified for determining the decision at each node of the tree; m should be much less than M.
  3. Choose the training set for this tree by sampling N times with replacement from all N available training cases (i.e., take a bootstrap sample). Use the remaining cases to estimate the error of the tree by predicting their classes.
  4. For each node of the tree, randomly choose m variables on which to base the decision at that node. Calculate the best split based on these m variables in the training set.
  5. Each tree is fully grown and not pruned (as may be done in constructing a normal tree classifier).

For prediction, a new sample is pushed down each tree and assigned the label of the training samples in the terminal node it ends up in. This procedure is iterated over all trees in the ensemble, and the majority vote of all trees is reported as the random forest prediction.
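The steps above can be sketched in a minimal pure-Python implementation (an illustration assuming a Gini split criterion and majority voting, not Breiman and Cutler's reference code; all function names are choices made here):

```python
import random
from collections import Counter

def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_split(X, y, feature_ids):
    """Best (impurity, feature, threshold) over the m candidate features (step 4)."""
    best = None
    for f in feature_ids:
        for t in sorted({row[f] for row in X}):
            left = [y[i] for i, row in enumerate(X) if row[f] <= t]
            right = [y[i] for i, row in enumerate(X) if row[f] > t]
            if not left or not right:
                continue
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
            if best is None or score < best[0]:
                best = (score, f, t)
    return best

def build_tree(X, y, m, rng):
    """Grow one tree fully, without pruning (step 5), choosing m random variables per node."""
    if len(set(y)) == 1:
        return y[0]  # pure leaf
    M = len(X[0])
    split = best_split(X, y, rng.sample(range(M), m))
    if split is None:  # no valid split (identical rows): majority-label leaf
        return Counter(y).most_common(1)[0][0]
    _, f, t = split
    li = [i for i, row in enumerate(X) if row[f] <= t]
    ri = [i for i, row in enumerate(X) if row[f] > t]
    return (f, t,
            build_tree([X[i] for i in li], [y[i] for i in li], m, rng),
            build_tree([X[i] for i in ri], [y[i] for i in ri], m, rng))

def predict_tree(node, x):
    """Push a sample down one tree to its terminal node's label."""
    while isinstance(node, tuple):
        f, t, left, right = node
        node = left if x[f] <= t else right
    return node

def train_forest(X, y, n_trees=50, m=1, seed=0):
    """Train each tree on its own bootstrap sample of the data (step 3)."""
    rng = random.Random(seed)
    forest = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(X)) for _ in range(len(X))]
        forest.append(build_tree([X[i] for i in idx], [y[i] for i in idx], m, rng))
    return forest

def predict(forest, x):
    """Majority vote of all trees in the ensemble."""
    return Counter(predict_tree(t, x) for t in forest).most_common(1)[0][0]
```

This sketch handles only continuous thresholds; a full implementation would also support categorical splits and track out-of-bag cases for error estimation.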

Features and advantages

The advantages of random forest are:

  • It is one of the most accurate learning algorithms available. For many data sets, it produces a highly accurate classifier.[6]
  • It runs efficiently on large databases.
  • It can handle thousands of input variables without variable deletion.
  • It gives estimates of what variables are important in the classification.
  • It generates an internal unbiased estimate of the generalization error as the forest building progresses.
  • It has an effective method for estimating missing data and maintains accuracy when a large proportion of the data are missing.
  • It has methods for balancing error in data sets with unbalanced class populations.
  • Prototypes are computed that give information about the relation between the variables and the classification.
  • It computes proximities between pairs of cases that can be used in clustering, locating outliers, or (by scaling) giving interesting views of the data.
  • The capabilities of the above can be extended to unlabeled data, leading to unsupervised clustering, data views and outlier detection.
  • It offers an experimental method for detecting variable interactions.
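The internal estimate of the generalization error mentioned above comes from the bootstrap itself: each tree's training set is drawn with replacement, which leaves out roughly 1/e ≈ 37% of the cases ("out-of-bag" cases) that can serve as test cases for that tree. A small sketch (hypothetical helper name):

```python
import random

def bootstrap_oob(n, rng):
    """Draw one bootstrap sample of size n; return (in-bag, out-of-bag) index lists."""
    in_bag = [rng.randrange(n) for _ in range(n)]
    oob = sorted(set(range(n)) - set(in_bag))
    return in_bag, oob

rng = random.Random(0)
n = 10000
in_bag, oob = bootstrap_oob(n, rng)
# The expected out-of-bag fraction is (1 - 1/n)**n, which approaches 1/e ≈ 0.368.
oob_fraction = len(oob) / n
```

Averaging each tree's error on its own out-of-bag cases over the whole forest yields the internal error estimate, with no separate held-out test set required.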

Disadvantages

  • Random forests have been observed to overfit for some datasets with noisy classification/regression tasks.[7]
  • For data including categorical variables with different numbers of levels, random forests are biased in favor of attributes with more levels. The variable importance scores from random forest are therefore not reliable for this type of data. Methods such as partial permutation tests have been used to address this problem.[8]
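The permutation idea behind such tests can be illustrated with plain permutation importance (the baseline that partial permutation tests refine; the helper below is a hypothetical sketch, not the cited method): a variable's importance is measured by how much accuracy drops when its values are shuffled across cases.

```python
import random

def permutation_importance(predict, X, y, feature, rng, n_rounds=10):
    """Average drop in accuracy when one feature's values are randomly
    permuted across cases (plain permutation importance; sketch only)."""
    def acc(rows):
        return sum(predict(r) == label for r, label in zip(rows, y)) / len(y)
    base = acc(X)
    drops = []
    for _ in range(n_rounds):
        col = [row[feature] for row in X]
        rng.shuffle(col)                         # break the feature-label link
        X_perm = [list(row) for row in X]
        for row, v in zip(X_perm, col):
            row[feature] = v
        drops.append(base - acc(X_perm))
    return sum(drops) / n_rounds

# Toy classifier that uses only feature 0, so feature 1 should get zero importance.
def model(row):
    return 'a' if row[0] <= 0.5 else 'b'

X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = ['a', 'a', 'b', 'b']
```

Permuting feature 0 destroys the model's accuracy while permuting feature 1 changes nothing, which is the signal an importance score is meant to capture.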

Visualization

(Figures: training data consisting of two Gaussian point clouds; the random forest model-space after training on these data; and, for comparison, a logistic regression model trained on the same data.)

In order to form an intuitive visualization of the model-space represented by a random forest, a dataset consisting of 200 random points (100 green points and 100 red points) was created. The green points were drawn from a Gaussian distribution with a centroid at (0,1), and the red points were drawn from a Gaussian distribution with a centroid at (1,0). In both cases, the variance was circular with an average radius of 1.

A random forest model consisting of 50 trees was trained on these data. The purity of the color indicates the proportion of the 50 trees that voted in agreement. Significant over-fitting can be observed in this visualization.

For contrast, a logistic regression model (which is somewhat less prone to over-fitting) was also trained on the same data.

(Typically, random forest is best-suited for use with categorical features, but continuous features were used in this illustration because they were easier to visualize.)
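The dataset described above can be reproduced with a short script (assuming the "circular variance with an average radius of 1" is modeled as unit standard deviation per axis; the function name is illustrative):

```python
import random

def two_gaussians(n_per_class=100, seed=0):
    """Sample the dataset described above: 100 green points around (0, 1)
    and 100 red points around (1, 0), with unit-variance circular noise."""
    rng = random.Random(seed)
    X, y = [], []
    for _ in range(n_per_class):
        X.append((rng.gauss(0, 1), rng.gauss(1, 1)))
        y.append('green')
        X.append((rng.gauss(1, 1), rng.gauss(0, 1)))
        y.append('red')
    return X, y
```

Because the two centroids are only √2 apart with unit noise, the classes overlap substantially, which is what makes the over-fit of a fully grown forest visible in the model-space plot.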

See also

References

  1. ^ Breiman, Leo (2001). "Random Forests". Machine Learning. 45 (1): 5–32. doi:10.1023/A:1010933404324.
  2. ^ Ho, Tin (1995). Random Decision Forest (PDF). 3rd Int'l Conf. on Document Analysis and Recognition. pp. 278–282.
  3. ^ Ho, Tin (1998). "The Random Subspace Method for Constructing Decision Forests" (PDF). IEEE Transactions on Pattern Analysis and Machine Intelligence. 20 (8): 832–844. doi:10.1109/34.709601.
  4. ^ Amit, Y.; Geman, D. (1997). "Shape quantization and recognition with randomized trees" (PDF). Neural Computation. 9 (7): 1545–1588. doi:10.1162/neco.1997.9.7.1545.
  5. ^ Kleinberg, Eugene (1996). "An Overtraining-Resistant Stochastic Modeling Method for Pattern Recognition" (PDF). Annals of Statistics. 24 (6): 2319–2349. doi:10.1214/aos/1032181157. MR 1425956.
  6. ^ Caruana, Rich; Karampatziakis, Nikos; Yessenalina, Ainur (2008). An empirical evaluation of supervised learning in high dimensions (PDF). Proceedings of the 25th International Conference on Machine Learning (ICML).
  7. ^ Segal, Mark R. (2004). Machine Learning Benchmarks and Random Forest Regression. Center for Bioinformatics & Molecular Biostatistics.
  8. ^ Deng, H. (2011). Bias of importance measures for multi-valued attributes and solutions (PDF). Proceedings of the 21st International Conference on Artificial Neural Networks (ICANN). pp. 293–300.

Commercial implementation

  • Random Forests.

Open source implementations

  • The Original RF by Breiman and Cutler. Written in Fortran 77. May be difficult to configure. GNU General Public License
  • ALGLIB contains an implementation of a modified random forest algorithm in C#, C++, Pascal, VBA. GPL 2+
  • orngEnsemble module within the Orange data mining software suite.
  • PARF Written in Fortran 90. Can distribute work over a cluster of computers using MPI.
  • party An implementation of Breiman's random forest based on conditional inference trees for R.
  • randomForest for R.
  • Oblique random forests for R based on multivariate decision trees.
  • TMVA Toolkit for Multivariate Data Analysis implements random forests.
  • milk for Python implements random forests.
  • Matlab version. GNU GPL v2
  • Nimbus A Ruby gem implementing random forest for genomic selection contexts.
  • Apache Mahout. Apache license
  • Stochastic Bosque A Matlab implementation.
  • The Waffles machine learning toolkit includes an implementation of random forest.
  • RF-ACE uses Random Forests for feature filtering and Gradient Boosting Trees for data prediction. Written in C++. Apache License 2.0.
  • RRF uses regularized random forest for feature selection e.g. gene selection.