AdaBoost

AdaBoost, short for Adaptive Boosting, is a machine learning meta-algorithm formulated by Yoav Freund and Robert Schapire, who won the 2003 Gödel Prize for their work. It can be used in conjunction with many other types of learning algorithms to improve performance. The output of the other learning algorithms ('weak learners') is combined into a weighted sum that represents the final output of the boosted classifier.

Machine learning · Ensemble learning · Boosting


Contributor(s)

Initial contribution: 2020-12-16

Classification(s)

Method-focused categories · Data-perspective · Intelligent computation analysis

Detailed Description


Quoted from: http://image.diku.dk/imagecanon/material/cortes_vapnik95.pdf

Overview

Problems in machine learning often suffer from the curse of dimensionality — each sample may consist of a huge number of potential features (for instance, there can be 162,336 Haar features, as used by the Viola–Jones object detection framework, in a 24×24 pixel image window), and evaluating every feature can not only slow down classifier training and execution but in fact reduce predictive power. Unlike neural networks and SVMs, the AdaBoost training process selects only those features known to improve the predictive power of the model, reducing dimensionality and potentially improving execution time, since irrelevant features don't need to be computed.

Training

AdaBoost refers to a particular method of training a boosted classifier. A boosted classifier is a classifier of the form

$F_T(x) = \sum_{t=1}^{T} f_t(x)$

where each $f_t$ is a weak learner that takes an object $x$ as input and returns a value indicating the class of the object. For example, in the two-class problem, the sign of the weak learner's output identifies the predicted object class and the absolute value gives the confidence in that classification. Similarly, the $T$th classifier is positive if the sample is believed to be in the positive class and negative otherwise.
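This weighted-sum form can be sketched in a few lines of Python; the stump thresholds and weights below are hypothetical, chosen only to illustrate how sign and magnitude play the roles described above:

```python
import numpy as np

def boosted_classify(x, weak_learners, alphas):
    """Evaluate F_T(x) = sum_t alpha_t * f_t(x); the sign gives the
    predicted class, the magnitude the confidence."""
    score = sum(a * f(x) for f, a in zip(weak_learners, alphas))
    return np.sign(score), abs(score)

# Hypothetical weak learners: decision stumps on a scalar feature.
weak_learners = [lambda x: 1.0 if x > 0.5 else -1.0,
                 lambda x: 1.0 if x > 1.5 else -1.0]
alphas = [0.8, 0.4]

label, confidence = boosted_classify(2.0, weak_learners, alphas)
# Both stumps vote +1 for x = 2.0, so label is +1 with confidence 1.2.
```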

Each weak learner produces an output hypothesis $h(x_i)$ for each sample in the training set. At each iteration $t$, a weak learner is selected and assigned a coefficient $\alpha_t$ such that the total training error $E_t$ of the resulting $t$-stage boosted classifier is minimized:

$E_t = \sum_i E[F_{t-1}(x_i) + \alpha_t h(x_i)]$

Here $F_{t-1}(x)$ is the boosted classifier that has been built up to the previous stage of training, $E(F)$ is some error function, and $f_t(x) = \alpha_t h(x)$ is the weak learner that is being considered for addition to the final classifier.

Weighting

At each iteration of the training process, a weight $w_{i,t}$ is assigned to each sample in the training set equal to the current error $E(F_{t-1}(x_i))$ on that sample. These weights can be used to inform the training of the weak learner; for instance, decision trees can be grown that favor splitting sets of samples with high weights.
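With the exponential loss used in the derivation below, these per-sample weights are $w_i = e^{-y_i F_{t-1}(x_i)}$, so misclassified samples carry weight greater than 1 and confidently correct ones less than 1. A minimal sketch (the current classifier and data here are hypothetical):

```python
import numpy as np

def sample_weights(F_prev, X, y):
    """w_i = exp(-y_i * F_{t-1}(x_i)): samples the current classifier
    gets wrong (negative margin) receive weight > 1, samples it gets
    confidently right receive weight < 1."""
    margins = y * np.array([F_prev(x) for x in X])
    return np.exp(-margins)

# Hypothetical current classifier and toy data.
F_prev = lambda x: 0.5 * (1.0 if x > 0 else -1.0)
X = np.array([1.0, -2.0, 3.0])
y = np.array([1.0, 1.0, -1.0])   # the last two samples are misclassified
w = sample_weights(F_prev, X, y)
```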

Derivation

This derivation follows Rojas (2009):

Suppose we have a data set $\{(x_1, y_1), \ldots, (x_N, y_N)\}$ where each item $x_i$ has an associated class $y_i \in \{-1, 1\}$, and a set of weak classifiers $\{k_1, \ldots, k_L\}$, each of which outputs a classification $k_j(x_i) \in \{-1, 1\}$ for each item. After the $(m-1)$-th iteration our boosted classifier is a linear combination of the weak classifiers of the form:

$C_{m-1}(x_i) = \alpha_1 k_1(x_i) + \cdots + \alpha_{m-1} k_{m-1}(x_i)$

where the class will be the sign of $C_{m-1}(x_i)$. At the $m$-th iteration we want to extend this to a better boosted classifier by adding another weak classifier $k_m$, with another weight $\alpha_m$:

$C_m(x_i) = C_{m-1}(x_i) + \alpha_m k_m(x_i)$

So it remains to determine which weak classifier is the best choice for $k_m$, and what its weight $\alpha_m$ should be. We define the total error $E$ of $C_m$ as the sum of its exponential loss on each data point, given as follows:

$E = \sum_{i=1}^{N} e^{-y_i C_m(x_i)} = \sum_{i=1}^{N} e^{-y_i (C_{m-1}(x_i) + \alpha_m k_m(x_i))}$

Letting $w_i^{(1)} = 1$ and $w_i^{(m)} = e^{-y_i C_{m-1}(x_i)}$ for $m > 1$, we have:

$E = \sum_{i=1}^{N} w_i^{(m)} e^{-y_i \alpha_m k_m(x_i)}$

We can split this summation between those data points that are correctly classified by $k_m$ (so $y_i k_m(x_i) = 1$) and those that are misclassified (so $y_i k_m(x_i) = -1$):

$E = \sum_{y_i = k_m(x_i)} w_i^{(m)} e^{-\alpha_m} + \sum_{y_i \neq k_m(x_i)} w_i^{(m)} e^{\alpha_m}$

$\phantom{E} = \sum_{i=1}^{N} w_i^{(m)} e^{-\alpha_m} + \sum_{y_i \neq k_m(x_i)} w_i^{(m)} \left( e^{\alpha_m} - e^{-\alpha_m} \right)$

Since the only part of the right-hand side of this equation that depends on $k_m$ is $\sum_{y_i \neq k_m(x_i)} w_i^{(m)}$, we see that the $k_m$ that minimizes $E$ is the one that minimizes $\sum_{y_i \neq k_m(x_i)} w_i^{(m)}$ [assuming that $\alpha_m > 0$], i.e. the weak classifier with the lowest weighted error (with weights $w_i^{(m)} = e^{-y_i C_{m-1}(x_i)}$).
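This selection rule — pick the candidate with the smallest weighted error — can be sketched directly; the candidate stumps and toy data below are hypothetical:

```python
import numpy as np

def best_weak_learner(candidates, X, y, w):
    """Return the candidate classifier k_m minimizing the weighted error
    W_e = sum of w_i over the samples it misclassifies."""
    def weighted_error(k):
        preds = np.array([k(x) for x in X])
        return w[preds != y].sum()
    return min(candidates, key=weighted_error)

# Hypothetical pool of two stumps on a scalar feature.
candidates = [lambda x: 1.0 if x > 0 else -1.0,
              lambda x: 1.0 if x > 2 else -1.0]
X = np.array([1.0, -1.0, 3.0])
y = np.array([1.0, -1.0, 1.0])
w = np.ones(3)           # uniform weights on the first round
best = best_weak_learner(candidates, X, y, w)
# The first stump classifies all three points correctly, so it wins.
```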

To determine the desired weight $\alpha_m$ that minimizes $E$ with the $k_m$ that we just determined, we differentiate:

$\frac{dE}{d\alpha_m} = \frac{d \left( \sum_{y_i = k_m(x_i)} w_i^{(m)} e^{-\alpha_m} + \sum_{y_i \neq k_m(x_i)} w_i^{(m)} e^{\alpha_m} \right)}{d\alpha_m}$

Setting this to zero and solving for $\alpha_m$ yields:

$\alpha_m = \frac{1}{2} \ln \left( \frac{\sum_{y_i = k_m(x_i)} w_i^{(m)}}{\sum_{y_i \neq k_m(x_i)} w_i^{(m)}} \right)$

Proof — setting the derivative to zero gives

$-\sum_{y_i = k_m(x_i)} w_i^{(m)} e^{-\alpha_m} + \sum_{y_i \neq k_m(x_i)} w_i^{(m)} e^{\alpha_m} = 0$

(the $w_i^{(m)}$ are treated as constants here because $C_{m-1}$ does not depend on $\alpha_m$). Multiplying through by $e^{\alpha_m}$ and rearranging gives $e^{2\alpha_m} = \sum_{y_i = k_m(x_i)} w_i^{(m)} \big/ \sum_{y_i \neq k_m(x_i)} w_i^{(m)}$, and taking the logarithm of both sides yields the stated value of $\alpha_m$.

We calculate the weighted error rate of the weak classifier to be $\epsilon_m = \sum_{y_i \neq k_m(x_i)} w_i^{(m)} \big/ \sum_{i=1}^{N} w_i^{(m)}$, so it follows that:

$\alpha_m = \frac{1}{2} \ln \left( \frac{1 - \epsilon_m}{\epsilon_m} \right)$

which is the negative logit function multiplied by 0.5.
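A quick numeric check of this weight formula (a sketch; the function name is ours):

```python
import math

def learner_weight(eps):
    """alpha_m = 0.5 * ln((1 - eps) / eps): half the negative logit of
    the weighted error rate eps, for 0 < eps < 1."""
    return 0.5 * math.log((1 - eps) / eps)

# A learner no better than chance (eps = 0.5) receives weight 0; the
# more accurate the learner, the larger its weight, and a learner worse
# than chance receives a negative weight (its vote is flipped).
```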

Thus we have derived the AdaBoost algorithm: at each iteration, choose the classifier $k_m$ that minimizes the total weighted error $\sum_{y_i \neq k_m(x_i)} w_i^{(m)}$, use this to calculate the error rate $\epsilon_m = \sum_{y_i \neq k_m(x_i)} w_i^{(m)} \big/ \sum_{i=1}^{N} w_i^{(m)}$, use this to calculate the weight $\alpha_m = \frac{1}{2} \ln \left( \frac{1 - \epsilon_m}{\epsilon_m} \right)$, and finally use this to improve the boosted classifier $C_{m-1}$ to $C_m = C_{m-1} + \alpha_m k_m$.
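The full loop just summarized can be sketched end to end in Python (numpy assumed). The weak-learner pool here is a deliberately tiny, hypothetical set of threshold stumps on a one-dimensional feature, chosen only to keep the sketch short:

```python
import numpy as np

def adaboost(X, y, thresholds, rounds):
    """Train a boosted classifier on a 1-D feature array X with labels
    y in {-1, +1}, using stumps k(x) = sign(x - thr) as weak learners."""
    n = len(X)
    w = np.ones(n)                       # w_i^{(1)} = 1
    F = np.zeros(n)                      # C_0(x_i) = 0
    model = []                           # list of (alpha_m, threshold)
    for _ in range(rounds):
        # 1. choose the stump minimizing the total weighted error W_e
        best = None
        for thr in thresholds:
            preds = np.where(X > thr, 1.0, -1.0)
            W_e = w[preds != y].sum()
            if best is None or W_e < best[0]:
                best = (W_e, thr, preds)
        W_e, thr, preds = best
        # 2. error rate eps_m = W_e / sum_i w_i (clipped to guard the log)
        eps = min(max(W_e / w.sum(), 1e-12), 1 - 1e-12)
        # 3. alpha_m = 0.5 * ln((1 - eps) / eps)
        alpha = 0.5 * np.log((1 - eps) / eps)
        model.append((alpha, thr))
        # 4. C_m = C_{m-1} + alpha_m k_m; recompute w_i = exp(-y_i C_m)
        F += alpha * preds
        w = np.exp(-y * F)
    return model

def predict(model, x):
    return np.sign(sum(a * (1.0 if x > thr else -1.0) for a, thr in model))

# Toy linearly separable data: negatives below 2.5, positives above.
X = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([-1.0, -1.0, -1.0, 1.0, 1.0, 1.0])
model = adaboost(X, y, thresholds=[0.5, 1.5, 2.5, 3.5, 4.5], rounds=3)
```

In a real setting the weak-learner pool would be much larger (e.g. one stump per feature per split point), but the update rule is exactly the four derived steps.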


History

Last modified: 2020-12-17
