Voting mechanism for classification?

Hi, I’d like to ask a question on “boosting for classification”. After the model has been trained, what happens when we make a prediction on a new data point? Will there be a voting mechanism like the one presented in the previous section on bagging?
For example, here we want to predict the class of the red mark:


The left-hand model predicts “orange”, the middle model predicts “orange”, and the right-hand model predicts “blue”. So we have 2 votes for “orange” versus 1 vote for “blue”, hence the final prediction is “orange”. Does it work this way? Thank you!
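To be explicit, the majority vote I have in mind could be sketched like this (the labels are just the ones from my example above):

```python
from collections import Counter

# predictions of the three models on the red mark (my example)
votes = ["orange", "orange", "blue"]

# majority vote: the most common label wins
winner = Counter(votes).most_common(1)[0][0]
print(winner)  # orange
```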

Indeed, it depends on the algorithm used. I will take the case of the gradient boosting decision tree (GBDT) with the deviance loss (the default in scikit-learn).

As presented in a later lecture, in a GBDT we sequentially fit decision tree regressors, each one fitting the errors (residuals) of the previous trees.
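To make the sequential idea concrete, here is a minimal sketch of boosting on residuals for a regression target (the toy data, depth, and learning rate are my own illustrative choices, not from the lecture):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X.ravel()) + rng.normal(scale=0.1, size=200)

prediction = np.zeros_like(y)
learning_rate = 0.1
for _ in range(100):
    residual = y - prediction          # error left by the previous trees
    tree = DecisionTreeRegressor(max_depth=2).fit(X, residual)
    prediction += learning_rate * tree.predict(X)

# the boosted ensemble fits much better than a single shallow tree
single = DecisionTreeRegressor(max_depth=2).fit(X, y).predict(X)
print(np.mean((y - prediction) ** 2) < np.mean((y - single) ** 2))  # True
```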

At predict time, each tree in the ensemble outputs a real value, and these values are summed. We then use the same approach as when going from linear regression to logistic regression: the logistic function turns this sum into a probability estimate, which we threshold at 0.5 to get a hard prediction.
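A small sketch checking this with scikit-learn's `GradientBoostingClassifier` (I use `init="zero"` so the raw score is exactly the weighted sum of the trees, and a made-up dataset):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=200, random_state=0)
clf = GradientBoostingClassifier(n_estimators=50, learning_rate=0.1,
                                 init="zero", random_state=0).fit(X, y)

x_new = X[:1]  # pretend this is a new data point

# each tree is a regressor outputting a real value; add them up
raw = clf.learning_rate * sum(
    tree.predict(x_new) for tree in clf.estimators_.ravel()
)
print(np.allclose(raw, clf.decision_function(x_new)))  # True

# logistic function -> probability estimate, then threshold at 0.5
proba = 1.0 / (1.0 + np.exp(-raw))
print(np.allclose(proba, clf.predict_proba(x_new)[:, 1]))  # True
print((proba > 0.5).astype(int)[0] == clf.predict(x_new)[0])  # True
```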

Thank you glemaitre58 for your reply and explanation!