Aggregation method for Boosting

Hello,
When using a bagging method for regression, I understood that the aggregation method is taking the average of the predictions. For classification, it's a majority vote.
However, I'm not sure what the "aggregation" could be for boosting (for both classification and regression), since the models are trained sequentially (if I understood correctly).

In boosting, the predictions are aggregated by taking a weighted sum of the predictions of each learner. What is really different is what each learner predicts: in general, a learner tries to correct the error of the weighted sum of the previous learners. You can look at the gradient-boosting notebook for more details, but the idea in GBDT is that each regression tree tries to predict the residuals of the current ensemble.
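
To make the "weighted sum" concrete, here is a minimal sketch of gradient boosting for regression with the squared loss (for which the negative gradient is exactly the residual). The names `n_estimators` and `learning_rate` are illustrative choices for this sketch, not a specific library API, although scikit-learn's `GradientBoostingRegressor` follows the same idea:

```python
# Minimal gradient-boosting sketch for regression with squared loss.
# Each tree is fitted on the residuals of the current ensemble, and the
# final prediction is a weighted sum of all the trees' predictions.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X.ravel()) + rng.normal(scale=0.1, size=200)

n_estimators, learning_rate = 50, 0.1  # illustrative hyperparameters
prediction = np.full_like(y, y.mean())  # start from a constant prediction
trees = []
for _ in range(n_estimators):
    residuals = y - prediction  # errors of the current ensemble
    tree = DecisionTreeRegressor(max_depth=2).fit(X, residuals)
    prediction += learning_rate * tree.predict(X)  # weighted-sum update
    trees.append(tree)

# Prediction on new data = initial constant + weighted sum over all trees
def predict(X_new, init=y.mean()):
    return init + learning_rate * sum(t.predict(X_new) for t in trees)
```

So the "aggregation" is still a sum, but each term was fitted to fix the mistakes of the terms before it, and `learning_rate` plays the role of the weight.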