Precision for question 9

In question 9, it would be better to specify the threshold separating "better" from "equivalent". From my numbers, the difference is only 0.03 (and could be even lower), while it is usually set at 0.04 in the previous exercises.

I answered correctly but remained puzzled about which answer the numbers were supposed to support.

Agreed, we should be more explicit in this question, for example by saying better by how much (as in the previous questions) …

Personally, I obtained a score of 0.579 for the HistGradientBoostingClassifier alone and a score of 0.580 for the BalancedBaggingClassifier with a HistGradientBoostingClassifier as the base_estimator.
To obtain the score stated in the solution, we have to reuse the HistGradientBoostingClassifier defined earlier in the notebook as the base_estimator.
It would also be great if you could make the question clearer.

In my case I get the following test scores for the BalancedBaggingClassifier vs the HistGradientBoostingClassifier:
0.588 vs 0.579

from imblearn.ensemble import BalancedBaggingClassifier
from sklearn.experimental import enable_hist_gradient_boosting  # noqa: needed for scikit-learn < 1.0
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import cross_validate

balanced_baggingc = BalancedBaggingClassifier(
    base_estimator=HistGradientBoostingClassifier(
        early_stopping=True, random_state=0),
    random_state=0,
)

cv_results_bbc = cross_validate(
    balanced_baggingc, data, target,
    scoring="balanced_accuracy",
    cv=10,
    n_jobs=2,
)

# Mean cross-validated balanced accuracy
print(cv_results_bbc["test_score"].mean())

When can we say that one of them is better than the other? To how many decimal places should we compare?

Hi GermanCid,

First I thought the difference between the expected score and yours was due to the max_iter argument (you have to reuse the same HistGradientBoostingClassifier that you used earlier in the notebook), but I tested with the default max_iter and still obtained the same result as the expected one (a mean score of 0.6).

Personally, I obtained a score of 0.579 with the HistGradientBoostingClassifier alone and 0.6 with the BalancedBaggingClassifier.

I also tried it locally and obtained exactly the same results.

The ensemble wrap-up quiz has been reworked (different dataset)