Error in Hyperparameter tuning lecture

Not sure whether I am just reading this wrong, but there seems to be an error in the first lecture. It says the best model is the one with the largest max_depth (=5) and n_estimators (=30), when in fact that combination only leads to mediocre performance. Isn't it n_estimators = 10 and max_depth = 3 that leads to the best performance?

[Screenshot attached: "Unbenannt" (Untitled)]

PS: I reset the notebook and restarted the kernel before running this.

Hi Michif,
As I noted elsewhere, the problem here comes from the fact that we should not read "mean_test_score" as a score but as "mean_test_error". If you keep that in mind, the conclusion is correct: "the largest max_depth together with the largest n_estimators led to the best statistical performance", since they have the lowest test errors.

If you take a look at the code before this table, the scoring is 'neg_mean_absolute_error', but the column is called 'mean_test_score' and the values are transformed to be positive. That leads to confusion, and it would be great if in the next version of the MOOC the column were renamed to mean_test_error.
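To illustrate the sign flip, here is a minimal sketch (not the MOOC notebook; the toy dataset and forest hyperparameters are my own) of what happens: with scoring='neg_mean_absolute_error', GridSearchCV stores negated errors in cv_results_['mean_test_score'] (higher is better), and negating them back gives an error where lower is better.

```python
# Minimal sketch, NOT the MOOC notebook: toy data and grid values are assumptions.
import pandas as pd
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=0)

param_grid = {"max_depth": [3, 5], "n_estimators": [10, 30]}
search = GridSearchCV(
    RandomForestRegressor(random_state=0),
    param_grid,
    scoring="neg_mean_absolute_error",  # GridSearchCV maximizes, so errors are negated
    cv=3,
)
search.fit(X, y)

results = pd.DataFrame(search.cv_results_)
# "mean_test_score" holds NEGATIVE values (a score: higher is better).
# Flipping the sign, as the notebook does, yields an error (lower is better),
# which is why calling the flipped column "mean_test_error" would be clearer.
results["mean_test_error"] = -results["mean_test_score"]
print(results[["param_max_depth", "param_n_estimators", "mean_test_error"]])
```

The best candidate is still the one GridSearchCV picks (highest score = lowest error); only the column name changes for readability.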

Yep, indeed we should do something similar to what we do in module 2; see M2: Over/Under fitting for more details.

PR opened: Simplify table with confusing score/error/rank_score by lesteve · Pull Request #554 · INRIA/scikit-learn-mooc · GitHub