Hyperparameter tuning and testing score

I would like to know if the best parameters that we find after hyperparameter tuning could be different from the ones that we get after the outer cross-validation process. If so, what is the point of doing hyperparameter tuning, since all parameters will be included in the outer cross-validation process to find the best mean testing score? Many thanks for any answer!

I am not sure I understand your question, but you may want to fetch the most recent version of this notebook by hand. To do so, click on the “File” menu of the Jupyter notebook and then select “Reset to original”.

I mention this because, in the latest version of the notebook, we state:

The optimal regularization strength is not necessarily the same on all cross-validation iterations. But since we expect each cross-validation resampling to stem from the same data distribution, it is common practice to choose the alpha to put into production from within the range defined by:

print(f"Min optimal alpha: {np.min(best_alphas):.2f} and "
      f"Max optimal alpha: {np.max(best_alphas):.2f}")

Does this answer your question?

Yes, I understand now. After hyperparameter tuning we get a set of best alphas (not a unique alpha, as I thought), one for each iteration of the outer cross-validation process.