Thank you for this course!

After quite a marathon these last few days, I've just reached 60%. This course was a lot of fun, and I will definitely continue toward full completion after the deadline, otherwise what's the point :slight_smile:

The videos and notebooks were made with care by the whole team, and this is greatly appreciated.

My off-the-cuff feedback:

  1. I remember being surprised to read about hyperparameter tuning in Module 3, when the Module 2 exercise and quiz had already dealt with it significantly. The kind of surprise of: "Oh, I taught myself that yesterday, for the earlier quiz" :slight_smile:
    It went fine for me, though.

  2. Overall, the hardest part for me was often getting the results out. Should it be this way? Model, check; pipeline, check; preprocessing, check; cross-validation, check... But extracting the feature names and weights from the fitted pipeline (a very common analysis step) requires what feels like far too much error-prone manual work of dataframe comprehension and reconstruction. I tried to write a helper function for this, but it apparently depended too much on the pipeline's composition for a simple def to handle; see the sketch after this list for the kind of extraction I mean.
    Do you plan on easing estimator extraction in the future? Otherwise I'll find a way to do it.
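To make the question concrete, here is a minimal sketch of the kind of extraction I mean, assuming scikit-learn 1.0 or later; the dataset, column names, and model choice are made up purely for illustration:

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Toy data, purely illustrative
X = pd.DataFrame({
    "age": [25, 32, 47, 51],
    "city": ["Paris", "Lyon", "Paris", "Nice"],
})
y = [0, 1, 0, 1]

preprocessor = ColumnTransformer([
    ("num", StandardScaler(), ["age"]),
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["city"]),
])
model = Pipeline([
    ("preprocess", preprocessor),
    ("classifier", LogisticRegression()),
])
model.fit(X, y)

# Ask the preprocessing step which columns it produced, then pair them
# with the weights learned by the final estimator.
feature_names = model.named_steps["preprocess"].get_feature_names_out()
weights = pd.Series(model.named_steps["classifier"].coef_.ravel(),
                    index=feature_names)
print(weights.sort_values())
```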


Indeed, this is long-term, ongoing work to improve these aspects of the scikit-learn API. For instance, the last couple of releases introduced the get_feature_names_out method on all transformers, to make it easier to interpret the meaning of the generated columns at any step of a pipeline. But there is still more work to do to make model inspection on complex pipelines more intuitive.
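As a quick illustration of that method (a hedged sketch with a made-up one-column dataframe, not an excerpt from the course material):

```python
import pandas as pd
from sklearn.preprocessing import OneHotEncoder

X = pd.DataFrame({"city": ["Paris", "Lyon", "Paris"]})
encoder = OneHotEncoder().fit(X)

# Every fitted transformer can now report the names of the columns it outputs
print(encoder.get_feature_names_out())
# ['city_Lyon' 'city_Paris']
```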

True, I had noticed that get_feature_names_out has been evolving recently, since my Anaconda install just wasn't up to date with this function. Thanks for your answer; I'll be looking forward to this!

Thank you to the team and OVH for this course. I was short on time to finish it, so I will wait for the next session to complete it.
Thanks again