Quiz5

Hello,

I have a question: to compare the two models, do we compare the mean of the test scores returned by cross-validation, or the number of times estimator (x) performed better than estimator (y)?

Thanks!

Hello,

The accuracy of each classifier is indeed the mean of the cross-validated scores, so that is the quantity to compare between models.
Also remember from M2 that our notion of a “worse” or “better” model is not based only on the mean of the cross-validation score distribution: the score distributions of the different estimators should also be far enough apart that their overlap is negligible.
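For illustration, here is a minimal sketch of that comparison using scikit-learn's cross_val_score. The dataset and the two estimators are placeholders I chose for the example, not the quiz's actual models, and the overlap check is just a rough heuristic based on the means and standard deviations.

```python
# Minimal sketch: compare two estimators via cross-validated scores.
# The dataset and models below are illustrative placeholders.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1_000, random_state=0)

# Cross-validated test scores for two candidate estimators.
scores_x = cross_val_score(LogisticRegression(max_iter=1_000), X, y, cv=10)
scores_y = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=10)

# Compare the means of the cross-validated scores ...
print(f"Estimator x: {scores_x.mean():.3f} +/- {scores_x.std():.3f}")
print(f"Estimator y: {scores_y.mean():.3f} +/- {scores_y.std():.3f}")

# ... but also check that the two score distributions barely overlap,
# e.g. the gap between the means is large compared to their spread.
gap = abs(scores_x.mean() - scores_y.mean())
spread = scores_x.std() + scores_y.std()
print("Difference looks meaningful" if gap > spread else "Distributions overlap too much")
```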

Hope that clarifies the concept!