Question 2 _ Quiz M7.05

Hi everyone, I have a problem with this question; please correct me if I’m wrong:


Since we should reverse the sign of the error (a), we should do this even when using the metrics with the “neg_” prefix (c), right? Like in this part of the exercise:

I think it would be great if answer (c) were changed to something like this:
c) … should start with the prefix neg_ and reverse the sign of the error.
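To illustrate the sign convention, here is a minimal sketch on a toy dataset (not the exercise’s data): with a “neg_” metric, cross_validate returns the negated error, and you reverse the sign afterwards to recover the error itself.

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_validate

# toy data standing in for the exercise's dataset
data, target = make_regression(n_samples=100, noise=10, random_state=0)

cv_results = cross_validate(LinearRegression(), data, target,
                            scoring="neg_mean_absolute_error", cv=5)

# the "neg_" scores are <= 0; reverse the sign to get the actual error
errors = -cv_results["test_score"]
```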

Thanks

Just to be sure I understand your comment completely:

  • you did not tick c) because you thought it was not fully accurate
  • you would have ticked c) if it contained something about reversing the sign of the error

Is that correct?

Yes @lesteve, exactly.

OK thanks, I’ll tag this topic so that we try to improve things for the next MOOC session.

Maybe something we could make clearer is that this question is only about cross_validate parameters, and not about what happens afterwards?


Nice, thank you!

I actually have another suggestion. The question asks “should”, which in my reading refers to the recommended approach, what one should do. But the right answer actually answers a “could” question. I’d suggest the question be rephrased.


Hi,

Question 2 concerns the “cross_validate” method.
However, from what I understand from the sklearn documentation, “make_scorer” is used for GridSearchCV and cross_val_score (not for cross_validate).

Did I misunderstand?

A side question:
max_error is also a metric denoting an error, yet it has no “neg_” prefix. So the 3rd proposition in the quiz can be a bit confusing. But maybe I’m quibbling :stuck_out_tongue_winking_eye:
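This is easy to check on toy data (a quick sketch, not the exercise’s dataset): even though the scoring string “max_error” has no “neg_” prefix, scikit-learn still negates it internally, so cross_validate reports it as a non-positive number.

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_validate

# toy regression problem
data, target = make_regression(n_samples=100, noise=10, random_state=0)

cv_results = cross_validate(LinearRegression(), data, target,
                            scoring="max_error", cv=5)

# the test scores are <= 0 despite the metric name lacking "neg_"
```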

Thank you in advance.
BRAVO for the whole team, this MOOC is excellent!

Here is the sentence from the documentation:
“This factory function wraps scoring functions for use in GridSearchCV and cross_val_score.”

Hey @metssye,
Yes, you’re right, there is no direct example of using make_scorer with cross_validate there, but you can pass a function as the scoring parameter of cross_validate too. Try this code:

from sklearn.model_selection import cross_validate
from sklearn.metrics import make_scorer
import numpy as np

# toy stand-ins for the exercise's regressor_model, data and target
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
data, target = make_regression(n_samples=100, noise=10, random_state=0)
regressor_model = LinearRegression()

def my_custom_loss_func(y_true, y_pred):
    diff = np.abs(y_true - y_pred).max()
    return np.log1p(diff)

Here you can use your custom loss function, or a built-in metric like mean_squared_error:

my_scorer = make_scorer(my_custom_loss_func,
                        greater_is_better=False)

cv_results = cross_validate(regressor_model, data, target,
                            cv=10, scoring=my_scorer,
                            return_train_score=True,
                            return_estimator=True)
cv_results['test_score'].mean()

Note that because of greater_is_better=False, the test scores are negated, so this mean will be a negative number.

By the way, I agree with you: the 3rd answer needs a bit more clarification, and as dear @lesteve said, it is going to be fixed for the next MOOC. :hibiscus: :hibiscus:

Hello @Meysam_Amini,

Thank you very much for your answer.
It is indeed becoming clearer for me :+1: :slightly_smiling_face:


Solved in GitLab.