The framework and why do we need it

I’m sorry but I don’t understand the subtlety behind the use of scoring="neg_mean_absolute_error".
I think there should be more explanation regarding this matter… Unfortunately the “Tip” inset does not really help in deciphering it :frowning:

Do you think that this formulation would be more helpful:

A score is a metric for which higher values mean better results. On the
contrary, an error is a metric for which lower values mean better results.
The parameter scoring in cross_validate always expects a function that is
a score.

To make it easy, all error metrics in scikit-learn, like
mean_absolute_error, can be transformed into a score to be used in
cross_validate. To do so, you need to pass a string of the error metric
with an additional neg_ string at the front to the parameter scoring;
for instance scoring="neg_mean_absolute_error". In this case, the negative
of the mean absolute error will be computed, which is equivalent to a
score.
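
To make it concrete, here is a minimal sketch of how this looks in practice (the synthetic dataset and the LinearRegression model are just illustrative choices, not something from the course material):

```python
# Minimal sketch: using scoring="neg_mean_absolute_error" with cross_validate
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_validate

# Small synthetic regression problem, purely for illustration
X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=0)

model = LinearRegression()
cv_results = cross_validate(
    model, X, y, cv=5, scoring="neg_mean_absolute_error"
)

# The returned "test_score" values are negative: higher (closer to 0) is better.
# Negate them to recover the usual mean absolute error.
mae_scores = -cv_results["test_score"]
print(mae_scores)
```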

Yes, it’s much better! Thanks

Great, thanks! I used @glemaitre58’s wording and updated the tip.