Question about the code to plot the validation curve

What explains the difference between the code that plots the validation curve in the lecture notebook and the code in the quiz notebook? Both models are classifiers, so I assumed the template code would be the same except for a few arguments.
Please correct me if I am wrong. Thanks.

In the lecture notebook Overfit-generalization-underfit, we call `validation_curve` with `scoring="neg_mean_absolute_error"`, because `DecisionTreeRegressor` is a regression model.

In the case of Wrap-up quiz 2, Q7 asks you to use `scoring="balanced_accuracy"`, which is indeed a metric for classification problems.
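To make the contrast concrete, here is a minimal sketch of both calls side by side. The datasets, tree depths, and `max_depth` parameter are illustrative choices, not the ones from the notebooks; only the `scoring` argument changes between the regression and classification settings:

```python
import numpy as np
from sklearn.datasets import make_classification, make_regression
from sklearn.model_selection import validation_curve
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor

param_range = [2, 4, 6]

# Regression setting (as in the lecture notebook): the error is negated
# so that "greater is better", hence the "neg_" prefix in the metric name.
X_reg, y_reg = make_regression(n_samples=200, random_state=0)
train_reg, test_reg = validation_curve(
    DecisionTreeRegressor(random_state=0), X_reg, y_reg,
    param_name="max_depth", param_range=param_range,
    scoring="neg_mean_absolute_error", cv=5,
)

# Classification setting (as in the wrap-up quiz): balanced accuracy
# averages the recall obtained on each class.
X_clf, y_clf = make_classification(n_samples=200, random_state=0)
train_clf, test_clf = validation_curve(
    DecisionTreeClassifier(random_state=0), X_clf, y_clf,
    param_name="max_depth", param_range=param_range,
    scoring="balanced_accuracy", cv=5,
)

print(test_reg.mean(axis=1))  # negated errors, so values are <= 0
print(test_clf.mean(axis=1))  # balanced accuracy, so values are in [0, 1]
```

The rest of the plotting template is identical in both cases; only the estimator and the `scoring` argument need to match the type of problem.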

You may also want to take a look at the scikit-learn documentation on available metrics here.

I hope this answers your question.