Tree Classifier Grid Search still looks crazy

Even though the model trained with

import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

param_grid = {"max_depth": np.arange(2, 10, 1)}
tree_clf = GridSearchCV(DecisionTreeClassifier(), param_grid=param_grid)

is optimized over max_depth, its plot looks just as overfit as the plot with max_depth set to 30. Any reason for that?
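One quick check is to fit the grid search and look at which depth it actually selected, a minimal sketch reusing the data_clf variables from the notebook:

# Fit the grid search and inspect the selected depth
tree_clf.fit(data_clf[data_clf_columns], data_clf[target_clf_column])
# best_params_ holds the depth with the highest mean cross-validated accuracy
print(tree_clf.best_params_)
print(tree_clf.best_score_)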

If you run this snippet of code right after the optimal-depth plot to obtain the validation curve

import matplotlib.pyplot as plt
from sklearn.model_selection import validation_curve, ShuffleSplit
from sklearn.tree import DecisionTreeClassifier

# 30 random 80/20 train/test splits for the cross-validation
cv = ShuffleSplit(n_splits=30, test_size=0.2)

max_depth = [1, 5, 10, 15, 20, 25, 30]
train_scores, test_scores = validation_curve(
    DecisionTreeClassifier(),
    data_clf[data_clf_columns],
    data_clf[target_clf_column],
    param_name="max_depth", param_range=max_depth,
    cv=cv, n_jobs=2)

# validation_curve returns accuracy scores, so label the curves accordingly
plt.plot(max_depth, train_scores.mean(axis=1),
         label="Training accuracy")
plt.plot(max_depth, test_scores.mean(axis=1),
         label="Testing accuracy")
plt.legend()

plt.xlabel("Maximum depth of decision tree")
plt.ylabel("Accuracy")
_ = plt.title("Validation curve for decision tree")

you will obtain the following output

[figure: validation curve for the decision tree, training and testing accuracy vs. maximum depth]

Both the training and testing scores become steady above max_depth=7. Indeed, the training score reaches perfect accuracy, meaning that the model is overfitting beyond this point.
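To see how flat that plateau actually is, one can print the mean and spread of the cross-validated test accuracy per depth, a small sketch reusing the max_depth and test_scores arrays from the snippet above:

# Each row of test_scores holds the 30 ShuffleSplit accuracies for one depth;
# printing the mean and standard deviation shows how the depths compare.
for depth, scores in zip(max_depth, test_scores):
    print(f"max_depth={depth:2d}: {scores.mean():.3f} +/- {scores.std():.3f}")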