Optimistic accuracy

How is the train-test split model accuracy optimistic when compared to the score obtained on a held-out test set in the above example?

Regards

When we use the same data to train and evaluate our model, it makes a correct prediction for approximately 82 samples out of 100. When we score the same model on a held-out test set, the resulting accuracy is 0.804, i.e. 80 samples out of 100. It is the same model, but the accuracy decreased. Indeed, the former score was “too optimistic” in the sense that its good performance was partly a consequence of memorizing the training data rather than of its ability to generalize to data it has never seen.
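
To illustrate the idea, here is a minimal sketch (not the exact code from the exercise, and using a synthetic dataset and a decision tree as stand-ins): the accuracy computed on the training data is optimistic compared to the accuracy on a held-out test set.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic data, only for illustration.
X, y = make_classification(n_samples=1_000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# Accuracy on the data the model was trained on: optimistic,
# because the model can simply memorize these samples.
print(f"Train accuracy: {model.score(X_train, y_train):.3f}")

# Accuracy on the held-out test set: a fairer estimate of how
# well the model predicts data it has never seen.
print(f"Test accuracy:  {model.score(X_test, y_test):.3f}")
```

The exact numbers will differ from the 0.82 / 0.804 in the exercise, but the pattern is the same: the training-set score is higher than the held-out test-set score.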