ShuffleSplit and nested cross-validation

In the previous module, after introducing cross-validation, you argued that ShuffleSplit makes the model evaluation even more general. I was wondering why you did not use ShuffleSplit with nested cross-validation?

ShuffleSplit is the natural extension of train_test_split. However, with ShuffleSplit some samples can appear several times in the test sets across splits, while others may never be tested at all. In this regard, it might be better to use k-fold cross-validation, or stratified k-fold for classification, to avoid this behaviour.

Indeed, the fact that a sample can appear in several test sets just implies that we need more splits with ShuffleSplit than with KFold in order to get a good estimate of the statistical performance.
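For reference, a minimal sketch of nested cross-validation using stratified k-fold, as suggested above (the dataset, estimator, and parameter grid are placeholders of my own choosing; a ShuffleSplit instance with more splits could equally be passed as either `cv` argument):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score

X, y = load_iris(return_X_y=True)

# Inner loop: hyperparameter tuning; outer loop: performance estimation.
inner_cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
outer_cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)

model = GridSearchCV(
    LogisticRegression(max_iter=1_000),
    param_grid={"C": [0.1, 1.0, 10.0]},
    cv=inner_cv,
)

# Each outer split refits the full grid search, so the reported scores
# are not biased by the hyperparameter selection.
scores = cross_val_score(model, X, y, cv=outer_cv)
print(f"Generalization score: {scores.mean():.3f} +/- {scores.std():.3f}")
```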