Hi,
I was wondering how learning_curve chooses the part of the data when it is not 100%.
For example, take train_sizes=[0.1].
Does learning_curve pick an arbitrary 10% of the dataset and run the cross-validation with that same 10% every time? Or does it pick a different 10% of the dataset for each iteration of the cross-validation?
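To make the question concrete, here is a minimal sketch of the call I mean (the dataset and estimator are just placeholders for illustration):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import learning_curve
from sklearn.tree import DecisionTreeClassifier

# Toy dataset: 200 samples, just for illustration.
X, y = make_classification(n_samples=200, random_state=0)

# Ask for a single training size of 10% of the available training data.
train_sizes, train_scores, test_scores = learning_curve(
    DecisionTreeClassifier(random_state=0), X, y,
    train_sizes=[0.1], cv=5)

# train_sizes reports the absolute number of training samples used,
# train_scores/test_scores have one row per size, one column per CV fold.
print(train_sizes, train_scores.shape, test_scores.shape)
```

My question is about which 10% of each training fold ends up in that subset across the 5 folds.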
Thank you !