How many splits are enough in cross (and nested-cross) validation?

It seems to me that, since the introduction of the notion of cross-validation, there has been no discussion on how to choose the number of splits. Sorry if I missed it.

It is mentioned here that using a low number of splits is a bad idea in general.

So I was wondering, how does one pick the right number of splits?

I guess it is also related to the amount of data one has. Since it is an expensive process (nested cross-validation even more so), is there a scenario in which doing cross-validation is no longer practical? What should one do in that case?

We have not focused on this aspect so far.

In general, you need a large enough number of splits to assess the variation of the score. I would even say that it is recommended to use RepeatedKFold (repeated k-fold cross-validation with shuffling).
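As a minimal sketch of what this looks like in scikit-learn (the dataset and model below are toy placeholders, not from the discussion):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedKFold, cross_val_score

# Toy data as a stand-in for a real dataset
X, y = make_classification(n_samples=500, random_state=0)

# 5 splits, repeated 10 times with different shuffles -> 50 scores
cv = RepeatedKFold(n_splits=5, n_repeats=10, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv)

# Enough scores to look at the spread, not just the mean
print(f"mean={scores.mean():.3f}  std={scores.std():.3f}  n={len(scores)}")
```

The point of the repeats is that you get a distribution of scores, so you can report a standard deviation alongside the mean instead of a single number.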

As you said, it comes with a cost, and the larger your dataset, the more costly the evaluation. On a huge dataset, cross-validation is therefore annoying; however, not applying any will not allow you to conclude whether or not you got lucky with the random split.

In general, you need a large enough number of splits to assess the variation of the score. I would even say that it is recommended to use RepeatedKFold (repeated k-fold cross-validation with shuffling).

Are there rules of thumb (at least for a starting value)? Like, for ~100_000 records, at least 10 splits as a start (I am making up these numbers). Or is it just a guess-and-try process?

[…] cross-validation is therefore annoying however not applying any will not allow you to conclude whether or not you got lucky with the random split.

Sorry for going slightly off-topic: is that also the case in deep learning, where training is really costly?

10-fold cross-validation is commonly used.
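One pragmatic way to pick a starting value, sketched here on toy data (this is a guess-and-check heuristic, not an established rule), is to try a few values of k and see whether the mean and spread of the scores stabilize:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

# Toy data standing in for a real dataset
X, y = make_classification(n_samples=1000, random_state=0)
model = LogisticRegression(max_iter=1000)

# Compare the score spread for a few candidate numbers of splits
for n_splits in (3, 5, 10):
    cv = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    scores = cross_val_score(model, X, y, cv=cv)
    print(f"k={n_splits:2d}  mean={scores.mean():.3f}  std={scores.std():.3f}")
```

If going from 5 to 10 splits barely changes the picture, the extra cost of even more splits is probably not buying you much.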

Yes, exactly. People do a single train-test split, thinking that the sample size will be enough to avoid an overfitted model. However, there is no guarantee that this is the case. @GaelVaroquaux has some opinions on the topic and would advocate using cross-validation even in these cases.
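To illustrate the "got lucky with the random split" point, here is a small sketch (toy data, hypothetical setup) that evaluates the same model on several different single train-test splits:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy data; with a small sample, split-to-split variation is visible
X, y = make_classification(n_samples=200, random_state=0)

# The same model evaluated on different single random splits:
# the score moves depending on which split you happened to draw.
scores = []
for seed in range(10):
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.25, random_state=seed
    )
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    scores.append(model.score(X_te, y_te))

print(f"min={min(scores):.3f}  max={max(scores):.3f}")
```

Any single one of those splits could have been your train-test split; cross-validation is what tells you how much that choice mattered.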