Increasing the number of splits in simple KFold

I experimented with the KFold split using the dataset that the MOOC provided, and I found that even in the first experiment with KFold (without shuffling or stratification), increasing the number of splits makes the accuracy go from 0% up to about 95%.
That makes sense, right?
With few splits on the class-sorted data, a whole class can end up in the test fold and be missing from the training set, which is why the accuracy starts at 0%. With many splits, each test fold only takes a few samples, so almost all the other samples stay in the training set and every class ends up represented there.
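Here is a small sketch of the kind of experiment I mean, using iris as a stand-in for the course dataset (iris is also stored sorted by class) and logistic regression just for illustration:

```python
# Minimal sketch: accuracy of non-shuffled KFold as n_splits grows,
# on a dataset that is sorted by class (iris used here as an example).
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

X, y = load_iris(return_X_y=True)  # 150 samples, 50 per class, sorted
model = LogisticRegression(max_iter=1000)

for n_splits in (3, 10, 30):
    cv = KFold(n_splits=n_splits, shuffle=False)
    scores = cross_val_score(model, X, y, cv=cv)
    print(f"n_splits={n_splits:2d}  mean accuracy={scores.mean():.2f}")

# With 3 splits each test fold is exactly one class the model never saw
# during training, so accuracy is 0. With more splits the test folds are
# small and the training set covers all classes, so accuracy jumps.
```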

Correct me if I am wrong.

Thanks for the great effort guys, and thanks for the badge!