How does cross-validation performance change as the number of folds K increases? K-fold cross-validation splits the data into K partitions, trains on K−1 of them, and tests on the remaining one. This is repeated K times, with each partition serving as the test set exactly once. Simulations really help here. For small data sets, increasing K can improve the evaluation: each training set is larger, so the fitted model is closer to what you would get from the full data. For larger data sets, though, the choice of K matters much less — the cross-validation estimate comes out roughly the same whether you pick a small or a large value of K.
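To see this effect in a simulation, here is a minimal sketch of K-fold cross-validation in plain Python (no ML library). The data, the least-squares line fit, and the helper names `kfold_indices` and `cv_mse` are all hypothetical choices for illustration; with a few hundred points, the estimated error should come out similar across different K.

```python
import random

def kfold_indices(n, k, seed=0):
    """Shuffle indices 0..n-1 and split them into k roughly equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cv_mse(xs, ys, k):
    """K-fold CV estimate of mean squared error for a 1-D least-squares fit."""
    folds = kfold_indices(len(xs), k)
    errors = []
    for test in folds:
        test_set = set(test)
        train = [i for i in range(len(xs)) if i not in test_set]
        # Fit y = a*x + b on the training folds by least squares.
        tx = [xs[i] for i in train]
        ty = [ys[i] for i in train]
        mx = sum(tx) / len(tx)
        my = sum(ty) / len(ty)
        a = sum((x - mx) * (y - my) for x, y in zip(tx, ty)) / \
            sum((x - mx) ** 2 for x in tx)
        b = my - a * mx
        # Score on the held-out fold.
        errors += [(ys[i] - (a * xs[i] + b)) ** 2 for i in test]
    return sum(errors) / len(errors)

# Synthetic data: y = 2x + Gaussian noise (a made-up example).
rng = random.Random(1)
xs = [rng.uniform(0, 10) for _ in range(200)]
ys = [2 * x + rng.gauss(0, 1) for x in xs]

for k in (2, 5, 10):
    print(f"K={k:2d}  CV MSE ~ {cv_mse(xs, ys, k):.3f}")
```

Running this with a reasonably large sample, the printed MSE values for K=2, 5, and 10 land close together, illustrating the point that K matters little once the data set is big enough; shrinking the sample to a few dozen points makes the small-K estimates noticeably worse.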