Gamma influence on the learning curve

I have tried setting gamma in the model to different values (10e-3, 1, 100, …) and I can see some changes in the plot of the learning curve, but even with the optimal choice of gamma (gamma = 1) the accuracy does not grow as the sample size increases.
Could someone say more about the relationship between hyperparameter tuning and training dataset size when searching for the best test_score?
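For reference, a minimal sketch of this kind of gamma sweep using scikit-learn's `validation_curve`, assuming an RBF-kernel SVC; the `make_classification` dataset is just a synthetic stand-in for the exercise data:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import validation_curve
from sklearn.svm import SVC

# Synthetic stand-in for the exercise dataset
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Sweep gamma over several orders of magnitude
gammas = np.logspace(-3, 2, 6)
train_scores, test_scores = validation_curve(
    SVC(kernel="rbf"), X, y,
    param_name="gamma", param_range=gammas, cv=5,
)

# Mean cross-validated test accuracy for each gamma
for g, scores in zip(gammas, test_scores):
    print(f"gamma={g:g}: mean test accuracy = {scores.mean():.3f}")
```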

Usually, one would expect that increasing the number of samples improves the test score. The improvement continues until we reach a plateau corresponding to an irreducible error. In this exercise, it seems that the problem is difficult and that we cannot improve the test score (or reduce the error) even by increasing the dataset size. The large standard deviation backs up the intuition that the problem at hand is difficult.
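One way to see this plateau is to plot the mean test score with its standard deviation against the training-set size. A minimal sketch using scikit-learn's `learning_curve`, assuming the gamma = 1 value mentioned above and the same synthetic stand-in dataset:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.model_selection import learning_curve
from sklearn.svm import SVC

# Synthetic stand-in for the exercise dataset
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Cross-validated test scores at increasing training-set sizes
train_sizes, _, test_scores = learning_curve(
    SVC(kernel="rbf", gamma=1), X, y,
    train_sizes=np.linspace(0.1, 1.0, 5), cv=5,
)

# Plot mean test accuracy with the standard deviation as error bars
mean, std = test_scores.mean(axis=1), test_scores.std(axis=1)
plt.errorbar(train_sizes, mean, yerr=std, marker="o")
plt.xlabel("Number of training samples")
plt.ylabel("Test accuracy")
plt.title("Learning curve (gamma = 1)")
plt.show()
```

If the curve flattens out while the error bars stay wide, that matches the situation described above: adding more samples alone will not raise the test score.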
