In the solution of exercise M1.05, in the section on “Scaling numerical features”, the cross-validation accuracy is evaluated when the pipeline scales the numerical features.
I wrote what I believe is the exact same code, but the accuracy reported in the example is mean=0.874 +/- 0.003, slightly different from mine, mean=0.8733 +/- 0.003.
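For reference, this is a minimal sketch of what I ran (`data_numeric` and `target` are my placeholders for the exercise’s numerical features and labels, which I’m not reproducing here):

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate

# Pipeline that scales the numerical features before fitting the model
model = make_pipeline(StandardScaler(), LogisticRegression())

# Cross-validate and report mean +/- std of the test accuracy
cv_results = cross_validate(model, data_numeric, target)
scores = cv_results["test_score"]
print(f"mean={scores.mean():.3f} +/- {scores.std():.3f}")
```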
Is this something I should worry about?
Since the code seems identical, I thought I should expect the exact same value.
Also, the computation time is quite different (25 s instead of the 5 s reported in the example), but I assume that can be caused by different hardware (the Jupyter servers aren’t necessarily the same, are they?).