Feature subsampling

Hi experts,

I have a question: we know the difference between bagging and random forest is that random forest uses a random subset of features for training each tree. However, the default max_features of RandomForestRegressor is 1.0, so there is no feature subsampling. In this case, what is the difference between it and BaggingRegressor? Thank you.

In the case of regression, BaggingRegressor and RandomForestRegressor with their default parameters are indeed equivalent. This is due to historical reasons, as it was the default proposed by Breiman in the random forest paper.
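For concreteness, here is a minimal sketch (assuming scikit-learn >= 1.1, where the default max_features of RandomForestRegressor is 1.0, and a synthetic dataset from make_regression): with the defaults, each tree considers every feature at each split, just like BaggingRegressor around decision trees, whereas passing a smaller max_features re-enables feature subsampling.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import BaggingRegressor, RandomForestRegressor
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=500, n_features=20, random_state=0)

# Default RandomForestRegressor: max_features=1.0 -> all features at each split.
rf_no_subsampling = RandomForestRegressor(n_estimators=100, random_state=0)

# BaggingRegressor with its default base estimator (a decision tree):
# bootstrap samples, all features.
bagging = BaggingRegressor(n_estimators=100, random_state=0)

# Explicit feature subsampling: consider only a third of the features per split.
rf_subsampled = RandomForestRegressor(
    n_estimators=100, max_features=1 / 3, random_state=0
)

for name, model in [
    ("RF (default, no feature subsampling)", rf_no_subsampling),
    ("Bagging (trees on bootstrap samples)", bagging),
    ("RF (max_features=1/3)", rf_subsampled),
]:
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean R^2 = {score:.3f}")
```

With the defaults, the first two ensembles only differ in implementation details and random number usage, so their scores should be very close; the third one actually performs per-split feature subsampling.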

Since this is quite surprising, we are considering changing this behaviour in scikit-learn: