Inductive bias

What exactly is inductive bias? I'm unable to get it.

I think the answer on the Wikipedia page is a good one: Inductive bias - Wikipedia

The inductive bias (also known as learning bias) of a learning algorithm is the set of assumptions that the learner uses to predict outputs of given inputs that it has not encountered.
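To make the definition concrete, here is a minimal sketch (the data and models are illustrative choices, not from the original post): two learners fit the same training data drawn from y = 2x, but their assumptions about inputs far outside the training range lead to very different predictions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor

# Training data on a perfect line y = 2x, for x in [0, 9].
X_train = np.arange(10, dtype=float).reshape(-1, 1)
y_train = 2.0 * X_train.ravel()

linear = LinearRegression().fit(X_train, y_train)
knn = KNeighborsRegressor(n_neighbors=3).fit(X_train, y_train)

# Predict far outside the training range.
x_new = np.array([[100.0]])
print(linear.predict(x_new))  # linear bias: keeps extrapolating the line -> 200
print(knn.predict(x_new))     # locality bias: averages the 3 nearest training
                              # targets (y = 14, 16, 18) -> 16
```

Neither model is "wrong": each fills in the unseen region according to its own assumptions, and which answer you prefer depends on what you believe about the data-generating process.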


Another way to frame it: when you make predictions in a region of the feature space where the training set has few labeled samples, the model has the flexibility to draw the decision boundary wherever it likes. Different models have different preferences:

  • Logistic Regression will draw straight lines that are equivariant to a random rotation matrix applied to the features of the training set;
  • Decision Trees will draw straight segments that are orthogonal to the axes of the feature space, and are therefore not invariant to a random rotation of the feature space;
  • k-nearest neighbors with the default Euclidean metric will draw a non-straight curve that is equivariant to random rotations of the feature space.

Which one is right depends on the assumptions you can make about the data-generating process. In general, the inductive bias of (ensembles of) decision trees is great for tabular data where columns have different physical units. Rotation equivariance is interesting when all features stem from a recording of a homogeneous multivariate signal (e.g. the pixel intensities of a camera).
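The rotation behavior above can be checked empirically. A sketch under assumed setup (a synthetic 2D dataset with a linear boundary, and an arbitrary rotation angle): train each model on the original features and on rotated features, then compare predictions on correspondingly rotated test points.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # diagonal linear boundary

theta = 0.7  # arbitrary rotation angle (an assumption for this demo)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
X_rot = X @ R.T  # rotate every training point

X_test = rng.normal(size=(500, 2))

# Logistic regression: the L2 penalty is rotation-invariant, so rotating the
# features rotates the learned coefficients and predictions are unchanged.
lr_orig = LogisticRegression().fit(X, y).predict(X_test)
lr_rot = LogisticRegression().fit(X_rot, y).predict(X_test @ R.T)
print("logistic regression agreement:", (lr_orig == lr_rot).mean())

# Decision tree: splits are axis-aligned, so the staircase approximation of
# the boundary changes under rotation and some predictions typically differ.
dt_orig = DecisionTreeClassifier(random_state=0).fit(X, y).predict(X_test)
dt_rot = DecisionTreeClassifier(random_state=0).fit(X_rot, y).predict(X_test @ R.T)
print("decision tree agreement:", (dt_orig == dt_rot).mean())
```

Logistic regression agreement should be essentially 1.0 (up to solver tolerance on points very close to the boundary), while the tree agreement is usually noticeably lower, which is the rotation-invariance difference described above.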
