Additional intercept

Hello!
Could you clarify, please - does the additional intercept in linear models mean the bias of the prediction?

Sorry, I don’t understand your question - the cross-validation framework notebook does not seem to mention an intercept at all …

Can you add more details, in particular which notebook your question is about, and maybe copy the part of the text you want to ask a question about?

Oh, excuse me. It looks like my question landed in the wrong forum folder.
My question is about linear model parameters. There are two categories - weights and the intercept.
I am also studying on a Data Science course elsewhere. In the theoretical part it was mentioned that the slope of the prediction line in linear models is defined by the weights, and its position (roughly, whether the line sits higher or lower) is defined by the bias.
In the end I am a bit confused about which parameters determine the general shape of the prediction line/function.

I apologize in advance for possible mistakes in terminology. Maybe I got something completely wrong, since I am a complete beginner in Data Science and my professional background was quite far from the STEM domain.

P.S. Thank you, I really enjoy the course!

It seems like this is an unfortunate collision in vocabulary. We call the intercept the value of the prediction when all the features are equal to 0. I am guessing others call it the bias.

This is a different thing from “bias” as in prediction bias. For example, the same distinction is noted here:

“Prediction bias” is a different quantity than bias (the b in wx + b).
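To make the vocabulary concrete, here is a minimal sketch using scikit-learn (assuming it is installed; the data values are made up for illustration). The fitted model exposes the weights as `coef_` (they set the slope of the line) and the intercept/bias as `intercept_` (the prediction when all features are 0 - the b in wx + b):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy data generated exactly from y = 2*x + 5,
# so the "true" weight is 2 and the "true" intercept (bias) is 5.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = 2.0 * X.ravel() + 5.0

model = LinearRegression()  # fit_intercept=True by default
model.fit(X, y)

print(model.coef_)       # weights -> slope of the line, ~[2.]
print(model.intercept_)  # intercept -> value of the prediction at x = 0, ~5.
```

So together the two kinds of parameters determine the whole line: the weights tilt it, and the intercept shifts it up or down.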

Thank you for the explanation!