
How does ridge regression penalize the regression model?


Asked by Zaid Cherry on Dec 10, 2021



Ridge regression shrinks the regression coefficients, so that variables with a minor contribution to the outcome have their coefficients close to zero. The shrinkage of the coefficients is achieved by penalizing the regression model with a penalty term called the L2 norm, which is the sum of the squared coefficients.
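As a rough illustration of this shrinkage, the closed-form ridge solution adds λ to the diagonal of XᵀX, which pulls the coefficients toward zero. The sketch below uses NumPy on a small synthetic dataset; the variable names and data are purely illustrative assumptions, not from the original article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 5 predictors, one of them nearly duplicates another (collinearity).
n, p = 100, 5
X = rng.normal(size=(n, p))
X[:, 2] = X[:, 0] + 0.01 * rng.normal(size=n)   # near-duplicate column
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(size=n)

lam = 10.0  # ridge penalty (lambda)

# OLS: beta = (X'X)^-1 X'y
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)

# Ridge: beta = (X'X + lambda*I)^-1 X'y -- the L2 penalty adds lambda to the diagonal
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

print("sum of squared OLS coefficients:  ", np.sum(beta_ols ** 2))
print("sum of squared ridge coefficients:", np.sum(beta_ridge ** 2))
# The ridge coefficients have the smaller squared norm: they are shrunk toward zero.
```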
Besides,
When multicollinearity occurs, the least-squares estimates are still unbiased, but their variances are large, so the predicted values can end up far from the actual values. The cost function for ridge regression is the sum of squared residuals plus the penalty: Σᵢ (yᵢ − ŷᵢ)² + λ Σⱼ βⱼ². Lambda (λ) is the penalty term, and the λ given here is denoted by the alpha parameter in the ridge function.
In addition, by changing the value of alpha, we control the strength of the penalty term.
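As a quick sketch of how alpha acts as λ, the snippet below fits scikit-learn's Ridge estimator over a range of alpha values and prints the sum of squared coefficients; the synthetic dataset and the specific alpha values are illustrative assumptions only.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))
y = X[:, 0] - 2 * X[:, 1] + rng.normal(size=200)

# Larger alpha -> stronger penalty -> more shrinkage of the coefficients.
for alpha in [0.01, 1.0, 100.0, 10000.0]:
    model = Ridge(alpha=alpha).fit(X, y)
    coef_norm = np.sum(model.coef_ ** 2)
    print(f"alpha={alpha:>8}: sum of squared coefficients = {coef_norm:.4f}")
```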
In this manner,
Linear regression models that use these modified loss functions during training are referred to collectively as penalized linear regression. One popular penalty is to penalize a model based on the sum of the squared coefficient values (beta). This is called an L2 penalty.
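To make the penalized loss concrete, here is a minimal sketch of the ridge objective written out directly: the ordinary squared-error loss plus lambda times the sum of squared coefficients. The function name and the tiny dataset are assumptions made for illustration.

```python
import numpy as np

def ridge_loss(beta, X, y, lam):
    """Sum of squared residuals plus the L2 penalty on the coefficients."""
    residuals = y - X @ beta
    rss = np.sum(residuals ** 2)          # ordinary least-squares part
    l2_penalty = lam * np.sum(beta ** 2)  # "squared magnitude" of the coefficients
    return rss + l2_penalty

# Tiny illustrative example
X = np.array([[1.0, 2.0], [2.0, 0.5], [3.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])
beta = np.array([0.8, 0.1])

print(ridge_loss(beta, X, y, lam=0.0))   # lam = 0 reduces to the OLS loss
print(ridge_loss(beta, X, y, lam=5.0))   # larger lam adds a bigger penalty
```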
Consequently,
"Squared magnitude" of coefficient as penalty term is added to the loss function by ridge regression. In the formula above, if lambda is zero, then we get OLS. However, the high value of lambda will add too much weight. Which will result in model under-fitting .