In linear regression problems, we can use a method called the Normal Equation to fit the parameters.
Suppose we have a training set like this:
$$
X = \left[\begin{matrix} (x^{(1)})^T \\ (x^{(2)})^T \\ \vdots \\ (x^{(m)})^T \end{matrix}\right]
$$
where:
$$
x^{(i)} = \left[\begin{matrix} x_0^{(i)} \\ x_1^{(i)} \\ \vdots \\ x_n^{(i)} \end{matrix}\right]
$$
and the label set:
$$
y = \left[\begin{matrix} y^{(1)} \\ y^{(2)} \\ \vdots \\ y^{(m)} \end{matrix}\right]
$$
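To make the setup concrete, here is a minimal sketch in NumPy (not from the original post) of fitting the parameters with the closed-form normal equation $\theta = (X^T X)^{-1} X^T y$. The synthetic data, variable names, and the convention $x_0^{(i)} = 1$ for the bias term are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: synthetic data and names are assumptions, not from the post.
m, n = 100, 2                                # m training examples, n features
rng = np.random.default_rng(0)

X_raw = rng.uniform(0, 10, size=(m, n))
X = np.hstack([np.ones((m, 1)), X_raw])      # prepend x_0 = 1 (bias term)

true_theta = np.array([4.0, 2.5, -1.0])      # assumed ground-truth parameters
y = X @ true_theta + rng.normal(0, 0.5, size=m)

# Normal equation: theta = (X^T X)^{-1} X^T y.
# Solving the linear system is more stable than forming the explicit inverse.
theta = np.linalg.solve(X.T @ X, X.T @ y)

print(theta)                                 # should be close to true_theta
```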