Linear regression - Gradient Descent implementation in Python


I got stuck at the point of implementing gradient descent in Python.

My code for the gradient descent step is:

for iter in range(1, num_iters):
    hypo_function = np.sum(np.dot(np.dot(theta.T, x) - y, x[:, iter]))
    theta_0 = theta[0] - alpha * (1.0 / m) * hypo_function
    theta_1 = theta[1] - alpha * (1.0 / m) * hypo_function

I got this error:

---> hypo_function = np.sum(np.dot(np.dot(theta.T, x) - y, x[:, iter]))
ValueError: shapes (1,97) and (2,) not aligned: 97 (dim 1) != 2 (dim 0)

PS: Here x is (2L, 97L), y is (97L,), and theta is (2L,).

np.dot(a, b) takes the inner product of a and b if a and b are vectors (1-D arrays). If a and b are 2-D arrays, np.dot(a, b) is matrix multiplication.

It throws a ValueError if there is a mismatch between the size of the last dimension of a and the second-to-last dimension of b; those two have to match.
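
For illustration, here is a short sketch of how those shape rules play out with arrays of the same sizes as yours (the variable names below are made up, not taken from your code):

    import numpy as np

    err = np.ones(97)       # 1-D array, shape (97,)
    x = np.ones((2, 97))    # 2-D array, shape (2, 97)

    np.dot(err, err)        # inner product of two 1-D arrays -> scalar 97.0
    np.dot(x, err)          # (2, 97) . (97,) -> shape (2,): last dim 97 matches 97
    # np.dot(err, x)        # raises ValueError: 97 (last dim of err) != 2 (second-to-last dim of x)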

In your case you are trying to multiply a 97-element array by a 2-element array in one of the dot products, hence the mismatch. You need to fix your input data so that the dot product / matrix multiplication is computable.
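
As an illustration (not your exact code), one common way to make the shapes line up is to compute the prediction error for all 97 samples at once and then dot it with x to get both gradient components in one step. The data below is random placeholder data, only there to reproduce the shapes you describe:

    import numpy as np

    # Placeholder data with the shapes from the question:
    # x is (2, 97) with a row of ones for the intercept, y is (97,), theta is (2,)
    m = 97
    x = np.vstack([np.ones(m), np.random.rand(m)])
    y = np.random.rand(m)
    theta = np.zeros(2)
    alpha, num_iters = 0.01, 1500

    for _ in range(num_iters):
        error = np.dot(theta, x) - y              # (2,) . (2, 97) -> (97,), then minus (97,)
        gradient = (1.0 / m) * np.dot(x, error)   # (2, 97) . (97,) -> (2,)
        theta = theta - alpha * gradient          # update both theta components together

With this arrangement every np.dot has matching inner dimensions, and both parameters are updated simultaneously instead of reusing the same hypo_function for theta_0 and theta_1.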

