Math Problem Statement
import numpy as np

def gradient_descent(A, b, x_init, n_iter, alpha=0.1):
    x = x_init
    f_values = []
    for t in range(n_iter):
        # Compute the gradient
        grad_f = A.dot(x) - b
        # Update the solution
        x = x - alpha * grad_f
        # Store the function value
        f_values.append(0.5 * x.T.dot(A).dot(x) - b.T.dot(x))
    return x, f_values
# Gradient descent for n = 20
A_20 = np.diag(np.arange(1, 21))
b_20 = np.ones(20)
x_init_20 = np.zeros(20)
x_gd_20, f_gd_20 = gradient_descent(A_20, b_20, x_init_20, 20)

# Gradient descent for n = 100
A_100 = np.diag(np.arange(1, 101))
b_100 = np.ones(100)
x_init_100 = np.zeros(100)
x_gd_100, f_gd_100 = gradient_descent(A_100, b_100, x_init_100, 100)

In this setup, can we use the function's optimal solution?
Solution
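Yes. Because f(x) = 0.5 xᵀAx - bᵀx is a strictly convex quadratic (both A matrices are diagonal with positive entries), the optimality condition ∇f(x) = Ax - b = 0 gives the closed-form optimum x_opt = A⁻¹b, which can be computed directly and used as a reference for the gradient-descent iterates. Below is a minimal sketch of that comparison, reusing the variables defined above; np.linalg.solve is used instead of forming A⁻¹ explicitly, and the names x_opt_20, err_20, etc. are illustrative rather than part of the original code.

# Closed-form optimum x_opt = A⁻¹b, obtained by solving A x = b
x_opt_20 = np.linalg.solve(A_20, b_20)    # components 1/1, 1/2, ..., 1/20
x_opt_100 = np.linalg.solve(A_100, b_100)

# Optimal objective value for reference: f(x_opt) = -0.5 bᵀA⁻¹b
f_opt_20 = 0.5 * x_opt_20.dot(A_20).dot(x_opt_20) - b_20.dot(x_opt_20)

# Distance of the gradient-descent iterates from the optimum
err_20 = np.linalg.norm(x_gd_20 - x_opt_20)
err_100 = np.linalg.norm(x_gd_100 - x_opt_100)

Do not expect err_100 to be small with α = 0.1; see the step-size remark after the Formulas section.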
Math Problem Analysis
Mathematical Concepts
Gradient Descent
Linear Algebra
Optimization
Quadratic Functions
Formulas
Gradient of f(x): ∇f(x) = Ax - b
Optimal solution: x_opt = A⁻¹b
Quadratic function: f(x) = 0.5 xᵀAx - bᵀx
Gradient Descent Update: x = x - α∇f(x) (a safe choice of α is sketched below)
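For this quadratic, each eigencomponent of the error is scaled by (1 - αλᵢ) per iteration, so the update converges only when 0 < α < 2/λ_max(A). With A_100 = diag(1, ..., 100) and α = 0.1, the largest factor is |1 - 10| = 9, so that run diverges; with A_20 the factor for λ = 20 is exactly -1, so that component oscillates without converging. A minimal sketch of picking a safe step size (the names lam_max and alpha_safe are illustrative, not from the original code):

# The largest eigenvalue of a diagonal matrix is its largest diagonal entry
lam_max = np.max(np.diag(A_100))    # 100 here
alpha_safe = 1.0 / lam_max          # any 0 < alpha < 2 / lam_max converges

x_safe, f_safe = gradient_descent(A_100, b_100, x_init_100, 100, alpha=alpha_safe)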
Theorems
Optimality condition: ∇f(x) = 0, which for this quadratic reads Ax - b = 0 and yields x_opt = A⁻¹b
Suitable Grade Level
Undergraduate (Math or Computer Science)
Related Recommendations
Optimal Step Size in Gradient Descent for Quadratic Function Minimization
Gradient Descent for Quadratic Function Minimization
Gradient Descent Optimization for Multivariable Functions
Optimal Step Size Calculation for Gradient Descent Algorithm
Gradient of Quadratic Function f(x) with Matrix A and Vector b