Solved: Artificial Intelligence Lab 6 - Solution



You’ll get a .zip file solution.

 

 

Description


Q1) Gradient Descent in 1-D: Consider the function $f_1(w) = \frac{1}{2} w^2$.

  1. Perform gradient descent to find the minimum of $f_1$. For $\eta = 0.1$, plot the output of the algorithm at each step. [25 Marks]

  2. Plot the output of the algorithm for $\eta = 0.1$, $\eta = 1$, $\eta = 1.5$, $\eta = 2$, $\eta = 2.5$. [15 Marks]

  3. Implement gradient descent with line search (see the sketch after this list). [10 Marks]
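The question fixes neither a starting point nor a step count, so the minimal sketch below assumes $w_0 = 5$ and 20 iterations; `gradient_descent_1d` and `gradient_descent_line_search` are illustrative names, and backtracking (Armijo) is one common choice of line search, not necessarily the one intended.

```python
import numpy as np
import matplotlib.pyplot as plt

# Q1 objective and its derivative: f(w) = (1/2) w^2, so f'(w) = w.
f = lambda w: 0.5 * w ** 2
df = lambda w: w

def gradient_descent_1d(w0, eta, n_steps=20):
    """Fixed-step 1-D gradient descent; returns all iterates w_0, ..., w_T."""
    ws = [w0]
    for _ in range(n_steps):
        ws.append(ws[-1] - eta * df(ws[-1]))
    return np.array(ws)

def gradient_descent_line_search(w0, n_steps=20, beta=0.5, c=1e-4):
    """Gradient descent with a backtracking (Armijo) line search on eta."""
    ws = [w0]
    for _ in range(n_steps):
        w, g = ws[-1], df(ws[-1])
        eta = 1.0
        # Shrink eta until the sufficient-decrease condition holds.
        while f(w - eta * g) > f(w) - c * eta * g ** 2:
            eta *= beta
        ws.append(w - eta * g)
    return np.array(ws)

# Iterates for each step size in the question, plus the line-search run.
for eta in [0.1, 1.0, 1.5, 2.0, 2.5]:
    plt.plot(gradient_descent_1d(w0=5.0, eta=eta), marker="o", label=f"eta={eta}")
plt.plot(gradient_descent_line_search(w0=5.0), marker="x", label="line search")
plt.xlabel("step t")
plt.ylabel("w_t")
plt.legend()
plt.show()
```

For this function the update is $w_{t+1} = (1 - \eta) w_t$, so $\eta < 2$ converges, $\eta = 2$ oscillates between $\pm w_0$, and $\eta = 2.5$ diverges; that is what the plots should show.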

Q2) Repeat the previous question for

a) $f(w) = \frac{1}{2} w^2 - 5w + 3$. [20 Marks]

b) $f(w) = \frac{1}{1 + e^{-w}}$. [10 Marks]
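Only the objective and its derivative change here. A minimal sketch under the reconstructed forms above (`descend`, `df_a`, `df_b` are illustrative names):

```python
import numpy as np

# Q2(a): f(w) = (1/2) w^2 - 5w + 3, so f'(w) = w - 5 (minimizer at w = 5).
df_a = lambda w: w - 5.0

# Q2(b): f(w) = 1 / (1 + e^(-w)) is the sigmoid; f'(w) = f(w) * (1 - f(w)).
sigmoid = lambda w: 1.0 / (1.0 + np.exp(-w))
df_b = lambda w: sigmoid(w) * (1.0 - sigmoid(w))

def descend(df, w0, eta=0.1, n_steps=100):
    """Plain fixed-step gradient descent; returns the final iterate."""
    w = w0
    for _ in range(n_steps):
        w -= eta * df(w)
    return w

print(descend(df_a, w0=0.0))  # approaches the minimizer w = 5
print(descend(df_b, w0=2.0))  # keeps decreasing: sigmoid has no finite minimizer
```

Part b) is a useful contrast: the sigmoid is strictly increasing with infimum 0, so gradient descent never settles at a finite point; the iterates drift toward $-\infty$ at an ever-slower rate.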

Q3) Gradient Descent in 2-D: Let $w \in \mathbb{R}^2$. Consider the functions $f_1(w) = w(1)^2 + w(2)^2 + 5w(1) - 3w(2) - 2$ and $f_2(w) = 10\,w(1)^2 + w(2)^2$.

  1. Show the gradient and contour plots for $f_1$ and $f_2$. [10 Marks]

  2. Perform gradient descent to find the minimum of $f_1$ and $f_2$ (see the sketch after this list). [10 Marks]
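A minimal 2-D sketch, assuming the signs reconstructed above; the plotting window, start point $w_0 = (1, 3)$, and step size are arbitrary choices:

```python
import numpy as np
import matplotlib.pyplot as plt

# Q3 objectives (signs as reconstructed above) and their gradients.
f1 = lambda w: w[0] ** 2 + w[1] ** 2 + 5 * w[0] - 3 * w[1] - 2
grad_f1 = lambda w: np.array([2 * w[0] + 5, 2 * w[1] - 3])
grad_f2 = lambda w: np.array([20 * w[0], 2 * w[1]])

def gradient_descent(grad, w0, eta=0.05, n_steps=100):
    """d-dimensional gradient descent; returns the full trajectory."""
    ws = [np.asarray(w0, dtype=float)]
    for _ in range(n_steps):
        ws.append(ws[-1] - eta * grad(ws[-1]))
    return np.array(ws)

# Contour plot of f1, its gradient field (quiver), and the GD trajectory.
xs, ys = np.meshgrid(np.linspace(-6, 2, 40), np.linspace(-2, 4, 40))
plt.contour(xs, ys, f1([xs, ys]), levels=20)
plt.quiver(xs[::4, ::4], ys[::4, ::4],
           (2 * xs + 5)[::4, ::4], (2 * ys - 3)[::4, ::4])
traj = gradient_descent(grad_f1, w0=[1.0, 3.0])
plt.plot(traj[:, 0], traj[:, 1], "r.-")  # converges to (-2.5, 1.5)
plt.show()
```

For $f_2$ the same routine works (e.g. `gradient_descent(grad_f2, w0=[1.0, 3.0])`), but the factor-of-10 difference in curvature forces $\eta < 2/20 = 0.1$ for stability, so progress along the shallow $w(2)$ direction is slow; this ill-conditioning is exactly what the contour plot makes visible.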

The gradient descent procedure in one dimension is given by

$$w_{t+1} = w_t - \eta \left. \frac{dL}{dw} \right\rvert_{w = w_t}, \qquad (1)$$

where $\eta > 0$ is the step size (the learning rate used in Q1 and Q2).
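As a quick worked instance of (1), assuming the Q1 function and a hypothetical start $w_0 = 5$ with $\eta = 0.1$:

$$L(w) = \tfrac{1}{2} w^2, \quad \frac{dL}{dw} = w, \quad w_1 = 5 - 0.1 \cdot 5 = 4.5, \quad w_2 = 4.5 - 0.1 \cdot 4.5 = 4.05.$$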

The gradient in $d$ dimensions is denoted by $\nabla L$, and it is a function from $\mathbb{R}^d \to \mathbb{R}^d$, i.e., at any input point in $\mathbb{R}^d$ the gradient outputs the direction of maximum change (the direction is a vector in $\mathbb{R}^d$). Thus at input $w_0 \in \mathbb{R}^d$, the gradient outputs

$$\nabla L(w_0) = \left( \left. \frac{\partial L}{\partial w(1)} \right\rvert_{w(1) = w_0(1)},\ \left. \frac{\partial L}{\partial w(2)} \right\rvert_{w(2) = w_0(2)},\ \ldots,\ \left. \frac{\partial L}{\partial w(d)} \right\rvert_{w(d) = w_0(d)} \right).$$

The gradient descent procedure in $d$ dimensions is given by

$$w_{t+1} = w_t - \eta\, \nabla L(w_t), \qquad (2)$$

which is the same as

$$w_{t+1}(i) = w_t(i) - \eta \left. \frac{\partial L}{\partial w(i)} \right\rvert_{w(i) = w_t(i)}. \qquad (3)$$
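To make (2) and (3) concrete: for the Q3 function $f_1$ (with the signs as reconstructed above), $\nabla f_1(w) = (2w(1) + 5,\ 2w(2) - 3)$. A minimal numpy check that the vectorized update (2) and the per-coordinate update (3) produce the same iterate (all values illustrative):

```python
import numpy as np

eta = 0.1
w_t = np.array([1.0, 3.0])
grad = np.array([2 * w_t[0] + 5, 2 * w_t[1] - 3])  # nabla f1(w_t) = (7, 3)

# Update (2): one vectorized step.
w_vec = w_t - eta * grad

# Update (3): the same step, one coordinate at a time.
w_coord = w_t.copy()
for i in range(len(w_t)):
    w_coord[i] = w_t[i] - eta * grad[i]

assert np.allclose(w_vec, w_coord)
print(w_vec)  # [0.3 2.7]
```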