
Proving recursion relations. BFGS non linear optimization

  1. Apr 29, 2012 #1
    1. The problem statement, all variables and given/known data

    Please see attached thumbnail
    Here's what I know.
    1) Bk is the Hessian
    2) sk = [itex]\alpha[/itex]*pk
    3) pk is the search direction
    4) [itex]\alpha[/itex] is the step size

    2. Relevant equations

    yk = [itex]\nabla[/itex]f(xk+1) - [itex]\nabla[/itex]f(xk)
    Bk+1(xk+1 - xk) = [itex]\nabla[/itex]f(xk+1) - [itex]\nabla[/itex]f(xk)
    3. The attempt at a solution
    Bk+1(xk+1-xk) = yk

    and from this secant condition I somehow have to derive the Hk+1 update equation.
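
    For reference, the standard BFGS inverse update (assuming this is the Hk+1 equation in the attachment), with sk = xk+1 - xk and [itex]\rho_k = 1/(y_k^T s_k)[/itex], is

    [tex]H_{k+1} = \left(I - \rho_k s_k y_k^T\right) H_k \left(I - \rho_k y_k s_k^T\right) + \rho_k s_k s_k^T.[/tex]

    One route to it is to write the Bk+1 update as two successive rank-one corrections to Bk and invert each with the Sherman-Morrison formula.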
     

    Attached Files:

    Last edited: Apr 29, 2012
  3. Apr 29, 2012 #2

    Ray Vickson

    Science Advisor
    Homework Helper

    You don't know that B_k is the Hessian; you only know that it is the current approximation to the Hessian. In a purely quadratic model with exact line searches, each line search produces a better approximation to the Hessian, so starting from B_0 = identity matrix (for example), you will have B_k = exact Hessian for some k <= n; that is, in at most n line searches you will recover the Hessian---all of this without ever computing a second derivative! Going from B_k to B_{k+1} is a rank-two update (the sum of two rank-one corrections). There are standard formulas for how the inverse changes after such low-rank updates. I don't remember them offhand, but they are widely available in the linear algebra/nonlinear optimization literature.
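
    The inverse-update formulas mentioned above can be sketched numerically. This is a hypothetical illustration (not code from this thread) of the Sherman-Morrison formula, which gives the inverse of a matrix after a single rank-one correction; the BFGS update is two such corrections applied in succession.

    ```python
    import numpy as np

    def sherman_morrison(A_inv, u, v):
        """Inverse of (A + u v^T), given A_inv = inverse of A.

        (A + u v^T)^{-1} = A^{-1} - (A^{-1} u v^T A^{-1}) / (1 + v^T A^{-1} u)
        """
        Au = A_inv @ u          # A^{-1} u
        vA = v @ A_inv          # v^T A^{-1}
        return A_inv - np.outer(Au, vA) / (1.0 + v @ Au)

    rng = np.random.default_rng(0)
    A = np.eye(3) + 0.1 * rng.standard_normal((3, 3))
    u = rng.standard_normal(3)
    v = rng.standard_normal(3)

    # Compare the rank-one inverse update against direct inversion.
    direct = np.linalg.inv(A + np.outer(u, v))
    updated = sherman_morrison(np.linalg.inv(A), u, v)
    print(np.allclose(direct, updated))
    ```

    So once the H_{k+1} formula is conjectured, checking it against B_{k+1} reduces to bookkeeping with updates of this form.
    
    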

    RGV
     