Use of a derivative or a gradient to minimize a function

Discussion Overview

The discussion revolves around the methods for minimizing a function, specifically comparing the standard approach of setting the derivative to zero with the gradient descent method. The scope includes theoretical considerations and practical examples related to function minimization.

Discussion Character

  • Exploratory, Technical explanation, Conceptual clarification, Debate/contested

Main Points Raised

  • Some participants note that the standard method for minimization involves setting the derivative of the function f(x) to zero and solving for x.
  • Others introduce gradient descent as an alternative method that involves multiple steps, suggesting it may be necessary in certain situations.
  • One participant points out that a common scenario for using gradient descent arises when the function cannot be differentiated analytically, necessitating numerical approximations.
  • A later reply asks for a simple example to illustrate the challenges of applying the standard method.
  • Another participant suggests that situations where data is presented as pairs from an experiment, rather than as an analytic function, could complicate the use of the standard method.

Areas of Agreement / Disagreement

Participants express differing views on the applicability of the standard method versus gradient descent, indicating that multiple competing perspectives exist regarding when each method is appropriate.

Contextual Notes

Limitations include the assumption that the function is differentiable and the potential challenges in applying the standard method when dealing with experimental data rather than analytic functions.

Who May Find This Useful

This discussion may be of interest to those exploring optimization techniques in mathematics, particularly in contexts involving experimental data or numerical methods.

onako
Given a certain function f(x), a standard way to minimize it is to set its derivative to zero and solve for x. However, in certain cases the method of gradient descent is used; compared to the previous method (call it 'method I'), which simply sets the derivative to zero and solves for x, gradient descent takes multiple steps.

Why could one not use only 'method I' for minimization? Could you give an example illustrating the difficulty of applying 'method I'?
 
The standard situation is where you cannot differentiate the function analytically and have to use a numerical approximation.
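A minimal sketch of that numerical route, using gradient descent with a central-difference estimate of the derivative in place of an analytic f′. The test function, step size, and tolerances here are illustrative assumptions, not taken from the thread:

```python
def minimize_numerical(f, x0, lr=0.1, h=1e-6, tol=1e-8, max_iter=10_000):
    """Gradient descent on f, estimating f'(x) by central differences."""
    x = x0
    for _ in range(max_iter):
        grad = (f(x + h) - f(x - h)) / (2 * h)  # numerical derivative
        step = lr * grad
        if abs(step) < tol:  # stop once updates become negligible
            break
        x -= step
    return x

# Example: f(x) = (x - 3)^2, whose minimum is at x = 3.
x_min = minimize_numerical(lambda x: (x - 3) ** 2, x0=0.0)
```

Note that only evaluations of f are needed; nothing is ever differentiated symbolically, so the same routine works when f is a black box.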
 
Could you provide a simple example?
 
Any situation in which your data is given as a set of pairs from an experiment rather than as an analytic function.
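To make that concrete, here is a sketch with hypothetical measurement pairs (the data values are invented for illustration). With no formula for f, 'method I' has nothing to differentiate; a numerical fallback is to estimate slopes between neighbouring samples and look for where the estimated derivative changes sign:

```python
# Hypothetical experimental measurements (x, y); the underlying f is unknown.
data = [(0.0, 9.0), (1.0, 4.0), (2.0, 1.0), (3.0, 0.0), (4.0, 1.0), (5.0, 4.0)]

# Finite-difference slope between each pair of neighbouring samples.
slopes = [(data[i + 1][1] - data[i][1]) / (data[i + 1][0] - data[i][0])
          for i in range(len(data) - 1)]

# The minimum lies where the slope estimate crosses from negative to positive.
x_min = None
for i, (s0, s1) in enumerate(zip(slopes, slopes[1:])):
    if s0 < 0 <= s1:
        x_min = data[i + 1][0]
        break
```

This only locates the minimum to within the sampling resolution; refining it further would require interpolating between the samples or collecting denser data.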
 