Minimizing as a function of variables

In summary, the question asks for the minimum of a function of five variables, which can be handled by taking the derivative with respect to each variable and setting each one equal to zero.
  • #1
Lady M
Homework Statement
I am given an integral from -1 to 1 of e^x minus a sum of powers x^n, with n running from 0 to 4, each multiplied by a constant b[n], and the whole difference squared. (Sorry, I don't know how to type functions into this interface yet, but I will include a photo of the question below.)

I am asked to "minimize it as a function of 5 variables". Then it wants me to compare the polynomial I get to the Taylor series of e^x to 4 terms, as well as another series approximation of e^x obtained earlier using the inner product of Legendre polynomials.
Relevant Equations
The problem is that I am not sure what's being asked here. Specifically, I am not sure what "minimize the function" means in this context.

I found this post here (https://www.physicsforums.com/threads/minimizing-a-integral.533829/) from a while back, which seems to be addressing a similar question. However, it looks like the "resolution" to the question came from using shifted Legendre polynomials, and then using that to determine the coefficients. Because my class has not done something like that yet, I doubt that's what she wanted us to do.
As promised, here is the original question, with the integral written in a more legible form.

[Attached image: the original question, showing the integral to be minimized.]
 
  • #2
Lady M said:
I am asked to "minimize it as a function of 5 variables". ... The problem is that I am not sure what's being asked here. Specifically, I am not sure what "minimize the function" means in this context.
First, recognize that your integral is a function of the ##b_i##s, ##f(b_0, b_1, b_2, b_3, b_4)##. How would you find the minimum of a function of five variables?
 
  • #3
Perhaps I'm being naive, but I think I would take the div of the function with respect to each variable (in this case b0,..., b4) and set it equal to zero.
 
  • #4
Lady M said:
Perhaps I'm being naive, but I think I would take the div of the function with respect to each variable (in this case b0,..., b4) and set it equal to zero.
Almost. You don't need the div (I assume you mean ##\nabla \cdot f##). Just take the derivative with respect to each variable and set each one equal to zero. That would be ##\nabla f = 0##. In other words, find the point where the gradient of the function is zero.
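To spell that out for this problem (just restating the hint above; the five coefficients come from the problem statement), the condition is
$$\nabla f = \left(\frac{\partial f}{\partial b_0}, \frac{\partial f}{\partial b_1}, \frac{\partial f}{\partial b_2}, \frac{\partial f}{\partial b_3}, \frac{\partial f}{\partial b_4}\right) = (0,0,0,0,0),$$
i.e. five scalar equations in the five unknowns ##b_0, \dots, b_4##.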
 
  • #5
Okay, so would I do that with the entire function in the integral, ##(e^x - (b_0+b_1x+b_2x^2+b_3x^3+b_4x^4))^2##, or just ##e^x - (b_0+b_1x+b_2x^2+b_3x^3+b_4x^4)##?

Assuming I take the derivative of the squared function (because otherwise the min value is just zero except for b0 which is weird), I'm left with a system of 5 equations. What do I do with them? I'm not allowed to treat it like a system of equations and solve for each of the b's, am I?
 
  • #6
Lady M said:
Okay, so would I do that with the entire function in the integral, ##(e^x - (b_0+b_1x+b_2x^2+b_3x^3+b_4x^4))^2##, or just ##e^x - (b_0+b_1x+b_2x^2+b_3x^3+b_4x^4)##?

Assuming I take the derivative of the squared function (because otherwise the min value is just zero except for b0 which is weird), I'm left with a system of 5 equations. What do I do with them? I'm not allowed to treat it like a system of equations and solve for each of the b's, am I?
Your function is
$$f(b_0, b_1, b_2, b_3, b_4) = \int_{-1}^1\big(e^x-(b_0+b_1x+b_2x^2+b_3x^3+b_4x^4)\big)^2\, dx$$
So when you differentiate it with respect to ##b_i##, you get
$$\frac d {db_i} f(b_0, b_1, b_2, b_3, b_4) = \frac d {db_i}\int_{-1}^1\big(e^x-(b_0+b_1x+b_2x^2+b_3x^3+b_4x^4)\big)^2\, dx$$
Fortunately, in this case you can interchange the differentiation and integration operators. The point is, you still need to do the integration, but once you complete the differentiation you have something that is easily integrable.
EDIT: And yes, once you have done the integration(s) you have a system of 5 equations in 5 unknowns to solve.
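To make the intermediate step explicit (this is just the recipe above written out; the sum limits are taken from the problem statement), differentiating under the integral sign gives
$$\frac{\partial f}{\partial b_i} = -2\int_{-1}^1 x^i\Big(e^x - \sum_{n=0}^4 b_n x^n\Big)\,dx = 0
\quad\Longrightarrow\quad
\sum_{n=0}^4 b_n \int_{-1}^1 x^{n+i}\,dx = \int_{-1}^1 x^i e^x\,dx,\qquad i = 0,\dots,4.$$
Since ##\int_{-1}^1 x^{n+i}\,dx = \tfrac{2}{n+i+1}## when ##n+i## is even and ##0## when it is odd, this is an ordinary linear system of five equations for ##b_0,\dots,b_4##.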
 
  • #7
Kind of weird: it's like doing polynomial regression for ##e^x##, but ultimately you can throw in infinitely many terms and the approximation improves with each one (a quick numerical sketch of the degree-4 case follows below).
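For anyone who wants to check a hand computation, here is a rough numerical sketch (not from the thread; the library choice, names, and structure are my own) that builds the linear system above with numpy/scipy and compares the result with the degree-4 Taylor coefficients of ##e^x##:

```python
# Sketch: solve the 5x5 "normal equations" for the least-squares fit of e^x
# on [-1, 1] by degree-4 polynomials, then compare with Taylor coefficients.
import numpy as np
from scipy.integrate import quad
from math import factorial

N = 5  # number of coefficients b_0 ... b_4

# A[i, n] = integral_{-1}^{1} x^(n+i) dx,  rhs[i] = integral_{-1}^{1} x^i e^x dx
A = np.zeros((N, N))
rhs = np.zeros(N)
for i in range(N):
    rhs[i] = quad(lambda x, i=i: x**i * np.exp(x), -1, 1)[0]
    for n in range(N):
        A[i, n] = 0.0 if (n + i) % 2 else 2.0 / (n + i + 1)

b = np.linalg.solve(A, rhs)                              # least-squares b_0 ... b_4
taylor = np.array([1 / factorial(k) for k in range(N)])  # Taylor coefficients of e^x

print("least-squares b_n:", b)
print("Taylor 1/n!      :", taylor)
```

The least-squares coefficients should come out close to, but not identical to, the Taylor coefficients ##1/n!##, which is exactly the comparison the problem asks for.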
 

1. What is "minimizing as a function of variables"?

"Minimizing as a function of variables" refers to the process of finding the minimum value of a mathematical function by manipulating the values of the independent variables.

2. Why is minimizing as a function of variables important?

Minimizing as a function of variables is important because it allows us to optimize a system or process by finding the most efficient or effective combination of variables.

3. How is minimizing as a function of variables different from minimizing a single variable?

Minimizing as a function of variables involves finding the minimum value of a function with multiple independent variables, whereas minimizing a single variable involves finding the minimum value of a function with only one independent variable.

4. What is the relationship between minimizing as a function of variables and gradient descent?

Gradient descent is a commonly used method for minimizing as a function of variables. It involves iteratively adjusting the values of the variables in the direction of steepest descent to find the minimum value of the function.
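As a toy illustration of that iteration (the function, step size, and iteration count here are arbitrary choices of mine, not part of the thread):

```python
# Minimal gradient-descent sketch for f(x, y) = (x - 1)**2 + 2*(y + 3)**2.
def grad(x, y):
    # Analytic gradient of f
    return 2 * (x - 1), 4 * (y + 3)

x, y = 0.0, 0.0          # starting point
lr = 0.1                 # step size (learning rate)
for _ in range(200):
    gx, gy = grad(x, y)
    x -= lr * gx         # step in the direction of steepest descent
    y -= lr * gy

print(x, y)              # approaches the minimizer (1, -3)
```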

5. What are some applications of minimizing as a function of variables?

Minimizing as a function of variables has various applications in fields such as economics, engineering, and machine learning. It can be used to optimize processes, design efficient systems, and train models to make accurate predictions.
