Maximizing Area with a Curve Length Constraint

In summary, given that $y = b - ax^2$, $a>0$, $b>0$, $y \geq 0$, and that the length of the curve above the x-axis is 10, the values of a and b that maximize the area are approximately $a \approx 0.284$ and $b \approx 3.335$, with $a=\frac2{15}c\sqrt{1+4c^2}$ where $c = \sqrt{ab}$.
  • #1
jacobi1
Given:

\(\displaystyle y=b-ax^2, \ a>0, \ b>0, \ y \geq 0\)

Find the values of a and b that maximize the area, subject to the constraint that the length of the curve above the x-axis is 10.

We integrate over

\(\displaystyle \left [- \sqrt{\frac{b}{a}}, \sqrt{\frac{b}{a}} \right ] \),

which gives the area as \(\displaystyle \frac{4}{3}b \sqrt{\frac{b}{a}}\).
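That closed form for the area can be sanity-checked numerically; a minimal sketch in Python, with arbitrary test values $a = 0.5$, $b = 2$ (not from the thread) and a midpoint-rule integral:

```python
import math

def area_numeric(a, b, n=100_000):
    """Integrate y = b - a*x^2 over [-sqrt(b/a), sqrt(b/a)] by the midpoint rule."""
    r = math.sqrt(b / a)
    h = 2 * r / n
    total = 0.0
    for i in range(n):
        x = -r + (i + 0.5) * h
        total += (b - a * x * x) * h
    return total

a, b = 0.5, 2.0
closed_form = (4.0 / 3.0) * b * math.sqrt(b / a)
print(closed_form, area_numeric(a, b))  # the two values should agree closely
```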
Setting up and evaluating the arc length integral, we have

\(\displaystyle 2 \sqrt{4a^2 b^2 + ab} + \operatorname{arcsinh}\bigl(2 \sqrt{ab}\bigr)=10a\)

as the constraint on a and b.
I used the method of Lagrange multipliers to maximize this.
Set \(\displaystyle \Lambda=f+\lambda g\), where f is the area and g is the constraint set equal to zero.
We now solve the system

\(\displaystyle \nabla_{a,b,\lambda} \Lambda=0\).

Using Wolfram Alpha for the partial derivatives, we get

\(\displaystyle \nabla_{\lambda} \Lambda=0 \implies 2 \sqrt{4a^2 b^2 + ab} + \operatorname{arcsinh}\bigl(2 \sqrt{ab}\bigr)=10a\) (the original constraint)

\(\displaystyle \nabla_{a} \Lambda=0 \implies -\frac{2b \sqrt{ab}+3 \lambda \operatorname{arcsinh}\bigl(2 \sqrt{ab}\bigr)}{3a^2}=0\)

and

\(\displaystyle \nabla_{b} \Lambda=0 \implies \frac{2b+2 \lambda \sqrt{4ab+1}}{\sqrt{ab}}=0\)
as our system.
Wolfram times out on it, but plugging it into another online solver gives the solution \(\displaystyle a=b= \lambda =0\).
What has happened?
  • #2
Not trusting Wolfram Alpha, I did this by hand. I prefer to use the logarithm rather than the inverse hyperbolic function, and I found that the equation for the length is $\sqrt{ab + 4a^2b^2} + \frac12\ln\bigl(2\sqrt{ab} + \sqrt{1 + 4ab}\bigr) = 10a.$
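This length equation is easy to verify against a direct numerical arc length integral; a minimal sketch in Python, with arbitrary test values $a = 0.3$, $b = 2.5$ (not from the thread):

```python
import math

def arc_length_numeric(a, b, n=200_000):
    """Arc length of y = b - a*x^2 above the x-axis: integrate sqrt(1 + (y')^2)
    over [-sqrt(b/a), sqrt(b/a)] by the midpoint rule, with y' = -2ax."""
    r = math.sqrt(b / a)
    h = 2 * r / n
    total = 0.0
    for i in range(n):
        x = -r + (i + 0.5) * h
        total += math.sqrt(1 + 4 * a * a * x * x) * h
    return total

def arc_length_closed(a, b):
    """Length from the closed form: divide both sides of
    sqrt(ab + 4a^2 b^2) + (1/2) ln(2 sqrt(ab) + sqrt(1 + 4ab)) = a * L by a."""
    ab = a * b
    return (math.sqrt(ab + 4 * ab * ab)
            + 0.5 * math.log(2 * math.sqrt(ab) + math.sqrt(1 + 4 * ab))) / a

a, b = 0.3, 2.5
print(arc_length_closed(a, b), arc_length_numeric(a, b))  # should agree closely
```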

Looking at that, it seems convenient to use $\sqrt{ab}$ instead of $b$ for one of the parameters. So let $c = \sqrt{ab}$, and then the equation for the length is $c\sqrt{1+4c^2} + \frac12\ln\bigl(2c + \sqrt{1 + 4c^2}\bigr) = 10a.$ In terms of $a$ and $c$, the expression for the area is $\frac43c^3a^{-2}.$

Now let $f(a,c,\lambda) = \frac43c^3a^{-2} + \lambda \bigl(c\sqrt{1+4c^2} + \frac12\ln\bigl(2c + \sqrt{1 + 4c^2}\bigr) - 10a\bigr)$. The equations from the Lagrange multiplier method are $$\nabla_a\,f = -\tfrac83 c^3a^{-3} - 10\lambda = 0,$$ $$ \nabla_c\,f = 4c^2a^{-2} + \lambda\left( \sqrt{1+4c^2} + \frac{4c^2}{\sqrt{1+4c^2}} + \frac{2 + \frac{4c}{\sqrt{1+4c^2}}}{2\bigl(2c+ \sqrt{1+4c^2}\bigr)} \right) = 0.$$

The $\lambda$-equation is just the expression for the length. The $c$-equation looks horrendous, but the expression inside the large parentheses simplifies drastically, and the equation becomes $2c^2a^{-2} + \lambda\sqrt{1+4c^2} = 0.$ Eliminating $\lambda$ from the $a$- and $c$-equations leads to the equation $a = \frac2{15}c\sqrt{1+4c^2}.$ When you substitute that into the equation for the length, you get this equation for $c$: $$2c\sqrt{1+4c^2} - 3\ln\bigl(2c + \sqrt{1+4c^2}\bigr) = 0.$$

My graphing calculator gives the positive root of that equation as $c\approx 0.973$, from which $a\approx 0.284$ and $b\approx 3.335.$ The graph of $y = b - ax^2$ then looks like this, and it appears that the length is indeed about $10$:
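The root-finding step can also be reproduced without a graphing calculator; a minimal bisection sketch in Python, with the bracket $[0.5, 2]$ chosen to exclude the trivial root $c = 0$:

```python
import math

def g(c):
    """2c*sqrt(1+4c^2) - 3*ln(2c + sqrt(1+4c^2)); its positive root is the optimum."""
    s = math.sqrt(1 + 4 * c * c)
    return 2 * c * s - 3 * math.log(2 * c + s)

# Bisection on [0.5, 2]: g(0.5) < 0 and g(2) > 0, so a sign change is bracketed.
lo, hi = 0.5, 2.0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if g(lo) * g(mid) <= 0:
        hi = mid
    else:
        lo = mid
c = 0.5 * (lo + hi)
a = (2.0 / 15.0) * c * math.sqrt(1 + 4 * c * c)
b = c * c / a
print(c, a, b)  # matches c ≈ 0.973, a ≈ 0.284, b ≈ 3.335 from above
```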

[Graph: $y = b - ax^2$ with $a \approx 0.284$, $b \approx 3.335$; the curve length above the x-axis is approximately 10.]
 
  • #3
Oh, I see...I missed a factor of two in the integral. (Blush)
Also, instead of subtracting 10a before taking the partial derivatives, for some reason I divided through by a and then subtracted 10.
Anyway, thanks for explaining it!
 

1. What is multivariable optimization?

Multivariable optimization is a mathematical problem-solving technique that involves optimizing a function with multiple independent variables. The goal is to find the optimal values for each variable that will result in the highest or lowest value of the function.

2. What are some common applications of multivariable optimization?

Multivariable optimization is used in various fields such as engineering, economics, and machine learning. Some common applications include optimizing production processes, portfolio management, and neural network training.

3. What are the challenges of multivariable optimization?

One of the main challenges of multivariable optimization is the complexity of the problem. As the number of variables increases, the problem becomes more computationally expensive and difficult to solve. Additionally, there may be multiple local optima, making it challenging to find the global optimum.

4. What are the different methods used in multivariable optimization?

There are various methods used in multivariable optimization, including gradient descent, Newton's method, and genetic algorithms. These methods differ in their approach to finding the optimum, and each has its own advantages and limitations.
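As a small illustration of one such method, here is a minimal gradient descent sketch in Python; the objective $f(x, y) = (x-1)^2 + 2(y+2)^2$ and step size are arbitrary choices for the example:

```python
def grad_descent():
    """Minimize f(x, y) = (x - 1)^2 + 2*(y + 2)^2 by plain gradient descent."""
    x, y = 0.0, 0.0   # starting point
    lr = 0.1          # step size (learning rate)
    for _ in range(500):
        gx = 2 * (x - 1)   # df/dx
        gy = 4 * (y + 2)   # df/dy
        x -= lr * gx
        y -= lr * gy
    return x, y

print(grad_descent())  # converges to the minimizer (1, -2)
```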

5. How can multivariable optimization be applied in machine learning?

In machine learning, multivariable optimization is used to find the optimal values for the parameters of a model, such as the weights in a neural network. This allows the model to better fit the training data and make more accurate predictions on new data.
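A toy version of that idea: fitting the two parameters of a linear model by gradient descent on the mean squared error. The data below is a made-up example generated from $y = 2x + 1$:

```python
# Fit w and b in y = w*x + b by gradient descent on the mean squared error.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]   # generated from y = 2x + 1
w, b = 0.0, 0.0
lr = 0.05
n = len(xs)
for _ in range(5000):
    # Gradients of (1/n) * sum((w*x + b - y)^2) with respect to w and b.
    gw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
    gb = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
    w -= lr * gw
    b -= lr * gb

print(w, b)  # should approach w = 2, b = 1
```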
