Gianmarco said:
Thank you, I think I got it right this time. The function is always positive and increasing, so the maximum will be at the point ##(3,4)##. The inequality is then ##125/3 \leq (3^2 + 4^2)(3^2 + 4^2)^{1/2} = 125##.
I would like to ask you something, though. In this case we were dealing with a simple function, so it was easy to draw qualitative conclusions about its behaviour and find its maximum along the segment. But if the function were more complicated, would there be a systematic approach to finding its extrema along a segment?
When you have inequality constraints present, you cannot necessarily set derivatives to zero. Let me illustrate using your example, in several forms. Your problem is to maximize ##f = f(x,y) = x^2 + y^2## on the segment joining (0,0) to (3,4).
By far the easiest way in this case is to put (as you did) ##(x,y) = (3t,4t), 0 \leq t \leq 1##. That gives ##f = 25 t^2##, whose maximum on ##t \in [0,1]## is obviously at ##t = 1##. Note, though, that the derivative is not zero at that point; that is due to the effect of having an inequality constraint.
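(If you want a quick numerical confirmation of that endpoint, a one-liner does it. This is just a sketch, assuming SciPy is available; the ##25t^2## comes straight from the parametrization above.)

```python
# Numerical sketch of the parametrization approach: maximize f = 25 t^2 on
# [0, 1] by minimizing its negative over the bounded interval (assumes SciPy).
from scipy.optimize import minimize_scalar

res = minimize_scalar(lambda t: -25 * t**2, bounds=(0, 1), method="bounded")
print(res.x, -res.fun)  # ~1.0 and ~25.0: the maximum sits at the endpoint t = 1
```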
Another, harder way is what you also tried: maximize ##f## subject to the constraint ##g = (4/3)x - y = 0##. To respect the endpoints, we also need inequality constraints, which we can take as ##0 \leq x \leq 3##. Thus, you have a problem with mixed equality and inequality constraints:
$$\begin{array}{rl}
\max f = & x^2 + y^2 \\
\text{s.t.} \:\: g = & (4/3)\,x - y = 0 \\
& x \geq 0, \; x \leq 3
\end{array}$$
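(Before grinding through the multiplier conditions, note that a numerical solver will happily cross-check the answer. A sketch assuming SciPy, with an arbitrary starting point of my choosing:)

```python
# Sketch: solve the mixed-constraint problem numerically (assumes SciPy).
from scipy.optimize import minimize

res = minimize(
    lambda v: -(v[0]**2 + v[1]**2),     # maximize f by minimizing -f
    x0=[1.0, 1.0],                      # arbitrary starting guess
    method="SLSQP",
    bounds=[(0, 3), (None, None)],      # 0 <= x <= 3; y unconstrained
    constraints=[{"type": "eq", "fun": lambda v: (4 / 3) * v[0] - v[1]}],  # g = 0
)
print(res.x, -res.fun)  # should come out near (3, 4) with value 25
```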
Construct a Lagrangian: ##L = f - u g = x^2 + y^2 - u((4/3)x - y)##; here, I have used "##u##" instead of "##\lambda##", just because it is easier to type (and, anyway, it is often used in the optimization literature). Since ##y## has no explicit inequality constraint but ##x## does, we need to be careful when writing the optimality conditions. Letting ##L_x \equiv \partial L / \partial x## and ##L_y \equiv \partial L / \partial y##, the conditions for a maximum at ##(x,y)## are:
$$\begin{array}{ccll}
L_x \leq 0 & \text{if} & x = 0 & \cdots (1)\\
L_x \geq 0 & \text{if} & x = 3 & \cdots (2)\\
L_x = 0 & \text{if} & 0 < x < 3 & \cdots (3)\\
L_y = 0 & & & \cdots (4)\\
g(x,y) = 0 & & & \cdots (5)
\end{array}$$
(For a minimization problem, the ##L_x## inequalities would be swapped.)
As you noted, setting ##L_x = 0, L_y = 0, g = 0## gives ##(x,y,u) = (0,0,0)##, and this does satisfy (1), (4), (5); however, since ##L_x = 0## also satisfies the swapped inequalities, the point meets the conditions for both a constrained max and a constrained min. (It turns out that second-order necessary conditions rule out a maximum there, so it is actually a constrained minimum.)
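(For a more complicated ##f## you would hand that stationary system to a computer algebra system. Here is a sketch with SymPy, assuming it is available; the symbols mirror the notation above.)

```python
# Sketch: solve L_x = 0, L_y = 0, g = 0 symbolically (assumes SymPy).
import sympy as sp

x, y, u = sp.symbols("x y u", real=True)
L = x**2 + y**2 - u * (sp.Rational(4, 3) * x - y)  # the Lagrangian above
g = sp.Rational(4, 3) * x - y

sols = sp.solve([sp.diff(L, x), sp.diff(L, y), g], [x, y, u], dict=True)
print(sols)  # [{x: 0, y: 0, u: 0}] -- the lone stationary point, at the origin
```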
So, if we apply the conditions for ##0 < x < 3## we obtain ##x = 0##; that falls outside the open interval, but there is no inconsistency, because ##x = 0## satisfies its own boundary condition (1) instead. The only other candidate is ##x = 3##, where the conditions are ##x = 3## and ##L_y = 0 = 2y + u##, so ##(x,y) = (3, -u/2)##. Putting that into ##g = 0## gives ##u = -8## and hence ##(x,y) = (3,4)##. All of conditions (2), (4) and (5) are satisfied. Note also that at ##(x,y,u) = (3,4,-8)## we have ##L_x = 50/3##, which is certainly nowhere near 0.
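(The boundary case can be checked the same way; again a self-contained SymPy sketch.)

```python
# Sketch: the boundary case x = 3, i.e. conditions (2), (4), (5) (assumes SymPy).
import sympy as sp

x, y, u = sp.symbols("x y u", real=True)
L = x**2 + y**2 - u * (sp.Rational(4, 3) * x - y)
g = sp.Rational(4, 3) * x - y

sol = sp.solve([sp.Eq(x, 3), sp.diff(L, y), g], [x, y, u], dict=True)[0]
print(sol)                      # {x: 3, y: 4, u: -8}
print(sp.diff(L, x).subs(sol))  # 50/3 >= 0, so condition (2) is satisfied
```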