Dot product constrained optimization

thecage411
Problem:

Fix some vector ##\vec{a} \in \mathbb{R}^n \setminus \{\vec{0}\}## and define ##f(\vec{x}) = \vec{a} \cdot \vec{x}##. Give an expression for the maximum of ##f(\vec{x})## subject to ##\|\vec{x}\|_2 = 1##.

My work:

Seems like a Lagrange multiplier problem.

I have ##\mathcal{L}(\vec{x},\lambda) = \vec{a} \cdot \vec{x} - \lambda(\|\vec{x}\|_2 - 1)##

Then ##D_{x_i} \mathcal{L}(\vec{x},\lambda) = a_i - \tfrac{1}{2}\lambda(\vec{x} \cdot \vec{x})^{-1/2}\, 2x_i = a_i - \lambda x_i/\|\vec{x}\| = 0##. Solving for ##x_i## yields ##x_i = a_i\|\vec{x}\|/\lambda##.
Also ##D_{\lambda} \mathcal{L}(\vec{x},\lambda) = -\|\vec{x}\| + 1 = 0##, so ##\|\vec{x}\| = 1##.
Plugging that into the above expression, I get ##x_i = a_i/\lambda##.

But this answer doesn't make sense to me. For one, ##\lambda## should fall out, right? Also, just thinking about it: wouldn't we want to set ##x_i = 1## for the largest ##a_i## and ##x_j = 0## for all ##j \neq i##, because any deviation from that would give something smaller?
 
thecage411 said:
Seems like a Lagrange multiplier problem.
I think that is like killing a fly with a cannon ball, as we say. (The problem does not require such a heavy tool for its solution.)
 
That's fair -- I gave an argument at the end not using the Lagrange multiplier. I guess my question is -- why aren't those two approaches matching up?
 
thecage411 said:
That's fair -- I gave an argument at the end not using the Lagrange multiplier. I guess my question is -- why aren't those two approaches matching up?
The simple (second) answer is wrong. ##\vec{x} \cdot \vec{a}## is maximal, for fixed length of ##\vec{x}##, when ##\vec{x}## is parallel to ##\vec{a}## and points in the same direction.
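To tie the two approaches together: substituting ##x_i = a_i/\lambda## into the constraint ##\|\vec{x}\|_2 = 1## gives ##\lambda = \pm\|\vec{a}\|_2##; taking the sign that maximizes ##f## yields ##\vec{x} = \vec{a}/\|\vec{a}\|_2## and a maximum value of ##\|\vec{a}\|_2##, consistent with the reply above. Below is a minimal numerical sanity check of that conclusion, a sketch assuming NumPy is available; the dimension, random seed, and sample count are arbitrary choices and not part of the original thread.

Code:
import numpy as np

# Numerical sanity check: the maximizer of f(x) = a . x on the unit sphere
# should be x* = a / ||a||_2, with maximum value ||a||_2.
rng = np.random.default_rng(0)
n = 5                                    # dimension chosen arbitrarily for the check
a = rng.standard_normal(n)               # some fixed nonzero vector a

x_star = a / np.linalg.norm(a)           # candidate maximizer from the analysis
f_star = a @ x_star                      # equals ||a||_2 up to rounding

# Compare against many random unit vectors ...
samples = rng.standard_normal((100_000, n))
samples /= np.linalg.norm(samples, axis=1, keepdims=True)
best_random = (samples @ a).max()

# ... and against the "one-hot" guess: x_i = 1 at the largest |a_i|, zero elsewhere.
one_hot = np.zeros(n)
i = np.argmax(np.abs(a))
one_hot[i] = np.sign(a[i])               # sign chosen so the dot product is positive
f_one_hot = a @ one_hot

print(f_star, np.linalg.norm(a))         # these two numbers agree
print(best_random <= f_star + 1e-12)     # no random unit vector beats x*
print(f_one_hot <= f_star)               # one-hot is never better (strictly worse unless a is axis-aligned)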
 
