Why is L1 norm harder to optimize than L2 norm?

pamparana
Hi all,

I have a basic optimisation question. I keep reading that the L2 norm is easier to optimise than the L1 norm. I can see why the L2 norm is easy: it is differentiable everywhere, so minimising it typically has a closed-form solution.
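To make the closed-form claim concrete, here is a small sketch (my own illustration, not from the thread): minimising ||Ax - b||² is solved by the normal equations AᵀAx = Aᵀb, assuming A has full column rank. The matrices here are just random example data.

```python
import numpy as np

# Hypothetical example data for min ||A x - b||_2^2.
rng = np.random.default_rng(1)
A = rng.standard_normal((10, 3))
b = rng.standard_normal(10)

# Closed-form solution via the normal equations A^T A x = A^T b
# (valid when A has full column rank).
x = np.linalg.solve(A.T @ A, A.T @ b)

# Sanity check against the library least-squares routine.
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
```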

For the L1 norm, there is a derivative everywhere except at 0, right? Why is this such a problem for optimisation? I mean, there is a valid gradient everywhere else.

I am really having trouble convincing myself why L1-norm minimisation is so much harder than L2-norm minimisation. The L1 norm is convex and continuous as well, and has only one point where the derivative does not exist.
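One way to see why that single kink matters is to run plain fixed-step gradient descent on |x| versus x². This is a sketch of my own, not something from the thread: on x² the iterates contract smoothly toward 0, while on |x| the (sub)gradient has constant magnitude 1, so a fixed step keeps hopping across the minimum instead of settling.

```python
def grad_abs(x):
    # A subgradient of |x|: sign(x) away from 0; we pick 0 at x == 0.
    return 0.0 if x == 0 else (1.0 if x > 0 else -1.0)

def grad_sq(x):
    # Derivative of x^2.
    return 2.0 * x

def descend(grad, x0, step=0.1, iters=100):
    x = x0
    for _ in range(iters):
        x = x - step * grad(x)
    return x

x_l2 = descend(grad_sq, 1.0)   # contracts geometrically toward 0
x_l1 = descend(grad_abs, 1.0)  # stalls within one step size of 0
```

With the same step and iteration budget, the L2 run gets arbitrarily close to the minimiser, while the L1 run can never do better than about one step length without shrinking the step.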

Any explanation would be greatly appreciated!

Thanks,

Luca
 
If the problem has linear constraints, then L1-norm minimisation can be posed as a single linear program, while the L2 norm gives a quadratic objective that can be handled as a sequence of linear programs tracing out the efficient frontier.
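A sketch of the single-linear-program formulation mentioned above, using `scipy.optimize.linprog` on hypothetical example data: min ||Ax - b||₁ becomes min Σtᵢ subject to -t ≤ Ax - b ≤ t, with auxiliary variables t.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical least-absolute-deviations problem: min ||A x - b||_1.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 2))
x_true = np.array([1.0, -2.0])
b = A @ x_true  # consistent system, so the optimum recovers x_true

m, n = A.shape
# Stack variables z = [x, t]; minimise sum(t) s.t. -t <= A x - b <= t.
c = np.concatenate([np.zeros(n), np.ones(m)])
A_ub = np.block([[A, -np.eye(m)],
                 [-A, -np.eye(m)]])
b_ub = np.concatenate([b, -b])
bounds = [(None, None)] * n + [(0, None)] * m  # x free, t >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
x_hat = res.x[:n]
```

Note that `linprog` defaults every variable to be nonnegative, so the explicit `bounds` for the free x variables are essential.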
 