Hi all,

I have a basic optimisation question. I keep reading that the L2 norm is easier to optimise than the L1 norm. I can see why the L2 norm is easy: it is differentiable everywhere, so minimisation problems built on it typically have a closed-form solution.
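To make the closed-form point concrete, here is a minimal sketch (my own illustrative example, not from the thread): minimising the squared L2 norm ||Ax - b||^2 is ordinary least squares, and setting the gradient to zero gives the normal equations x = (A^T A)^{-1} A^T b directly.

```python
import numpy as np

# Minimising ||Ax - b||^2: the objective is differentiable everywhere,
# so setting the gradient 2 A^T (Ax - b) to zero yields the closed-form
# normal-equations solution x = (A^T A)^{-1} A^T b.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 3))
b = rng.standard_normal(50)

x_normal = np.linalg.solve(A.T @ A, A.T @ b)   # closed-form solution
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)  # library solver, for comparison

print(np.allclose(x_normal, x_lstsq))  # both routes give the same minimiser
```

No such closed form exists for the L1 objective, which is part of what the replies below would need to address.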

For the L1 norm, there is a derivative everywhere except at 0, right? Why is that such a problem for optimisation? I mean, there is a valid gradient everywhere else.

I am really having trouble convincing myself why L1 norm minimisation is so much harder than L2 norm minimisation. The L1 norm is convex and continuous as well, and it has only a single point where the derivative does not exist.
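A small experiment (my own sketch, not part of the original question) shows why that single point matters: the minimiser of |x| sits exactly at the non-differentiable point, and the "gradient" sign(x) has constant magnitude, so fixed-step gradient descent overshoots 0 and oscillates forever, whereas on x^2 the gradient shrinks near the minimiser and the same scheme converges.

```python
# Fixed-step gradient descent on f(x) = |x| versus f(x) = x^2.
step = 0.1

x_l1 = 0.55
for _ in range(1000):
    # sign(x) is a subgradient of |x|; its magnitude never shrinks,
    # so the iterate jumps back and forth across 0 by a fixed amount.
    x_l1 -= step * (1.0 if x_l1 > 0 else -1.0)

x_l2 = 0.55
for _ in range(1000):
    # The gradient of x^2 is 2x, which vanishes at the minimiser,
    # so the same fixed step converges geometrically.
    x_l2 -= step * 2.0 * x_l2

print(abs(x_l1), abs(x_l2))  # L1 iterate stalls away from 0; L2 iterate reaches it
```

This is why L1 problems are usually attacked with subgradient methods with decaying steps, proximal operators (soft-thresholding), or linear programming rather than plain gradient descent.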

Any explanation would be greatly appreciated!

Thanks,

Luca

**Physics Forums - The Fusion of Science and Community**


# L1 and L2 norm minimisation


