ashah99

TL;DR Summary
I would appreciate some help on proofs involving the Lp norm below, please. I looked up Hölder's inequality, which might help with this problem, but I'm not sure how to set up the proofs.
How would I prove part (1)? By inspection, the l1 norm will be greater than the lp norm when p > 1, since l1 is just the sum of the magnitudes... I'm just not sure how to show work that is proof-based.

anuttarasammyak said:
[tex](a_1+a_2+a_3)^\alpha \ge a_1^\alpha+(a_2+a_3)^\alpha \ge a_1^\alpha + a_2^\alpha + a_3^\alpha [/tex]
[tex]a_1+a_2+a_3 \ge [a_1^\alpha + a_2^\alpha + a_3^\alpha]^{\frac{1}{\alpha} }[/tex]
and so on.
Bonus
[tex]f(x,y):=(x+y)^a-x^a-y^a[/tex]
[tex]x,y \ge 0 ,\ \ a \ge 1[/tex]
[tex]\frac{\partial f}{\partial x} \ge 0,\ \ \frac{\partial f}{\partial y} \ge 0, \ \ f(0,0)=0[/tex]
I see that, but how does this relate to part 1? I'm slow at understanding, it seems.

anuttarasammyak said:
I stopped at 3, but you should continue up to N:
[tex](a_1+a_2+...+a_N)^\alpha \ge a_1^\alpha + a_2^\alpha + ... +a_N^\alpha[/tex]
[tex]a_1+a_2+...+a_N \ge (a_1^\alpha + a_2^\alpha + ... +a_N^\alpha)^\frac{1}{\alpha}[/tex]
Let ##a_k=|x_k|##
Okay, this makes sense. Thanks. How would part 2 be proved? I'm thinking I would need to choose some a_i and α and use the convexity of the function f = x^α. Any thoughts here?

anuttarasammyak said:
[tex](|x_1|+|x_2|+...+|x_N|)^p \ge |x_1|^p +|x_2|^p + ... +|x_N|^p[/tex]
[tex]l_1\ norm=|x_1|+|x_2|+...+|x_N| \ge (|x_1|^p +|x_2|^p + ... +|x_N|^p)^\frac{1}{p}=l_p \ norm[/tex]
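[Editor's note] The chain of inequalities above can also be illustrated numerically; this short check (my own addition, not from the thread) draws random vectors and confirms that the l1 norm dominates the lp norm for several p ≥ 1.

```python
# Numeric illustration that ||x||_1 >= ||x||_p for p >= 1,
# matching the chain of inequalities with a_k = |x_k|.
import random

def lp_norm(x, p):
    return sum(abs(t) ** p for t in x) ** (1.0 / p)

random.seed(1)
for _ in range(1_000):
    x = [random.uniform(-10, 10) for _ in range(8)]
    l1 = lp_norm(x, 1)
    for p in (1.5, 2, 3, 10):
        # tolerance covers floating-point rounding
        assert l1 >= lp_norm(x, p) - 1e-9, (x, p)
print("l1 >= lp held on all samples")
```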
Why don't you do it?

ashah99 said:
I'm thinking I would need to choose some a_i and α and use the convexity of the function f = x^α.
Since you know the Lp norm is a norm, you should be able to recognize that (1) is implied by the triangle inequality, i.e. writing ##x = \sum_k x_k e_k## in the standard basis, ##\|x\|_p \le \sum_k |x_k|\,\|e_k\|_p = \sum_k |x_k| = \|x\|_1##.

ashah99 said:
How would I prove part (1)? By inspection, the l1 norm will be greater than the lp norm when p > 1, since l1 is just the sum of the magnitudes... I'm just not sure how to show work that is proof-based.
Are your hints regarding parts 2 & 3?
The Lp norm is a mathematical concept used to measure the size or magnitude of a vector or function. It is defined as the pth root of the sum of the absolute values of the vector or function raised to the power of p. In other words, it is a way to quantify the distance between two points in a multi-dimensional space.
The Lp norm is commonly used in various fields such as signal processing, image processing, statistics, and machine learning. It is used to measure the similarity between two signals or images, to find the best fit for a statistical model, and to compare the performance of different machine learning algorithms.
The L1 norm, also known as the Manhattan norm, is the sum of the absolute values of the vector or function. It is commonly used in applications where the direction of the vector is not important, such as in image processing. On the other hand, the L2 norm, also known as the Euclidean norm, is the square root of the sum of the squares of the vector or function. It is commonly used in applications where the direction of the vector is important, such as in signal processing.
The triangle inequality states that the Lp norm of the sum of two vectors or functions is less than or equal to the sum of their individual Lp norms; for the Lp norm this statement is known as Minkowski's inequality. It is usually proved by expanding the pth power of the norm of the sum, splitting each term |x_k + y_k|^p as |x_k + y_k|^{p-1}|x_k| + |x_k + y_k|^{p-1}|y_k|, and applying Hölder's inequality to each of the two resulting sums.
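[Editor's note] As a small illustration of the triangle inequality (my own sketch, not part of the thread), the following samples random vector pairs and checks Minkowski's inequality for several values of p.

```python
# Numeric check of Minkowski's inequality:
# ||x + y||_p <= ||x||_p + ||y||_p for p >= 1.
import random

def lp_norm(x, p):
    return sum(abs(t) ** p for t in x) ** (1.0 / p)

random.seed(2)
for _ in range(1_000):
    x = [random.uniform(-5, 5) for _ in range(6)]
    y = [random.uniform(-5, 5) for _ in range(6)]
    s = [a + b for a, b in zip(x, y)]
    for p in (1, 2, 4.5):
        assert lp_norm(s, p) <= lp_norm(x, p) + lp_norm(y, p) + 1e-9
print("Minkowski held on all samples")
```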
The dual of the Lp norm is the Lq norm, where q is the Hölder conjugate of p, defined by 1/p + 1/q = 1 (so for p > 1, q = p/(p - 1)). The dual norm is used in optimization problems and in the definition of weak solutions in partial differential equations.
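[Editor's note] The Lp-Lq duality rests on Hölder's inequality; as a final illustration (my own, not from the thread), this sketch samples random p > 1, forms the conjugate q = p/(p − 1), and checks that the inner product is bounded by the product of the dual norms.

```python
# Numeric check of Hoelder's inequality: with 1/p + 1/q = 1,
# |sum_k x_k * y_k| <= ||x||_p * ||y||_q.
import random

def lp_norm(x, p):
    return sum(abs(t) ** p for t in x) ** (1.0 / p)

random.seed(3)
for _ in range(1_000):
    p = random.uniform(1.1, 5)
    q = p / (p - 1)  # conjugate exponent, so 1/p + 1/q = 1
    x = [random.uniform(-3, 3) for _ in range(5)]
    y = [random.uniform(-3, 3) for _ in range(5)]
    inner = abs(sum(a * b for a, b in zip(x, y)))
    assert inner <= lp_norm(x, p) * lp_norm(y, q) + 1e-9
print("Hoelder held on all samples")
```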