What is an Extreme Point in a Convex Set?

mikepol
Hi,

I've been trying to show that the matrices that preserve the L1 norm (the sum of the absolute values of the coordinates) are exactly the complex permutation matrices. A complex permutation matrix is defined as a column permutation of a complex diagonal matrix whose diagonal entries all have magnitude 1. (This problem is from Matrix Analysis by Horn and Johnson.)

I've spent quite a lot of time trying to solve this, but without much success. It is easy to show that a complex permutation matrix keeps the L1 norm of a vector the same, but I can't show the converse, that every matrix preserving the L1 norm must be a complex permutation matrix. The only thing I came up with is to take a vector x = (1-\epsilon, \epsilon) and just say that the L1 norm of Bx can't be linear in \epsilon for 0 \le \epsilon \le 1. But that is not really a proof.
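For what it's worth, a quick NumPy sanity check of the easy direction looks like this (the construction B = PD and the variable names are just my own notation for the snippet, not from the book):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Build a complex permutation matrix B = P @ D:
# P permutes the coordinates, D is diagonal with unit-modulus entries.
perm = rng.permutation(n)
P = np.eye(n)[perm]                                  # permutation matrix
phases = np.exp(1j * rng.uniform(0, 2 * np.pi, n))
D = np.diag(phases)                                  # |d_k| = 1 for every k
B = P @ D

# Check ||Bx||_1 == ||x||_1 on random complex vectors.
for _ in range(1000):
    x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    assert np.isclose(np.abs(B @ x).sum(), np.abs(x).sum())
print("L1 norm preserved on all samples")
```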

It is also easy to show, using the scalar product, that the set of such transformations for the L2 norm is exactly the unitary matrices. If I could somehow show that a matrix preserving the L1 norm must also preserve the L2 norm, then the L1-norm-preserving matrices would be a subset of the L2-norm-preserving ones. From there it is fairly easy to show that such a unitary matrix must be a complex permutation matrix. But I can't seem to show that an L1-preserving matrix must be L2-preserving.
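For comparison, the containment would have to be proper, since a general unitary certainly does not preserve the L1 norm. A quick check with a 2x2 Hadamard-type unitary (my own example, not from the book):

```python
import numpy as np

# A unitary matrix that is not a complex permutation matrix.
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
print(np.allclose(H.conj().T @ H, np.eye(2)))   # True: H preserves the L2 norm

x = np.array([1.0, 0.0])
print(np.abs(x).sum(), np.abs(H @ x).sum())     # 1.0 vs ~1.414: the L1 norm is not preserved
```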

Please, if someone could give me some hints or suggestions, it would be very appreciated!
Thanks.
 
Preserves the norm (and linear) -> preserves distances, since ||Bx - By|| = ||B(x-y)|| = ||x-y||.

Therefore, it maps the extreme points of the unit ball to extreme points of the unit ball.

In the l_1 norm, those extreme points are just a scalar of modulus 1 times a basis vector. That should be a start.
 
g_edgar, thanks for your answer.

I'm not sure what you mean by the "extreme points" of a unit ball. For example, in real 2D space, the unit ball for the L1 norm is a rhombus. You are saying that its extreme points are plus/minus the x and y unit vectors. But I don't see why the image of a unit vector along an axis under a linear isometry should also be a unit vector along an axis.

Thanks.
 
Ok I think I finally got it. I will post my solution later today.
 
g_edgar, thanks for your hints, I think I have a solution now. By extreme points you mean the points on the L1 unit ball that maximize the L2 norm. Using that, and drawing a rhombus, I finally saw how to produce a contradiction: if the transformation is not a complex permutation matrix, then two opposite extreme points map into the same quadrant. Below is first a quick numerical illustration of that picture, and then the proof, which just puts the idea into algebra.
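Here is the illustration, in the real 2D case (nothing rigorous, just NumPy): on the boundary of the L1 unit ball, the L2 norm is largest at the four vertices \pm e_1, \pm e_2 and smallest at the edge midpoints.

```python
import numpy as np

# Sample the boundary of the 2D L1 unit ball and look at the L2 norms of the samples.
t = np.linspace(0, 2 * np.pi, 100001)
pts = np.column_stack([np.cos(t), np.sin(t)])
pts /= np.abs(pts).sum(axis=1, keepdims=True)   # scale each ray onto the L1 unit sphere

l2 = np.linalg.norm(pts, axis=1)
print(l2.max(), pts[np.argmax(l2)])   # ~1.0, attained at a vertex such as (1, 0)
print(l2.min(), pts[np.argmin(l2)])   # ~0.707, attained at an edge midpoint such as (0.5, 0.5)
```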

First, by regarding the complex plane as real 2D space and using the Cauchy-Schwarz inequality, it is possible to prove that the triangle inequality
|a+b| \le |a| + |b|
becomes an equality, for a fixed non-zero complex number a and a complex number b of fixed non-zero magnitude, if and only if
b = \beta a
for \beta real and strictly positive.
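Here is a small NumPy experiment consistent with that statement, checking both directions by construction (not a replacement for the Cauchy-Schwarz argument):

```python
import numpy as np

rng = np.random.default_rng(1)

for _ in range(1000):
    a = rng.standard_normal() + 1j * rng.standard_normal()

    # If b is a positive real multiple of a, the triangle inequality is an equality.
    beta = rng.uniform(0.1, 10.0)
    b = beta * a
    assert np.isclose(abs(a + b), abs(a) + abs(b))

    # If b = c*a with c kept away from the positive real axis, the inequality is strict.
    c = rng.uniform(-2.0, 2.0) + 1j * rng.choice([-1, 1]) * rng.uniform(0.1, 2.0)
    b = c * a
    assert abs(a + b) < abs(a) + abs(b)

print("equality held exactly when b was a positive real multiple of a (on these samples)")
```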

Now suppose that B is an isometry for the L1 norm, so that for any x:
||Bx|| = ||x||
B must be invertible (it is injective, and it is square). If each column of B contains exactly one nonzero element, those elements must occur in different rows (by invertibility) and have magnitude 1 (since ||B e_k|| = ||e_k|| = 1), so B is a complex permutation matrix. Assume then that the first column of B contains at least two nonzero elements, at rows i and j, and that the second column contains a nonzero element at row i (such a pair of columns can always be found, after relabeling if necessary, when B is not a complex permutation matrix: otherwise rows i and j would be proportional, contradicting invertibility).

So B e_1 = x with x_i \ne 0 and x_j \ne 0, and B e_2 = y with y_i \ne 0; note that ||x|| = ||y|| = 1 since B is an isometry. Then
||B(e_1 + e_2)|| = ||x + y|| = \sum_k |x_k + y_k| = ||e_1 + e_2|| = 2
Since \sum_k (|x_k| + |y_k|) = ||x|| + ||y|| = 2 as well, the triangle inequality must hold with equality in every coordinate, and in particular at coordinate i, where the lemma above gives
y_i = \beta x_i
for some real \beta > 0.

Now consider B(-e_2) = z, where by the assumption above and linearity z_i = -y_i \ne 0, and ||z|| = 1. Similarly:
||B(e_1 - e_2)|| = ||x + z|| = \sum_k |x_k + z_k| = ||e_1 - e_2|| = |1| + |-1| = 2
and again the triangle inequality must hold with equality in every coordinate, so at coordinate i
z_i = \gamma x_i
for some real \gamma > 0.

But by linearity of B, z = B(-e_2) = -B(e_2) = -y, so \gamma x_i = z_i = -y_i = -\beta x_i. Since x_i \ne 0, this forces \gamma = -\beta < 0, a contradiction. Therefore B must be a complex permutation matrix.

This came out a little long, but I tried to include a few steps in between to make sure I'm not missing anything, and I think it's correct. If anyone sees a problem with this proof, please post; I'd be interested in hearing your opinion.
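As an extra sanity check (not part of the proof), here is a NumPy experiment consistent with the key step: whenever two columns of B overlap in some row, B cannot preserve the L1 norms of both e_1 + e_2 and e_1 - e_2, even if each column individually has L1 norm 1.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
e1, e2 = np.eye(n)[:, 0], np.eye(n)[:, 1]

for _ in range(1000):
    # Random complex B whose first two columns overlap in every row;
    # normalize so each column has L1 norm 1 (so ||B e_k|| = 1 is not the obstruction).
    B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    B /= np.abs(B).sum(axis=0)

    total = np.abs(B @ (e1 + e2)).sum() + np.abs(B @ (e1 - e2)).sum()
    assert total < 4 - 1e-9   # strictly less than ||e1 + e2|| + ||e1 - e2|| = 4

print("no sampled B preserved the L1 norm of both e1 + e2 and e1 - e2")
```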

And again, thanks to g_edgar for the hints.
 
In general, an extreme point of a convex set C is a point c \in C such that for all a,b \in C, if c = ta+(1-t)b with 0 < t < 1, then a = b = c .
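For example, in the L1 unit ball of real 2D space (the rhombus above), the boundary point (1/2, 1/2) = (1/2)(1,0) + (1/2)(0,1) is not an extreme point, while each vertex \pm e_k is, since a vertex cannot be written as such a combination of two distinct points of the ball.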
 