Maximum inner product between two orthogonal vectors (in standard dot product)

FoldHellmuth
Hello buddies,

Here is my question. It seems simple, but the answer does not seem obvious to me.

Given two vectors \mathbf{u},\mathbf{v}:

  • They are orthogonal, \mathbf{u}^T\mathbf{v}=0, with respect to the standard dot product.
  • They have norm one, ||\mathbf{u}||=||\mathbf{v}||=1, with respect to the standard dot product.
  • Define the weighted inner product as \mathbf{u}^T\left(\begin{matrix}\lambda_1\\&\ddots\\&&\lambda_M\end{matrix}\right)\mathbf{v}, where M is the number of components. The norm induced by this inner product is also one for both vectors: \mathbf{u}^T\left(\begin{matrix}\lambda_1\\&\ddots\\&&\lambda_M\end{matrix}\right)\mathbf{u}=\mathbf{v}^T\left(\begin{matrix}\lambda_1\\&\ddots\\&&\lambda_M\end{matrix}\right)\mathbf{v}=1. Notice the dot product is the particular case where the matrix is the identity.
  • Edit: I forgot to add that \lambda_1+\cdots+\lambda_M=M. It does not make the problem more complicated; it just narrows the possible lambdas. (All four conditions are encoded in the numerical sketch below.)
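To make the conditions concrete, here is a minimal numerical sketch of the constraints and the objective (assuming numpy; M=4, the example lambdas, and the helper names feasible/objective are illustrative choices, not part of the problem):

Code:
import numpy as np

M = 4
lam = np.array([0.5, 0.8, 1.2, 1.5])   # example weights with sum(lam) == M
L = np.diag(lam)                        # the diagonal weight matrix

def feasible(u, v, tol=1e-9):
    """Check all four conditions: orthogonality and unit norm
    in both the standard and the weighted inner product."""
    return (abs(u @ v) <= tol
            and np.isclose(u @ u, 1.0) and np.isclose(v @ v, 1.0)
            and np.isclose(u @ L @ u, 1.0) and np.isclose(v @ L @ v, 1.0))

def objective(u, v):
    """The quantity to maximize: |u^T diag(lambda) v|."""
    return abs(u @ L @ v)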

What is then the maximum inner product (in absolute value) between two vectors satisfying the previous conditions? I.e.,

\max_{\mathbf{u},\mathbf{v}} \left| \mathbf{u}^T\left(\begin{matrix}\lambda_1\\&\ddots\\&&\lambda_M\end{matrix}\right)\mathbf{v} \right|

Cheers
 
Do you require that BOTH norms, the standard one and the weighted one, are 1?
 
Hawkeye18 said:
Do you require that BOTH norms, the standard one and the weighted one, are 1?

Yes.
 
Take M=2

write u=\begin{pmatrix}\cos(\alpha)\\ \sin(\alpha)\end{pmatrix};\quad v=\begin{pmatrix}-\sin(\alpha)\\ \cos(\alpha)\end{pmatrix}

The weighted norm is then
u^T\lambda u = \lambda_1 \cos^2(\alpha) + \lambda_2 \sin^2(\alpha) =1
v^T\lambda v = \lambda_1 \sin^2(\alpha) + \lambda_2 \cos^2(\alpha)=1

Summing these gives \lambda_1+\lambda_2 = 2

The difference gives (\lambda_1 - \lambda_2)(\cos^2(\alpha)-\sin^2(\alpha))=0

Either you use the standard norm (\lambda_1=\lambda_2=1), or \alpha=\pi/2 (or the 3 other quadrants) with no further restrictions on \lambda_{1,2}.

Then u^T \lambda v = \frac{1}{2}(\lambda_2 - \lambda_1)
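A quick numerical check of this result (a sketch assuming numpy; it uses \alpha=\pi/4, the angle the difference condition actually forces, see the correction below, together with an arbitrary pair satisfying \lambda_1+\lambda_2=2):

Code:
import numpy as np

a = np.pi / 4                    # angle that satisfies cos^2(a) = sin^2(a)
l1, l2 = 0.3, 1.7                # any example pair with l1 + l2 == 2
L = np.diag([l1, l2])

u = np.array([np.cos(a), np.sin(a)])
v = np.array([-np.sin(a), np.cos(a)])

print(u @ v, u @ u, v @ v)         # ~0, ~1, ~1  (orthonormal, standard)
print(u @ L @ u, v @ L @ v)        # ~1, ~1      (unit weighted norms)
print(u @ L @ v, (l2 - l1) / 2)    # 0.7, 0.7    (matches the formula)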

For M>2, find the two \lambda with the largest difference but sum 2 - but I am not entirely sure that there cannot be another, larger solution.
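One way to test this suggestion is to embed the two-component solution in the coordinates carrying the extreme weights (a sketch assuming numpy; the example weights are chosen so that the extreme pair sums to 2, which the construction requires):

Code:
import numpy as np

lam = np.array([0.2, 1.0, 1.0, 1.8])   # sum == M == 4, and min + max == 2
L = np.diag(lam)
i, j = np.argmin(lam), np.argmax(lam)

# Place the alpha = pi/4 solution in coordinates i and j, zeros elsewhere.
c = np.cos(np.pi / 4)
u = np.zeros(4); v = np.zeros(4)
u[i], u[j] = c, c
v[i], v[j] = -c, c

print(u @ v, u @ u, v @ v)                    # ~0, ~1, ~1
print(u @ L @ u, v @ L @ v)                   # ~1, ~1 (needs lam[i]+lam[j]==2)
print(abs(u @ L @ v), (lam[j] - lam[i]) / 2)  # 0.8, 0.8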

BTW, is this a homework problem?
 
M Quack said:
The weighted norm is then
u^T\lambda u = \lambda_1 \cos^2(\alpha) + \lambda_2 \sin^2(\alpha) =1
v^T\lambda v = \lambda_1 \sin^2(\alpha) + \lambda_2 \cos^2(\alpha)=1

Summing these gives \lambda_1+\lambda_2 = 2

The difference gives (\lambda_1 - \lambda_2)(\cos^2(\alpha)-\sin^2(\alpha))=0

Either you use the standard norm (\lambda_1=\lambda_2=1), or \alpha=\pi/2 (or the 3 other quadrants) with no further restrictions on \lambda_{1,2}.

I think you meant \alpha=\pi/4 (or \pi/4 + k\pi/2).
For M=2, the solution only allows these values for the lambdas. I am interested in generic M, which is less obvious.

I actually forgot to mention that \lambda_1+\cdots+\lambda_M=M, i.e. the average lambda is one.

M Quack said:
For M>2, find the two \lambda with the largest difference but sum 2 - but I am not entirely sure that there cannot be another, larger solution.

But you derived these conditions by imposing orthogonality and unit norm on vectors with two components. This probably does not carry over to bigger M.

M Quack said:
BTW, is this a homework problem?

Not at all. I am a theoretical radar engineer.
 