Why is there a matrix A that satisfies F(x,y)=<Ax,y>?

nightingale123
I'm having trouble understanding a step in a proof about bilinear forms
Let ##F:\mathbb{R}^{n}\times\mathbb{R}^{n}\to \mathbb{R}## be a bilinear functional.
##x,y\in\mathbb{R}^{n}##
##x=\sum\limits^{n}_{i=1}\,x_{i}e_{i}##
##y=\sum\limits^{n}_{j=1}\;y_{j}e_{j}##
##F(x,y)=F\left(\sum\limits^{n}_{i=1}x_{i}e_{i},\sum\limits^{n}_{j=1}y_{j}e_{j}\right)=\sum\limits^{n}_{i,j=1}F(x_{i}e_{i},y_{j}e_{j})=\sum\limits^{n}_{i,j=1}x_{i}y_{j}F(e_{i},e_{j})=\sum\limits^{n}_{i,j=1}x_{i}y_{j}a_{ji}##

Could someone explain to me how we got that
##F(e_{i},e_{j})=a_{ji}##
I couldn't find the explanation anywhere in my notebook.
Thank you
Edit: (added some stuff I missed)

##A=[a_{ij}]_{i,j=1,...,n}##

##<Ax,y>=\left<A\left(\sum\limits^{n}_{i=1}x_{i}e_{i}\right),\sum\limits^{n}_{j=1}y_{j}e_{j}\right>=\left<\sum\limits^{n}_{i=1}x_{i}Ae_{i},\sum\limits^{n}_{j=1}y_{j}e_{j}\right>=\sum\limits^{n}_{i,j=1}x_{i}y_{j}<Ae_{i},e_{j}>##

##=\sum\limits^{n}_{i,j=1}x_{i}y_{j}a_{ji}##

I understand why ##\sum\limits^{n}_{i,j=1}x_{i}y_{j}a_{ji}=<Ax,y>##,
however I don't understand why ##<Ax,y>=F(x,y)##
Second edit:
In other words, I don't understand why there exists a matrix ##A## such that ##F(x,y)=<Ax,y>##
 
##F(e_i, e_j)## is a number. Define that number to be ##a_{ji}## (the indexing is just the convention your notes chose; there is nothing to derive).
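For instance (a small made-up illustration, not from the thread): with ##n=2##, if ##F(e_1,e_1)=3##, ##F(e_1,e_2)=1##, ##F(e_2,e_1)=0## and ##F(e_2,e_2)=2##, then the convention ##a_{ji}:=F(e_i,e_j)## builds the matrix

##A=\begin{pmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\end{pmatrix}=\begin{pmatrix}F(e_1,e_1)&F(e_2,e_1)\\F(e_1,e_2)&F(e_2,e_2)\end{pmatrix}=\begin{pmatrix}3&0\\1&2\end{pmatrix}##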
 
nightingale123 said:
I'm having trouble understanding a step in a proof about bilinear forms
Let ##F:\mathbb{R}^{n}\times\mathbb{R}^{n}\to \mathbb{R}## be a bilinear functional.
##x,y\in\mathbb{R}^{n}##
##x=\sum\limits^{n}_{i=1}\,x_{i}e_{i}##
##y=\sum\limits^{n}_{j=1}\;y_{j}e_{j}##
##F(x,y)=F\left(\sum\limits^{n}_{i=1}x_{i}e_{i},\sum\limits^{n}_{j=1}y_{j}e_{j}\right)=\sum\limits^{n}_{i,j=1}F(x_{i}e_{i},y_{j}e_{j})=\sum\limits^{n}_{i,j=1}x_{i}y_{j}F(e_{i},e_{j})=\sum\limits^{n}_{i,j=1}x_{i}y_{j}a_{ji}##

Could someone explain to me how we got that
##F(e_{i},e_{j})=a_{ji}##

I assume that ##\mathbf e_k## refers to standard basis vectors, i.e.

##\mathbf I = \bigg[\begin{array}{c|c|c|c} \mathbf e_1 & \mathbf e_2 & \cdots & \mathbf e_{n} \end{array}\bigg]##

I would start really simple. Suppose we had the constraint that ##\mathbf x := \mathbf e_k## and ##\mathbf y := \mathbf e_r##.

What happens when you do ##\mathbf e_r^T \mathbf A \mathbf e_k = \mathbf y^T \mathbf A \mathbf x##? This equation "grabs" one cell from ##\mathbf A##, which you should verify by inspection.
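As a concrete check (my own small example, with ##n=2##, ##r=2##, ##k=1##):

##\mathbf e_2^T \mathbf A \mathbf e_1 = \begin{pmatrix}0&1\end{pmatrix}\begin{pmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\end{pmatrix}\begin{pmatrix}1\\0\end{pmatrix}=\begin{pmatrix}0&1\end{pmatrix}\begin{pmatrix}a_{11}\\a_{21}\end{pmatrix}=a_{21}##

so ##\mathbf e_r^T \mathbf A \mathbf e_k## picks out exactly the single entry ##a_{rk}##.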

nightingale123 said:
In other words, I don't understand why there exists a matrix ##A## such that ##F(x,y)=<Ax,y>##

From here recall that the standard basis vectors in fact form a basis, so let's loosen up our restriction and let ##\mathbf x## be anything (while keeping its dimension and field the same, of course).

##\mathbf x = \gamma_1 \mathbf e_1 + \gamma_2 \mathbf e_2 + ... + \gamma_n \mathbf e_n##

##\mathbf y^T \mathbf A \mathbf x = \mathbf y^T \mathbf A\big(\gamma_1 \mathbf e_1 + \gamma_2 \mathbf e_2 + ... + \gamma_n \mathbf e_n\big)##

now loosen up the constraint on ##\mathbf y## so it too can be anything

##\mathbf y = \eta_1 \mathbf e_1 + \eta_2 \mathbf e_2 + ... + \eta_n \mathbf e_n##

##\mathbf y^T \mathbf A \mathbf x = \big(\eta_1 \mathbf e_1 + \eta_2 \mathbf e_2 + ... + \eta_n \mathbf e_n \big)^T \mathbf A\big(\gamma_1 \mathbf e_1 + \gamma_2 \mathbf e_2 + ... + \gamma_n \mathbf e_n\big) = \sum_{j=1}^{n}\sum_{i=1}^{n} \eta_j \gamma_i a_{ji} = \sum_{j=1}^{n}\sum_{i=1}^{n} \eta_j \gamma_i F(e_{i},e_{j}) = \sum_{j=1}^{n}\sum_{i=1}^{n} y_j x_i F(e_{i},e_{j})##

If you work through that, any bilinear form you want (over the reals, in finite dimension) is just "grabbing" and scaling stuff from ##\mathbf A## and then summing them up so you get a scalar result.
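If you want to see it numerically, here is a minimal sketch (my own check, not from the thread; the variable names are mine) verifying that ##\mathbf y^T \mathbf A \mathbf x## agrees with the double sum ##\sum_{i,j} x_i y_j a_{ji}## for random data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))   # arbitrary matrix defining F(x, y) = <Ax, y>
x = rng.standard_normal(n)
y = rng.standard_normal(n)

# <Ax, y> over the reals is just y^T (A x)
inner_product = y @ (A @ x)

# the expanded double sum  sum_{i,j} x_i * y_j * a_{ji}
double_sum = sum(x[i] * y[j] * A[j, i] for i in range(n) for j in range(n))

assert np.isclose(inner_product, double_sum)
```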
 
nightingale123 said:
I'm having trouble understanding a step in a proof about bilinear forms
Let ##F:\mathbb{R}^{n}\times\mathbb{R}^{n}\to \mathbb{R}## be a bilinear functional.
##x,y\in\mathbb{R}^{n}##
##x=\sum\limits^{n}_{i=1}\,x_{i}e_{i}##
##y=\sum\limits^{n}_{j=1}\;y_{j}e_{j}##
##F(x,y)=F\left(\sum\limits^{n}_{i=1}x_{i}e_{i},\sum\limits^{n}_{j=1}y_{j}e_{j}\right)=\sum\limits^{n}_{i,j=1}F(x_{i}e_{i},y_{j}e_{j})=\sum\limits^{n}_{i,j=1}x_{i}y_{j}F(e_{i},e_{j})=\sum\limits^{n}_{i,j=1}x_{i}y_{j}a_{ji}##

Could someone explain to me how we got that
##F(e_{i},e_{j})=a_{ji}##
I couldn't find the explanation anywhere in my notebook.
Thank you
Edit: (added some stuff I missed)

##A=[a_{ij}]_{i,j=1,...,n}##

##<Ax,y>=\left<A\left(\sum\limits^{n}_{i=1}x_{i}e_{i}\right),\sum\limits^{n}_{j=1}y_{j}e_{j}\right>=\left<\sum\limits^{n}_{i=1}x_{i}Ae_{i},\sum\limits^{n}_{j=1}y_{j}e_{j}\right>=\sum\limits^{n}_{i,j=1}x_{i}y_{j}<Ae_{i},e_{j}>##

##=\sum\limits^{n}_{i,j=1}x_{i}y_{j}a_{ji}##

I understand why ##\sum\limits^{n}_{i,j=1}x_{i}y_{j}a_{ji}=<Ax,y>##,
however I don't understand why ##<Ax,y>=F(x,y)##
Second edit:
In other words, I don't understand why there exists a matrix ##A## such that ##F(x,y)=<Ax,y>##

If ##A## is linear, then ##<Ax,y>## is bilinear; conversely, the computation above shows that every bilinear form is represented by a matrix in this way (and restricting to ##y=x## gives a quadratic form). Is that your question?
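Spelling out the first claim (a one-line check, using only linearity of ##A## and of the inner product in each slot): for scalars ##\alpha,\beta##,

##<A(\alpha x_1+\beta x_2),y> = <\alpha Ax_1+\beta Ax_2,y> = \alpha<Ax_1,y>+\beta<Ax_2,y>##

and linearity in ##y## holds the same way, so ##(x,y)\mapsto <Ax,y>## is indeed bilinear.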
 
follow-up thought:

my posting had ##\mathbf y^T \mathbf {Ax} = \mathbf y^T\big( \mathbf {Ax}\big) = <\mathbf y, \mathbf{Ax}>##

though OP actually asked for

##< \mathbf{Ax}, \mathbf y>##

inner products over the reals are symmetric, so

##< \mathbf{Ax}, \mathbf y> = <\mathbf y, \mathbf{Ax}>##
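One way to see this directly (a small aside of mine): a ##1\times 1## matrix equals its own transpose, so for any ##\mathbf u,\mathbf v\in\mathbb{R}^n##

##<\mathbf u,\mathbf v> = \mathbf v^T\mathbf u = (\mathbf v^T\mathbf u)^T = \mathbf u^T\mathbf v = <\mathbf v,\mathbf u>##

and taking ##\mathbf u = \mathbf{Ax}##, ##\mathbf v = \mathbf y## gives exactly the identity above.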

(over the complex numbers inner products are instead conjugate-symmetric: ##<\mathbf x, \mathbf y> = \overline{<\mathbf y, \mathbf x>}##)

Note that Chapter 7 of Linear Algebra Done Wrong has a very good discussion of bilinear and quadratic forms; it is freely available from the author here:

https://www.math.brown.edu/~treil/papers/LADW/book.pdf
 