Why is there a Matrix A that satisfies F(x,y)=<Ax,y>?

In summary: yes. Every bilinear functional on ##\mathbb{R}^{n}## is represented by a matrix ##A##, defined entrywise by ##a_{ji}:=F(e_{i},e_{j})##; expanding ##x## and ##y## in the standard basis then shows ##F(x,y)=<Ax,y>##.
  • #1
nightingale123
I'm having trouble understanding a step in a proof about bilinear forms
Let ##F:\,\mathbb{R}^{n}\times\mathbb{R}^{n}\to \mathbb{R}## be a bilinear functional.
##x,y\in\mathbb{R}^{n}##
##x=\sum\limits_{i=1}^{n}\,x_{i}e_{i}##
##y=\sum\limits_{j=1}^{n}\;y_{j}e_{j}##
##F(x,y)=F\left(\sum\limits_{i=1}^{n}x_{i}e_{i},\sum\limits_{j=1}^{n}y_{j}e_{j}\right)=\sum\limits_{i,j=1}^{n}F(x_{i}e_{i},y_{j}e_{j})=\sum\limits_{i,j=1}^{n}x_{i}y_{j}F(e_{i},e_{j})=\sum\limits_{i,j=1}^{n}x_{i}y_{j}a_{ji}##

Could someone explain to me how we got that
##F(e_{i},e_{j})=a_{ji}##
I couldn't find the explanation anywhere in my notebook
thank you
Edit: ( added some stuff I missed )

##A=[a_{ij}]_{i,j=1,...,n}##

##<Ax,y>=\left<A\left(\sum\limits_{i=1}^{n}x_{i}e_{i}\right),\sum\limits_{j=1}^{n}y_{j}e_{j}\right>=\left<\sum\limits_{i=1}^{n}x_{i}Ae_{i},\sum\limits_{j=1}^{n}y_{j}e_{j}\right>=\sum\limits_{i,j=1}^{n}x_{i}y_{j}<Ae_{i},e_{j}>=\sum\limits_{i,j=1}^{n}x_{i}y_{j}a_{ji}##

I understand why ##\sum\limits_{i,j=1}^{n}x_{i}y_{j}a_{ji}=<Ax,y>##,
however I don't understand why ##<Ax,y>=F(x,y)##
Edit(Edit).
In other words I don't understand why there exists a matrix ##A## such that ##F(x,y)=<Ax,y>##
 
Last edited:
  • #2
##F(e_i, e_j)## is a number. Define that number to be ##a_{ji}##.
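To see it concretely with a toy example of my own (not from the thread): on ##\mathbb{R}^{2}## take ##F(x,y)=x_{1}y_{1}+2x_{1}y_{2}+3x_{2}y_{2}##. Then ##F(e_{1},e_{2})=2##, so ##a_{21}=2##, and collecting all four values gives ##A=\begin{pmatrix}1 & 0\\ 2 & 3\end{pmatrix}##; a direct check gives ##<Ax,y>=x_{1}y_{1}+2x_{1}y_{2}+3x_{2}y_{2}=F(x,y)##.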
 
  • #3
nightingale123 said:
I'm having trouble understanding a step in a proof about bilinear forms
Let ##F:\,\mathbb{R}^{n}\times\mathbb{R}^{n}\to \mathbb{R}## be a bilinear functional.
##x,y\in\mathbb{R}^{n}##
##x=\sum\limits_{i=1}^{n}\,x_{i}e_{i}##
##y=\sum\limits_{j=1}^{n}\;y_{j}e_{j}##
##F(x,y)=F\left(\sum\limits_{i=1}^{n}x_{i}e_{i},\sum\limits_{j=1}^{n}y_{j}e_{j}\right)=\sum\limits_{i,j=1}^{n}F(x_{i}e_{i},y_{j}e_{j})=\sum\limits_{i,j=1}^{n}x_{i}y_{j}F(e_{i},e_{j})=\sum\limits_{i,j=1}^{n}x_{i}y_{j}a_{ji}##

Could someone explain to me how we got that
##F(e_{i},e_{j})=a_{ji}##

I assume that ##\mathbf e_k## refers to standard basis vectors, i.e.

##\mathbf I =
\bigg[\begin{array}{c|c|c|c}
\mathbf e_1 & \mathbf e_2 &\cdots & \mathbf e_{n}
\end{array}\bigg]##

I would start real simple. Suppose we had the constraint that ##\mathbf x := \mathbf e_k## and ##\mathbf y := \mathbf e_r##.

What happens when you do ##\mathbf e_r^T \mathbf A \mathbf e_k = \mathbf y^T \mathbf A \mathbf x##? This equation "grabs" one cell from ##\mathbf A##, which you should verify by inspection.
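
If it helps to see the "grabbing" numerically, here is a minimal numpy sketch (my own illustration; the helper ##e## and the dimension are arbitrary, not from the thread):

```python
import numpy as np

n = 4
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))  # any n x n matrix will do

def e(k):
    """Standard basis vector e_k of R^n (1-indexed, as in the thread)."""
    v = np.zeros(n)
    v[k - 1] = 1.0
    return v

r, k = 2, 3
# e_r^T A e_k "grabs" the single entry a_{rk} of A
assert np.isclose(e(r) @ A @ e(k), A[r - 1, k - 1])
```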

nightingale123 said:
In other words I don't understand why there exists a Matrix A so that ##F(x,y)=<Ax,y>##

From here recall that the standard basis vectors in fact form a basis, so let's loosen up our restriction and let ##\mathbf x## be anything (while keeping its dimension and field the same, of course.)

##\mathbf x = \gamma_1 \mathbf e_1 + \gamma_2 \mathbf e_2 + ... + \gamma_n \mathbf e_n##

##\mathbf y^T \mathbf A \mathbf x = \mathbf y^T \mathbf A\big(\gamma_1 \mathbf e_1 + \gamma_2 \mathbf e_2 + ... + \gamma_n \mathbf e_n\big)##

now loosen up the constraint on ##\mathbf y## so it too can be anything

##\mathbf y = \eta_1 \mathbf e_1 + \eta_2 \mathbf e_2 + ... + \eta_n \mathbf e_n##

##\mathbf y^T \mathbf A \mathbf x = \big(\eta_1 \mathbf e_1 + \eta_2 \mathbf e_2 + ... + \eta_n \mathbf e_n \big)^T \mathbf A\big(\gamma_1 \mathbf e_1 + \gamma_2 \mathbf e_2 + ... + \gamma_n \mathbf e_n\big) = \sum\limits_{j=1}^{n}\sum\limits_{i=1}^{n} \eta_j \gamma_i a_{ji} = \sum\limits_{j=1}^{n}\sum\limits_{i=1}^{n} \eta_j \gamma_i F(e_{i},e_{j}) = \sum\limits_{j=1}^{n}\sum\limits_{i=1}^{n} y_j x_i F(e_{i},e_{j})##

If you work through that, any bilinear form you want (over the reals, in finite dimension) is just "grabbing" and scaling entries of ##\mathbf A## and then summing them up, so you get a scalar result.
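To make that concrete, here is a hedged numpy sketch: I pick an arbitrary bilinear ##F## of my own (not from the thread), build ##A## entrywise from ##a_{ji}=F(e_{i},e_{j})## as above, and check that ##\mathbf y^T\mathbf A\mathbf x## reproduces ##F(x,y)##:

```python
import numpy as np

n = 3

def F(x, y):
    """An arbitrary (deliberately non-symmetric) bilinear functional on R^3."""
    return x[0] * y[1] - 2.0 * x[2] * y[0] + x[1] * y[1]

# Build A entrywise: A[j, i] = F(e_i, e_j), the thread's a_{ji} convention.
I = np.eye(n)
A = np.array([[F(I[i], I[j]) for i in range(n)] for j in range(n)])

rng = np.random.default_rng(1)
x, y = rng.standard_normal(n), rng.standard_normal(n)
# Over the reals <Ax, y> = y^T A x, and it agrees with F(x, y).
assert np.isclose(y @ A @ x, F(x, y))
```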
 
  • #4
nightingale123 said:
In other words I don't understand why there exists a matrix ##A## such that ##F(x,y)=<Ax,y>##

If ##A## is a matrix, then ##<Ax,y>## is automatically bilinear in ##x## and ##y##. Conversely, given a bilinear ##F##, define ##A## entrywise by ##a_{ji}:=F(e_i,e_j)##; the computation in your post then shows ##<Ax,y>=F(x,y)## for all ##x,y##. Is that your question?
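
As a quick numerical sanity check of the forward direction (an illustrative snippet with arbitrary values, not from the thread), ##<Ax,y>## is linear in each slot for any matrix ##A##:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))  # any matrix
x1, x2, y = (rng.standard_normal(3) for _ in range(3))
a, b = 1.5, -0.5

# <A(a*x1 + b*x2), y> = a*<A x1, y> + b*<A x2, y>; linearity in the
# second slot checks the same way.
lhs = (A @ (a * x1 + b * x2)) @ y
rhs = a * (A @ x1) @ y + b * (A @ x2) @ y
assert np.isclose(lhs, rhs)
```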
 
  • #5
follow-up thought:

my posting had ##\mathbf y^T \mathbf {Ax} = \mathbf y^T\big( \mathbf {Ax}\big) = <\mathbf y, \mathbf{Ax}>##

though OP actually asked for

##< \mathbf{Ax}, \mathbf y>##

inner products over Reals are symmetric, so

##< \mathbf{Ax}, \mathbf y> = <\mathbf y, \mathbf{Ax}>##

(over the complex numbers the inner product is instead conjugate-symmetric: ##<\mathbf u,\mathbf v> = \overline{<\mathbf v,\mathbf u>}##)

note that Chapter 7 of Linear Algebra Done Wrong has a very good discussion of bilinear and quadratic forms. It is freely available from the author here:

https://www.math.brown.edu/~treil/papers/LADW/book.pdf
 

1. What is the meaning of "matrix A satisfies F(x,y)=<Ax,y>"?

The statement means that the matrix A represents the bilinear form F: for every pair of vectors x and y, the number ##F(x,y)## equals the inner product of ##Ax## with ##y##. Evaluating the form is the same as applying A to x and then dotting with y.

2. Why is it important to have a matrix that satisfies F(x,y)=<Ax,y>?

Representing the bilinear form by a matrix turns questions about F into concrete matrix computations: evaluating F becomes a matrix-vector product followed by a dot product, and properties of F such as symmetry, rank, and definiteness can be read off from A.

3. How do we know if a matrix satisfies F(x,y)=<Ax,y>?

For a given bilinear form F on ##\mathbb{R}^{n}## there is exactly one such n×n matrix, because its entries are forced: ##a_{ji}=F(e_{i},e_{j})##. So it suffices to check A against F on the standard basis vectors. No symmetry or positivity is required for this; a symmetric A corresponds to a symmetric form, and a symmetric positive definite A corresponds to an inner product.

4. What is the significance of a symmetric positive definite matrix?

When A is symmetric positive definite, ##<Ax,y>## defines an inner product, which brings along the usual geometry of lengths, angles, and orthogonality. Such matrices are also central to algorithms in data analysis, signal processing, and optimization.

5. Can any matrix be a symmetric positive definite matrix?

No. To be symmetric positive definite a matrix must be square, equal to its own transpose, and have strictly positive eigenvalues. Any square matrix, however, represents some bilinear form via ##F(x,y)=<Ax,y>##; it is only the inner-product case that requires symmetric positive definiteness.
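
Following the corrected answers to questions 3–5, here is a small sketch (illustrative, not from the thread) of the standard test, symmetry plus strictly positive eigenvalues, which decides whether ##<Ax,y>## defines an inner product:

```python
import numpy as np

def is_spd(A, tol=1e-10):
    """True iff A is symmetric positive definite, i.e. <Ax, y> is an inner product."""
    if not np.allclose(A, A.T, atol=tol):
        return False
    return bool(np.all(np.linalg.eigvalsh(A) > tol))

print(is_spd(np.array([[2.0, 1.0], [1.0, 2.0]])))   # True: eigenvalues 1 and 3
print(is_spd(np.array([[0.0, 1.0], [-1.0, 0.0]])))  # False: not even symmetric
```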
