Matrix representation of an operator with a change of basis

In summary: this thread discusses the second line in (5.185). The original poster attempted to derive it using the closure relation (5.63a) and the definition of the inner product in terms of the Dirac delta function; other posters supplied the missing steps. It was also noted that full rigour for infinite dimensions and the continuous case in quantum mechanics requires advanced functional analysis, so beginners usually handle them at an intuitive level.
  • #1
Happiness
Why isn't the second line in (5.185) ##\sum_k\sum_l<\phi_m\,|\,A\,|\,\psi_k><\psi_k\,|\,\psi_l><\psi_l\,|\,\phi_n>##?

[Attached screenshot: equation (5.185) from the book]


My steps are as follows:

##<\phi_m\,|\,A\,|\,\phi_n>##
##=\int\phi_m^*(r)\,A\,\phi_n(r)\,dr##
##=\int\phi_m^*(r)\,A\,\int\delta(r-r')\phi_n(r')\,dr'dr##

By the closure relation (5.63a) below,
##=\int\phi_m^*(r)\,A\big(\int\sum_l\psi_l(r)\psi_l^*(r')\phi_n(r')\,dr'\big)dr##
##=\int\phi_m^*(r)\,A\big(\sum_l\psi_l(r)\int\psi_l^*(r')\phi_n(r')\,dr'\big)dr##

Since ##A## is a linear operator,
##=\int\phi_m^*(r)\sum_lA\big(\psi_l(r)\int\psi_l^*(r')\phi_n(r')\,dr'\big)dr##
##=\sum_l\big[\int\phi_m^*(r)\,A\big(\psi_l(r)\int\psi_l^*(r')\phi_n(r')\,dr'\big)dr\big]##

Since ##A## acts on ##r## and not ##r'##,
##=\sum_l\big(\int\phi_m^*(r)\,A\,\psi_l(r)dr\,\,\times\,\,\int\psi_l^*(r')\phi_n(r')\,dr'\big)##
##=\sum_l\big[\int\phi_m^*(r)\,A\big(\int\delta(r-r'')\psi_l(r'')dr''\big)dr\,\,\times\,\,\int\psi_l^*(r')\phi_n(r')\,dr'\big]##

By the closure relation (5.63a) below,
##=\sum_l\big[\int\phi_m^*(r)\,A\big(\int\sum_k\psi_k(r)\psi_k^*(r'')\psi_l(r'')dr''\big)dr\,\,\times\,\,\int\psi_l^*(r')\phi_n(r')\,dr'\big]##
##=\sum_l\big[\int\phi_m^*(r)\,A\big(\sum_k\psi_k(r)\int\psi_k^*(r'')\psi_l(r'')dr''\big)dr\,\,\times\,\,\int\psi_l^*(r')\phi_n(r')\,dr'\big]##

Since ##A## is a linear operator,
##=\sum_l\big[\int\phi_m^*(r)\sum_kA\big(\psi_k(r)\int\psi_k^*(r'')\psi_l(r'')dr''\big)dr\,\,\times\,\,\int\psi_l^*(r')\phi_n(r')\,dr'\big]##
##=\sum_k\sum_l\big[\int\phi_m^*(r)\,A\big(\psi_k(r)\int\psi_k^*(r'')\psi_l(r'')dr''\big)dr\,\,\times\,\,\int\psi_l^*(r')\phi_n(r')\,dr'\big]##

Since ##A## acts on ##r## and not ##r''##,
##=\sum_k\sum_l\big(\int\phi_m^*(r)\,A\,\psi_k(r)\,dr\,\,\times\,\,\int\psi_k^*(r'')\psi_l(r'')\,dr''\,\,\times\,\,\int\psi_l^*(r')\phi_n(r')\,dr'\big)##
##=\sum_k\sum_l<\phi_m\,|\,A\,|\,\psi_k><\psi_k\,|\,\psi_l><\psi_l\,|\,\phi_n>##
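In finite dimensions this chain of equalities is easy to check numerically. A minimal sketch (my own illustration, not from the book, assuming NumPy), with two randomly generated orthonormal bases stored as columns of unitary matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Two random orthonormal bases (columns of unitary matrices) and a random operator A.
phi, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
psi, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

def elem(u, M, v):
    """Matrix element <u|M|v>."""
    return u.conj() @ M @ v

m, nn = 1, 2
direct = elem(phi[:, m], A, phi[:, nn])

# The double sum: sum_k sum_l <phi_m|A|psi_k><psi_k|psi_l><psi_l|phi_n>
double = sum(elem(phi[:, m], A, psi[:, k])
             * (psi[:, k].conj() @ psi[:, l])
             * (psi[:, l].conj() @ phi[:, nn])
             for k in range(n) for l in range(n))

# Since <psi_k|psi_l> = delta_kl, the double sum collapses to a single sum.
single = sum(elem(phi[:, m], A, psi[:, l]) * (psi[:, l].conj() @ phi[:, nn])
             for l in range(n))

print(np.allclose(direct, double), np.allclose(double, single))
```

Both checks come out True: the double-sum expression is correct, but the inner factor ##<\psi_k\,|\,\psi_l>\,=\delta_{kl}## makes one of the two sums redundant, which is the point raised in the replies below.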

Derivation of the closure relation (5.63a):
[Attached screenshot: the book's derivation of the closure relation (5.63a)]
 
  • #2
Happiness said:
Why isn't the second line in (5.185) ##\sum_k\sum_l<\phi_m\,|\,A\,|\,\psi_k><\psi_k\,|\,\psi_l><\psi_l\,|\,\phi_n>##?
You can write it like that, except it is not useful for finding the transformation law. You need to link the matrix element [itex]\langle \phi_{m}|A|\phi_{n}\rangle[/itex] to [itex]\langle \psi_{i}|A|\psi_{j}\rangle[/itex].
 
  • #3
samalkhaiat said:
You can write it like that, except it is not useful for finding the transformation law. You need to link the matrix element [itex]\langle \phi_{m}|A|\phi_{n}\rangle[/itex] to [itex]\langle \psi_{i}|A|\psi_{j}\rangle[/itex].

I believe the book gets the second line by using [itex]\sum_k\,|\psi_{k}\rangle\langle \psi_{k}| = I[/itex], but how can we prove this in terms of the Dirac delta function [itex]\delta(r-r')[/itex] and the definition of the inner product: [itex]\langle \phi\,|\,A\,|\,\psi\rangle[/itex] = [itex]\int\phi^*(r)\,A\,\psi(r)\,dr[/itex]?

My steps above seem to suggest that the second line cannot be written in any other way in which ##A## acts on something else.
 
  • #4
Well, your expression:
[itex]\sum_k\sum_l\langle\phi_m\,|\,A\,|\,\psi_k\rangle\langle\psi_k\,|\,\psi_l\rangle\langle\psi_l\,|\,\phi_n\rangle[/itex]
simplifies to just
[itex]\sum_l\langle\phi_m\,|\,A\,|\,\psi_l\rangle \langle\psi_l\,|\,\phi_n\rangle[/itex]
since [itex]\langle\psi_k\,|\,\psi_l\rangle = \delta_{kl}[/itex].

So to get the result in the book, you need to convince yourself that:

[itex]\langle\phi_m\,|\,A\,|\,\psi_l\rangle = \sum_k \langle\phi_m\,|\psi_k\rangle \langle \psi_k|\,A\,|\,\psi_l \rangle[/itex]
 
  • #5
Happiness said:
but how can we prove this in terms of the Dirac delta function [itex]\delta(r-r')[/itex] and the definition of the inner product: [itex]\langle \phi\,|\,A\,|\,\psi\rangle[/itex] = [itex]\int\phi^*(r)\,A\,\psi(r)\,dr[/itex]?

For foundational issues in QM it's best to work in finite-dimensional spaces and later generalise to the continuous case using Rigged Hilbert spaces. You get the Dirac delta function by using <bi|bj> = δij, then letting Δ in |bi>/√Δ go to zero.

Unfortunately rigour in the continuous case requires some advanced functional analysis not suitable for the beginner and has to be done at an intuitive level.

Thanks
Bill
 
  • #6
stevendaryl said:
So to get the result in the book, you need to convince yourself that:

[itex]\langle\phi_m\,|\,A\,|\,\psi_l\rangle = \sum_k \langle\phi_m\,|\psi_k\rangle \langle \psi_k|\,A\,|\,\psi_l \rangle[/itex] -- (*)

[itex]\langle\phi_m\,|\,A\,|\,\phi_n\rangle=\sum_l\langle\phi_m\,|\,A\,|\,\psi_l\rangle \langle\psi_l\,|\,\phi_n\rangle[/itex] -- (1) (proven directly using the Dirac delta function as shown above)

I managed to prove (*), by considering [itex]\langle\phi_m\,|\,A\,|\,\psi_l\rangle=\langle\psi_l\,|\,A^\dagger\,|\,\phi_m\rangle^*[/itex] and then using (1)!

But interestingly, I can't prove (*) directly using the Dirac delta function [itex]\delta(r-r')[/itex] and the definition of the inner product: [itex]\langle \phi\,|\,A\,|\,\psi\rangle[/itex] = [itex]\int\phi^*(r)\,A\,\psi(r)\,dr[/itex].
 
  • #7
Happiness said:
I believe the book get the second line by using [itex]\sum_k\,|\psi_{k}\rangle\langle \psi_{k}| = I[/itex],

This must be the case because ∑<x|bi><bi|y> = <x|Σ|bi><bi|y> = <x|y> in finite dimensions. As I said above, you handle infinite dimensions and the continuous case at an intuitive level by taking limits of the finite case.

Thanks
Bill
 
  • #8
bhobba said:
This must be the case because ∑<x|bi><bi|y> = <x|Σ|bi><bi|y> = <x|y> in finite dimensions. As I said above, you handle infinite dimensions and the continuous case at an intuitive level by taking limits of the finite case.

Thanks
Bill

[itex]\sum_k\,|\psi_{k}\rangle\langle \psi_{k}| = I[/itex] -- (**)

I guess you are showing the following proof:
[Attached screenshot of the proof]


But this only proves (**) is true when there is no operator ##A##. When ##A## is present, the proof doesn't work because ##A## has to act on ##r##, so it will always be acting on ##\psi_n(r)## and will always remain inside the integral with respect to ##r##. But to show that (**) is true, we have to show that we may shift ##A## to be inside the integral with respect to ##r'##.
 
  • #9
Happiness said:
[itex]\langle\phi_m\,|\,A\,|\,\phi_n\rangle=\sum_l\langle\phi_m\,|\,A\,|\,\psi_l\rangle \langle\psi_l\,|\,\phi_n\rangle[/itex] -- (1) (proven directly using the Dirac delta function as shown above)

I managed to prove (*), by considering [itex]\langle\phi_m\,|\,A\,|\,\psi_l\rangle=\langle\psi_l\,|\,A^\dagger\,|\,\phi_m\rangle^*[/itex] and then using (1)!

But interestingly, I can't prove (*) directly using the Dirac delta function [itex]\delta(r-r')[/itex] and the definition of the inner product: [itex]\langle \phi\,|\,A\,|\,\psi\rangle[/itex] = [itex]\int\phi^*(r)\,A\,\psi(r)\,dr[/itex].

Hmm. It should work the same:

[itex]\langle \phi_m|A|\psi_l \rangle = \int dx \phi^*_m(x) A \psi_l(x)[/itex]

[itex]= \int dx (\int dx' \phi^*_m(x') \delta(x-x')) A \psi_l(x)[/itex]

[itex]= \int dx (\int dx' \phi^*_m(x') \sum_k \psi_k^*(x) \psi_k(x')) A \psi_l(x)[/itex]

[itex]= \sum_k (\int dx' \phi^*_m(x') \psi_k(x')) \int dx \psi_k^*(x) A \psi_l(x)[/itex]

[itex]= \sum_k \langle \phi_m| \psi_k\rangle \langle \psi_k |A| \psi_l\rangle[/itex]
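The identity (*) can also be checked numerically in finite dimensions. A rough sketch of my own (real vectors for simplicity, so no complex conjugation is needed), assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5

# Two random orthonormal bases (columns of orthogonal matrices) and a random operator A.
phi, _ = np.linalg.qr(rng.normal(size=(n, n)))
psi, _ = np.linalg.qr(rng.normal(size=(n, n)))
A = rng.normal(size=(n, n))

m, l = 0, 3
lhs = phi[:, m] @ A @ psi[:, l]  # <phi_m|A|psi_l>

# Resolution of the identity inserted on the left: sum_k <phi_m|psi_k><psi_k|A|psi_l>
rhs = sum((phi[:, m] @ psi[:, k]) * (psi[:, k] @ A @ psi[:, l]) for k in range(n))

print(np.allclose(lhs, rhs))
```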
 
  • #10
Happiness said:
But this only proves (**) is true when there is no operator ##A##.
Think in terms of finite-dimensional spaces, then generalise, intuitively if necessary, to infinite dimensions; otherwise you will for sure run into problems.

In the finite-dimensional case <x|Σ|bi><bi|y> = <x|y> implies Σ|bi><bi| = I, as can be rigorously proven very easily. But I will let you think about it.

Thanks
Bill
 
  • #11
Really, all this stuff is so much more straightforward using the Dirac notation and the rule:

[itex]\sum_k |\psi_k \rangle\langle \psi_k| = I[/itex]

Of course, the ease is due to the fact that you're hiding a lot of math involved in making sense of these manipulations.
 
  • #12
stevendaryl said:
Of course, the ease is due to the fact that you're hiding a lot of math involved in making sense of these manipulations.

Exactly.
http://arxiv.org/pdf/quant-ph/9907069.pdf

To the OP if you want full rigour you can have it:
http://artssciences.lamar.edu/_files/documents/physics/webdis.pdf

But as you can see it's very advanced.

Softly softly at first. Do it by intuition and work your way up to full rigour if that's your wont. It's exactly what's done in calculus: you intuitively understand calculus before doing analysis. Same here.

I fell into this trap and did a sojourn into full-blown Rigged Hilbert Spaces before understanding the physics. It's the wrong way to do it; I now wish I had done it the other way around. Don't make my mistake.

That said, I believe coming to grips with distribution theory now will pay off handsomely in understanding many areas, e.g. Fourier transforms:
https://www.amazon.com/dp/0521558905/?tag=pfamazon01-20

Thanks
Bill
 
  • #13
Happiness said:
I believe the book get the second line by using [itex]\sum_k\,|\psi_{k}\rangle\langle \psi_{k}| = I[/itex], but how can we prove this in terms of the Dirac delta function [itex]\delta(r-r')[/itex] and the definition of the inner product: [itex]\langle \phi\,|\,A\,|\,\psi\rangle[/itex] = [itex]\int\phi^*(r)\,A\,\psi(r)\,dr[/itex]?

My steps above seem to suggest that the second line cannot be written in any other way in which ##A## acts on something else.

Use
[tex]\sum_{i}\psi_{i}^{*}(x)\psi_{i}(\bar{x}) = \delta (x-\bar{x}) ,[/tex]
to write
[tex]\phi_{m}^{*}(x) = \sum_{i} \int d \bar{x} \ \psi_{i}^{*}(x) \psi_{i}(\bar{x}) \phi_{m}^{*}(\bar{x}) ,[/tex]
[tex]\phi_{n}(x) = \sum_{j} \int dy \ \psi_{j}^{*}(y) \psi_{j}(x) \phi_{n}(y) .[/tex]
Now, substitute these in
[tex]\langle \phi_{m}|A|\phi_{n}\rangle = \int dx \phi_{m}^{*}(x) A \phi_{n}(x) ,[/tex] and rearrange the factors
[tex]\langle \phi_{m}|A|\phi_{n}\rangle = \sum_{i,j}\left( \int d\bar{x} \phi_{m}^{*}(\bar{x}) \psi_{i}(\bar{x}) \right) \left( \int dx \psi_{i}^{*}(x) A \psi_{j}(x) \right) \left( \int dy \psi_{j}^{*}(y) \phi_{n}(y) \right) .[/tex]
This, you can write as
[tex]\langle \phi_{m}|A|\phi_{n}\rangle = \sum_{i,j} \langle \phi_{m}|\psi_{i}\rangle \langle \psi_{i}|A| \psi_{j}\rangle \langle \psi_{j}|\phi_{n}\rangle .[/tex]
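In matrix language this final line is the similarity transformation A_phi = S A_psi S†, where S is the overlap matrix with entries S[m, i] = <phi_m|psi_i>. A quick numerical check of my own (assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4

# Random orthonormal bases as columns of unitary matrices, and a random operator A.
phi, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
psi, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

A_phi = phi.conj().T @ A @ phi  # matrix elements <phi_m|A|phi_n>
A_psi = psi.conj().T @ A @ psi  # matrix elements <psi_i|A|psi_j>
S = phi.conj().T @ psi          # overlap matrix S[m, i] = <phi_m|psi_i>

# <phi_m|A|phi_n> = sum_ij <phi_m|psi_i><psi_i|A|psi_j><psi_j|phi_n>
print(np.allclose(A_phi, S @ A_psi @ S.conj().T))
```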
 
  • #14
bhobba said:
In the finite dimensional case <x|Σ|bi><bi||y> = <x|y> implies Σ|bi><bi| = 1 as can be rigorously proven very easily. But I will let you think about it.

Thanks
Bill

Does the proof you have in mind use matrices and vectors instead of the Dirac delta function and integration?

I figured out we can prove ##\sum|\,b_i><b_i\,| = I## using the matrices-and-vectors notation, which is permitted since the eigenvectors in a complete set can be made orthonormal. We then choose a basis of eigenvectors such that each member [itex]b_i[/itex] is represented by a vector that has exactly one 1 entry and the rest of the entries 0, e.g., [itex]b_2=(0, 1, 0, ... , 0)[/itex].

Next consider
[tex]<\phi\,|\,\psi>\,\,= \begin{pmatrix}\phi_1&\phi_2&...&\phi_n\end{pmatrix}\begin{pmatrix}\psi_1\\\psi_2\\\vdots\\\psi_n\end{pmatrix}\\
=\phi_1\psi_1+\phi_2\psi_2+...+\phi_n\psi_n[/tex]

In our specially selected basis [itex]\{b_i\}[/itex], the above is easily shown to be
[tex]=\sum_{i=1}^n<\phi\,|\,b_i><b_i\,|\,\psi>[/tex]

Using the associative and distributive laws of matrix multiplication, we have
[tex]=\,\,<\phi\,|\big(\sum_{i=1}^n|\,b_i><b_i\,|\big)|\,\psi>[/tex]

This suggests [itex]\sum_{i=1}^n|\,b_i><b_i\,|=I_n[/itex] where [itex]I_n[/itex] is the [itex]n[/itex]-dimensional identity matrix.

Indeed, it is easily seen that [itex]\sum_{i=1}^n|\,b_i><b_i\,|=I_n[/itex] by directly substituting the [itex]b_i[/itex]'s in our specially selected basis [itex]\{b_i\}[/itex].

Finally, [itex]\sum_{i=1}^n|\,b_i><b_i\,|=I_n[/itex] in any orthonormal basis of dimension [itex]n[/itex] because of a theorem in linear algebra (what's the name of this theorem?).
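As a quick numerical illustration of the claim (my own sketch, assuming NumPy), summing the outer products |b_i><b_i| over any orthonormal basis does give the identity matrix:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4

# Columns of an orthogonal matrix form an orthonormal basis {b_i}.
B, _ = np.linalg.qr(rng.normal(size=(n, n)))

# Sum of the outer products |b_i><b_i|.
P = sum(np.outer(B[:, i], B[:, i]) for i in range(n))
print(np.allclose(P, np.eye(n)))
```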
 
  • #15
You are on the right track but over complicating it.

<x|Σ|bi><bi|y> = <x|y>, i.e. <bj|Σ|bi><bi|bk> = I, the identity matrix, so Σ|bi><bi| = I.

The theorem you are thinking of is the Matrix Representation Theorem:
http://web.maths.unsw.edu.au/~danielch/linear12/lecture12.pdf

I however would not use matrices. (<x|Σ|bi><bi|)|y> = (<x|)|y> implies <x|Σ|bi><bi| = <x| by the definition of bras. This means Σ|bi><bi| = I.

Once that's done, that little theorem you refer to (the Matrix Representation Theorem, which took two slides to prove in the link above) is easy:
A = Σ|bi><bi| A Σ|bj><bj| = ΣΣ <bi|A|bj> |bi><bj|. Hence if <bi|A|bj> is the identity matrix, A must be the identity operator. See the power of the Dirac notation: things that are more complex and require a bit of thinking in linear algebra are rather trivial using it.
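The reconstruction A = ΣΣ <bi|A|bj> |bi><bj| can also be verified numerically. A small sketch of my own (real matrices, assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4

B, _ = np.linalg.qr(rng.normal(size=(n, n)))  # orthonormal basis {b_i} as columns
A = rng.normal(size=(n, n))                   # arbitrary operator

# A = sum_ij <b_i|A|b_j> |b_i><b_j|
recon = sum((B[:, i] @ A @ B[:, j]) * np.outer(B[:, i], B[:, j])
            for i in range(n) for j in range(n))
print(np.allclose(recon, A))
```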

Personally I believe all linear algebra should be done that way.

Thanks
Bill
 
  • #16
bhobba said:
You are on the right track but over complicating it.

My last proof essentially just substitutes the bi's into Σ|bi><bi| to get I, and hands the rest of the work over to a theorem in linear algebra. I don't see how it can get any simpler.

bhobba said:
<x|Σ|bi><bi|y> = <x|y>, i.e. <bj|Σ|bi><bi|bk> = I, the identity matrix, so Σ|bi><bi| = I.

I can hardly understand what you are trying to show. Why is <x|Σ|bi><bi|y> = <x|y>? And how does it imply <bj|Σ|bi><bi|bk> = I? And how does this imply Σ|bi><bi| = I?

bhobba said:
I however would not use matrices. (<x|Σ|bi><bi|)|y> = (<x|)|y> implies <x|Σ|bi><bi| = <x| by the definition of bras. This means Σ|bi><bi| = I.

Are your bras and kets shorthands for functions or vectors? It seems like I have correctly guessed you were thinking of them in terms of vectors when you said I was on the right track, but now you say you won't use matrices and this is very confusing! (Vectors are matrices.)

Are you using a cancellation law for that implication part? Do we have such cancellation law?

bhobba said:
A = Σ|bi><bi|AΣ|bj><bj| = ΣΣ<bi|A|bj> |bi><bj| Hence if <bi|A|bj> is the identity matrix A must be the identity operator.

How did you get the first equality? You conclude that A must be the identity operator, but you did not define what A is!
 
  • #17
Happiness said:
Are your bras and kets shorthands for functions or vectors?

Hmmm. I think you need to start from scratch with Bras and Kets.

A Ket is an element of a vector space and is written |x>. The space of linear functionals defined on that space is called its dual, and it's easy to see it's also a vector space. These vectors are called Bras and are written as <x|. A Bra acting on a Ket is written <y|x> and is the linear functional that is the Bra applied to the Ket. Now a very important theorem associated with this is the Riesz theorem:
https://en.wikipedia.org/wiki/Riesz_representation_theorem

This says that under certain conditions the Bras and Kets can be put into one-to-one correspondence such that <x|y> is the conjugate of <y|x>, i.e. the usual properties of an inner product. It doesn't apply to all spaces; for example, it doesn't apply to Rigged Hilbert spaces, or to Hilbert spaces unless a further boundedness condition is imposed, but it does apply to finite-dimensional spaces, which is one reason it's much easier to assume them when dealing with this stuff.

Now we will extend the notation to operators and define the operator |x><y|. When it acts on a Ket |u> you get |x><y|u> and when it acts on a Bra <u| you get <u|x><y|.

One of the most important relations from this notation that is used to prove all sorts of things is if |bi> is an orthonormal basis Σ|bi><bi| = I.

It's easy to prove in the Bra-Ket notation. Since the |bi> are a basis, any vector |x> = Σ ci |bi> for some ci. Then <bj|x> = Σ ci <bj|bi> = cj, so we have |x> = Σ <bi|x> |bi>.
This means <y|x> = Σ <y|bi><bi|x> = <y| Σ|bi><bi| |x>. Now, from the definition of a Bra as a linear functional, this means <y| = <y| Σ|bi><bi|, so Σ|bi><bi| = I.

This is just a brief introduction to it. You can find a much more complete treatment in chapter 1 of Ballentine. There he explains that other texts make statements whose validity is open to question, which is my experience as well. When I started out this sent me on a sojourn into exotica to resolve it. Don't do that: Ballentine will make it much clearer.

If it's still unclear then I will have to leave it up to others, but I urge you to read Ballentine.

Thanks
Bill
 
  • #18
Happiness said:
Does the proof you have in mind use matrices and vectors instead of the Dirac delta function and integration?

I figured out we can prove ##\sum|\,b_i><b_i\,| = I## using the matrices-and-vectors notation ...

The discrete set [itex]\big\{ |\phi_{i}\rangle ; \ i \in I \big\}[/itex] or the continuous one [itex]\big\{ |\alpha\rangle ; \ \alpha \in \mathbb{R} \big\}[/itex] are called orthonormal if and only if
[tex]\langle \phi_{i}|\phi_{j}\rangle = \delta_{ij}, \ \ \mbox{or} \ \ \ \langle \alpha |\beta \rangle = \delta (\alpha - \beta) . \ \ \ \ (1)[/tex]
Furthermore, the sets are called complete if and only if the following expansions hold for an arbitrary state vector
[tex]|\Psi \rangle = \sum_{i} C_{i} \ |\phi_{i}\rangle , \ \ \ \ \ (2a)[/tex]
or
[tex]|\Psi \rangle = \int d\alpha \ C(\alpha) \ |\alpha \rangle . \ \ \ \ \ (2b)[/tex]
Now, using the orthonormality conditions (1), we get the following expressions for the coefficients
[tex]C_{i} = \langle \phi_{i} | \Psi \rangle , \ \ \ \ \ \ \ \ (3a)[/tex]
[tex]C(\beta) = \langle \beta | \Psi \rangle . \ \ \ \ \ \ \ \ (3b)[/tex]
Substituting these back in (2), we obtain
[tex]| \Psi \rangle = \sum_{i}|\phi_{i}\rangle \langle \phi_{i} | \Psi \rangle , \ \ \ \ \ (4a)[/tex]
[tex]| \Psi \rangle = \int d \alpha \ |\alpha \rangle \langle \alpha | \Psi \rangle . \ \ \ \ \ (4b)[/tex]
Since this holds for any state vector [itex]| \Psi \rangle[/itex], the completeness condition (2) can be re-expressed as
[tex]\mathbb{I} = \sum_{i}|\phi_{i}\rangle \langle \phi_{i} | , \ \ \ \ \ (5a)[/tex]
[tex]\mathbb{I} = \int d \alpha \ |\alpha \rangle \langle \alpha | . \ \ \ \ \ (5b)[/tex]
So, we do not derive the completeness condition. We simply re-express (2a,b) by the equivalent relations (5a,b). In fact, finding a complete orthonormal set is a very difficult problem in mathematical physics.
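The discrete relations (2a)-(4a) can be illustrated numerically. A small sketch of my own (assuming NumPy), computing the coefficients of eq. (3a) and reconstructing the state via the expansion (4a):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4

# Complete orthonormal set {phi_i} (columns of a unitary matrix) and a random state Psi.
phi, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
Psi = rng.normal(size=n) + 1j * rng.normal(size=n)

# Coefficients C_i = <phi_i|Psi>, eq. (3a).
C = np.array([phi[:, i].conj() @ Psi for i in range(n)])

# Expansion (4a): |Psi> = sum_i |phi_i><phi_i|Psi>.
recon = sum(C[i] * phi[:, i] for i in range(n))
print(np.allclose(recon, Psi))
```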
 
  • #19
samalkhaiat said:
So, we do not derive the completeness condition. We simply re-express (2a,b) by the equivalent relations (5a,b). In fact, finding a complete orthonormal set is a very difficult problem in mathematical physics.

What I wanted to do was to show that if ##\{b_i\}## forms a complete orthonormal set, then the operator [itex]\sum_{i}|b_{i}\rangle \langle b_{i}|[/itex] is the identity operator.
 
  • #20
Happiness said:
What I wanted to do was to show that if ##\{b_i\}## forms a complete orthonormal set, then the operator [itex]\sum_{i}|b_{i}\rangle \langle b_{i}|[/itex] is the identity operator.
And I showed you that the word "complete" in the phrase "complete orthonormal set" simply means that [itex]\sum_{i}|\phi_{i}\rangle \langle \phi_{i} |[/itex] is the identity.
 
  • #21
Happiness said:
What I wanted to do was to show that if ##\{b_i\}## forms a complete orthonormal set, then the operator [itex]\sum_{i}|b_{i}\rangle \langle b_{i}|[/itex] is the identity operator.

I proved it for the finite-dimensional case. Samalkhaiat extended it to the infinite-dimensional case. It is actually an alternative definition of a complete orthonormal set.

Thanks
Bill
 

1. What is a matrix representation of an operator?

A matrix representation of an operator is a way of expressing a linear transformation in a more concise and organized manner. It involves representing the operator as a matrix, with each column and row corresponding to a specific basis vector. This allows for easier calculation and analysis of the operator's effects on vectors in a vector space.

2. Why is a change of basis important when representing an operator?

A change of basis is important because it allows us to view an operator in different ways. Different basis vectors can provide different insights and perspectives on the operator's behavior and properties. It also allows for more efficient computation and analysis of the operator, as different basis vectors may lead to simpler matrix representations.

3. How is a matrix representation of an operator with a change of basis calculated?

The matrix representation of an operator in a new basis is calculated using a change-of-basis matrix P, formed by arranging the new basis vectors as columns, with each column giving the coordinates of the corresponding new basis vector in the original basis. The new matrix representation is then obtained by conjugating the original one: A_new = P^(-1) A_old P, where P^(-1) is simply the conjugate transpose of P when both bases are orthonormal.
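As a sketch of the procedure (my own example, assuming NumPy): here the columns of the change-of-basis matrix happen to be eigenvectors of the operator, so the new representation comes out diagonal.

```python
import numpy as np

# Operator in the old basis.
A_old = np.array([[2.0, 1.0],
                  [1.0, 2.0]])

# Change-of-basis matrix: columns are the new basis vectors written in the old basis.
theta = np.pi / 4
P = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# New matrix representation: A_new = P^(-1) A_old P (P.T would do, since P is orthogonal).
A_new = np.linalg.inv(P) @ A_old @ P
print(np.round(A_new, 10))
```

The result is diag(3, 1), the eigenvalues of A_old, illustrating why a well-chosen basis simplifies the matrix.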

4. Can a change of basis affect the properties of an operator?

No intrinsic property of the operator changes. Because the new representation P^(-1) A P is similar to the original matrix, the eigenvalues, trace, determinant, invertibility, and diagonalizability are all unchanged. What does change are the matrix entries themselves, along with the components used to represent the eigenvectors. A well-chosen basis can, for example, make a diagonalizable operator's matrix diagonal.

5. What are some real-world applications of matrix representation of an operator with a change of basis?

Matrix representation of an operator with a change of basis has various real-world applications in fields such as physics, engineering, and computer science. It is used in quantum mechanics to describe the behavior of particles, in signal processing to analyze and manipulate signals, and in computer graphics to transform and animate objects. It also has applications in data compression, image processing, and optimization problems.
