In summary, the thread discusses how to use a choice of basis to decide whether a linear operator on a finite-dimensional vector space is self-adjoint. The issue is that the relation ## L = L^T ##, which characterizes self-adjoint operators in a suitable basis, need not survive a change of basis. The resolution is to work in an orthonormal basis: for ## \mathbb R^n ## the standard basis is one such, and in the complex case the criterion becomes equality with the conjugate transpose. An isomorphism of inner-product spaces (not merely of vector spaces) can also be used to transport the question to ## \mathbb R^n ## or ## \mathbb C^n ##.
WWGD
Gold Member
Hi, let V be a finite-dimensional vector space over the reals or complexes and let L : V --> V be a linear operator.
I am curious how to use a choice of basis for a general V to decide whether L is self-adjoint. The issue, specifically, is that the relation ## L = L^T ## (abusing notation; here L also denotes a matrix representing L in some choice of basis), which holds for self-adjoint operators in finite-dimensional spaces, will most likely not hold under a change of basis. But we may be able to find a special basis in a given V:

I think for ## \mathbb R^n ##, if we use the standard basis ## (e_i)_j = \delta_{ij} ##, then L is
self-adjoint if, when it is represented as a matrix M in this basis, we have ## M^T = M ##, i.e., M equals its transpose (if V is complex, we need the matrix to equal its conjugate transpose). (Phew!) Now, can we find some specific basis ## B_V ## in a general finite-dimensional vector space V so that we can conclude L : V --> V is self-adjoint if/when its representing matrix M satisfies ## M = M^T ## (or equals its conjugate transpose if the base field is ## \mathbb C ##)? I thought we might use a vector space isomorphism between V and ## \mathbb R^n ## to pull back the basis ## \{e_i\} ##, and then this "pulled-back" basis would do the job?
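The entrywise check described above can be sketched in Python (an illustration, not from the thread; the function names are my own). Note it decides self-adjointness only when the matrix represents L in an orthonormal basis:

```python
# Sketch: test whether a square matrix (nested lists) equals its
# conjugate transpose. Works for real and complex entries, since
# Python ints/floats also support .conjugate().

def conjugate_transpose(M):
    """Conjugate transpose of a square matrix given as nested lists."""
    n = len(M)
    return [[M[j][i].conjugate() for j in range(n)] for i in range(n)]

def is_self_adjoint(M, tol=1e-12):
    """True if M equals its conjugate transpose entrywise (up to tol)."""
    n = len(M)
    Mstar = conjugate_transpose(M)
    return all(abs(M[i][j] - Mstar[i][j]) <= tol
               for i in range(n) for j in range(n))

# Real symmetric matrix: self-adjoint in the standard basis of R^2.
print(is_self_adjoint([[2.0, 1.0], [1.0, 3.0]]))      # True
# Hermitian complex matrix: off-diagonal entries are mutual conjugates.
print(is_self_adjoint([[1+0j, 2-1j], [2+1j, 5+0j]]))  # True
# Antisymmetric matrix: fails the test.
print(is_self_adjoint([[0.0, 1.0], [-1.0, 0.0]]))     # False
```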

Well, one way is to just find an orthonormal basis, if you want something that's equivalent to the standard basis (apply the Gram-Schmidt process to any basis for an explicit construction). That's what you need to make your reasoning work, because you need an isomorphism of inner-product spaces, rather than just of vector spaces: self-adjointness is a property defined in terms of the inner product, so that has to figure in somewhere. Similarly for the complex case.
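For concreteness, here is a minimal sketch of the Gram-Schmidt process for ## \mathbb R^n ## with the standard dot product (my own illustration, not from the thread):

```python
# Sketch: orthonormalize a list of linearly independent vectors in R^n
# (given as nested lists) with respect to the standard dot product.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(basis):
    """Classical Gram-Schmidt: returns an orthonormal list of vectors."""
    ortho = []
    for v in basis:
        w = list(v)
        for u in ortho:
            c = dot(w, u)  # component of w along the unit vector u
            w = [wi - c * ui for wi, ui in zip(w, u)]
        norm = dot(w, w) ** 0.5
        ortho.append([wi / norm for wi in w])
    return ortho

B = gram_schmidt([[1.0, 1.0], [1.0, 0.0]])
# The result is orthonormal: unit lengths, zero mutual dot product.
print(abs(dot(B[0], B[1])) < 1e-12)        # True
print(abs(dot(B[0], B[0]) - 1.0) < 1e-12)  # True
```

(For numerical work one would prefer the modified Gram-Schmidt variant or a QR factorization, but the classical form matches the textbook construction referenced above.)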

You could also use an arbitrary basis and check ## \langle v, Aw \rangle = \langle Av, w \rangle ## for all pairs of basis elements, using bilinearity (sesquilinearity in the complex case) to extend that to everything else.
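In the real case, this basis-pair check has a compact matrix form: if G is the Gram matrix of the basis (## G_{ij} = \langle b_i, b_j \rangle ##) and A the matrix of the operator, then ## \langle b_i, A b_j \rangle = (GA)_{ij} ## and ## \langle A b_i, b_j \rangle = (A^T G)_{ij} ##, so the condition is ## GA = A^T G ##. A sketch (my own names and example, not from the thread):

```python
# Sketch: self-adjointness check in an arbitrary basis of a real
# inner-product space, via the Gram matrix G of that basis.

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def self_adjoint_wrt(A, G, tol=1e-12):
    """True if G A == A^T G entrywise, i.e. <b_i, A b_j> = <A b_i, b_j>."""
    n = len(A)
    At = [[A[j][i] for j in range(n)] for i in range(n)]
    L, R = matmul(G, A), matmul(At, G)
    return all(abs(L[i][j] - R[i][j]) <= tol
               for i in range(n) for j in range(n))

G = [[2.0, 1.0], [1.0, 1.0]]    # Gram matrix of a non-orthonormal basis
A = [[-1.0, 2.0], [3.0, -2.0]]  # not symmetric as a matrix...
print(self_adjoint_wrt(A, G))   # True: self-adjoint for this inner product
```

This also illustrates the thread's main point: the same operator can be self-adjoint while its matrix in a non-orthonormal basis is visibly non-symmetric.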

## 1. What is the concept of adjointness in linear algebra?

Adjointness in linear algebra refers to the relationship between a linear transformation and its corresponding dual transformation. It is a way of associating a linear transformation between two vector spaces with its "adjoint" transformation, which maps the dual space of the second vector space back to the dual space of the first vector space.

## 2. How is the adjoint of a linear transformation represented?

The adjoint of a linear transformation is typically represented using the notation T*, where T is the original linear transformation. It can also be represented using the transpose of the matrix representing T, or as the Hermitian conjugate in the case of complex vector spaces.
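The defining property ## \langle Av, w \rangle = \langle v, A^*w \rangle ## can be verified numerically for the standard Hermitian inner product on ## \mathbb C^2 ##, taking ## A^* ## as the conjugate transpose (a hypothetical example of my own, not from the thread):

```python
# Sketch: verify <A v, w> = <v, A* w> for the standard Hermitian inner
# product, with the convention of conjugate-linearity in the first slot.

def inner(u, v):
    """Standard Hermitian inner product on C^n."""
    return sum(a.conjugate() * b for a, b in zip(u, v))

def apply(M, v):
    """Matrix-vector product for nested-list matrices."""
    return [sum(M[i][j] * v[j] for j in range(len(v)))
            for i in range(len(M))]

A = [[1+2j, 3+0j], [0+1j, 4-1j]]
Astar = [[A[j][i].conjugate() for j in range(2)] for i in range(2)]
v, w = [1+1j, 2+0j], [0+3j, 1-1j]
print(inner(apply(A, v), w) == inner(v, apply(Astar, w)))  # True
```

(With the opposite convention, conjugate-linear in the second slot, the same identity holds with the roles of the slots exchanged.)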

## 3. What is the significance of adjointness in quantum mechanics?

In quantum mechanics, the adjoint of a linear operator plays a crucial role in determining the properties of a system. It is used to define the concept of Hermitian operators, which represent observables in quantum mechanics. Adjointness also allows for the calculation of inner products and is essential in the study of quantum entanglement.

## 4. What is basis representation in linear algebra?

Basis representation in linear algebra refers to the representation of a vector or a linear transformation using a basis set. A basis is a set of linearly independent vectors that span a vector space. By representing a vector or linear transformation in terms of its basis vectors, we can easily perform calculations such as addition, multiplication, and inversion.
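One concrete instance of basis representation: the matrix of a linear map in the standard basis has ## T(e_j) ## as its j-th column. A sketch of my own (the map and names are illustrative, not from the thread):

```python
# Sketch: build the matrix of a linear map T : R^n -> R^n in the
# standard basis by applying T to each basis vector e_j; the images
# become the columns of the matrix.

def matrix_in_standard_basis(T, n):
    """Matrix of the linear map T (a Python callable) in the standard basis."""
    cols = [T([1.0 if i == j else 0.0 for i in range(n)]) for j in range(n)]
    # Transpose the list of columns into a list of rows.
    return [[cols[j][i] for j in range(n)] for i in range(n)]

# Example map: T(x, y) = (2x + y, x + 3y)
T = lambda v: [2 * v[0] + v[1], v[0] + 3 * v[1]]
print(matrix_in_standard_basis(T, 2))  # [[2.0, 1.0], [1.0, 3.0]]
```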

## 5. How does basis representation relate to adjointness?

Adjointness and basis representation are closely related in linear algebra. The adjoint of a linear transformation is often represented using the basis vectors of the vector spaces involved. Additionally, the concept of adjointness is used to define the dual basis, which is a set of vectors that spans the dual space of a given vector space. Basis representation is also useful in calculating the adjoint of a linear transformation.
