Linear independence of operators

  • #1
radou (Homework Helper)
OK, I stumbled upon a problem, but I feel somehow stupid about writing the exact problem down, so I'll ask a more "general" question.

I have to determine whether three linear operators A, B and C, elements of the vector space of all linear operators from R^2 to R^3, are linearly independent. The mappings are all known (i.e. A(x, y) = ..., etc.).

Well, I set up the equation aA + bB + cC = 0, or, more precisely, (aA + bB + cC)(x, y) = 0(x, y), where a, b and c are scalars. This equation must hold for all ordered pairs (x, y) from R^2 in order for aA + bB + cC to be the zero operator. The equation leads to a system of three equations in the three unknowns a, b and c (of course, the coefficients of the system are linear combinations of x and y). Now, since this must hold for every (x, y) from R^2, my logic was to plug in some particular (x, y) and solve for a, b and c. If the only solution is the trivial one, then the operators are independent.

Nevertheless, there is obviously something wrong with my logic, since, after plugging in, for example (x, y) = (0, 1), I obtain a solution in parametric form, i.e. there is no unique trivial solution!

Any help is appreciated.
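A quick numerical sanity check of this phenomenon, with made-up operators (the matrices below are hypothetical, not the ones from the actual problem). An operator from R^2 to R^3 is a 3x2 matrix, so independence of the operators themselves amounts to the rank of the flattened matrices, while testing the equation at a single vector can still look parametric:

```python
import numpy as np

# Hypothetical operators from R^2 to R^3, written as 3x2 matrices
# (made-up examples, not the ones from the original problem).
A = np.array([[1, 0], [0, 0], [0, 0]])
B = np.array([[0, 0], [1, 0], [0, 0]])
C = np.array([[0, 0], [0, 0], [1, 0]])

# As elements of the 6-dimensional space of 3x2 matrices, the three
# operators are linearly independent: the flattened matrices have
# full rank when stacked as rows.
stacked = np.stack([A.ravel(), B.ravel(), C.ravel()])
rank_ops = np.linalg.matrix_rank(stacked)   # 3 -> independent

# But testing aA + bB + cC = 0 at the single vector (0, 1) is not
# enough: all three operators send (0, 1) to the zero vector, so
# EVERY (a, b, c) solves the equation at that one point.
v = np.array([0, 1])
M01 = np.column_stack([A @ v, B @ v, C @ v])
rank_at_01 = np.linalg.matrix_rank(M01)     # 0 -> parametric solutions

# Testing at (1, 0) as well pins the solution down to the trivial one.
w = np.array([1, 0])
M10 = np.column_stack([A @ w, B @ w, C @ w])
rank_at_10 = np.linalg.matrix_rank(M10)     # 3 -> only a = b = c = 0
```

Here every (a, b, c) works at (0, 1) simply because all three operators kill that vector, even though the operators are independent — exactly the trap of testing at one point.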
 
  • #2
AKG
Suppose (a',b',c') is a solution to (aA + bB + cC)(0, 1) = (0, 0, 0). Then so is (ta',tb',tc'); observe:

((ta')A + (tb')B + (tc')C)(0,1)
= (t(a'A) + t(b'B) + t(c'C))(0,1)
= t(a'A + b'B + c'C)(0,1)
= t(0,0,0) ... [since (a',b',c') is a sol'n]
= (0,0,0)

So in your situation, you've found that the solutions are (ta',tb',tc'), where t is a parameter, after setting (x,y) = (0,1). This tells you that IF there are non-trivial solutions to the equation aA + bB + cC = 0, they are of the form (ta',tb',tc') for some non-zero t. And clearly these are all solutions iff (a',b',c') is a solution. So compute:

(a'A + b'B + c'C)(1,0)

if you get (0,0,0), then you've found your non-trivial solution(s). If not, only the trivial solution exists.
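A sketch of this two-step recipe in code, with hypothetical operators chosen to be dependent (C = A + B) so that the non-trivial candidate survives the second test:

```python
import numpy as np

# Hypothetical operators (3x2 matrices); C = A + B, so they are dependent.
A = np.array([[1, 0], [0, 1], [0, 0]])
B = np.array([[0, 1], [1, 0], [1, 1]])
C = A + B

# Step 1: solve (aA + bB + cC)(0, 1) = 0.  The columns of M are the
# images of (0, 1) under A, B, C; the null space of M gives the candidates.
v = np.array([0, 1])
M = np.column_stack([A @ v, B @ v, C @ v])
_, _, Vt = np.linalg.svd(M)
candidate = Vt[-1]                # direction spanning the null space
assert np.allclose(M @ candidate, 0)

# Step 2: test the same candidate at (1, 0).  If it also gives zero,
# the operators are dependent; otherwise only the trivial solution works.
w = np.array([1, 0])
N = np.column_stack([A @ w, B @ w, C @ w])
dependent = np.allclose(N @ candidate, 0)   # True here, since C = A + B
```

Taking the last row of V^T works here because the 3x3 system has a one-dimensional null space; with independent operators the smallest singular value would be non-zero and no candidate would pass step 1.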
 
  • #3
AKG said:
Suppose (a',b',c') is a solution to (aA + bB + cC)(0, 1) = (0, 0, 0). Then so is (ta',tb',tc'); observe:

((ta')A + (tb')B + (tc')C)(0,1)
= (t(a'A) + t(b'B) + t(c'C))(0,1)
= t(a'A + b'B + c'C)(0,1)
= t(0,0,0) ... [since (a',b',c') is a sol'n]
= (0,0,0)

So in your situation, you've found that the solutions are (ta',tb',tc'), where t is a parameter, after setting (x,y) = (0,1). This tells you that IF there are non-trivial solutions to the equation aA + bB + cC = 0, they are of the form (ta',tb',tc') for some non-zero t. And clearly these are all solutions iff (a',b',c') is a solution. So compute:

(a'A + b'B + c'C)(1,0)

if you get (0,0,0), then you've found your non-trivial solution(s). If not, only the trivial solution exists.

AKG, thank you for your reply.

I seem to have "temporarily forgotten" that the set of all solutions to a homogeneous system is a vector space. :rolleyes:

I did as you suggested, and I found out that the operators are dependent, since I arrived at (a'A + b'B + c'C)(1,0) = (0, 0, 0).
 
  • #4
Well, a new dilemma is here.

I found a similar exercise that comes fully solved. It is, again, about testing the independence of three linear functionals (all explicitly given, each mapping R^3 --> R).

Anyway, at one point one obtains a(x1 - 2x2 + x3) + b(x1 + x2 + x3) + c(x1 - x2 - x3) = 0, for some scalars a, b, c, and this equation should hold for all x1, x2, x3 from R (of course, f1(x1, x2, x3) = x1 - 2x2 + x3, etc.). Now, the solution says that we have to plug (1, 0, 0), (0, 1, 0) and (0, 0, 1) into the equation to obtain a system of 3 equations in 3 unknowns, which has only the trivial solution a = b = c = 0, so the functionals are independent.

This looks perfectly logical, but it doesn't seem consistent with the posts above (unless I'm missing something huge). Can it be that, if the equation holds for all the basis vectors of R^3, then it holds for all vectors of R^3, too? What is the exact reason the solution proceeds this way? Could we have plugged in any other three linearly independent vectors to obtain a 3x3 system whose solution tells us about the dependence/independence of the given functionals?
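On the question above: by linearity, a functional that vanishes on a basis vanishes everywhere, so checking the equation on (1, 0, 0), (0, 1, 0), (0, 0, 1) — or on any other basis of R^3 — does capture the full condition. A numerical check of the worked example (the matrix just tabulates the given f1, f2, f3 on the standard basis):

```python
import numpy as np

# The three functionals from the worked example, f_i : R^3 -> R:
#   f1(x) = x1 - 2*x2 + x3,  f2(x) = x1 + x2 + x3,  f3(x) = x1 - x2 - x3.
# A linear functional is determined by its values on a basis, so row i
# of F holds (f_i(e1), f_i(e2), f_i(e3)).
F = np.array([[1, -2,  1],
              [1,  1,  1],
              [1, -1, -1]])

# a*f1 + b*f2 + c*f3 = 0 on all of R^3  <=>  F^T (a, b, c) = 0.
# A non-zero determinant means only the trivial solution, i.e. independence.
det = int(round(np.linalg.det(F)))
independent = det != 0          # det = -6, so the functionals are independent
```

Any other basis would give a different 3x3 matrix, but one related to F by an invertible change of basis, so the conclusion (trivial solution only, or not) would be the same.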

Thanks in advance.

Edit: I'm sorry the thread looks so homework-ish, since this isn't a homework section.
 

1. What does it mean for operators to be linearly independent?

Linear independence of operators means that no operator in the set can be expressed as a linear combination of the others. Equivalently, the only linear combination of the operators that equals the zero operator is the one with all coefficients zero.

2. How is linear independence of operators determined?

To test a set of operators A_1, ..., A_n for linear independence, set a_1 A_1 + ... + a_n A_n = 0 (the zero operator) and solve for the scalars a_1, ..., a_n. If the only solution is a_1 = ... = a_n = 0, the operators are linearly independent; if a non-trivial solution exists, they are dependent.
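When the underlying spaces are finite-dimensional, this test reduces to a matrix-rank computation, since each operator can be written as a matrix. A minimal sketch (the helper name is ours, not from the thread):

```python
import numpy as np

def are_independent(ops):
    """Return True if the given equal-shape matrices are linearly
    independent as elements of the operator space.

    Each operator is flattened to a vector; the operators are
    independent exactly when those vectors are independent, i.e.
    when the stacked matrix has rank equal to the number of operators.
    """
    stacked = np.stack([np.asarray(op).ravel() for op in ops])
    return np.linalg.matrix_rank(stacked) == len(ops)
```

For example, the identity and the swap matrix on R^2 are independent, while the identity and twice the identity are not.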

3. Why is linear independence of operators important?

Linear independence of operators is important because it allows for a clear and unique representation of a set of operations. This is useful in various fields of science, such as physics and mathematics, where operators are used to describe physical systems and mathematical transformations. Additionally, linear independence is a fundamental concept in linear algebra, which is a powerful tool in many areas of science.

4. Is it possible for a set of operators to be both linearly independent and dependent?

No, it is not possible for a set of operators to be both linearly independent and dependent; the two concepts are mutually exclusive. A set is linearly dependent exactly when some non-trivial linear combination of its operators equals the zero operator, which is the same as saying that one of the operators is a linear combination of the others. A set is independent exactly when no such combination exists.

5. How does linear independence of operators relate to linear transformations?

Linear operators between two fixed vector spaces are themselves vectors: they form the vector space L(V, W) of all linear transformations from V to W. Linear independence of operators is therefore ordinary linear independence in that space. In particular, a linearly independent set of operators that spans L(V, W) is a basis of it, and every linear transformation from V to W then decomposes uniquely as a linear combination of the operators in the set. This viewpoint is what lets questions like the one in this thread be handled with standard linear algebra.
