Is the set {e^x, x^2} linearly independent?

In the case of the set ##\{x, 2x\}##, there is another solution besides ##\alpha = \beta = 0##, namely ##\alpha = 2## and ##\beta = -1##. In summary, to show linear independence, we need to show that the only solution to the equation ##\alpha f(x) + \beta g(x) = 0## is ##\alpha = \beta = 0##.
  • #1
member 587159
Hello all.

I have a question about linear dependence.

Suppose we have a set ##S## of functions defined on ##\mathbb{R}##.

##S = \{e^x, x^2\}##. It seems very intuitive that this set is linearly independent. But we did something in class I'm unsure about.

Proof:

Let ##\alpha, \beta \in \mathbb{R}##.
Suppose ##\alpha e^x + \beta x^2 = 0##
We need to show that ##\alpha = \beta = 0##

(Here comes the part I'm unsure about)

Let ##x = 0##; then ##\alpha e^0 + \beta \cdot 0^2 = 0##
##\Rightarrow \alpha = 0##

But if ##\alpha = 0##, it follows that ##\beta = 0## (take, say, ##x = 1##).
So ##S## is linearly independent.

My actual question:

Why can we conclude that the set is linearly independent just by saying that ##x = 0## makes it work? Shouldn't we show that it works for all ##x \in \mathbb{R}##?

Thanks in advance.
 
  • #2
Math_QED said:
Why can we conclude that the set is linearly independent just by saying that ##x = 0## makes it work?
We can't. The conclusion is derived from ##\alpha = 0##, not from ##x=0##.
Shouldn't we show that it works for all ##x \in \mathbb{R}##?
Yes. This is the crucial point. The equation ##\alpha e^x + \beta x^2 = 0## has to hold for all ##x##, so especially for ##x=0##.
And if ##x=0## already implies ##\alpha = \beta = 0##, what chance is there for other values of ##x##? The coefficients do not depend on ##x##!
 
  • #3
fresh_42 said:
We can't. The conclusion is derived from ##\alpha = 0##, not from ##x=0##.

Yes. This is the crucial point. The equation ##\alpha e^x + \beta x^2 = 0## has to hold for all ##x##, so especially for ##x=0##.
And if ##x=0## already implies ##\alpha = \beta = 0##, what chance is there for other values of ##x##? The coefficients do not depend on ##x##!

So we can conclude this because the coefficients do not depend on ##x##? From what I understood, it must hold for all ##x##, so certainly for ##x = 0##? I still don't fully understand, I think.

To complicate things even further, suppose that we consider these functions on the domain ##\mathbb{R}_0##. How do we show linear independence then?
 
  • #4
Math_QED said:
So we can conclude this because the coefficients do not depend on ##x##?
Yes.
From what I understood, it must hold for all ##x##, so certainly for ##x = 0##? I still don't fully understand, I think.
Yes.
True for all ##x## implies true for any particular ##x## as well, and everything derived from a single instance must then be true. Holding at a single value of ##x## might not be sufficient for the equation to hold at all ##x##, but it is necessary. And if something fails for one value, it cannot be true for all.
To complicate things even further, suppose that we consider these functions on the domain ##\mathbb{R}_0##. How do we show linear independence then?
What do you mean by ##\mathbb{R}_0##? ##\mathbb{R} - \{0\}##?
If ##0## is in the domain, then the method above can be used.
If we don't have ##0##, we have to do some more work, e.g. by solving the system ##\alpha e^x + \beta x^2 = 0## for values ##x \in \{1,2,-1,-2\}##. (I haven't done it; I simply listed enough values to be sure the system can only hold for ##\alpha = \beta = 0##.)
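In fact, two suitable values already decide it here. A quick sketch with ##x = 1## and ##x = -1##: the equations ##\alpha e + \beta = 0## and ##\alpha e^{-1} + \beta = 0## subtract to ##\alpha (e - e^{-1}) = 0##; since ##e - e^{-1} \neq 0##, it follows that ##\alpha = 0## and then ##\beta = 0##.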

The domain that the coefficients ##\alpha## and ##\beta## are taken from is essential.
Until now we discussed linear independence over ##\mathbb{Q}##, ##\mathbb{R}##, or ##\mathbb{C}##.
However, the two functions are not linearly independent if we allow the coefficients to be functions themselves.
We could get ##\alpha(x) e^x + \beta (x) x^2 = 0## with ##\alpha(x) = -x^2 \neq 0## and ##\beta(x) = e^x \neq 0##.

Let me cheat here a little bit, because I don't want to think about which coefficient domain this could be done in that is also a field. So let us consider quotients of polynomials in one variable instead, which form a field. (The exponential function complicates things here.)
Let us further take ##S=\{x,x^2\}##.
Then ##\alpha x + \beta x^2 = 0 \Longrightarrow \alpha = \beta = 0## if ##\alpha, \beta \in \mathbb{Q}##.
But ##\alpha x + \beta x^2 = 0 \nRightarrow \alpha = \beta = 0## if ##\alpha, \beta \in \mathbb{Q}(x)##.
In this case we have an equation ## \alpha x + \beta x^2 = 0## where we can choose ##\alpha = -x \neq 0## and ##\beta = 1 \neq 0##.
So the elements of ##S## are linearly independent over ##\mathbb{Q}##, but linearly dependent over ##\mathbb{Q}(x)##.
 
  • #5
Math_QED said:
I have a question about linear dependence.

Suppose we have a set ##S## of functions defined on ##\mathbb{R}##.

##S = \{e^x, x^2\}##. It seems very intuitive that this set is linearly independent. But we did something in class I'm unsure about.

Proof:

Let ##\alpha, \beta \in \mathbb{R}##.
Suppose ##\alpha e^x + \beta x^2 = 0##
We need to show that ##\alpha = \beta = 0##
No, that's an incomplete summary of what you need to show. Suppose that your set is ##\{x, 2x\}##.
Suppose ##\alpha x + \beta 2x = 0##
Then ##\alpha = 0## and ##\beta = 0## clearly work.

From this one might mistakenly conclude that the functions ##x## and ##2x## are linearly independent, which is not true.
What you left out of "We need to show that ##\alpha = \beta = 0##" is that there can be no other solutions for these constants. Here there is another solution, namely ##\alpha = 2## and ##\beta = -1##, so ##x## and ##2x## are linearly dependent.
 

What is the definition of linear independence?

A set of vectors in a vector space is linearly independent if no vector in the set can be written as a linear combination of the others. Equivalently, the only linear combination of the vectors that equals the zero vector is the one in which every coefficient is zero.

How do you determine if a set of vectors is linearly independent?

To determine if a set of vectors is linearly independent, set a linear combination of the vectors with unknown coefficients equal to the zero vector and solve for the coefficients. If the only solution is the one with all coefficients equal to zero, then the set is linearly independent, as illustrated in the sketch below.
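For the set from this thread, here is a minimal numerical sketch in Python (the sample points ##x = 1## and ##x = -1## are an arbitrary choice, not part of the discussion above):

Code:
import numpy as np

# Evaluate f(x) = e^x and g(x) = x^2 at two sample points.
# Requiring a*f(x) + b*g(x) = 0 at both points gives a 2x2
# linear system in (a, b); a nonzero determinant means the
# only solution is a = b = 0.
xs = np.array([1.0, -1.0])                 # arbitrary sample points
A = np.column_stack((np.exp(xs), xs**2))   # columns: e^x, x^2
print(np.linalg.det(A))                    # about 2.35, nonzero

A nonzero determinant at the chosen points forces the trivial solution, which proves independence; a zero determinant at finitely many points would be inconclusive on its own.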

What is the significance of linear independence?

Linear independence is important in linear algebra because it lets us determine whether a set of vectors can serve as a basis for a vector space. A basis is a linearly independent set of vectors that can represent any vector in the space through a linear combination.

How do you prove that the set {e^x, x^2} is linearly independent?

To prove that a set of functions is linearly independent, you need to show that the only solution to the linear combination equation is the one in which all coefficients are zero. In this case, suppose ##\alpha e^x + \beta x^2 = 0## holds for all values of ##x##, and evaluate at suitable points to force ##\alpha = \beta = 0##, as worked out below.
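A short worked version of that evaluation, using the same points as the proof in the thread above:

##x = 0:\ \alpha e^0 + \beta \cdot 0^2 = \alpha = 0##
##x = 1:\ \alpha e^1 + \beta \cdot 1^2 = \beta = 0## (using ##\alpha = 0##)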

Can a set of vectors be linearly dependent and linearly independent at the same time?

No, a set of vectors cannot be both linearly dependent and linearly independent. If a set is linearly dependent, it means that at least one vector in the set can be expressed as a linear combination of the other vectors. This violates the definition of linear independence, which requires that no vector can be written as a linear combination of the other vectors in the set.
