Is the set {e^x, x^2} linearly independent?

Discussion Overview

The discussion centers on the linear independence of the set of functions ##S = \{e^x, x^2\}## defined on ##\mathbb{R}##. Participants explore the conditions under which a set of functions can be considered linearly independent and the implications of evaluating the functions at specific points.

Discussion Character

  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • One participant suggests that the set ##S = \{e^x, x^2\}## appears intuitively to be linearly independent and presents a proof attempt based on evaluating the functions at ##x = 0##.
  • Another participant argues that concluding linear independence from the evaluation at ##x = 0## is insufficient, emphasizing that the equation must hold for all ##x \in \mathbb{R}##.
  • It is noted that if the coefficients ##\alpha## and ##\beta## are shown to be zero for one value of ##x##, it raises questions about their values for other points, given that the coefficients do not depend on ##x##.
  • A later reply introduces the idea of considering the functions on a different domain, ##\mathbb{R}_0##, and questions how linear dependency would be shown in that case.
  • Participants discuss the implications of allowing coefficients to be functions themselves, which could lead to different conclusions about linear independence.
  • One participant provides a counterexample involving the functions ##S = \{x, 2x\}## to illustrate the necessity of showing that no other solutions exist for the constants in the linear combination.

Areas of Agreement / Disagreement

Participants generally agree that the conclusion of linear independence cannot be drawn solely from evaluating at ##x = 0##. However, there is no consensus on the implications of considering different domains or the nature of the coefficients.

Contextual Notes

Participants highlight that the domain of the coefficients is crucial to the discussion of linear independence, and the nature of the functions involved may affect the conclusions drawn.

member 587159
Hello all.

I have a question about linear dependency.

Suppose we have a set ##S## of functions defined on ##\mathbb{R}##.

##S = \{e^x, x^2\}##. It seems very intuitive that this set is linearly independent. But we did something in class I'm unsure about.

Proof:

Let ##\alpha, \beta \in \mathbb{R}##.
Suppose ##\alpha e^x + \beta x^2 = 0##
We need to show that ##\alpha = \beta = 0##

(Here comes the part I'm unsure about)

Let ##x = 0##, then ##\alpha e^0 + \beta 0^2 = 0##
##\Rightarrow \alpha = 0##

But if ##\alpha = 0##, then it follows that ##\beta = 0##.
So ##S## is linearly independent.

My actual question:

Why can we conclude that the set is linearly independent, just by saying that ##x = 0## makes it work? Shouldn't we show that it works for all ##x \in \mathbb{R}##?

Thanks in advance.
 
Math_QED said:
Why can we conclude that the set is linearly independent, just by saying that ##x = 0## makes it work?
We can't. The conclusion is derived from ##\alpha = 0##, not from ##x=0##.
Shouldn't we show that it works for all ##x \in \mathbb{R}##?
Yes. This is the crucial point. The equation ##\alpha e^x + \beta x^2 = 0## has to hold for all ##x##, so especially for ##x=0##.
And if ##x=0## alone already implies ##\alpha = \beta = 0##, what room is left for other values of ##x##? The coefficients do not depend on ##x##!
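The two steps of the argument can be sketched numerically. This is a minimal illustration in plain Python (the function name `combo` is my own, not from the thread): the identity ##\alpha e^x + \beta x^2 = 0## must hold at every ##x##, so evaluating at ##x = 0## pins down ##\alpha##, and then any nonzero point pins down ##\beta##.

```python
import math

def combo(a, b, x):
    """The linear combination a*e^x + b*x^2 from the thread."""
    return a * math.exp(x) + b * x**2

# Step 1: at x = 0 the combination is a*e^0 + b*0 = a,
# so a = 0 is forced if the identity holds there.
# Step 2: with a = 0 it reduces to b*x^2; at x = 1 that is b,
# so b = 0 is forced as well.
a, b = 0.0, 0.0

# The trivial coefficients vanish at both sample points (and everywhere):
assert combo(a, b, 0.0) == 0.0 and combo(a, b, 1.0) == 0.0

# Conversely, any nonzero pair already fails at one of the sample points:
assert combo(1.0, -1.0, 0.0) != 0.0   # fails at x = 0
```

The point is exactly the one made above: a single well-chosen evaluation point is enough to *derive* ##\alpha = 0##, because the coefficients are constants.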
 
fresh_42 said:
We can't. The conclusion is derived from ##\alpha = 0##, not from ##x=0##.

Yes. This is the crucial point. The equation ##\alpha e^x + \beta x^2 = 0## has to hold for all ##x##, so especially for ##x=0##.
And if ##x=0## alone already implies ##\alpha = \beta = 0##, what room is left for other values of ##x##? The coefficients do not depend on ##x##!

So we can conclude this because the coefficients do not depend on ##x##? From what I understood, it must hold for all ##x##, so certainly for ##x = 0##? I think I still don't fully understand.

To complicate things even further, suppose that we consider these functions on the domain ##\mathbb{R}_0##. How do we show linear independence then?
 
Math_QED said:
So we can conclude this because the coefficients do not depend on ##x##?
Yes.
From what I understood it must hold for all x, so certainly for ##x = 0##? I still don't fully understand I think.
Yes.
True for all ##x## implies true for each particular ##x##, so anything derived from a single instance must hold. Holding at a single point might not be sufficient, but it is necessary. And if something fails for even one value, it cannot be true for all.
To complicate things even further, suppose that we consider these functions on the domain ##\mathbb{R}_0##. How do we show linear independence then?
What do you mean by ##\mathbb{R}_0##? ##\mathbb{R} - \{0\}##?
If we have a ##0##, then the method above can be used.
If we don't have a ##0##, we have to do some more work, e.g. by solving the system ##\alpha e^x + \beta x^2 = 0## for values ##x \in \{1,2,-1,-2\}##. (I haven't done it; I simply listed enough values to be sure the system can only hold for ##\alpha = \beta = 0##.)
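The suggestion above can be checked with two sample points. A minimal sketch (my own choice of points; any pair giving a nonsingular system works): evaluating ##\alpha e^x + \beta x^2 = 0## at ##x = 1## and ##x = 2## yields a ##2 \times 2## linear system in ##(\alpha, \beta)##, and a nonzero determinant means the only solution is the trivial one.

```python
import math

# Evaluate the identity at two nonzero points, x = 1 and x = 2:
#   [ e^1  1 ] [alpha]   [0]
#   [ e^2  4 ] [beta ] = [0]
x1, x2 = 1.0, 2.0
det = math.exp(x1) * x2**2 - math.exp(x2) * x1**2  # = 4e - e^2

# Nonzero determinant => only alpha = beta = 0 solves the system,
# so {e^x, x^2} is linearly independent even without access to x = 0.
assert abs(det) > 1e-9
```

This is the standard trick: picking as many sample points as there are functions turns the pointwise identity into a square linear system whose determinant decides the question.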

The domain where the coefficients ##\alpha \, , \, \beta## are taken from is essential.
Until now we discussed linear independence over ##\mathbb{Q}\, , \,\mathbb{R}## or ##\mathbb{C}##.
However, the two functions are not linearly independent if we allow the coefficients to be functions themselves.
We could get ##\alpha(x) e^x + \beta (x) x^2 = 0## with ##\alpha(x) = -x^2 \neq 0## and ##\beta(x) = e^x \neq 0##.

Let me cheat here a little bit, because I don't want to work out in which coefficient domain, one that is also a field, this could be done. So let us instead consider quotients of polynomials with rational coefficients in one variable, which form a field. (The exponential function complicates things here.)
Let us further take ##S=\{x,x^2\}##.
Then ##\alpha x + \beta x^2 = 0 \Longrightarrow \alpha = \beta = 0## if ##\alpha \, , \, \beta \in \mathbb{Q}##.
But ##\alpha x + \beta x^2 = 0 \nRightarrow \alpha = \beta = 0## if ##\alpha \, , \, \beta \in \mathbb{Q}(x)##.
In this case we have an equation ## \alpha x + \beta x^2 = 0## where we can choose ##\alpha = -x \neq 0## and ##\beta = 1 \neq 0##.
So the elements of ##S## are linearly independent over ##\mathbb{Q}##, but linearly dependent over ##\mathbb{Q}(x)##.
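The ##\mathbb{Q}(x)## example can be spot-checked numerically. This sketch (the helper `combo` is my own) represents the function coefficients ##\alpha(x) = -x## and ##\beta(x) = 1## as Python callables and verifies that the combination ##\alpha(x)\,x + \beta(x)\,x^2## vanishes at every sample point, even though neither coefficient is the zero element of ##\mathbb{Q}(x)##.

```python
# Function coefficients from the Q(x) example: alpha(x) = -x, beta(x) = 1.
alpha = lambda x: -x
beta = lambda x: 1

def combo(alpha, beta, x):
    """alpha(x)*x + beta(x)*x^2, the combination over Q(x)."""
    return alpha(x) * x + beta(x) * x**2

# The combination is identically zero, although alpha and beta
# are nonzero elements of Q(x):
assert all(combo(alpha, beta, x) == 0 for x in [-2, -1, 0, 1, 2, 3])
assert alpha(3) != 0 and beta(3) != 0
```

Of course a handful of sample points is not a proof of the identity, but here the cancellation ##-x \cdot x + 1 \cdot x^2 = 0## is exact, which is the whole point of the example.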
 
Math_QED said:
I have a question about linear dependency.

Suppose we have a set ##S## of functions defined on ##\mathbb{R}##.

##S = \{e^x, x^2\}##. It seems very intuitive that this set is linearly independent. But we did something in class I'm unsure about.

Proof:

Let ##\alpha, \beta \in \mathbb{R}##.
Suppose ##\alpha e^x + \beta x^2 = 0##
We need to show that ##\alpha = \beta = 0##
No, that's an incomplete summary of what you need to show. Suppose that your set is ##\{x, 2x\}##.
Suppose ##\alpha x + \beta 2x = 0##
Then ##\alpha = 0## and ##\beta = 0## clearly work.

From this one might mistakenly conclude that the functions ##x## and ##2x## are linearly independent, which is not true.
What you left out from "We need to show that ##\alpha = \beta = 0##" is that there can be no other solutions for these constants.
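The counterexample above can be sketched in a few lines. This is my own minimal illustration (the name `combo` is not from the thread): for ##S = \{x, 2x\}## the trivial choice ##\alpha = \beta = 0## does satisfy ##\alpha x + \beta(2x) = 0##, but so does a nontrivial choice, which is exactly why "the zero coefficients work" alone proves nothing.

```python
def combo(a, b, x):
    """The linear combination a*x + b*(2x) for the set {x, 2x}."""
    return a * x + b * (2 * x)

# The trivial solution works, as it always does...
assert all(combo(0, 0, x) == 0 for x in range(-3, 4))

# ...but a nontrivial one (a = 2, b = -1) works too,
# so x and 2x are linearly dependent:
assert all(combo(2, -1, x) == 0 for x in range(-3, 4))
```

Linear independence requires showing the trivial solution is the *only* solution, not merely that it is *a* solution.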
 