Is the set {e^x, x^2} linearly independent?

member 587159
Hello all.

I have a question about linear dependence.

Suppose we have a set ##S## of functions defined on ##\mathbb{R}##.

##S = \{e^x, x^2\}##. It seems very intuitive that this set is linearly independent. But we did something in class I'm unsure about.

Proof:

Let ##\alpha, \beta \in \mathbb{R}##.
Suppose ##\alpha e^x + \beta x^2 = 0##
We need to show that ##\alpha = \beta = 0##

(Here comes the part I'm unsure about)

Let ##x = 0##, then ##\alpha e^0 + \beta 0^2 = 0##
##\Rightarrow \alpha = 0##

But if ##\alpha = 0##, then ##\beta x^2 = 0## for all ##x##, and taking e.g. ##x = 1## gives ##\beta = 0##.
So ##S## is linearly independent.
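
For reference, the evaluation argument can be checked mechanically. A minimal sketch, assuming sympy: substitute the two sample points ##x = 0## and ##x = 1## into the identity and solve for the coefficients.

```python
from sympy import symbols, exp, Eq, solve

x, alpha, beta = symbols('x alpha beta')
expr = alpha * exp(x) + beta * x**2

# If the identity holds for every x, it must hold at x = 0 and x = 1;
# those two instances already force alpha = beta = 0.
eqs = [Eq(expr.subs(x, 0), 0), Eq(expr.subs(x, 1), 0)]
print(solve(eqs, [alpha, beta]))  # -> {alpha: 0, beta: 0}
```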

My actual question:

Why can we conclude that the set is linearly independent just by checking that ##x = 0## makes it work? Shouldn't we show that it works for all ##x \in \mathbb{R}##?

Thanks in advance.
 
Math_QED said:
Why can we conclude that the set is linearly independent just by checking that ##x = 0## makes it work?
We can't. The conclusion is derived from ##\alpha = 0##, not from ##x=0##.
Shouldn't we show that it works for all ##x \in \mathbb{R}##?
Yes. This is the crucial point. The equation ##\alpha e^x + \beta x^2 = 0## has to hold for all ##x##, so in particular for ##x=0##.
And if ##x=0## already implies ##\alpha = \beta = 0##, what chance is there for other values of ##x##? The coefficients do not depend on ##x##!
 
fresh_42 said:
We can't. The conclusion is derived from ##\alpha = 0##, not from ##x=0##.

Yes. This is the crucial point. The equation ##\alpha e^x + \beta x^2 = 0## has to hold for all ##x##, so in particular for ##x=0##.
And if ##x=0## already implies ##\alpha = \beta = 0##, what chance is there for other values of ##x##? The coefficients do not depend on ##x##!

So we can conclude this because the coefficients do not depend on ##x##? From what I understood, it must hold for all ##x##, so certainly for ##x = 0##? I still don't fully understand, I think.

To complicate things even further, suppose we consider these functions on the domain ##\mathbb{R}_0##; how do we show linear independence then?
 
Math_QED said:
So we can conclude this because the coefficients do not depend on ##x##?
Yes.
From what I understood, it must hold for all ##x##, so certainly for ##x = 0##? I still don't fully understand, I think.
Yes.
True for all ##x## implies true for any particular ##x## as well, and everything derived from that single instance must hold. Holding at one point might not be sufficient for the identity to hold for all ##x##, but it is necessary. And if something fails for one value, it cannot be true for all.
To complicate things even further, suppose we consider these functions on the domain ##\mathbb{R}_0##; how do we show linear independence then?
What do you mean by ##\mathbb{R}_0##? ##\mathbb{R} - \{0\}##?
If ##0## is in the domain, then the method above can be used.
If we don't have ##0##, we have to do some more work, e.g. by solving the system ##\alpha e^x + \beta x^2 = 0## for values ##x \in \{1,2,-1,-2\}##. (I haven't done it; I simply listed enough values to be sure the system can only hold for ##\alpha = \beta = 0##. A quick check is sketched below.)
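
Carrying that suggestion out, as a sketch in sympy: two of the four listed values already suffice, since the resulting ##2 \times 2## system has nonzero determinant ##e - e^{-1}##.

```python
from sympy import symbols, exp, Eq, solve

x, alpha, beta = symbols('x alpha beta')
expr = alpha * exp(x) + beta * x**2

# Evaluate the identity at two nonzero sample points and solve the
# resulting linear system in alpha and beta.
eqs = [Eq(expr.subs(x, v), 0) for v in (1, -1)]
print(solve(eqs, [alpha, beta]))  # -> {alpha: 0, beta: 0}
```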

The domain from which the coefficients ##\alpha \, , \, \beta## are taken is essential.
Until now we have discussed linear independence over ##\mathbb{Q}\, , \,\mathbb{R}##, or ##\mathbb{C}##.
However, the two functions are not linearly independent if we allow the coefficients to be functions themselves.
We could get ##\alpha(x) e^x + \beta (x) x^2 = 0## with ##\alpha(x) = -x^2 \neq 0## and ##\beta(x) = e^x \neq 0##.

Let me cheat here a little bit, because I don't want to think about which coefficient domain containing such functions is also a field. So let us instead consider quotients of polynomials in one variable (rational functions), which form a field. (The exponential function complicates things here.)
Let us further take ##S=\{x,x^2\}##.
Then ##\alpha x + \beta x^2 = 0 \Longrightarrow \alpha = \beta = 0## if ##\alpha \, , \, \beta \in \mathbb{Q}##.
But ##\alpha x + \beta x^2 = 0 \nRightarrow \alpha = \beta = 0## if ##\alpha \, , \, \beta \in \mathbb{Q}(x)##.
In this case we have an equation ## \alpha x + \beta x^2 = 0## where we can choose ##\alpha = -x \neq 0## and ##\beta = 1 \neq 0##.
So the elements of ##S## are linearly independent over ##\mathbb{Q}##, but linearly dependent over ##\mathbb{Q}(x)##.
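
Both nontrivial relations above are easy to confirm symbolically; a minimal sketch, again assuming sympy:

```python
from sympy import symbols, exp, simplify

x = symbols('x')

# Over Q(x): alpha = -x and beta = 1 are nonzero, yet alpha*x + beta*x^2 = 0.
print(simplify((-x) * x + 1 * x**2))               # -> 0
# The same idea with function coefficients for {e^x, x^2}.
print(simplify((-x**2) * exp(x) + exp(x) * x**2))  # -> 0
```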
 
Math_QED said:
I have a question about linear dependence.

Suppose we have a set ##S## of functions defined on ##\mathbb{R}##.

##S = \{e^x, x^2\}##. It seems very intuitive that this set is linearly independent. But we did something in class I'm unsure about.

Proof:

Let ##\alpha, \beta \in \mathbb{R}##.
Suppose ##\alpha e^x + \beta x^2 = 0##
We need to show that ##\alpha = \beta = 0##
No, that's an incomplete summary of what you need to show. Suppose that your set is ##\{x, 2x\}##.
Suppose ##\alpha x + \beta 2x = 0##
Then ##\alpha = 0## and ##\beta = 0## clearly work.

From this one might mistakenly conclude that the functions ##x## and ##2x## are linearly independent, which is not true.
What you left out from "We need to show that ##\alpha = \beta = 0##" is that there can be no other solutions for these constants.
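
To make the counterexample concrete, a quick sympy check that ##\alpha = 2##, ##\beta = -1## is such a nontrivial solution:

```python
from sympy import symbols, simplify

x = symbols('x')

# alpha = 2, beta = -1 solves alpha*x + beta*(2x) = 0 identically,
# so {x, 2x} is linearly dependent although alpha = beta = 0 also works.
print(simplify(2*x + (-1)*(2*x)))  # -> 0
```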
 