Linear Dependency Check: {e^x, e^{2x}}

evagelos
Prove whether the following set is linearly independent or not:

{e^x,e^{2x}}
 
If not, then there must exist two scalars, a and b (they can't both be zero), such that

a e^x + b e^{2x} = 0

for all x. Can you find such scalars?
 
Since that is to be true for all x, you can get two simple equations to solve for a and b by choosing two values of x.
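As an aside (not from the thread itself), the two-point check suggested above can be run numerically; a minimal sketch, assuming NumPy is available:

```python
import numpy as np

# Evaluate a*e^x + b*e^{2x} = 0 at x = 0 and x = 1:
#   x = 0:  a + b = 0
#   x = 1:  a*e + b*e^2 = 0
M = np.array([[1.0, 1.0],
              [np.e, np.e**2]])

# The homogeneous system M @ [a, b] = 0 has only the trivial
# solution exactly when det(M) != 0.
print(np.linalg.det(M))            # e^2 - e, nonzero

a, b = np.linalg.solve(M, np.zeros(2))
print(a, b)                        # 0.0 0.0 -- the only solution
```

Since the determinant e^2 - e is nonzero, a = b = 0 is forced, which is the independence conclusion reached later in the thread.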
 
I think it's possible:
a e^x + b e^{2x} = 0 <==> a e^x = - b e^{2x} <==> ln (a e^x) = - ln (b e^{2x}) <==> ax = -2bx

So if we carefully choose a, we can definitely show your given set is linearly dependent.
 
jeff1evesque said:
I think it's possible:
a e^x + b e^{2x} = 0 <==> a e^x = - b e^{2x} <==> ln (a e^x) = - ln (b e^{2x}) <==> ax = -2bx

So above, if we carefully choose a, we can definitely show your given set is linearly dependent.

Except

\ln(a e^x) = \ln(a) + x
\ln(-b e^{2x}) = \ln(-b) + 2x

so you need

\ln(a) - \ln(-b) = x for all x!
 
jbunniii said:
Except

\ln(a e^x) = \ln(a) + x
\ln(-b e^{2x}) = \ln(-b) + 2x

so you need

\ln(a) - \ln(-b) = x for all x!

Bummer, so sorry.
 
jbunniii said:
If not, then there must exist two scalars, a and b (they can't both be zero), such that

a e^x + b e^{2x} = 0

for all x. Can you find such scalars?

Put x=0,then a+b=0 ====> a=-b

Hence the above are linearly dependent

CORRECT?

THANKS
 
No, not correct. You need to show that a=b=0. That is the DEFINITION of linear dependence.
 
matt grime said:
No, not correct. You need to show that a=b=0. That is the DEFINITION of linear dependence.


Matt grime, please write down for me the definition of when a set of n vectors is linearly independent.
 
  • #10
Apologies, I missed out an 'in'. The functions e^x and e^2x are clearly linearly independent (over R), which is where I misunderstood what you're saying. (If you don't see it, just rearrange ae^x + be^2x = 0 to see that your claim of linear dependence implies that e^x = -a/b for all x.)
 
  • #11
evagelos said:
Put x=0,then a+b=0 ====> a=-b

Hence the above are linearly dependent

CORRECT?

THANKS
No, that is incorrect. You have shown that taking a= -b makes ae^x+ be^{2x}= 0 for x= 0. To show they are linearly dependent, you would have to find a and b so that was true for ALL x. And you can't. Taking x= 0 gives a+ b= 0, so a=-b, while taking x= 1 gives ae+ be^2= 0 so a= -be. Those two equations are only possible if a= b= 0.

Another way to prove it is this: if ae^x+ be^{2x}= 0 for all x, then it is a constant and its derivative must be 0 for all x: ae^x+ 2be^{2x}= 0. Taking x= 0 in both of those, a+ b= 0 and a+ 2b= 0. Again, those two equations give a= b= 0.
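The derivative argument above is essentially the Wronskian test in disguise; here is a quick symbolic check (my own sketch, assuming SymPy is available):

```python
import sympy as sp

x = sp.symbols('x')
f, g = sp.exp(x), sp.exp(2 * x)

# Wronskian W(f, g) = f*g' - f'*g. If W is not identically zero,
# the two functions are linearly independent.
W = sp.simplify(f * sp.diff(g, x) - sp.diff(f, x) * g)
print(W)  # exp(3*x), which is never zero
```

Because W = e^{3x} never vanishes, no nontrivial combination ae^x + be^{2x} can be the zero function.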
 
  • #12
HallsofIvy said:
No, that is incorrect. You have shown that taking a= -b makes ae^x+ be^{2x}= 0 for x= 0. To show they are linearly dependent, you would have to find a and b so that was true for ALL x. And you can't. Taking x= 0 gives a+ b= 0, so a=-b, while taking x= 1 gives ae+ be^2= 0 so a= -be. Those two equations are only possible if a= b= 0.

Another way to prove it is this: if ae^x+ be^{2x}= 0 for all x, then it is a constant and its derivative must be 0 for all x: ae^x+ 2be^{2x}= 0. Taking x= 0 in both of those, a+ b= 0 and a+ 2b= 0. Again, those two equations give a= b= 0.

Is not the definition of linear independence the following:

for all a, b and x, real numbers: ae^x + be^{2x}=0 \Longrightarrow a=0=b
 
  • #13
Yes, that is why you can't "prove e^x and e^{2x} are dependent": they are independent.
 
  • #14
Remember this is equal to the zero *function* in the vector space of real valued functions. The zero function is the one that is zero for all its inputs.
 
  • #15
evagelos said:
Is not the definition of linear independence the following:

for all a, b and x, real numbers: ae^x + be^{2x}=0 \Longrightarrow a=0=b

HallsofIvy said:
Yes, that is why you can't "prove e^x and e^{2x} are dependent": they are independent.



Then the negation of the above definition should imply for linear dependency

But the negation of the above definition is:

There exist a, b, and x such that ae^x + be^{2x} = 0 and a \neq 0 and b \neq 0.

HENCE if we put a = 2, b = -2, x = 0, we satisfy the linear dependency definition.
 
  • #16
That isn't right. You have your quantifiers all kludged up into one. It should have read

(for all a,b)(ae^x + be^2x=0 for all x => a=b=0)

the negation of which is

(there exists a,b)(ae^x + be^2x=0 for all x AND NOT(a=b=0))

The for all x is not part of the definition of linear (in)dependence, it is part of the definition of what it means for the function to be zero.
 
  • #17
matt grime said:
That isn't right. You have your quantifiers all kludged up into one. It should have read

(for all a,b)(ae^x + be^2x=0 for all x => a=b=0)

.

Can you put that into a quantifier form??
 
  • #18
In what way is that not in a useful quantifier form (for those who like things like that)? Really, we shouldn't even bother with the quantifier "for all a,b", and putting things in the full on formal abstract quantifier notation just makes things far more opaque than they need to be.

You're making a very easy question seem very hard: can you find real numbers a and b not both equal to 0 so that

ae^x + be^2x

is the zero function? No - it has been proven several times in this thread.
 
  • #19
The point is that you are working in a vector space of functions. When we say "af(x)+ bg(x)= 0" we mean it is equal to the 0 function- 0 for all x.
 
  • #20
matt grime said:
That isn't right. You have your quantifiers all kludged up into one. It should have read

(for all a,b)(ae^x + be^2x=0 for all x => a=b=0)

the negation of which is

(there exists a,b)(ae^x + be^2x=0 for all x AND NOT(a=b=0))

The for all x is not part of the definition of linear (in)dependence, it is part of the definition of what it means for the function to be zero.

The negation of:

(for all a,b)(ae^x + be^2x = 0 for all x => a=b=0) is

(there exist a,b) [~(ae^x + be^2x = 0 for all x => a=b=0)] =>

(there exist a,b) [ae^x + be^2x = 0 for some x AND a =/= 0, b =/= 0]


Whether you put the quantifier in front, like I did, or at the end, like you did, we still have the same result.

The negation of \forall x Px is: \exists x \neg Px.

In our case Px is (ae^x + be^2x = 0 => a=b=0) and the negation of Px is:


ae^x + be^2x = 0 and a =/= 0, b =/= 0

When you want to show that the function ae^x + be^2x is zero for all x ε R, YOU write:

for all x ε R, ae^x + be^2x = 0 OR ae^x + be^2x = 0, for all x.

To say that "for all x" is not part of the definition, then what is it part of?
 
  • #21
evagelos said:
The negation of:

(for all a,b)(ae^x + be^2x = 0 for all x => a=b=0) is

(there exist a,b) [~(ae^x + be^2x = 0 for all x => a=b=0)] =>

(there exist a,b) [ae^x + be^2x = 0 for some x AND a =/= 0, b =/= 0]


Whether you put the quantifier in front, like I did, or at the end, like you did, we still have the same result.

The negation of \forall x Px is: \exists x \neg Px.

In our case Px is (ae^x + be^2x = 0 => a=b=0) and the negation of Px is:


ae^x + be^2x = 0 and a =/= 0, b =/= 0

When you want to show that the function ae^x + be^2x is zero for all x ε R, YOU write:

for all x ε R, ae^x + be^2x = 0 OR ae^x + be^2x = 0, for all x.

To say that "for all x" is not part of the definition, then what is it part of?
You did not quote all of what he said. He said "The for all x is not part of the definition of linear (in)dependence" and then followed with "it is part of the definition of what it means for the function to be zero" and that was the point of my last response:

We are talking about functions of x. Saying that "ae^x+ be^{2x}= 0" means that the function f(x)= ae^x+ be^{2x} is equal to the "0 function": g(x)= 0.
 
  • #22
To add to what Halls said: you have not taken the negation of A => B correctly. The positioning of the quantifiers is very important: the negation of A => B is A and not(B), so the "for all" in there is not changed. You have negated A => B and gotten, well, goodness knows what in relation to A and B.
 
  • #23
If {e^x, e^{2x}} is linearly dependent, then e^{2x} = k e^x for some constant k \in R, which gives e^x = k. But e^x is not a constant, so {e^x, e^{2x}} is linearly independent.
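The ratio argument in that post can be checked numerically; a small sketch of my own (not from the thread):

```python
import math

# If e^{2x} = k * e^x for a single constant k, then
# k = e^{2x} / e^x = e^x for every x -- impossible,
# since the ratio changes as x changes:
ratios = [math.exp(2 * x) / math.exp(x) for x in (0.0, 1.0, 2.0)]
print(ratios)  # approximately [1.0, e, e^2] -- clearly not constant
```

Since the ratio e^{2x}/e^x = e^x takes different values at different x, no constant k works, so neither function is a scalar multiple of the other.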
 
  • #24
HallsofIvy said:
You did not quote all of what he said. He said "The for all x is not part of the definition of linear (in)dependence" and then followed with "it is part of the definition of what it means for the function to be zero" and that was the point of my last response:

We are talking about functions of x. Saying that "ae^x+ be^{2x}= 0" means that the function f(x)= ae^x+ be^{2x} is equal to the "0 function": g(x)= 0.

What is your definition of linear independence, in symbolic form or not?
 
  • #25
Sigh: A set of vectors
\{v_1, v_2, \cdot\cdot\cdot, v_n\}
is independent if the only set of scalars
\{a_1, a_2, \cdot\cdot\cdot, a_n\}
such that
a_1v_1+ a_2v_2+ \cdot\cdot\cdot+ a_nv_n= 0
is
a_1= a_2= \cdot\cdot\cdot= a_n= 0.

But the point you seem to be having trouble with is this: since the left side of the first equation is a linear combination of vectors, the "0" on the right side is the 0 vector!

When we talk about functions being "independent" or "dependent" we are talking about the functions as members of some vector space (the set of all polynomials, continuous functions, differentiable functions, infinitely differentiable functions, etc.), and the 0 vector is the 0 function, i.e. the function f such that f(x)= 0 for all x.

Given almost any set of functions, you can always find numbers such that the linear combination is 0 at one value of x. That has nothing to do with them being "independent".
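That last point can be made concrete with a small example of my own (not from the thread): nonzero scalars can make the combination vanish at one value of x without it being the zero function:

```python
import math

a, b = 1.0, -1.0  # nonzero scalars chosen so the sum vanishes at x = 0

def comb(x):
    # The linear combination a*e^x + b*e^{2x}
    return a * math.exp(x) + b * math.exp(2 * x)

print(comb(0.0))  # 0.0 -- zero at this single point
print(comb(1.0))  # e - e^2, about -4.67 -- so this is NOT the zero function
```

Vanishing at one point says nothing about dependence; the combination must equal the zero function, i.e. vanish for every x.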
 
  • #26


O.K. here is a proof having the "for all x" part:


Let b = 1

Let a = -e^x

AND ae^x + be^{2x} = e^x(a + be^x) = e^x(-e^x + e^x) = e^x \cdot 0 = 0 for all x \in R.

Hence we have proved there exist a \neq 0, b \neq 0 such that:

ae^x + be^{2x} = 0 for all x
 
  • #27
matt grime said:
: can you find real numbers a and b not both equal to 0 so that

ae^x + be^2x

is the zero function? No - it has been proven several times in this thread.

If you cannot find them it does not mean they do not exist
 
  • #28


evagelos said:
O.K here is a proof having the for all x part:


Let b=1

Let a= -e^x

AND ae^x + be^{2x} = e^x( a + be^x) = e^x( -e^x + e^x)
How do you get that last step? Did you set a= -e^x and b= 1? Of course, you can't do that. -e^x is not a constant.

= e^x \cdot 0 = 0 for all x \in R.

Hence we have proved there exist a\neq 0,b\neq 0 and such that :

ae^x + be^{2x} = 0 for all x
 
  • #29
matt grime said:
In what way is that not in a useful quantifier form (for those who like things like that)? Really, we shouldn't even bother with the quantifier "for all a,b", and putting things in the full on formal abstract quantifier notation just makes things far more opaque than they need to be.

You're making a very easy question seem very hard: can you find real numbers a and b not both equal to 0 so that

ae^x + be^2x

is the zero function? No - it has been proven several times in this thread.

evagelos said:
If you cannot find them it does not mean they do not exist
I can only conclude that you have not understood anything anyone has said here. matt grime is saying clearly here that it has been proven several times in this thread that it is impossible to find such a and b. Your statement here is not at all responsive to that.
 
  • #30
HallsofIvy said:
I can only conclude that you have not understood anything anyone has said here. matt grime is saying clearly here that it has been proven several time in this thread that it is impossible to find such a and b. Your statement here is not at all responsive to that.


Proofs in forums, universities, books, and in the mathematical literature in general are sometimes wrong and sometimes right, not ALWAYS RIGHT.

AND to check whether they are correct or wrong is an impossible matter, because every line can be disputed, since they are not written in a formal way where every line of the proof is justified and hence can be checked.

CAN anybody in this forum produce a formal proof establishing the linear independence of the said functions?

IF not, then linear independence can be doubted.

How can a and b be constants when they can be quantified?
 