Undergrad Proving linear independence of two functions in a vector space

Summary
The discussion centers on proving the linear independence of two functions, f(x) = x and g(x) = 1/x, in the vector space of functions from R+ to R. The author of the exercise claims these functions are linearly independent, but the participant initially believes they are dependent due to finding a specific value of x that satisfies the equation c1f + c2g = 0 with non-zero coefficients. The key misunderstanding is clarified: linear independence must hold for all x, not just for a particular instance. The participant realizes that the zero function must equal zero for all arguments, leading to the conclusion that the functions are indeed linearly independent.
fatpotato
TL;DR
Proving linear independence of two simple functions in a vector space. The exercise is from a textbook, but the solution seems incoherent.
Hello,

I am doing a vector space exercise involving functions, using the free linear algebra book by Jim Hefferon (available at http://joshua.smcvt.edu/linearalgebra/book.pdf), and I have trouble with the author's solution to problem II.1.24 (a) on page 117, which goes like this:

Prove that each set ##\{f,g\}## is linearly independent in the vector space of all functions ##\mathbb{R}^+ \rightarrow \mathbb{R}##. For part (a) of the exercise, ##f(x) = x## and ##g(x) = \frac{1}{x}##.

If I understand correctly, I need to show that the only solution to the equation ##c_1 f + c_2 g = 0## is the trivial one, ##c_1 = c_2 = 0##.

By choosing ##c_1 = 1## and ##c_2 = -1##, I get:

$$ c_1 f + c_2 g = 0 \iff c_1f = -c_2g \iff f = g \iff x = \frac{1}{x}$$

Now, there is indeed a value of ##x## for which this equation is satisfied with non-zero coefficients, namely ##x=1##. So, for at least one value of ##x##, there is a non-trivial solution to the previous linear combination, which would make the functions linearly dependent.

However, the author clearly states that these functions are linearly independent (see the answer at http://joshua.smcvt.edu/linearalgebra/jhanswer.pdf#ans.Two.II.1.24). What am I doing wrong? Should I only consider the two functions in their most general sense, without evaluating them at a given ##x##?

Thank you.

Edit: spelling
 
The relation ##c_1 f + c_2 g = 0## has to hold for all ##x##, not just some.
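For concreteness, here is a minimal worked version of that argument (a sketch, not quoted from the book's answer key). If ##c_1 f + c_2 g = 0## as a function, the equality has to hold at every ##x > 0##, so evaluating at two convenient points already forces both coefficients to vanish:

$$\begin{cases} x = 1: & c_1 + c_2 = 0 \\ x = 2: & 2c_1 + \tfrac{1}{2}c_2 = 0 \end{cases} \implies c_1 = c_2 = 0,$$

so ##\{f, g\}## is linearly independent.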
 
  • Like
Likes WWGD and fatpotato
Hello,

Thank you, I was not sure whether a single occurrence would prove linear dependence. I will keep in mind that the relation has to hold for all ##x##.

Best regards
 
fatpotato said:
Summary:: Proving linear independence of two simple functions in a vector space. The exercise is from a textbook, but the solution seems incoherent.

What am I doing wrong? Should I only consider the two functions in their most general sense, without evaluating them at a given ##x##?
As @martinbn wrote, the 0 here is the zero _function_, which is 0 for all arguments, not the _number_ 0.
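In symbols (a restatement of the point above), ##c_1 f + c_2 g = 0## is an equality in the vector space of functions, so it means

$$c_1 x + \frac{c_2}{x} = 0 \quad \text{for every } x > 0,$$

not merely that the left-hand side vanishes at some particular ##x##.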
 
  • Like
Likes fatpotato
WWGD said:
which is 0 for all arguments
This is what clicked for me. As a shortcut, I had assumed it only had to equal zero at some point, not be zero for all arguments.

Thank you for your contribution!
 
  • Like
Likes WWGD
fatpotato said:
This is what clicked for me. As a shortcut, I had assumed it only had to equal zero at some point, not be zero for all arguments.

Thank you for your contribution!
With your definition of linear independence, how could any two functions ever be linearly independent? Let ##f(x)## and ##g(x)## be functions and take any value ##x = a##. If ##f(a) = 0## or ##g(a) = 0##, then we have ##f(a) + 0\cdot g(a) = 0## or ##0\cdot f(a) + g(a) = 0##. And if ##f(a), g(a) \ne 0##, then ##f(a) - \frac{f(a)}{g(a)} g(a) = 0##.

In other words, any two numbers are always linearly dependent.
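Applied to the functions from this exercise: at the single point ##a = 2## we have ##f(2) = 2## and ##g(2) = \tfrac{1}{2}##, so

$$f(2) - \frac{f(2)}{g(2)}\, g(2) = 2 - 4 \cdot \tfrac{1}{2} = 0,$$

even though ##f - 4g## is certainly not the zero function (it equals ##-3## at ##x = 1##).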
 