# Linearly independent function sets

Gold Member
It is well known that the set of exponential functions

##f:\mathbb{R}\rightarrow \mathbb{R}_+ : f(x)=e^{-kx}##,

with ##k\in\mathbb{R}## is linearly independent. So is the set of sine functions

##f:\mathbb{R}\rightarrow [-1,1]: f(x) = \sin kx##,

with ##k\in\mathbb{R}_+##.

What about other kinds of special functions, would something like the set of gamma functions ##\Gamma (kx)## or sine integrals ##Si (kx)## also be linearly independent? Or the exponential integrals ##E_n (x)## of different integer orders ##n##?

Are there any good sources in literature that handle these questions?

Cryo
Gold Member
Would Sturm-Liouville theory be a good starting point? If you have a set of functions that are solutions of a differential equation, and you can show that the differential operator is self-adjoint, then the orthogonality of the eigenfunctions follows automatically (e.g. Arfken & Weber, "Mathematical Methods for Physicists").

Indeed, both of your examples can be regarded as eigenfunctions of Laplace's equation.

hilbert2
Gold Member
Yes, this is true for the exponential and sine/cosine functions. But the gamma function is not a solution of any such differential equation, nor is the sine integral ##Si (x)##, as far as I know.

It's quite easy to show that any finite set of exponentials ##e^{k_1 x}, e^{k_2 x}, \dots,e^{k_n x}##, with all ##k_i## distinct, is linearly independent. Looking at the equation

##C_1 e^{k_1 x} + C_2 e^{k_2 x} + \dots + C_n e^{k_n x} = 0##,

which would have to hold for all ##x##, one of the multipliers ##k_i## is the largest, and therefore the term on the LHS having it in the exponent eventually grows much faster than all the other terms as ##x## is made large enough. The other terms therefore can't cancel it, and the equation can't hold unless all the coefficients ##C_i## are zero. This approach doesn't require operator theory, and can probably be applied to a finite set of scaled gamma functions ##\Gamma (kx)##.
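As a quick numerical sanity check (not a proof), one can sample a finite set of such exponentials at as many points as there are functions and verify that the resulting matrix has full rank; the exponents below are arbitrary illustrative choices, not values from the thread.

```python
import numpy as np

# Arbitrary distinct exponents (illustrative choices only)
k = np.array([0.5, 1.0, 1.7])
x = np.arange(len(k))  # sample points x = 0, 1, 2

# M[i, j] = e^{k_j * x_i}; with x_i = i this is a Vandermonde matrix
# in the distinct nodes e^{k_j}, hence nonsingular.
M = np.exp(np.outer(x, k))

rank = np.linalg.matrix_rank(M)
print(rank)  # full rank: the sampled exponentials are linearly independent
```

Full rank of the sample matrix certifies that no nontrivial linear combination of these particular exponentials vanishes at all the chosen points.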

mfb
Mentor
The same argument works for all functions where ##\frac{f_{k_1}(x)}{f_{k_2}(x)} \to \infty## for ##x\to \infty## for all ##k_1 > k_2## and similar relations. A different limit for x works as well.
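To illustrate this ratio criterion for the scaled gamma functions mentioned earlier, one can check numerically that the log of ##\Gamma(k_1 x)/\Gamma(k_2 x)## grows without bound for ##k_1 > k_2##; the sketch below uses Python's standard `math.lgamma` with arbitrary sample values of ##x##.

```python
from math import lgamma

# log of Gamma(2x)/Gamma(x); if it grows without bound, the ratio -> infinity
log_ratios = [lgamma(2.0 * x) - lgamma(x) for x in (5.0, 10.0, 20.0, 40.0)]
print(log_ratios)

# Strictly increasing and positive, consistent with
# Gamma(2x)/Gamma(x) -> infinity as x -> infinity.
assert all(a < b for a, b in zip(log_ratios, log_ratios[1:]))
```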

Stephen Tashi
What about other kinds of special functions

The property of being a linearly independent set of functions isn't hard to satisfy. You can find much material about sets of functions that satisfy the stronger property of being orthogonal (with respect to some inner product). For example, there are many well-known families of orthogonal polynomials.
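For instance, the orthogonality of the Legendre polynomials on ##[-1,1]## can be verified numerically; the sketch below uses NumPy's Gauss-Legendre quadrature, and the degrees chosen are arbitrary examples.

```python
import numpy as np
from numpy.polynomial import legendre

# 10-point Gauss-Legendre quadrature is exact for polynomials up to degree 19
nodes, weights = legendre.leggauss(10)

p2 = legendre.legval(nodes, [0, 0, 1])     # P_2(x)
p3 = legendre.legval(nodes, [0, 0, 0, 1])  # P_3(x)

cross = np.sum(weights * p2 * p3)  # <P_2, P_3> should vanish
norm2 = np.sum(weights * p2 * p2)  # <P_2, P_2> = 2/(2*2+1) = 0.4

print(cross, norm2)
```

Orthogonality is strictly stronger than linear independence: any orthogonal family with nonzero norms is automatically linearly independent.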

One unifying way to look at special functions is to regard them as matrix elements of group representations. https://bookstore.ams.org/mmono-22

Gold Member
The same argument works for all functions where ##\frac{f_{k_1}(x)}{f_{k_2}(x)} \to \infty## for ##x\to \infty## for all ##k_1 > k_2## and similar relations. A different limit for x works as well.

Is there any known condition for this to also hold for an infinite sequence of functions? Or a continuum set of functions?

Cryo
Gold Member
The same argument works for all functions where ##\frac{f_{k_1}(x)}{f_{k_2}(x)} \to \infty## for ##x\to \infty## for all ##k_1 > k_2## and similar relations. A different limit for x works as well.

Questions that touch on infinity are very much at the edge of my comfort zone. Can I therefore ask you a question to educate myself? I would expect that a finite set of functions with the property ##\frac{f_{k_1}(x)}{f_{k_2}(x)} \to \infty## for ##x\to \infty## is linearly independent, following hilbert2's logic. Should one be cautious about infinite sets, countably infinite sets, etc.?

fresh_42
Mentor
2021 Award
Is there any known condition for this to also hold for an infinite sequence of functions? Or a continuum set of functions?
What do you really want to know? @mfb has just generalized your argument for ##e^{kx}## and shown that it applies to other families of functions, too. @Stephen Tashi has mentioned that linear independence is the standard, not the exception, i.e. a randomly gathered set of functions will almost always be linearly independent. The linearly dependent sets form a null set.

To me it is similar to the situation with transcendental numbers: almost all numbers are transcendental. Proving it in particular cases is another question, one which depends on how those numbers are defined, since definitions tend to use algebraic notation. Here we have a similar situation: almost all sets of functions are linearly independent, and a proof depends on their definition, which again is given by a restricted alphabet introduced to describe algebraic or analytic dependencies. So we are even in a slightly better situation here, as our alphabet isn't restricted to linear relationships.

Nevertheless, any particular situation is driven by the definition of the functions in question. The general statement is: almost all (randomly) collected sets of functions are linearly independent. If we drop the randomness, then we find ourselves in a particular situation, which requires a definition to be given, not just some examples.

mfb
Mentor
Linear independence means there is no finite sum that adds up to zero. It doesn't matter if we have an infinite set of functions to draw from or not: The finite sum will always have one function that dominates.

hilbert2
Gold Member
Ok, thanks, I think this clarified the matter.

Edit: I initially fell into the trap of thinking that it would be difficult to find such function sequences without exhausting the possibilities, but then again, the set of integers divisible by ##10^6## is as "large" as the set of all integers, and the Cantor set is as large as ##\mathbb{R}##, so this isn't really surprising after all.

Last edited:
fresh_42
Mentor
2021 Award
I think that, given any countable set of functions, the set of functions linearly independent of them is dense in the usual non-trivial topologies, such as a metric-induced topology or the Zariski topology.