# Questions about linear differential equations

## Main Question or Discussion Point

What is the vector space under consideration when we say that a linear differential equation is of the form
$$Dy=f$$
where the differential operator D is a linear operator? What is its underlying field?

Also, is the Fourier series just an expansion (for a subspace of this vector space) in the "sine and cosine basis"?

What is the vector space under consideration when we say that a linear differential equation is of the form
$$Dy=f$$
where the differential operator D is a linear operator? What is its underlying field?
It varies from problem to problem. Basically, you need to ask yourself which solutions you accept. Usually differentiable functions are accepted, and then you work in the vector space of differentiable functions. But sometimes you will only be happy with smooth functions (= functions differentiable infinitely many times), and sometimes you will want distributional solutions. The underlying field is the same story: sometimes you want the coefficients to be real, sometimes complex.

Also, is the Fourier series just an expansion (for a subspace of this vector space) in the "sine and cosine basis"?
Well no, since a Fourier series is an infinite sum, while an expansion in the sense of linear algebra is finite by definition. So it's not exactly the same thing (but it is related, of course).
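As an illustration of that distinction (not part of the original discussion), here is a minimal Python sketch of how *finite partial sums* of a Fourier series approximate a function; the square wave and the number of terms are chosen arbitrarily:

```python
import math

def square_wave(x):
    """Square wave: +1 on (0, pi), -1 on (-pi, 0)."""
    return math.copysign(1.0, math.sin(x))

def fourier_partial_sum(x, n_terms):
    """Partial sum of the Fourier series of the square wave:
    (4/pi) * sum over odd k of sin(k x) / k."""
    total = 0.0
    for k in range(1, 2 * n_terms, 2):  # odd k only
        total += math.sin(k * x) / k
    return 4.0 / math.pi * total

x = math.pi / 2
approx = fourier_partial_sum(x, 500)
print(approx, square_wave(x))  # partial sum is close to the true value 1
```

No finite sum equals the square wave exactly; the series only converges to it, which is why it is not an "expansion" in the strict linear-algebra sense.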

Which field of mathematics studies these spaces in general? Functional analysis?

Which field of mathematics studies these spaces in general? Functional analysis?
Yes. Function spaces are part of functional analysis. Many books on differential equations and integral equations will also introduce the necessary functional analysis.

Could you suggest some material on linear differential equations that keeps in mind this whole vector space approach?

HallsofIvy
Homework Helper
Infinite dimensional function spaces are in the realm of "Functional Analysis" and arise as solution spaces of linear partial differential equations. The solution spaces of ordinary linear (homogeneous) differential equations are finite dimensional and so more properly "Linear Algebra".

Infinite dimensional function spaces are in the realm of "Functional Analysis" and arise as solution spaces of linear partial differential equations. The solution spaces of ordinary linear (homogeneous) differential equations are finite dimensional and so more properly "Linear Algebra".
In which sense are the solution spaces to ordinary linear differential equations and partial linear differential equations, respectively, finite and infinite? Are you treating function spaces as vector spaces? In which case what are some of its basis sets?

Also, is the theory of linear operators "equivalent" to the theory of vector spaces? In other words, is it necessarily the case that if a map between two mathematical objects is linear, those mathematical objects are vectors in some vector space? Or can linear operators be maps between other things that aren't vectors?

pasmith
Homework Helper
In which sense are the solution spaces to ordinary linear differential equations and partial linear differential equations, respectively, finite and infinite? Are you treating function spaces as vector spaces? In which case what are some of its basis sets?

Also, is the theory of linear operators "equivalent" to the theory of vector spaces? In other words, is it necessarily the case that if a map between two mathematical objects is linear, those mathematical objects are vectors in some vector space? Or can linear operators be maps between other things that aren't vectors?
There is no concept of "linear" except for maps between vector spaces, in the same way that you can't talk about functions between arbitrary sets being "homomorphisms" or "continuous"; those notions only make sense for functions between groups and between topological spaces.

Given any set $X$, the set $F(X)$ of functions with domain $X$ and codomain $\mathbb{R}$ can be made into a vector space over $\mathbb{R}$ by defining vector addition and scalar multiplication pointwise:$$f + g : x \mapsto f(x) + g(x), \\ af : x \mapsto af(x).$$ If $X = \{x_1, \dots, x_n\}$ is finite then we can describe any $f \in F(X)$ as a linear combination of the functions $\phi_i : X \to \mathbb{R}$ where $$\phi_i(x) = \begin{cases} 1, & x = x_i, \\ 0, & x \neq x_i,\end{cases}$$ so that $$f = \sum_{i=1}^n f(x_i)\phi_i.$$ These functions $\phi_i$ are then a basis for $F(X)$. Since there are a finite number of them we say that $F(X)$ is finite-dimensional.
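For a finite $X$ this expansion is easy to compute explicitly. A minimal Python sketch, where the set $X$ and the sample function are made-up examples:

```python
# F(X) for a finite set X, with the indicator functions phi_i as a basis.
X = ["a", "b", "c"]

def phi(xi):
    """Indicator basis function: 1 at xi, 0 elsewhere on X."""
    return lambda x: 1.0 if x == xi else 0.0

def expand(f):
    """Express f as the linear combination sum_i f(x_i) * phi_i,
    returned as a new function on X."""
    coeffs = [f(xi) for xi in X]          # coordinates of f in this basis
    basis = [phi(xi) for xi in X]
    return lambda x: sum(c * b(x) for c, b in zip(coeffs, basis))

f = lambda x: {"a": 2.0, "b": -1.0, "c": 5.0}[x]
g = expand(f)
print([g(x) for x in X])  # agrees with f on all of X
```

The coordinates of $f$ in this basis are just its values $f(x_i)$, which is exactly the formula $f = \sum_i f(x_i)\phi_i$ above.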

But if $X$ is not finite then we need infinitely many functions $\phi_y : X \to \mathbb{R}$, with $\phi_y(x) = 1$ if $x = y$ and $\phi_y(x) = 0$ otherwise, to describe an arbitrary $f \in F(X)$ as a linear combination in the same way, and $F(X)$ is then infinite-dimensional.

Letting $X = \mathbb{R}$, we see that $\mathbb{R}$ is not finite, so the space $F(\mathbb{R})$ is infinite-dimensional, and the subspace $C^{\infty}(\mathbb{R})$ of smooth functions is also infinite-dimensional.

We can define a linear operator (or function, or map, the terms are equivalent) $D : C^{\infty}(\mathbb{R}) \to C^{\infty}(\mathbb{R}) : f \mapsto f'$ which takes each function to its derivative. If $P$ is a polynomial of order $n$ in $D$ with coefficients in $\mathbb{R}$ then we can consider $$L : C^{\infty}(\mathbb{R}) \to C^{\infty}(\mathbb{R}) : f \mapsto P(D)f$$ which is a linear operator, and $$L(f) = 0$$ is then a homogeneous linear differential equation of order $n$.

The set $\{f \in C^{\infty}(\mathbb{R}) : L(f) = 0\}$ is a subspace called the kernel of $L$, and it turns out that this kernel is finite-dimensional. The basis depends on the roots of $P$. In general we can factorise $P$ over $\mathbb{R}$ to obtain $$P(\lambda) = A\prod_{i = 1}^{r} (\lambda - \lambda_i)^{n_i} \prod_{i=1}^{c} ((\lambda - p_i)^2 + q_i^2)^{m_i}$$ where $A \neq 0$ is a constant and $r$ and $c$ are respectively the number of distinct real roots $\lambda_i$ and distinct pairs of complex conjugate roots $p_i \pm iq_i$; $\sum_{i=1}^r n_i + 2\sum_{i=1}^c m_i = n$, the degree of $P$.

For each distinct $\lambda_i$ we acquire the basis functions $$e^{\lambda_i x}, xe^{\lambda_i x}, \dots, x^{n_i - 1}e^{\lambda_i x}$$ and for each distinct $(p_i, q_i)$ we acquire the basis functions $$e^{p_i x}\cos(q_i x), xe^{p_i x}\cos(q_i x), \dots, x^{m_i - 1}e^{p_i x}\cos(q_i x), \\ e^{p_i x}\sin(q_i x), xe^{p_i x}\sin(q_i x), \dots, x^{m_i - 1}e^{p_i x}\sin(q_i x).$$
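A small numerical sanity check of that recipe, using the concrete equation $y'' - 3y' + 2y = 0$ (characteristic polynomial $\lambda^2 - 3\lambda + 2$ with simple real roots 1 and 2, so the basis is $\{e^{x}, e^{2x}\}$); the step size and sample point below are arbitrary:

```python
import math

# Roots of the characteristic polynomial l^2 - 3l + 2 of y'' - 3y' + 2y = 0.
roots = [1.0, 2.0]
basis = [lambda x, lam=lam: math.exp(lam * x) for lam in roots]

def residual(f, x, h=1e-4):
    """Numerically evaluate f'' - 3f' + 2f at x via central differences."""
    fpp = (f(x + h) - 2 * f(x) + f(x - h)) / h**2
    fp = (f(x + h) - f(x - h)) / (2 * h)
    return fpp - 3 * fp + 2 * f(x)

for f in basis:
    print(residual(f, 0.7))  # approximately zero for each basis function
```

Each basis function makes the left-hand side of the equation vanish (up to finite-difference error), as the theory predicts.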

That's awesome!!

So you are saying that for all functions in ##\{f\in{}C^\infty(\mathbb{R}):L(f)=0\}## (i.e. the kernel of ##L##) there is a finite subset of ##C^\infty(\mathbb{R})## that is that function's basis set? That can be found using the method you showed above?

Can all functions in ##C^\infty(\mathbb{R})## be expanded using this basis? If not, does the subset of ##C^\infty(\mathbb{R})## that can be expanded this way have a name? And are the ##f##'s in the kernel of some ##L## a subset of that set? Or are they the whole set?

Also, is the idea behind other expansions (power series, Fourier series) the same? But now the dimension is infinite?

Where can I read more about this? Is this functional analysis? I took linear algebra and it didn't cover any of this.. Thanks!

If $P$ is a polynomial of order $n$ in $D$ with coefficients in $\mathbb{R}$ then we can consider $$L : C^{\infty}(\mathbb{R}) \to C^{\infty}(\mathbb{R}) : f \mapsto P(D)f$$ which is a linear operator, and $$L(f) = 0$$ is then a homogeneous linear differential equation of order $n$.
There is another thing.. Is it just the elements in the kernel of homogeneous linear differential equations with constant coefficients that can be factorized using the basis you mentioned? Or can I do it to any element in the kernel of any homogeneous linear differential equation?

HallsofIvy
Homework Helper
In which sense are the solution spaces to ordinary linear differential equations and partial linear differential equations, respectively, finite and infinite?
You are mis-quoting me. I did not say they were "finite and infinite". I said they were finite and infinite dimensional. That's completely different. The solution set to any differential equation is infinite in the sense that it contains an infinite number of distinct functions.

Are you treating function spaces as vector spaces? In which case what are some of its basis sets?
Yes, of course. For example, the solution set of an nth order linear homogeneous equation forms an n dimensional vector space. That is typically proven by showing that the solution satisfying y(0)= 1, y'(0)= 0, ..., y^(n-1)(0)= 0; the solution satisfying y(0)= 0, y'(0)= 1, ..., y^(n-1)(0)= 0; ...; and the solution satisfying y(0)= 0, y'(0)= 0, ..., y^(n-1)(0)= 1 together form a basis for the vector space of all solutions.
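For a concrete instance of this basis, take $y'' + y = 0$ ($n = 2$): the solution with $y(0)=1,\ y'(0)=0$ is $\cos x$, and the one with $y(0)=0,\ y'(0)=1$ is $\sin x$. A short Python check, with made-up initial data:

```python
import math

# Fundamental basis for y'' + y = 0 singled out by initial conditions:
#   y1 with y1(0) = 1, y1'(0) = 0  ->  cos(x)
#   y2 with y2(0) = 0, y2'(0) = 1  ->  sin(x)
y1, y2 = math.cos, math.sin

# Any solution is determined by its initial data: y = y(0)*y1 + y'(0)*y2.
# Example solution: y(x) = 3 cos x - 2 sin x, so y(0) = 3 and y'(0) = -2.
def y(x):
    return 3 * math.cos(x) - 2 * math.sin(x)

def reconstructed(x, y0=3.0, yp0=-2.0):
    return y0 * y1(x) + yp0 * y2(x)

print(y(1.234), reconstructed(1.234))  # equal
```

Reading off the coordinates of a solution in this basis is just evaluating its initial data, which is what makes this particular basis convenient.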

(And notice that I did include "homogeneous" in my original post. The set of all solutions to a nonhomogeneous linear differential equation forms a "linear manifold". If you think of a vector space as a plane containing the origin, a linear manifold is a plane that does not contain the origin.)

Also, is the theory of linear operators "equivalent" to the theory of vector spaces? In other words, is it necessarily the case that if a map between two mathematical objects is linear, those mathematical objects are vectors in some vector space? Or can linear operators be maps between other things that aren't vectors?
The definition of "linear" pretty much requires that you be able to add objects and multiply by numbers- that they form a vector space.

pasmith
Homework Helper
That's awesome!!

So you are saying that for all functions in ##\{f\in{}C^\infty(\mathbb{R}):L(f)=0\}## (i.e. the kernel of ##L##) there is a finite subset of ##C^\infty(\mathbb{R})## that is that function's basis set?
The function doesn't have a basis. Spaces have bases, and a function which is a member of a space can be expressed as a linear combination of basis vectors.

Can all functions in ##C^\infty(\mathbb{R})## be expanded using this basis?
No, only those in the kernel of $L$.

Also, is the idea behind other expansions (power series, Fourier series) the same? But now the dimension is infinite?
The ideas behind Fourier series and expansions in Bessel functions, Legendre polynomials, or Chebyshev polynomials are similar, and are the subject of Sturm-Liouville theory.

There is another thing.. Is it just the elements in the kernel of homogeneous linear differential equations with constant coefficients that can be factorized using the basis you mentioned? Or can I do it to any element in the kernel of any homogeneous linear differential equation?
Not in general. For constant coefficients, we have available the fundamental theorem of algebra, so we can always obtain first-order factors. Thus the problem is reduced to solving $(D - \lambda)^k f = 0$ for $\lambda \in \mathbb{C}$; the linearly independent solutions are $x^re^{\lambda x}$ for $r = 0, 1, \dots, k-1$.
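A quick numerical check of that claim for $k = 2$, $\lambda = 3$ (values chosen arbitrarily), using central differences as a rough stand-in for $D$: applying $(D - 3)$ once to $f(x) = xe^{3x}$ should give $e^{3x}$, and applying it twice should give zero.

```python
import math

# Check that (D - 3)^2 annihilates f(x) = x e^{3x}  (k = 2, lambda = 3).
lam = 3.0
f = lambda x: x * math.exp(lam * x)

def D_minus_lam(g, h=1e-5):
    """(D - lam)g evaluated via a central difference; a numerical sketch."""
    return lambda x: (g(x + h) - g(x - h)) / (2 * h) - lam * g(x)

once = D_minus_lam(f)        # should be e^{3x}
twice = D_minus_lam(once)    # should be (approximately) zero
print(once(0.5), math.exp(lam * 0.5))
print(twice(0.5))            # approximately zero
```

This matches the general pattern: each application of $(D - \lambda)$ lowers the polynomial factor by one degree, so $(D - \lambda)^k$ kills $x^r e^{\lambda x}$ for $r < k$.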

We have seen how $F(X)$ can be turned into a vector space. We can also turn it into a different algebraic object, namely a ring, by introducing pointwise multiplication of functions:
$$fg : x \mapsto f(x)g(x).$$ It follows that this ring is commutative since multiplication of real numbers is commutative. A linear differential operator with non-constant coefficients $$L = D^n + a_{n-1}(x)D^{n-1} + \dots + a_1(x)D + a_0(x)$$ is in effect a polynomial with coefficients in the ring $C^{\infty}(\mathbb{R})$. Determining when polynomials with coefficients in a given commutative ring can be factorized is an aspect of abstract algebra, and it isn't always possible. Also, unlike in the constant coefficient case, the operators $(D - p(x))$ and $(D - q(x))$ do not commute unless $p' = q'$. Thus we don't approach the solution of $n$th-order linear homogeneous ODEs by that route, although it can be shown by other means that the kernel of $L$ is finite-dimensional.
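The non-commutativity claim can be checked directly by writing out both compositions for smooth $p$ and $q$:
$$(D - p)(D - q)f = \left(f' - qf\right)' - p\left(f' - qf\right) = f'' - (p + q)f' + (pq - q')f,$$
$$(D - q)(D - p)f = f'' - (p + q)f' + (pq - p')f,$$
so the commutator is
$$\bigl[(D - p)(D - q) - (D - q)(D - p)\bigr]f = (p' - q')f,$$
which vanishes for all $f$ precisely when $p' = q'$. In the constant-coefficient case $p' = q' = 0$ automatically, which is why the factorisation approach works there.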

Where can I read more about this? Is this functional analysis? I took linear algebra and it didn't cover any of this.. Thanks!
There are various concepts in abstract algebra which arise; principally inner product spaces and normed spaces. But general results from algebra will only take you so far, and you do at some point have to do some functional analysis.

EDIT: One major difference between finite dimensional vector spaces and infinite dimensional vector spaces is that in the finite case, a linear operator will have (complex) eigenvalues, because we can characterise eigenvalues as being roots of a polynomial associated with the operator and then apply the fundamental theorem of algebra. But in infinite dimensional vector spaces we can't do that, so for each operator we have to check whether it has eigenvalues and if so how many.
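To see how different the infinite-dimensional case is, note that the derivative operator $D$ on $C^{\infty}(\mathbb{R})$ has *every* real number as an eigenvalue, since $De^{\lambda x} = \lambda e^{\lambda x}$. A rough numerical check (the sample point and $\lambda$ values are arbitrary):

```python
import math

# D e^{lam x} = lam * e^{lam x}: every real lam is an eigenvalue of D.
def D(g, h=1e-6):
    """Central-difference approximation to the derivative operator."""
    return lambda x: (g(x + h) - g(x - h)) / (2 * h)

for lam in [-2.0, 0.5, 3.0]:
    f = lambda x, lam=lam: math.exp(lam * x)
    x = 0.3
    print(D(f)(x), lam * f(x))  # approximately equal
```

No finite-dimensional operator can have infinitely many distinct eigenvalues, which illustrates why the finite-dimensional intuition breaks down here.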

WWGD
Gold Member
There are various concepts in abstract algebra which arise; principally inner product spaces and normed spaces. But general results from algebra will only take you so far, and you do at some point have to do some functional analysis.

EDIT: One major difference between finite dimensional vector spaces and infinite dimensional vector spaces is that in the finite case, a linear operator will have (complex) eigenvalues, because we can characterise eigenvalues as being roots of a polynomial associated with the operator and then apply the fundamental theorem of algebra. But in infinite dimensional vector spaces we can't do that, so for each operator we have to check whether it has eigenvalues and if so how many.
OP:
You also have the spectrum, which includes the eigenvalues. A Fourier series is an expansion in a Schauder basis, which exists in some infinite-dimensional normed spaces: the sums are neither finite nor formally equal to a given value, but instead converge in the topology of the normed space. The truncated Fourier series gives, as an orthogonal projection, the best possible approximation in the norm.

Okay, let me use an easier example (of something I am actually currently going through in my differential equations course) to see if I got the gist of it.

So there is an infinite-dimensional subspace of ##F(\mathbb{R})## that is the space of all functions analytic on an open subset of the reals that contains zero. This space has as one of its basis sets the set ##\{x^n : n\in\mathbb{N}\}##, and therefore any function in this space can be expressed as
$$\sum_{n=0}^\infty a_n x^n$$
with ##a_n\in\mathbb{R}## for all ##n\in\mathbb{N}##. Right?

So there is this method for solving differential equations, called the power series solution method, that consists of assuming that the solution of the differential equation in question lies in this subspace and therefore can be expanded using this basis. Substituting a power series of that form into the differential equation then gives me a different problem to solve, namely finding the coefficients of the power series (i.e. the coefficients of each basis element).
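For instance, for $y' = y$ with $y(0) = 1$, substituting the series and matching coefficients gives the recurrence $a_{n+1} = a_n/(n+1)$, so $a_n = 1/n!$. A short Python sketch of this (the truncation order is chosen arbitrarily):

```python
import math

# Power-series method for y' = y, y(0) = 1: substituting y = sum a_n x^n
# and matching coefficients gives the recurrence a_{n+1} = a_n / (n + 1).
N = 20
a = [1.0]                      # a_0 = y(0) = 1
for n in range(N):
    a.append(a[n] / (n + 1))   # recurrence, so a_n = 1/n!

def y(x):
    """Truncated power-series solution."""
    return sum(a_n * x**n for n, a_n in enumerate(a))

print(y(1.0), math.e)  # partial sum approximates e^x at x = 1
```

The recurrence determines every coefficient from $a_0$, and the resulting series is the Taylor series of $e^x$, the known solution.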

As far as I know there are two things that can go wrong: either I can't find a recurrence relation for the coefficients (which means the solution space wasn't the analytic one after all), or I find that the power series has a radius of convergence that is not the whole real line (in which case the solution space is restricted to the space of functions analytic on that open subset of the real line). Is that right?