A Hahn-Banach From Systems of Linear Equations

bolbteppa
In this paper on the history of functional analysis, the author mentions the following example of an infinite system of linear equations in an infinite number of variables ##c_i = A_{ij} x_j##:

\begin{align*}
\begin{array}{ccccccccc}
1 & = & x_1 & + & x_2 & + & x_3 & + & \dots \\
1 & = & & & x_2 & + & x_3 & + & \dots \\
1 & = & & & & & x_3 & + & \dots \\
& \vdots & & & & & & & \ddots
\end{array} \to \begin{bmatrix} 1 \\ 1 \\ 1 \\ \vdots \end{bmatrix} = \begin{bmatrix}
1 & 1 & 1 & \dots \\
& 1 & 1 & \dots \\
& & 1 & \dots \\
& & & \ddots
\end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \end{bmatrix}
\end{align*}

as an example of a system such that any finite truncation of the system down to an ##n \times n## system has the unique solution ##x_1 = \dots = x_{n-1} = 0, \ x_n = 1##, but for which the full system has no solution.
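To make the claim concrete (a short check of my own, not taken from the paper): subtracting consecutive equations of the full system gives
\begin{align*}
(\text{equation } i) - (\text{equation } i+1): \quad x_i = 1 - 1 = 0 \quad \text{for every } i,
\end{align*}
so every ##x_i## must vanish, and the first equation then reads ##1 = 0 + 0 + \dots##, a contradiction. In the ##n \times n## truncation the same differences force ##x_1 = \dots = x_{n-1} = 0##, while the last equation ##1 = x_n## gives ##x_n = 1##, the stated unique solution.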

This book has the following passage on systems such as this one:

The Hahn-Banach theorem arose from attempts to solve infinite systems of linear equations... The key to the solvability is determining "compatibility" of the system of equations. For example, the system ##x + y = 2## and ##x + y = 4## cannot be solved because it requires contradictory things and so is "incompatible". The first attempts to determine compatibility for infinite systems of linear equations extended known determinant and row-reduction techniques. It was a classical analysis - almost solve the problem in a finite situation, then take a limit. A fatal defect of these approaches was the need for the (very rare) convergence of infinite products.

and then mentions a theorem about these systems that motivates Hahn-Banach:

Theorem 7.10.1 shows that to solve a certain system of linear equations, it is necessary and sufficient that a continuity-type condition be satisfied.

Theorem 7.10.1 The Functional Problem Let ##X## be a normed space over ##\mathbb{F} = \mathbb{R}## or ##\mathbb{C}##, let ##\{x_s \ : \ s \in S \}## and ##\{ c_s \ : \ s \in S \}## be sets of vectors and scalars, respectively. Then there is a continuous linear functional ##f## on ##X## such
that ##f(x_s) = c_s## for each ##s \in S## iff there exists ##K > 0## such that
\begin{align*}
\Big| \sum_{s \in S} a_s c_s \Big| \leq K \Big\| \sum_{s \in S} a_s x_s \Big\| \ \ \ \ (1),
\end{align*}
for any choice of scalars ##\{a_s \ : \ s \in S \}## for which ##a_s = 0## for all but finitely many ##s \in S## ("almost all" the ##a_s = 0##).

Banach used the Hahn-Banach theorem to prove Theorem 7.10.1, but Theorem 7.10.1 implies the Hahn-Banach theorem: Assuming that Theorem 7.10.1 holds, let ##\{ x_s \}## be the vectors of a subspace ##M##, let ##f## be a continuous linear functional on ##M##; for each ##s \in S##, let ##c_s = f(x_s)##. Since ##f## is continuous, ##(1)## is satisfied and ##f## possesses a continuous extension to ##X##.
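As a small illustration of what condition ##(1)## detects (my own example, not from the book), take the incompatible system from the passage above, ##x + y = 2## and ##x + y = 4##. Here ##X = \mathbb{R}^2##, ##x_1 = x_2 = (1,1)##, ##c_1 = 2##, ##c_2 = 4##. Choosing ##a_1 = 1, a_2 = -1## gives
\begin{align*}
\Big| \sum_{s} a_s c_s \Big| = |2 - 4| = 2, \qquad \Big\| \sum_{s} a_s x_s \Big\| = \|(1,1) - (1,1)\| = 0,
\end{align*}
so no ##K > 0## can satisfy ##(1)##, and the theorem correctly reports that no continuous linear functional ##f## with ##f(x_1) = 2## and ##f(x_2) = 4## exists.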

My question is:

  1. If one knew none of the theorems just mentioned, how would one begin from the system ##c_i = A_{ij} x_j## at the beginning of this post and think of setting up the conditions of Theorem 7.10.1 as a way to test whether this system has a solution?
  2. How does this test show the system has no solution?
  3. How do we re-formulate this process as though we were applying the Hahn-Banach theorem?
  4. Does anybody know of a reference for the classical analysis of systems in terms of infinite products?
 
bolbteppa said:
Does anybody know of a reference for the classical analysis of systems in terms of infinite products?
Since any product with one factor equal to ##0## will have the value ##0##, the usual way is to write the product as ##\prod_{n=1}^{\infty}(1+a_{n})##. Then

A necessary and sufficient condition for the absolute convergence of the product ##\prod_{n=1}^{\infty}(1+a_{n})## is the convergence of the series ##\sum_{n=1}^{\infty}\lvert a_{n}\rvert##.
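A quick numerical sketch (mine, in Python, not part of the original exchange): the partial products of ##\prod(1+\frac{1}{n^{2}})## settle down, since ##\sum \frac{1}{n^{2}}## converges, while those of ##\prod(1+\frac{1}{n})## keep growing, since ##\sum \frac{1}{n}## diverges.

```python
import math

def partial_products(a, N):
    """Return the partial products prod_{n=1}^{k} (1 + a(n)) for k = 1, ..., N."""
    prods, p = [], 1.0
    for n in range(1, N + 1):
        p *= 1.0 + a(n)
        prods.append(p)
    return prods

# a_n = 1/n^2: sum |a_n| converges, so the product converges
# (its limit is sinh(pi)/pi, roughly 3.676).
conv = partial_products(lambda n: 1.0 / n**2, 100_000)
print(conv[999], conv[-1])   # the two values nearly agree: the product has stabilized

# a_n = 1/n: sum |a_n| diverges; here the partial products telescope to N + 1.
div = partial_products(lambda n: 1.0 / n, 100_000)
print(div[999], div[-1])     # about 1001 and 100001: no convergence
```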
 
Svein said:
A necessary and sufficient condition for the absolute convergence of the product ##\prod_{n=1}^{\infty}(1+a_{n})## is the convergence of the series ##\sum_{n=1}^{\infty}\lvert a_{n}\rvert##.
What does it mean that a product converges absolutely?
 
Erland said:
What does it mean that a product converges absolutely?
Again, citing from Ahlfors:

An infinite product ##\prod_{1}^{\infty}(1+a_{n})## is said to be absolutely convergent if and only if the corresponding series ##\sum_{n=1}^{\infty}\log(1+a_{n})## converges absolutely.
 
Svein said:
Again, citing from Ahlfors:

An infinite product ##\prod_{1}^{\infty}(1+a_{n})## is said to be absolutely convergent if and only if the corresponding series ##\sum_{n=1}^{\infty}\log(1+a_{n})## converges absolutely.
I see, and we use this result too, toward finding a holomorphic function with a prescribed sequence of zeros, right?
 
WWGD said:
I see, and we use this result too, toward finding a holomorphic function with a prescribed sequence of zeros, right?
Yes. Example: ##\prod_{n=1}^{\infty}(1+\frac{z}{n})## has all negative integers for zeros.
 
Svein said:
Yes. Example: ##\prod_{n=1}^{\infty}(1+\frac{z}{n})## has all negative integers for zeros.
And convergence of the product guarantees it is holomorphic, right?
 
WWGD said:
And convergence of the product guarantees it is holomorphic, right?
Hm. Since ##\sum_{n}\frac{1}{n}## diverges, we need something extra. One way is to introduce a convergence factor: ##\prod_{n=1}^{\infty}(1+\frac{z}{n})e^{-\frac{z}{n}}## will converge. Another way is to extend the view a little: ##z\prod_{n\neq 0}(1+\frac{z}{n})e^{-\frac{z}{n}}##. Now all integers are zeros...

But the last product can be manipulated a bit. If we combine the factors for ##n## and ##-n##, we end up with the following product: ##z\prod_{n=1}^{\infty}(1-\frac{z^{2}}{n^{2}})##, which converges (and, incidentally, ##\pi z\prod_{n=1}^{\infty}(1-\frac{z^{2}}{n^{2}})=\sin(\pi z)##).
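A numerical sanity check (my own Python sketch, not part of this post): a large truncation of ##\pi z\prod_{n=1}^{N}(1-\frac{z^{2}}{n^{2}})## should already be close to ##\sin(\pi z)##, and the partial products with the convergence factor ##e^{-z/n}## should stabilize.

```python
import math

def sine_product(z, N):
    """Truncated product pi*z*prod_{n=1}^{N} (1 - z^2/n^2); should approximate sin(pi*z)."""
    p = math.pi * z
    for n in range(1, N + 1):
        p *= 1.0 - (z * z) / (n * n)
    return p

def with_convergence_factor(z, N):
    """Truncated product prod_{n=1}^{N} (1 + z/n) * exp(-z/n)."""
    p = 1.0
    for n in range(1, N + 1):
        p *= (1.0 + z / n) * math.exp(-z / n)
    return p

z = 0.3
print(sine_product(z, 100_000), math.sin(math.pi * z))                 # agree to several digits
print(with_convergence_factor(z, 1_000), with_convergence_factor(z, 100_000))  # nearly equal: stabilizes
```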
 
Svein said:
Hm. Since ##\sum_{n}\frac{1}{n}## diverges, we need something extra. One way is to introduce a convergence factor: ##\prod_{n=1}^{\infty}(1+\frac{z}{n})e^{-\frac{z}{n}}## will converge. Another way is to extend the view a little: ##z\prod_{n\neq 0}(1+\frac{z}{n})e^{-\frac{z}{n}}##. Now all integers are zeros...

But the last product can be manipulated a bit. If we combine the factors for ##n## and ##-n##, we end up with the following product: ##z\prod_{n=1}^{\infty}(1-\frac{z^{2}}{n^{2}})##, which converges (and, incidentally, ##\pi z\prod_{n=1}^{\infty}(1-\frac{z^{2}}{n^{2}})=\sin(\pi z)##).

Ah, yes, I had forgotten about weighting factors. I think that does it. Thanks.
 
Do references even exist that try to study systems like
\begin{align*}
\begin{array}{ccccccccc}
1 & = & x_1 & + & x_2 & + & x_3 & + & \dots \\
1 & = & & & x_2 & + & x_3 & + & \dots \\
1 & = & & & & & x_3 & + & \dots \\
& \vdots & & & & & & & \ddots
\end{array} \to \begin{bmatrix} 1 \\ 1 \\ 1 \\ \vdots \end{bmatrix} = \begin{bmatrix}
1 & 1 & 1 & \dots \\
& 1 & 1 & \dots \\
& & 1 & \dots \\
& & & \ddots
\end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \end{bmatrix}
\end{align*}
as if you were doing elementary linear algebra and slowly getting more theoretical?
 