Fourier Series as (Generalized) Least Squares?

Bacle
Hi, All:

Given a normed vector space (X, ||.||) and an inconsistent system Ax = b, the generalized
least-squares solution x^ of Ax = b is the point of the image of A that is closest to b; i.e.,
given a fixed matrix A, we define AX = {Ax : x in X}, and then:

x^ := { x in AX : ||x - b|| <= ||x' - b||, for all x' in AX }

In an inner-product space, x^ is the orthogonal projection of b onto AX. The value
x^ that minimizes ||x - b|| also minimizes ||x - b||^2.

(The least-squares problem in statistics is a sort of reverse problem: finding
a subspace that minimizes the sum of squared distances to the given data points.)
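
To see this concretely, here is a small numerical sketch (my own illustration; the matrix and vector are arbitrary): solving the normal equations A^T A x = A^T b gives the minimizer, the residual b - Ax^ comes out orthogonal to the columns of A, and any other x gives a residual norm at least as large.

[CODE]
import numpy as np

# An arbitrary overdetermined system: 5 equations, 2 unknowns, no exact solution.
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 2))
b = rng.normal(size=5)

# Normal equations A^T A x = A^T b give the least-squares minimizer of ||Ax - b||.
x_hat = np.linalg.solve(A.T @ A, A.T @ b)
proj = A @ x_hat                 # the point of AX closest to b

print(A.T @ (b - proj))          # ~ [0, 0]: the residual is orthogonal to AX

# Any other candidate x yields a residual at least as large.
for _ in range(5):
    x = x_hat + rng.normal(size=2)
    assert np.linalg.norm(A @ x - b) >= np.linalg.norm(proj - b)
[/CODE]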

I am trying to express the Fourier series for f with the standard orthonormal basis
in this format. Is it accurate to say that the Fourier series for f is the orthogonal
projection of f onto the span of the orthonormal set {1/sqrt(2 pi), cos(nx)/sqrt(pi), sin(nx)/sqrt(pi) : n = 1, 2, ...} in L^2[-pi, pi]?

I am having some trouble with the fact that we are using an infinite-dimensional
space; if we cut off the series at some value N, then I think an argument is easier.
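
To illustrate the cutoff-at-N version numerically (my own sketch; f(x) = x on [-pi, pi] is just an example), the partial sum built from the coefficients <f, e_k> is the projection onto the (2N+1)-dimensional trigonometric subspace, and its L^2 error shrinks as N grows:

[CODE]
import numpy as np

# Approximate the L^2 inner product on [-pi, pi] by a Riemann sum on a fine grid.
x = np.linspace(-np.pi, np.pi, 20001)
dx = x[1] - x[0]
inner = lambda f, g: np.sum(f * g) * dx

f = x.copy()                     # example function f(x) = x

def partial_sum(N):
    # Orthonormal basis: 1/sqrt(2 pi), cos(nx)/sqrt(pi), sin(nx)/sqrt(pi).
    basis = [np.ones_like(x) / np.sqrt(2 * np.pi)]
    for n in range(1, N + 1):
        basis.append(np.cos(n * x) / np.sqrt(np.pi))
        basis.append(np.sin(n * x) / np.sqrt(np.pi))
    # The projection of f: sum of <f, e_k> e_k over the truncated basis.
    return sum(inner(f, e) * e for e in basis)

for N in (1, 5, 25):
    r = f - partial_sum(N)
    print(N, np.sqrt(inner(r, r)))   # the L^2 error decreases with N
[/CODE]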

Any Ideas?

Thanks.
 
Bacle said:
Hi, All:

Given a normed vector space (X, ||.||) and an inconsistent system Ax = b, the generalized
least-squares solution x^ of Ax = b is the point of the image of A that is closest to b; i.e.,
given a fixed matrix A, we define AX = {Ax : x in X}, and then:

x^ := { x in AX : ||x - b|| <= ||x' - b||, for all x' in AX }

In an inner-product space, x^ is the orthogonal projection of b onto AX. The value
x^ that minimizes ||x - b|| also minimizes ||x - b||^2.

(The least-squares problem in statistics is a sort of reverse problem: finding
a subspace that minimizes the sum of squared distances to the given data points.)

I don't understand in what sense it is a "reverse" problem.

If I want to solve a whole class of least-squares problems, I can understand that as defining (in some sense) a subspace of the space of curves. For example, if I am fitting quadratics, the sum of two quadratics is a quadratic, a scalar multiple of a quadratic is a quadratic, etc. On a finite interval [a, b], one can define an inner product of two quadratics f and g by \int_a^b f(x) g(x) dx. Is that what you mean?

Bacle said:
I am trying to express the Fourier series for f with the standard orthonormal basis
in this format.

I'm not sure what you mean by "in this format". I'll interpret it to mean that you want to look at finding the Fourier series for a function as projecting the function onto a subspace of functions, the subspace defined by all possible Fourier series.

Bacle said:
Is it accurate to say that the Fourier series for f is the orthogonal
projection of f onto the span of the orthonormal set {1/sqrt(2 pi), cos(nx)/sqrt(pi), sin(nx)/sqrt(pi) : n = 1, 2, ...} in L^2[-pi, pi]?

I think it is correct. There might be some technicalities in defining terms for infinite-dimensional spaces that would need to be settled before we could say it was "accurate".

Bacle said:
I am having some trouble with the fact that we are using an infinite-dimensional
space; if we cut off the series at some value N, then I think an argument is easier.

You are correct that infinite-dimensional vector spaces require methods of proof that finite-dimensional spaces do not, and some things that are true in finite-dimensional vector spaces don't hold in infinite-dimensional ones. For example, a "vector" given by its "components" in a finite-dimensional space is unremarkable, but a vector given by an infinite series of basis functions might be a divergent series. The general setting for studying such things is functional analysis. Look up "Banach spaces" and "Hilbert spaces".

I've often asked experts on functional analysis about analogies between finite-dimensional vector spaces and matrices on the one hand, and infinite-dimensional vector spaces and operators on the other. A few say "Yes, of course" and others say "No, no, no!". I think the ones who say "No, no, no!" are thinking in terms of the technicalities of convergence, etc. The ones who say "Yes, of course" are thinking in terms of The Big Picture. From the point of view of The Big Picture, expressing a function in a Fourier series, or in terms of various kinds of orthogonal polynomials, is an attempt to project a vector onto the span of a countably infinite set of basis functions.
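
To make the quadratics example concrete, here is a small sketch (my own illustration; the function e^x and the interval [0, 1] are arbitrary choices): since {1, x, x^2} is not orthogonal, the projection is found by solving the normal equations with the Gram matrix of that inner product.

[CODE]
import numpy as np

# Inner product <f, g> = integral_0^1 f(x) g(x) dx, approximated on a grid.
x = np.linspace(0.0, 1.0, 10001)
dx = x[1] - x[0]
inner = lambda f, g: np.sum(f * g) * dx

f = np.exp(x)                       # function to approximate
basis = [np.ones_like(x), x, x**2]  # the quadratics 1, x, x^2

# Normal equations G c = m, with G the Gram matrix of the basis.
G = np.array([[inner(p, q) for q in basis] for p in basis])
m = np.array([inner(f, p) for p in basis])
c = np.linalg.solve(G, m)

proj = c[0] + c[1] * x + c[2] * x**2
print(c)                                    # coefficients of the best quadratic
print([round(inner(f - proj, p), 10) for p in basis])  # residual orthogonal to the subspace
[/CODE]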
 
Hi, Stephen; unfortunately, the quoting function is not working too well; I'll try my best, though. I will use """" to start and finish quotes.


I wrote:
""""
Hi, All:

Given a normed vector space (X, ||.||) and an inconsistent system Ax = b, the generalized
least-squares solution x^ of Ax = b is the point of the image of A that is closest to b; i.e.,
given a fixed matrix A, we define AX = {Ax : x in X}, and then:

x^ := { x in AX : ||x - b|| <= ||x' - b||, for all x' in AX }

In an inner-product space, x^ is the orthogonal projection of b onto AX. The value
x^ that minimizes ||x - b|| also minimizes ||x - b||^2.

(The least-squares problem in statistics is a sort of reverse problem: finding
a subspace that minimizes the sum of squared distances to the given data points.)""""

Stephen Tashi wrote:
""""I don't understand in what sense it is a "reverse" problem. """""

A correction: x^ := { x in X : ||Ax - b|| <= ||Ax' - b||, for all x' in X }; that is, x^ minimizes ||Ax - b|| over X, so that Ax^ is the point of AX closest to b.

I mean that the standard setup is one in which we are given a specific subspace and
a point b that is not in the subspace, and we want to minimize the distance/norm
between b and the subspace: we are given a map A: V -> W, for V, W normed vector spaces,
AV is the subspace, and some b not in AV is the point. In the case of statistical
(linear) least squares, we are instead given a collection of points (in R^n in general, in R^2 for ordinary line fitting) and we want to find the line/subspace of R^n such that the sum of the squares of the residuals is minimal.
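
As a concrete instance of the statistical direction (my own sketch, with made-up data scattered around the line y = 2x + 1), np.polyfit with degree 1 returns the slope and intercept minimizing the sum of squared vertical residuals:

[CODE]
import numpy as np

# Made-up data: points scattered around the line y = 2x + 1.
rng = np.random.default_rng(1)
xs = np.linspace(0, 10, 30)
ys = 2 * xs + 1 + rng.normal(scale=0.5, size=xs.size)

# Degree-1 polyfit minimizes the sum of squared vertical residuals.
slope, intercept = np.polyfit(xs, ys, 1)
residuals = ys - (slope * xs + intercept)
print(slope, intercept)          # close to 2 and 1
print(np.sum(residuals ** 2))    # the minimized sum of squares
[/CODE]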


I wrote:
""""
I am trying to express the Fourier series for f with the standard orthonormal basis
in this format. """"

Stephen Tashi wrote:
""""I'm not sure what you mean by "in this format". I'll interpret it to mean that you want to look at finding the Fourier series for a function as projecting the function on to a subspace of functions, the subspace defined by all possible Fourier series.""""

I mean that I am trying to describe the Fourier series for f as the best least-squares
approximation to f itself, in the sense that the Fourier series for f is the projection of f onto the
span of the standard orthonormal basis, and so the Fourier series minimizes the
squared residuals.
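
A one-coefficient check of that minimizing property (my own sketch; f(x) = |x| is just an example): for a unit vector e, ||f - a e||^2 = ||f||^2 - 2a<f, e> + a^2, which is smallest exactly at a = <f, e>, the Fourier coefficient.

[CODE]
import numpy as np

x = np.linspace(-np.pi, np.pi, 20001)
dx = x[1] - x[0]
inner = lambda f, g: np.sum(f * g) * dx

f = np.abs(x)                    # example function
e = np.cos(x) / np.sqrt(np.pi)   # one orthonormal basis element
c = inner(f, e)                  # its Fourier coefficient

best = inner(f - c * e, f - c * e)                   # error with the Fourier coefficient
worse = inner(f - (c + 0.1) * e, f - (c + 0.1) * e)  # error with a perturbed coefficient
print(best < worse)              # True: the Fourier coefficient is optimal
[/CODE]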

Sorry, I got to go, I will write the rest later.
 
