MHB Some questions about the existence of the optimal approximation

mathmari
Hey! :o

I am looking at the following, which is related to the existence of the optimal approximation.

$H$ is a Euclidean space
$\widetilde{H}$ is a subspace of $H$

We suppose that $\dim \widetilde{H}=n$ and that $\{x_1,x_2,\ldots,x_n\}$ is a basis of $\widetilde{H}$.

Let $y \in \widetilde{H}$ be the optimal approximation of $x \in H$ from $\widetilde{H}$.
Then $(y,u)=(x,u), \forall u \in \widetilde{H}$.

We take $u=x_i \in \widetilde{H}$ for $i=1,\ldots,n$, so $(y,x_i)=(x,x_i)$.

Since $\{x_1,x_2,\ldots,x_n\}$ is a basis of $\widetilde{H}$, $y$ can be written as follows:
$y=a_1 x_1 + a_2 x_2 +... + a_n x_n$

$\left.\begin{matrix}
(x,x_1)=(y,x_1)=a_1 (x_1,x_1)+a_2 (x_2,x_1)+\ldots+a_n (x_n,x_1)\\
(x,x_2)=(y,x_2)=a_1 (x_1,x_2)+a_2 (x_2,x_2)+\ldots+a_n (x_n,x_2)\\
\vdots\\
(x,x_n)=(y,x_n)=a_1 (x_1,x_n)+a_2 (x_2,x_n)+\ldots+a_n (x_n,x_n)
\end{matrix}\right\}(1)$

For the optimal approximation to exist, I have to be able to write $y$ in a unique way as a linear combination of the elements of the basis.

The system $(1)$ has class $n$, since $\{x_1, \ldots, x_n \}$ forms a basis of $\widetilde{H}$.
So the system has a unique solution.

>Why does the optimal approximation only exist when $y$ can be written in a unique way as a linear combination of the elements of the basis?

>What does it mean that the system $(1)$ has class $n$? That it has $n$ equations and $n$ unknown variables?
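To see what system $(1)$ looks like concretely, here is a minimal numerical sketch (my own toy example, not from my notes): $H = \mathbb{R}^3$ with the standard inner product, a two-dimensional subspace spanned by $x_1, x_2$, the Gram matrix $(x_i,x_j)$ as the coefficient matrix, and the coefficients $a_i$ obtained by solving the system.

```python
import numpy as np

# Toy example: H = R^3 with the standard inner product,
# and the subspace spanned by x1, x2 (so n = 2).
x1 = np.array([1.0, 0.0, 1.0])
x2 = np.array([0.0, 1.0, 1.0])
x = np.array([2.0, 3.0, 4.0])   # the element of H to approximate

basis = [x1, x2]

# System (1): Gram matrix G[j][i] = (x_i, x_j), right-hand side b[j] = (x, x_j).
G = np.array([[np.dot(xi, xj) for xi in basis] for xj in basis])
b = np.array([np.dot(x, xj) for xj in basis])

a = np.linalg.solve(G, b)                       # coefficients a_1, ..., a_n
y = sum(ai * xi for ai, xi in zip(a, basis))    # the optimal approximation

# Check the defining property (y, x_j) = (x, x_j),
# i.e. x - y is orthogonal to every basis vector.
print(a)                                        # [1.6667, 2.6667]
print([np.dot(x - y, xj) for xj in basis])      # both (numerically) zero
```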
 
Hey! (Blush)

>Why does the optimal approximation only exist when $y$ can be written in a unique way as a linear combination of the elements of the basis?

In a linear (sub)space every vector can be written as a unique linear combination of basis vectors.
So if $y$ can be written as 2 different linear combinations, those are really different vectors. In other words: $y$ is not a unique vector, so you cannot call it "the" optimal approximation.
>What does it mean that the system $(1)$ has class $n$? That it has $n$ equations and $n$ unknown variables?

I'm not aware of a concept named "class" in relation to a system of linear equations. Googling for it indeed gave no hits. As I see it, it is ambiguous in this context: it could mean either $n$ equations or $n$ variables. Luckily, in this particular case it is both. :)
 
mathmari said:
>What does it mean that the system $(1)$ has class $n$? That it has $n$ equations and $n$ unknown variables?
I have not come across the term "class" in that context. My guess is that what it means is that the matrix of coefficients in the system (1) has rank $n$. That implies that the equations have a unique solution, which is what is wanted here.
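If "class" is indeed read as rank, that condition is easy to check numerically. A quick sketch (with made-up vectors, purely for illustration): the coefficient matrix of system (1) is the Gram matrix of the basis vectors, and its rank can be computed directly.

```python
import numpy as np

# Reading "class n" as "rank n": the coefficient matrix of system (1) is the
# Gram matrix of the basis vectors. Made-up independent vectors in R^3:
x1 = np.array([1.0, 0.0, 1.0])
x2 = np.array([0.0, 1.0, 1.0])
basis = [x1, x2]

G = np.array([[np.dot(xi, xj) for xi in basis] for xj in basis])
print(np.linalg.matrix_rank(G))   # 2, i.e. rank n, so (1) has a unique solution
```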
 
I like Serena said:
In a linear (sub)space every vector can be written as a unique linear combination of basis vectors.
So if $y$ can be written as 2 different linear combinations, those are really different vectors. In other words: $y$ is not a unique vector, so you cannot call it "the" optimal approximation.

Ah, ok! So if $y$ could be written as two different linear combinations, that would mean there are two different approximations, so we would not have a single one that is optimal.
I got it!
I like Serena said:
I'm not aware of a concept named "class" in relation to a system of linear equations. Googling for it indeed gave no hits. As I see it, it is ambiguous in this context: it could mean either $n$ equations or $n$ variables. Luckily, in this particular case it is both. :)

Opalg said:
I have not come across the term "class" in that context. My guess is that what it means is that the matrix of coefficients in the system (1) has rank $n$. That implies that the equations have a unique solution, which is what is wanted here.

Aha! Ok!

The system $(1)$ has class $n$, since $\{x_1, \ldots, x_n \}$ forms a basis of $\widetilde{H}$.
Why do we conclude that the class of the system is $n$ from the fact that $\{x_1, \ldots, x_n \}$ forms a basis of $\widetilde{H}$?
 
mathmari said:
Why do we conclude that the class of the system is $n$ from the fact that $\{x_1, \ldots, x_n \}$ forms a basis of $\widetilde{H}$?
Good question! We know that $\dim\widetilde H = n$, so the condition for the set $\{x_1, ..., x_n \}$ to be a basis is that it should be linearly independent. Or, to put it negatively, the set will fail to be a basis if and only if it is linearly dependent. That in turn is equivalent to the condition that there should exist scalars $\lambda_1,\ldots,\lambda_n$, not all $0$, such that $\sum \lambda_ix_i = 0.$ But then $\sum \lambda_i\langle x_i,x_j \rangle = 0$ for all $j$. That says that the rows of the matrix $A = (\langle x_i,x_j \rangle)$ are linearly dependent, which means that the rank of $A$ is less than $n$.

Conversely, if the rank of $A$ is less than $n$, then its rows are linearly dependent. So there exist scalars $\lambda_1,\ldots,\lambda_n$, not all $0$, such that $\sum \lambda_i\langle x_i,x_j \rangle = 0$ for all $j$. This says that $\sum \lambda_ix_i$ is orthogonal to each $x_j$. But $\sum \lambda_ix_i$ lies in the span of the $x_j$, so it is orthogonal to itself, and hence $\sum \lambda_ix_i = 0$. Since not all the $\lambda_i$ are $0$, the set $\{x_1, \ldots, x_n \}$ is linearly dependent and so is not a basis for $\widetilde H$.
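As a quick numerical sanity check of this equivalence (a toy example of my own, not part of the argument): with linearly independent vectors the Gram matrix $A$ has full rank $n$, and replacing one vector by a multiple of another drops the rank below $n$.

```python
import numpy as np

def gram(vectors):
    """Gram matrix A with A[j][i] = <x_i, x_j>."""
    return np.array([[np.dot(xi, xj) for xi in vectors] for xj in vectors])

x1 = np.array([1.0, 0.0, 1.0])
x2 = np.array([0.0, 1.0, 1.0])

# Linearly independent set: full rank n = 2.
print(np.linalg.matrix_rank(gram([x1, x2])))       # 2

# Linearly dependent set (x2 replaced by 2*x1): rank drops below n.
print(np.linalg.matrix_rank(gram([x1, 2 * x1])))   # 1
```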
 