Scalar Product Problem in the Set R of Functions on [0,1]

Gear300
Alright, so we ran into a peculiarity in answering this question.

Let R be the set of all functions f defined on the interval [0,1] such that:

(1) f(t) is nonzero at no more than countably many points t1, t2, . . .
(2) $$\sum_{i=1}^{\infty} f^2(t_i) < \infty.$$

Define addition of elements and multiplication of elements by scalars in the ordinary way, i.e., (f + g)(t) = f(t) + g(t), (αf)(t) = αf(t). If f and g are two elements of R, nonzero only at the points t1, t2, . . . and t'1, t'2, . . . respectively, define the scalar product of f and g as

(3) $$(f,g) = \sum_{i,j=1}^{\infty} f(t_i)\,g(t'_j).$$

Prove that this scalar product makes R into a Euclidean space.
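As a concrete sanity check (a sketch of my own, not part of the original problem), here is a truncated example of an element of R: the function with f(1/n) = 1/n, which is nonzero at only countably many points and satisfies condition (2) because Σ 1/n² converges. The dict-of-support representation and the truncation level N are assumptions made purely for illustration.

```python
# Hypothetical sketch: model an element of R by (a finite truncation of) its
# support, stored as a dict {t: f(t)}.  Here f(1/n) = 1/n for n = 1..N and
# f(t) = 0 elsewhere.
N = 1000
f = {1.0 / n: 1.0 / n for n in range(1, N + 1)}

# Condition (2): the squares over the support form a partial sum of
# sum 1/n^2 = pi^2/6, so the norm stays finite no matter how large N gets.
norm_sq = sum(v * v for v in f.values())
print(norm_sq)  # partial sums stay below pi^2/6 ≈ 1.6449
```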

By the looks of it, (3) cannot be referring to a sum over all pairs (i,j), since that would create absolute-convergence issues for certain elements of the set, which might then fail to have a finite norm. So we figured the sum in (3) runs i and j in parallel over Z+, as it would in l2.
The next peculiarity we found is in the ordering of the countable set of points with nonzero image. It need not be well-ordered for some functions, and even if we restricted attention to well-ordered supports, there can be several different countable orderings (especially once sums of functions are included). One possibility we considered: if f has a smaller ordering than g, generalize the sum (3) so that it runs only up to the ordering of f (as we would if f had a finite support and g a countable one). But even so, the list of plausible orderings in [0,1] is long, and then there is a problem with resolving one of the scalar-product axioms:

(iv) (f , g+h) = (f , g) + (f , h)

Typically, proving (iv) would involve showing that (f , g+h) remains absolutely convergent. The problem is how to relate the support of g+h on the left-hand side to the individual supports of g and h on the right-hand side. In any case, we're stuck.
 
Where does the question come from? The scalar product doesn't seem to make sense.
What I could imagine is $$(f,g) = \sum_{k=1}^{\infty} f(s_k) g(s_k)$$ where the sk are the union of ti and t'i. Or the intersection, doesn't make a difference here.
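To see how this version of the definition behaves, here is a minimal Python sketch (my own; the dict-of-support representation and the name `inner` are assumptions, not from the thread). It computes the scalar product over the union of supports and checks property (iv) on a small example:

```python
def inner(f, g):
    # (f, g) = sum over the union of supports of f(t) * g(t); at points
    # where either function vanishes the term is zero, so summing over the
    # union or the intersection gives the same result.
    return sum(f.get(t, 0.0) * g.get(t, 0.0) for t in set(f) | set(g))

f = {0.1: 1.0, 0.2: 2.0}
g = {0.2: 3.0, 0.7: 5.0}
h = {0.2: -1.0, 0.9: 4.0}

# g + h, defined pointwise on the union of the two supports.
gh = {t: g.get(t, 0.0) + h.get(t, 0.0) for t in set(g) | set(h)}

print(inner(f, g))                 # 6.0: only t = 0.2 is shared
print(inner(f, gh) == inner(f, g) + inner(f, h))  # property (iv): True
```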
 
For completeness: we have managed to prune the problem statement down to something doable. Altogether, it works with mfb's version of the scalar product -

mfb said:
What I could imagine is $$(f,g) = \sum_{k=1}^{\infty} f(s_k) g(s_k)$$ where the sk are the union of ti and t'i. Or the intersection, doesn't make a difference here.

- which is the intuitive way of looking at things, since that is how the scalar product in C2[a,b] behaves. The general idea is that -

Gear300 said:
(2) $$\sum_{i=1}^{\infty} f^2(t_i) < \infty.$$

- is an instance of absolute convergence, and absolute convergence implies unconditional convergence. Then by http://math.uga.edu/%7Epete/3100supp.pdf (Chapter 2, Section 9, p. 89, Theorem 2.52):

For ##a : \mathbb{N} \to \mathbb{R}## an ordinary sequence and ##A \in \mathbb{R}##, the following are equivalent:
i. The unordered sum ##\sum_{n \in \mathbb{Z}^+} a_n## is convergent, with sum A.
ii. The series ##\sum_{n=0}^{\infty} a_n## is unconditionally convergent, with sum A.

So given any countable enumeration (ordered or not) of the points of nonzero image, as long as some ordering of the sum converges absolutely, every reordering converges to the same value, and everything fits together.
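The reordering claim can be illustrated numerically (a sketch under my own assumptions, not part of the proof): for an absolutely convergent series, summing the terms in a shuffled order gives the same value up to floating-point noise.

```python
import random

# Absolutely convergent series: a_n = (-1)^n / n^2, whose absolute values
# are summable, so every reordering must converge to the same sum.
N = 20000
terms = [(-1) ** n / n**2 for n in range(1, N + 1)]

total_ordered = sum(terms)
shuffled = terms[:]
random.shuffle(shuffled)
total_shuffled = sum(shuffled)

# Same sum in any order, up to floating-point rounding.
print(abs(total_ordered - total_shuffled) < 1e-9)  # True
```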
 