Challenge: Micromass' big August challenge

Micromass' August challenge features a variety of mathematical problems suitable for high school students, college freshmen, and advanced participants. Participants must provide full proofs or derivations for their solutions; previously solved challenges are excluded. Notable solved problems include proving the uniqueness of a specific function under continuity conditions and exploring properties of scale estimators in probability theory. The thread emphasizes the importance of rigorous proof in mathematical discussions, ensuring that only well-supported answers are recognized. Engaging with these challenges promotes deeper understanding and application of mathematical concepts.
  • #61
High school question 5a) Find all the 10-digit numbers such that each digit in {0,1,2,3,4,5,6,7,8,9} is used exactly once.

If I understand the question correctly, I have to find how many numbers there are satisfying this condition.

So obviously, 0 can't be the first digit of a number. So, for the first digit, we have 9 possibilities. For the second digit, we have 9 possibilities left (now we can choose 0 as well). For the third digit, 8 possibilities, ...

Ultimately, using the product rule, we have 9*9*8*7*6*5*4*3*2*1 = 9*9! = 3265920 possibilities where each digit is used exactly once.
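To double-check this count, here is a small Python sketch (my own addition, not part of the post): brute force over permutations agrees with the product rule on smaller digit sets, where the same argument gives (k-1)·(k-1)! valid k-digit numbers:

```python
import math
from itertools import permutations

def count_pandigital(k):
    """Brute force: k-digit numbers using each of the digits 0..k-1
    exactly once, with no leading zero."""
    return sum(1 for p in permutations(range(k)) if p[0] != 0)

def product_rule(k):
    """Product rule: k-1 choices for the leading digit (0 excluded),
    then (k-1)! orderings of the remaining digits."""
    return (k - 1) * math.factorial(k - 1)

# Brute force agrees with the product rule on small digit sets...
for k in range(2, 8):
    assert count_pandigital(k) == product_rule(k)

# ...so for the full digit set {0,...,9}: 9 * 9! = 3265920.
print(product_rule(10))
```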
 
  • #62
I'm quite sure the numbers should satisfy (a) and (b) together, not separately.

The list in the problem statement seems to have a typo in it, and I don't understand why 9 is not listed.
 
  • Like
Likes member 587159
  • #63
mfb said:
I'm quite sure the numbers should satisfy (a) and (b) together, not separately.

This is right.

The list in the problem statement seems to have a typo in it, and I don't understand why 9 is not listed.

Thank you.
 
  • #64
The 1 is still in there twice.
Unless I missed follow-up posts to this post by @Biker, we don't have all possible points for high school #2 yet.
Biker said:
Challenges for high school:
2-
I will just pretend that the Earth is a perfect sphere.
If you move close to the south pole, you can mark a circle with a circumference of 1 mile around the Earth. Now move 1 mile further to the south and face north.
Move one mile north and you are on the circle; move one mile west and you will come back to the same place (assuming west from your initial location); move 1 mile south and you are back to the same place.

And you can be on the north pole: as the Earth is a sphere, you will form a triangle that leads you back to the same place (disregarding the method I suggested), or you could use the same method with minor changes at the north pole.

Who needs a gps :P
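A quick numerical aside (my own sketch, not part of Biker's post): treating the Earth as a perfect sphere of radius roughly 3959 miles (an assumed value), the circle of circumference 1 mile sits only about 1/(2π) ≈ 0.16 miles from the south pole:

```python
import math

# Treat the Earth as a perfect sphere; the radius is an assumed value.
R = 3959.0  # miles

# Colatitude (angle from the south pole) of the circle whose
# circumference is exactly 1 mile: 2*pi*R*sin(theta) = 1.
theta = math.asin(1.0 / (2 * math.pi * R))

# Walking distance from the pole to that circle along the surface:
d = R * theta
print(d)  # about 1/(2*pi) ~ 0.159 miles

# The starting point of the walk is 1 mile north of this circle, so
# walking 1 mile west there goes exactly once around the circle.
```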
 
  • #65
I have been trying to solve Advanced Problem no 4 for a long time now, but it seems impossible. Conditions 1 and 5 seem impossible to reconcile. I have been trying an Axiom of Choice (Zorn's Lemma) approach, but I don't get anywhere. Perhaps this is a bad approach...
 
  • #66
Erland said:
I have been trying to solve Advanced Problem no 4 for a long time now, but it seems impossible. Conditions 1 and 5 seem impossible to reconcile. I have been trying an Axiom of Choice (Zorn's Lemma) approach, but I don't get anywhere. Perhaps this is a bad approach...

It does require the axiom of choice. But it's not equivalent to it.

If it's not found by the beginning of September, I'll repost the question in the new challenge thread with a hint.
 
  • #67
micromass said:
It does require the axiom of choice.
I think it requires a result that depends on AC, but not AC directly? At least that is how I know the proof of this problem. (I will stick by the rules and not say more.)
 
  • #68
Krylov said:
I think it requires a result that depends on AC, but not AC directly? At least that is how I know the proof of this problem. (I will stick by the rules and not say more.)

That's correct.
 
  • #69
Hmm, the approach I was investigating does not need the AC, so I guess there is some counterexample to my definition. Can someone find one?

4. Let ##X## denote the set of all bounded real-valued sequences. Prove that a generalized limit exists for any such sequence. A generalized limit is any function ##L:X\rightarrow \mathbb{R}## such that
1) ##L((x_n + y_n)_n) = L((x_n)_n) + L((y_n)_n)##
2) ##L((\alpha x_n)_n) = \alpha L((x_n)_n)##
3) ##\liminf_n x_n \leq L((x_n)_n)\leq \limsup_n x_n##
4) If ##x_n\geq 0## for all ##n##, then ##L((x_n)_n)\geq 0##.
5) If ##y_n = x_{n+1}##, then ##L((x_n)_n) = L((y_n)_n)##
6) If ##x_n\rightarrow x##, then ##L((x_n)_n) = x##.
My first idea was to define a modified sequence where the nth element is the average of the first n elements, and then look for that limit.
$$y_n = \frac{1}{n} \sum_{i=1}^n x_i$$
$$L((x_n)_n) = \lim_{n \to \infty} y_n$$

It should be clear that this satisfies all 6 criteria if that limit exists. That catches all convergent sequences and many divergent sequences, but it does not catch sequences like (1,-1,-1,1,1,1,1, then 8 times -1, 16 times 1 and so on) where the limit does not exist. Let's repeat this step:

$$z_n = \frac{1}{n} \sum_{i=1}^n y_i = \frac{1}{n} \sum_{i=1}^n \frac{1}{i} \sum_{k=1}^i x_k$$

Now you can come up with a series where this does not converge either, and I can add a third step, ... but here is the idea: for each sequence, repeat this step as often as necessary to get convergence, and then define this limit as generalized limit of the sequence.

I didn't find a proof that every sequence has to lead to convergence after a finite number of steps.
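The iterated averaging can be sketched numerically (Python; my own illustration, not from the thread). One Cesàro step sends the alternating sequence to 0, but for the doubling-block sequence described above the running average keeps swinging between roughly +1/3 and -1/3:

```python
def cesaro(xs):
    """One averaging step: y_n = (x_1 + ... + x_n) / n."""
    out, total = [], 0.0
    for n, x in enumerate(xs, start=1):
        total += x
        out.append(total / n)
    return out

# A convergent-in-mean example: the Cesaro mean of (-1)^n tends to 0.
alt = cesaro([(-1) ** n for n in range(1, 10001)])
assert abs(alt[-1]) < 1e-3

# The doubling-block sequence from the post: one 1, two -1s, four 1s,
# eight -1s, ... Its running average swings between roughly +1/3 and
# -1/3 forever, so the first averaging step does not converge.
xs, sign, length = [], 1, 1
while len(xs) < 1 << 16:
    xs.extend([sign] * length)
    sign, length = -sign, 2 * length

y = cesaro(xs)
print(max(y[1024:]), min(y[1024:]))  # above +0.33 and below -0.33

# A second averaging step, as suggested in the post; the swing shrinks
# markedly, but convergence depends on how fast the blocks grow.
z = cesaro(y)
print(max(z[1024:]), min(z[1024:]))
```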
 
  • #70
mfb, I have been thinking along the same lines, but I didn't consider more than one step, nor that the averages need not converge... So you're smarter than me :)
 
  • #71
mfb said:
I didn't find a proof that every sequence has to lead to convergence after a finite number of steps.

Right, and you can't find such a proof. It's a good idea, but it will fail to provide a generalized limit for some sequences. Note that generalized limits are not unique either!
 
  • #72
If there is no such proof (and if you say that I trust you), then there should be a counterexample, but I don't find one.
 
  • #73
mfb said:
If there is no such proof (and if you say that I trust you), then there should be a counterexample, but I don't find one.

It might actually be that AC needs to be invoked to find a counterexample. I do know for sure that your statement is false under AC, but I will not show it until this question is solved.
 
  • Like
Likes mfb
  • #74
This doesn't spoil anything, but it's a counterexample to the claim made by mfb: http://mathoverflow.net/questions/84772/bounded-sequences-with-divergent-cesàro-mean
 
  • Like
Likes S.G. Janssens
  • #75
micromass said:
This doesn't spoil anything, but it's a counterexample to the claim made by mfb: http://mathoverflow.net/questions/84772/bounded-sequences-with-divergent-cesàro-mean
Thanks. So the point is to blow up the length of the sub-sequences fast enough. Probably faster than e^e^x or something like that (didn't check it in detail).
 
  • #76
mfb said:
Thanks. So the point is to blow up the length of the sub-sequences fast enough. Probably faster than e^e^x or something like that (didn't check it in detail).
This sounds like transforming the issue until you end up transcending something. (At least this would give us a reason for AC.)
I was thinking about another idea: All difficulties arise either from linearity or (if this is solved) from ##\underline{\lim} \leq L \leq \overline{\lim}##.
How about putting AC into a Hamel basis (if needed at all) and defining ##L(x_n)## as a certain derivation? (I don't know whether this is a real idea or simply a crude thought.)
 
  • #77
Hamel basis is a good keyword.

We can solve parts of the linearity by directly using this linearity in the definition. Find a generalized limit in [-1,1] for all sequences where ##\liminf_n x_n = -1## and ##\limsup_n x_n = 1##. Then scale the sequence and its generalized limit linearly to find the generalized limit of sequences with other liminf and limsup. This doesn't work for convergent sequences, where we set the generalized limit to the regular limit separately.

Now the interesting question: does this procedure survive condition (1), the limit of the sum of two sequences? I guess this depends on the definition. The critical point here is a cancellation of accumulation points, e.g. (0,2,-1,2,0,2,-1,...) + (0,-1,0,0,0,-1,0,0,0,-1,...) and (0,2,-1,2,0,2,-1,...) + (0,-1,0,0,0,-1,0,0,0,-1,...) have different limsup but need the same limit (condition 5 plus condition 1).
 
  • #78
I'll have a try on #4, although I admit that my solution hides AC in a general result, which I won't prove here. And my first idea hasn't been that far from what I have (not exactly a derivation, but something linear instead). The whole problem is to connect the algebraic property of linearity with the topological property of the ##\lim \inf## and ##\lim \sup##.

I'll start with the construction in Hewitt, Stromberg, Real and Abstract Analysis (Springer), in which the reals are defined as the cosets of the Cauchy sequences ##\mathfrak{C}## modulo the null sequences ##\mathfrak{N}##. Both are subsets of the set of all bounded real sequences ##\mathfrak{B}##.

With ##\mathfrak{N} \subset \mathfrak{C} \subset \mathfrak{B}## we get ##\mathbb{R} \cong \mathfrak{C} / \mathfrak{N} \subset \mathfrak{B} / \mathfrak{N} =: \mathfrak{A}## a real (topological, infinite dimensional, commutative) ##\mathbb{R}-##algebra ##\mathfrak{A}## (with ##1##).

The linear function ##L## can now be viewed as a linear functional on ##\mathfrak{A}##.
I will use the sub-additivity ##\overline{\lim}((x_n)+(y_n)) \leq \overline{\lim} (x_n)+\overline{\lim} (y_n)## and ##\underline{\lim}(x_n) = - \overline{\lim}(-x_n)##

Now here's the trick and the point where the real work is done:
If we define ##L(x_n)\vert_{\mathfrak{C} / \mathfrak{N}} := \lim(x_n)## then ##L(x_n) = \lim(x_n) \leq \overline{\lim}(x_n)## holds on ##\mathfrak{C} / \mathfrak{N} \subset \mathfrak{A} \;##, is linear and we may apply the theorem of Hahn-Banach. (Its proof uses Zorn's Lemma and therefore AC.)
This theorem guarantees us the existence of a linear functional ##L## on ##\mathfrak{A}## which extends our ##L## defined on ##\mathfrak{C} / \mathfrak{N} \; ##.

Linearity is already given by Hahn-Banach.
With that we have ##L(x_n) \leq \overline{\lim}(x_n)## by the theorem and
$$L(x_n)=-L(-x_n) \geq -\overline{\lim}(-x_n) = \underline{\lim}(x_n)$$
For the cutting process in point 5) we define a sequence ##z_n## by ##z_1=x_1## and ##z_n=x_{n} - y_{n-1}=0## for ##n > 1##.
Then ##L(z_n)=L(x_1,0, \dots) = 0 = L(x_n) - L(0,y_1,y_2, \dots) = L(x_n) - L(y_n)## and 5) is equivalent to ##L(0,y_1,y_2, \dots) = L(y_1,y_2, \dots)##. This is true for elements of ##\mathfrak{C} / \mathfrak{N}## which are dense in ##\mathfrak{A}## according to ##\Vert \, . \Vert_{\infty}##.

For ##0 \leq x_n## we have ##0 \leq \underline{\lim} (x_n) \leq L(x_n)##; and for ##\lim (x_n) = x## we have ##x = \underline{\lim} (x_n) \leq L(x_n) \leq \overline{\lim} (x_n) = x## and get equality everywhere. Therefore 4) and 6) hold.
 
  • Like
Likes Erland and micromass
  • #79
Nice, fresh_42, I should have thought of Hahn-Banach... :smile:
 
  • #80
Here is the solution for #2 of the previously unsolved advanced challenges. I got this from Feller's excellent probability book, which contains a lot of other gems.

Take the random distributions of ##r## balls in ##n## cells, assuming that each arrangement has probability ##1/n^r##. We seek the probability ##p_m(r,n)## that exactly ##m## cells are empty.

Let ##A_k## be the event that cell number ##k## is empty. In this event all ##r## balls are placed in the remaining ##n-1## cells, and this can be done in ##(n-1)^r## different ways. Similarly, there are ##(n-2)^r## arrangements leaving two preassigned cells empty. By the inclusion-exclusion rule, the probability that all cells are occupied is
$$p_0(r,n) = \sum_{\nu = 0}^n (-1)^\nu \binom{n}{\nu} \left(1-\frac{\nu}{n}\right)^r$$
Consider a distribution in which exactly ##m## cells are empty. These ##m## cells can be chosen in ##\binom{n}{m}## ways. The ##r## balls are distributed among the remaining ##n-m## cells so that each of these cells is occupied. The number of such distributions is ##(n-m)^r p_0(r,n-m)##. Dividing by ##n^r##, we find for the probability that exactly ##m## cells remain empty
$$p_m(r,n)=\binom{n}{m}\sum_{\nu=0}^{n-m} (-1)^\nu \binom{n-m}{\nu} \left(1-\frac{m+\nu}{n}\right)^r$$

Now we wish to find an asymptotic distribution. First note that
$$(n-\nu)^\nu<n(n-1)\cdots(n-\nu+1)<n^\nu$$
Thus, since ##\nu!\binom{n}{\nu}=n(n-1)\cdots(n-\nu+1)##, multiplying by ##\left(1-\frac{\nu}{n}\right)^r## gives
$$n^\nu\left(1 - \frac{\nu}{n}\right)^{\nu+r}<\nu!\binom{n}{\nu}\left(1-\frac{\nu}{n}\right)^r < n^\nu\left(1-\frac{\nu}{n}\right)^r$$
For ##0<t<1##, it is clear that ##t<-\log(1-t)<\frac{t}{1-t}##, and thus that
$$\left(n e^{-(\nu+r)/(n-\nu)}\right)^\nu <\nu!\binom{n}{\nu}\left(1-\frac{\nu}{n}\right)^r < \left(ne^{-r/n}\right)^\nu$$
Now we put ##\lambda = ne^{-r/n}## and assume that ##r## and ##n## increase in such a way that ##\lambda## remains confined to a finite interval ##0<a<\lambda<b##. For each fixed ##\nu## the ratio of the extreme members of the inequality tends to unity, and so
$$0\leq \frac{\lambda^\nu}{\nu!} - \binom{n}{\nu}\left(1-\frac{\nu}{n}\right)^r\rightarrow 0$$
This relation holds trivially when ##\lambda\rightarrow 0## and hence remains true whenever ##r## and ##n## increase in such a way that ##\lambda## remains bounded. Now
$$e^{-\lambda} - p_0(r,n) =\sum_{\nu=0}^{+\infty} (-1)^\nu\left(\frac{\lambda^\nu}{\nu!} - \binom{n}{\nu}\left(1-\frac{\nu}{n}\right)^r\right)$$
and the right-hand side tends to zero. We therefore have, for each fixed ##m##,
$$p_m(r,n) - e^{-\lambda}\frac{\lambda^m}{m!}\rightarrow 0$$
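As a numerical cross-check (Python, my own sketch rather than Feller's), the exact formula for ##p_m(r,n)## can be evaluated directly and compared with the Poisson term ##e^{-\lambda}\lambda^m/m!##:

```python
import math

def p_empty(m, r, n):
    """Inclusion-exclusion formula above: probability that exactly m of
    n cells are empty when r balls are placed uniformly at random."""
    return math.comb(n, m) * sum(
        (-1) ** v * math.comb(n - m, v) * (1 - (m + v) / n) ** r
        for v in range(n - m + 1)
    )

# Sanity checks on a case small enough to enumerate by hand:
# 2 balls in 2 cells -> P(no cell empty) = P(exactly one empty) = 1/2.
assert abs(p_empty(0, 2, 2) - 0.5) < 1e-12
assert abs(p_empty(1, 2, 2) - 0.5) < 1e-12

# The probabilities over all feasible m sum to 1.
assert abs(sum(p_empty(m, 30, 20) for m in range(20)) - 1) < 1e-9

# Compare with the Poisson term for moderately large r and n:
n, r = 100, 300
lam = n * math.exp(-r / n)  # lambda = n e^{-r/n}, about 4.98 here
for m in range(4):
    poisson = math.exp(-lam) * lam ** m / math.factorial(m)
    print(m, round(p_empty(m, r, n), 5), round(poisson, 5))
```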
 
  • #81
There seems to be an error in the given proof of Advanced Problem 4:

fresh_42 said:
I'll have a try on #4, although I admit that my solution hides AC in a general result, which I won't prove here. And my first idea hasn't been that far from what I have (not exactly a derivation, but something linear instead). The whole problem is to connect the algebraic property of linearity with the topological property of the ##\lim \inf## and ##\lim \sup##.

I'll start with the construction in Hewitt, Stromberg, Real and Abstract Analysis (Springer), in which the reals are defined as the cosets of the Cauchy sequences ##\mathfrak{C}## modulo the null sequences ##\mathfrak{N}##. Both are subsets of the set of all bounded real sequences ##\mathfrak{B}##.

With ##\mathfrak{N} \subset \mathfrak{C} \subset \mathfrak{B}## we get ##\mathbb{R} \cong \mathfrak{C} / \mathfrak{N} \subset \mathfrak{B} / \mathfrak{N} =: \mathfrak{A}## a real (topological, infinite dimensional, commutative) ##\mathbb{R}-##algebra ##\mathfrak{A}## (with ##1##).

The linear function ##L## can now be viewed as a linear functional on ##\mathfrak{A}##.
I will use the sub-additivity ##\overline{\lim}((x_n)+(y_n)) \leq \overline{\lim} (x_n)+\overline{\lim} (y_n)## and ##\underline{\lim}(x_n) = - \overline{\lim}(-x_n)##

Now here's the trick and the point where the real work is done:
If we define ##L(x_n)\vert_{\mathfrak{C} / \mathfrak{N}} := \lim(x_n)## then ##L(x_n) = \lim(x_n) \leq \overline{\lim}(x_n)## holds on ##\mathfrak{C} / \mathfrak{N} \subset \mathfrak{A} \;##, is linear and we may apply the theorem of Hahn-Banach. (Its proof uses Zorn's Lemma and therefore AC.)
This theorem guarantees us the existence of a linear functional ##L## on ##\mathfrak{A}## which extends our ##L## defined on ##\mathfrak{C} / \mathfrak{N} \; ##.

Linearity is already given by Hahn-Banach.
With that we have ##L(x_n) \leq \overline{\lim}(x_n)## by the theorem and
$$L(x_n)=-L(-x_n) \geq -\overline{\lim}(-x_n) = \underline{\lim}(x_n)$$
For the cutting process in point 5) we define a sequence ##z_n## by ##z_1=x_1## and ##z_n=x_{n} - y_{n-1}=0## for ##n > 1##.
Then ##L(z_n)=L(x_1,0, \dots) = 0 = L(x_n) - L(0,y_1,y_2, \dots) = L(x_n) - L(y_n)## and 5) is equivalent to ##L(0,y_1,y_2, \dots) = L(y_1,y_2, \dots)##. This is true for elements of ##\mathfrak{C} / \mathfrak{N}## which are dense in ##\mathfrak{A}## according to ##\Vert \, . \Vert_{\infty}##.

For ##0 \leq x_n## we have ##0 \leq \underline{\lim} (x_n) \leq L(x_n)##; and for ##\lim (x_n) = x## we have ##x = \underline{\lim} (x_n) \leq L(x_n) \leq \overline{\lim} (x_n) = x## and get equality everywhere. Therefore 4) and 6) hold.
The problem lies in this line:
##L(0,y_1,y_2, \dots) = L(y_1,y_2, \dots)##. This is true for elements of ##\mathfrak{C} / \mathfrak{N}## which are dense in ##\mathfrak{A}## according to ##\Vert \, . \Vert_{\infty}##.
It is not clear what you mean by ##\Vert \, . \Vert_{\infty}##. Normally, it would mean that ##\Vert \, (x_n) _n\Vert_{\infty}=\sup_n|x_n|##, but to make sense here, I suppose it means ##\Vert \, (x_n)_n\Vert_{\infty}=\overline\lim |x_n|##.
Anyway, it is certainly wrong that ##\mathfrak{C} / \mathfrak{N}## is dense in ##\mathfrak{A}## with respect to this. For example, let ##x_n=(-1)^n## (##n=1,2,\dots##). Then, there is no Cauchy sequence ##(y_n)_n## in ##\mathbb R## such that ##\Vert (y_n-x_n)_n\Vert_\infty< 1##, for both these possible definitions of ##\Vert \, . \Vert_{\infty}##.
Thus, ##\mathfrak{C} / \mathfrak{N}## is not dense in ##\mathfrak{A}##.

So, the proof that 5) holds for the extended functional fails. And in fact, there is no obvious reason that an extended functional, which is proved to exist by Hahn-Banach's Theorem, should satisfy 5) just because the original (unextended) functional does.

Advanced Problem 4 must therefore still be considered as open (at least for us) until this is fixed.
 
  • #82
@fresh_42 do you have an answer to this? If not I will put it back as an open problem.
 
  • #83
How about an algebraic argument?

Let ##S## denote the linear operator that inserts a zero in the first position, i.e. ##S(y_1,y_2, \dots ) = (0,y_1,y_2, \dots )##

Then ##L_0 : S.(\mathfrak{C}/\mathfrak{N}) \rightarrow \mathbb{R}## with ##L_0(0,(x_n)) := L(0,(x_n))## is a linear functional on ##\{(0,x_n) \vert (x_n) \in \mathfrak{C}/\mathfrak{N}\}## which is identical to ##L## on Cauchy sequences (##L## as defined in the solution above), since ##L(0,x_n) = L(x_n)## for Cauchy sequences ##(x_n)##.

In particular, ##L_0## is majorized by ##L##, i.e. ##L_0(0,(x_n)) \leq L(x_n)##. Thus ##L_0## can be linearly extended to ##S.\mathfrak{A}## by Hahn-Banach. (The fact that the first component is zero makes all these sets linear subspaces.)
Thus ##L_0(0,(y_n)) \leq L(y_n)## for all ##(0,(y_n)) \in S.\mathfrak{A}##. But this can only be true if it is an equality, because we can take ##(-y_n)## and use linearity to turn it into ##L_0(0,(y_n)) \geq L(y_n)##. Hence ##L_0(0,(y_n)) = L(y_n)## for all ##(y_n) \in \mathfrak{A}##, and ##L_0## can be extended by ##L_0((y_n)) := L(y_n)##. The new ##L_0## then automatically fulfills all properties.

Remark: I hope there is no circular argument in it. And I assume there is a topological proof, too, as ##S## is linear, has norm ##1##, and is thus uniformly continuous. One could probably even take a ##\mathfrak{C}/\mathfrak{N}##-Hamel basis of ##\mathfrak{A}## to extend ##LS##, because ##1 \in \mathfrak{A}## can be chosen as the first basis vector. The important property of Hahn-Banach is the majorizing sublinear function.
 
  • #84
Ok, it's late at night in Sweden and I'm tired, but I still can't see that you solved the problem. Your notation is a little confusing, but as far as I can see, you actually did nothing but define ##L_0## on ##\mathfrak A## by ##L_0(y_1,y_2,y_3,...)=L(y_2,y_3,y_4,...)##, and I don't see that ##L_0## satisfies 5).

Please, correct me if I am wrong...
 
  • #85
Erland said:
Ok, it's late at night in Sweden and I'm tired, but I still can't see that you solved the problem. Your notation is a little confusing, but as far as I can see, you actually did nothing but define ##L_0## on ##\mathfrak A## by ##L_0(y_1,y_2,y_3,...)=L(y_2,y_3,y_4,...)##, and I don't see that ##L_0## satisfies 5).

Please, correct me if I am wrong...
I'm afraid we share the same time zone ...

The argument goes as follows:
(I only have to consider the case ##L(0,y_1,y_2, \dots )= L(y_1,y_2, \dots)## due to linearity (see my first post on this). And this equation is already true for Cauchy sequences.)

I take the function ##L## as previously defined, i.e. it has all properties except (5).
This means it is linear and defined on all bounded sequences (modulo the ideal of null sequences) which I denote as ##\mathfrak{A}##.
##\mathbb{R} \cong \mathfrak{C}/\mathfrak{N}## are the Cauchy sequences (modulo null sequences), which are a subspace of ##\mathfrak{A}##.

Now ##\{0\} \times \mathfrak{A}## defines a vector space with ##\{0\} \times \mathfrak{C}/\mathfrak{N}## as a subspace.
##L_0(0,(x_n)) := L(0,(x_n)) = L((x_n)) \leq L((x_n))## defines a linear functional on this subspace.

I now apply the theorem of Hahn-Banach (without property (5)) to ##L_0## which is majorized by the linear and therewith sublinear functional ##L## on ##\{0\} \times \mathfrak{C}/\mathfrak{N}## to get a linear extension of ##L_0## on ##\{0\} \times \mathfrak{A}##.

Thus ##L_0(0,(y_n)) \leq L((y_n))## for all ##(0,y_n) \in \{0\} \times \mathfrak{A}##.
Therefore ##L_0(0,-(y_n)) \leq L(-(y_n))## and ##L_0(0,(y_n)) \geq L((y_n))## because ##L_0## and ##L## are linear.
I then have ##L_0(0,(y_n)) = L((y_n))## on all ##(0,y_n) \in \{0\} \times \mathfrak{A}## which means on all ##(y_n) \in \mathfrak{A}##.

Now applying Hahn-Banach again to ##\{0\} \times \mathfrak{A} \subset \mathfrak{A}## and ##L_0(0,(y_n)) = L((y_n))## gives with the same argument / trick as above a linear functional ##L'_0## on ##\mathfrak{A}## with ##L'_0(0,(y_n)) = L_0(0,(y_n)) = L((y_n))## and ##L'_0((y_n)) = L((y_n))##.

(Maybe the last argument can stand alone without the previous ##L_0## but as you said, it's too late.)
 
  • #86
How do you get ##L'_0((y_n))=L((y_n))##?
 
  • #87
Erland said:
How do you get ##L'_0((y_n))=L((y_n))##?
The same way as before.
##L_0(0,(y_n)) = L((y_n)) \leq L((y_n))## on ##\{0\} \times \mathfrak{A}##.
Hahn-Banach gives ##L'_0((y_n)) \leq L((y_n))## on ##\mathfrak{A}##.
Now ##L'_0(-(y_n)) \leq L(-(y_n))## and by linearity ##L'_0((y_n)) \geq L((y_n))##, i.e. ##L'_0((y_n)) = L((y_n))## on ##\mathfrak{A}##.
Since ##L'_0## extends ##L## (5) still holds.

Edit: But you have been absolutely right. I tried to get to ##L'_0## in one step and had trouble with a chicken-and-egg kind of problem in defining it (which I didn't even realize at first). The trick with the subspace ##\{0\} \times \mathfrak{A} \subset \mathfrak{A}## seems to be an easy way out of this circle. In addition, I have to thank you for the insights into the power of this theorem. I've found quite a few different versions and applications along the way. And I now know that Banach is the keyword of first choice whenever topology meets linearity.
 
  • #88
Perhaps I am slow-witted, but I just don't get it.
fresh_42 said:
The same way as before.
##L_0(0,(y_n)) = L((y_n)) \leq L((y_n))## on ##\{0\} \times \mathfrak{A}##.
Hahn-Banach gives ##L'_0((y_n)) \leq L((y_n))## on ##\mathfrak{A}##.
How can you go from ##L_0(0,(y_n)) = L((y_n)) \leq L((y_n))## to ##L'_0((y_n)) \leq L((y_n))##? I suppose you use that ##L'_0(0,(y_n))=L_0(0,(y_n))##, but then you don't have the same upper estimating functional before and after you apply Hahn-Banach (unless ##L## already satisfies 5). To get an upper estimate by ##L((y_n))## after Hahn-Banach, you should have ##L(0,(y_n))## before Hahn-Banach, while in fact you have ##L((y_n))##, which is not the same if ##L## does not satisfy 5).
 
  • #89
I'm not sure I understand you.
First I construct an ##L## on ##\mathfrak{C}/\mathfrak{N}## which satisfies all properties, (5) including.
Then I expand ##L## on ##\mathfrak{A}## say ##L_1##.
##L_1## satisfies all properties except (5).
Now I take ##L_1## on ##\{0\} \times \mathfrak{C}/\mathfrak{N} \subset \{0\} \times \mathfrak{A}## with ##L_1 (0,(y_n)) \leq L(y_n)##.
Then I expand ##L_1## on ##\{0\} \times \mathfrak{A}##, say ##L_2## which still satisfies ##L_2(0,(y_n)) \leq L_1(0,(y_n)) = L(y_n)##.
And now I expand ##L_2## from ##\{0\} \times \mathfrak{A}## on ##\mathfrak{A}##, say ##L_3##.
It still satisfies ##L_3(y_n) \leq L(y_n)## and its restriction on ##\{0\} \times \mathfrak{A}## is ##L_2##.

At every step linearity guarantees that "##\leq##" is actually an equation and ##L_3(y_n) = L(y_n) = L_2(0,(y_n)) = L_3(0,(y_n))## by construction.
 
  • #90
fresh_42 said:
I'm not sure I understand you.
First I construct an ##L## on ##\mathfrak{C}/\mathfrak{N}## which satisfies all properties, (5) including.
Then I expand ##L## on ##\mathfrak{A}## say ##L_1##.
##L_1## satisfies all properties except (5).
Now I take ##L_1## on ##\{0\} \times \mathfrak{C}/\mathfrak{N} \subset \{0\} \times \mathfrak{A}## with ##L_1 (0,(y_n)) \leq L(y_n)##.
Then I expand ##L_1## on ##\{0\} \times \mathfrak{A}##, say ##L_2## which still satisfies ##L_2(0,(y_n)) \leq L_1(0,(y_n)) = L(y_n)##.
And now I expand ##L_2## from ##\{0\} \times \mathfrak{A}## on ##\mathfrak{A}##, say ##L_3##.
It still satisfies ##L_3(y_n) \leq L(y_n)## and its restriction on ##\{0\} \times \mathfrak{A}## is ##L_2##.

At every step linearity guarantees that "##\leq##" is actually an equation and ##L_3(y_n) = L(y_n) = L_2(0,(y_n)) = L_3(0,(y_n))## by construction.

You write "It still satisfies ##L_3(y_n) \leq L(y_n)##", meaning that it did that before. But, as far as I can see, ##L_2## does not need to satisfy ##L_2(y_n) \leq L(y_n)## for ##(y_n)\in \{0\} \times \mathfrak{A}##. What it satisfies is ##L_2(0,(y_n)) \leq L(y_n)##.

A correct application of Hahn-Banach would lead from ##L_2(0,(y_n)) \leq L(y_n)## to something like ##L_3(x,(y_n))\le L(y_n)##, for ##x\in \mathbb R##, as I understand it.

Btw, I have no problem with your clever trick of turning an inequality into an equality. It is the original inequality I doubt.
 
