Math Challenge - July 2020

  • #71
fresh_42
Mentor
Insights Author
2022 Award
I don't know if you're allowed to answer/give hints, but for problem 14, I had this thought:

We have two equations and two unknowns. This is a good start. However, I don't see any way to solve for ##x## in terms of ##y## (I've really tried), which makes it hard to make progress. I do recognize that ##\sin^n{\left(x+\frac{\pi}{2}\right)} = \cos^n{(x)}##. Does that mean that, in order for there to be any solutions at all, we need $$\sin^n{\left(x+\frac{\pi}{2}\right)} - \cos^n{(x)} = y^4+\left(x + \frac{\pi}{2} \right)^2y^2-4y^2+4 - \left(x^4+x^2y^2-4x^2+1\right) = 0\,?$$

I might be completely wrong in my reasoning, but I was just wondering.

Disclaimer: I've never done a "find all solutions x,y for a function" type problem.
You cannot solve it directly. The trick is to get rid of whatever disturbs most; this is always a good plan. You can eliminate the trig functions, at the cost that the equality turns into an inequality.
 
  • #72
You cannot solve it directly. The trick is to get rid of whatever disturbs most; this is always a good plan. You can eliminate the trig functions, at the cost that the equality turns into an inequality.

If you add them, using ##\sin^4{x} + \cos^4{x} \in [\frac{1}{2},1]##, you can get the inequality $$x^4 + y^4 + 2x^2 y^2 -4(x^2 + y^2) < -4$$ $$(x^2 + y^2)^2 -4(x^2 + y^2) + 4 < 0$$ $$([x^2 + y^2] -2)^2 < 0$$That would seem to imply that there are no possible values of ##x^2 + y^2## that satisfy the inequality, and that there are no solutions.
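For reference, the bound on ##\sin^4 x + \cos^4 x## used here follows from squaring the Pythagorean identity:
$$\sin^4 x + \cos^4 x = \left(\sin^2 x + \cos^2 x\right)^2 - 2\sin^2 x\cos^2 x = 1 - \tfrac{1}{2}\sin^2(2x) \in \left[\tfrac{1}{2},\,1\right].$$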
 
  • #73
fresh_42
Mentor
Insights Author
2022 Award
17,796
18,965
That would seem to imply that there are no possible values of ##x^2+y^2## that satisfy the inequality, and that there are no solutions...
... if only you had worked properly ...

Or, to say it in chess speak: Sauber setzen! ("make clean moves")
Sorry, the rhythm doesn't work in English.
 
  • #74
ItsukaKitto
For the first problem in the high school section, I think one can use induction (I don't know if this has been done already). For ##n=1## the statement is trivial, i.e. ##a_1^2 + b_1^2 = a^2 + b^2## with ##a = a_1## and ##b = b_1##. Assuming the statement is true for ##n = m##,
##(a_1^2 + b_1^2)\cdots(a_m^2 + b_m^2) = a^2 + b^2 ##, we can prove it for ##n = m+1##:
##(a_1^2 + b_1^2)\cdots(a_m^2 + b_m^2)(a_{m+1}^2 + b_{m + 1}^2) = (a^2 + b^2)(a_{m+1}^2 + b_{m+1}^2 ) = (aa_{m+1} + bb_{m+1} )^2 + (ab_{m+1} - ba_{m+1})^2 ##
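The last step is the classical two-square identity; writing ##c = a_{m+1}## and ##d = b_{m+1}## for brevity, a quick expansion confirms it, since the cross terms ##\pm 2abcd## cancel:
$$(a^2+b^2)(c^2+d^2) = a^2c^2 + a^2d^2 + b^2c^2 + b^2d^2 = (ac+bd)^2 + (ad-bc)^2 .$$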
 
  • #75
fresh_42
Mentor
Insights Author
2022 Award
17,796
18,965
For the first problem in the high school section, I think one can use induction (I don't know if this has been done already). For ##n=1## the statement is trivial, i.e. ##a_1^2 + b_1^2 = a^2 + b^2## with ##a = a_1## and ##b = b_1##. Assuming the statement is true for ##n = m##,
##(a_1^2 + b_1^2)\cdots(a_m^2 + b_m^2) = a^2 + b^2 ##, we can prove it for ##n = m+1##:
##(a_1^2 + b_1^2)\cdots(a_m^2 + b_m^2)(a_{m+1}^2 + b_{m + 1}^2) = (a^2 + b^2)(a_{m+1}^2 + b_{m+1}^2 ) = (aa_{m+1} + bb_{m+1} )^2 + (ab_{m+1} - ba_{m+1})^2 ##
Another possibility is to write ##a_k^2+b_k^2 =\det \begin{pmatrix}a_k&-b_k\\b_k&a_k\end{pmatrix}##, show that this form is preserved under matrix multiplication, and then use the fact that the determinant is multiplicative.
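Spelled out (a quick check of both claims, with ##a,b,c,d## standing for the entries of two such matrices):
$$\begin{pmatrix}a&-b\\b&a\end{pmatrix}\begin{pmatrix}c&-d\\d&c\end{pmatrix}=\begin{pmatrix}ac-bd&-(ad+bc)\\ad+bc&ac-bd\end{pmatrix},\qquad \det\begin{pmatrix}a&-b\\b&a\end{pmatrix}=a^2+b^2,$$
so the product has the same shape, and multiplicativity of the determinant gives ##(a^2+b^2)(c^2+d^2)=(ac-bd)^2+(ad+bc)^2##.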
 
Last edited:
  • #76
Mayhem
Can we prove problem 11 using vector algebra? Let ##a_k = \binom{a_k}{a_k}##, then we can rewrite the question as
$$\prod_{k = 1}^{n} \mathbf{a}_k\cdot\mathbf{b}_k = \mathbf{a}\cdot\mathbf{b}$$
We know that the scalar product returns a scalar. Which would mean that a product of ##n## scalar products would also return a scalar, namely ##\mathbf{a}\cdot\mathbf{b}##.

And since the scalar product of two 2-d vectors is by definition ##a_1a_2+b_1b_2##, then for ##a_1 = a_2## and ##b_1 = b_2##, the resulting scalar ##\mathbf{a}\cdot\mathbf{b}## must be a sum of two integer squares, given the initial conditions for ##\mathbf{a}_k##.

Maybe I'm using circular logic/assuming the conclusion. I'm quite new to proofs.
 
  • #77
ItsukaKitto
I know one way to solve the trigonometric problem; it happens to be just etotheipi's approach, but done (I hope) correctly:
Adding the equations, we get an inequality in ##A = x^2 + y^2##, namely
##\cos^4 (x) + \sin^4(x) = A(A-4) + 5 \leq 1##
But the expression in ##A## equals ##(A-2)^2 + 1 \geq 1##, while the left side is at most ##1##, so both sides must equal ##1##; solving ##A^2 - 4A + 4 = 0## gives ##A = 2##.
Now, we also see that ##\cos^4(x) + \sin^4(x) = 1##, which occurs when ##x = 0, \pm\pi/2##,
so that the values of ##y## can be found from the relation ##A = 2## to be ##y = \pm \sqrt{2}, \pm \sqrt{2 - \pi^2/4}##.
 
  • #78
fresh_42
Mentor
Insights Author
2022 Award
Can we prove problem 11 using vector algebra? Let ##a_k = \binom{a_k}{a_k}##, then we can rewrite the question as
$$\prod_{k = 1}^{n} \mathbf{a}_k\cdot\mathbf{b}_k = \mathbf{a}\cdot\mathbf{b}$$
We know that the scalar product returns a scalar. Which would mean that a product of ##n## scalar products would also return a scalar, namely ##\mathbf{a}\cdot\mathbf{b}##.

And since the scalar product of two 2-d vectors is by definition ##a_1a_2+b_1b_2##, then for ##a_1 = a_2## and ##b_1 = b_2##, the resulting scalar ##\mathbf{a}\cdot\mathbf{b}## must be a sum of two integer squares, given the initial conditions for ##\mathbf{a}_k##.

Maybe I'm using circular logic/assuming the conclusion. I'm quite new to proofs.
I guess you meant ##\mathbf{a}_k=\mathbf{b}_k=\begin{bmatrix}a_k\\b_k\end{bmatrix}## in which case we have
$$
(a_1^2+b_1^2)\cdot \ldots \cdot (a_n^2+b_n^2)= \prod_{k = 1}^{n} \mathbf{a}_k\cdot\mathbf{b}_k = \prod_{k = 1}^{n} \mathbf{a}_k\cdot\mathbf{a}_k = \prod_{k = 1}^{n} \|\mathbf{a}_k\|_2^2
$$
And at this point you have to insert the argument why the result is again of the requested form. Do you know this argument?
 
  • #79
Mayhem
I guess you meant ##\mathbf{a}_k=\mathbf{b}_k=\begin{bmatrix}a_k\\b_k\end{bmatrix}## in which case we have
$$
(a_1^2+b_1^2)\cdot \ldots \cdot (a_n^2+b_n^2)= \prod_{k = 1}^{n} \mathbf{a}_k\cdot\mathbf{b}_k = \prod_{k = 1}^{n} \mathbf{a}_k\cdot\mathbf{a}_k = \prod_{k = 1}^{n} \|\mathbf{a}_k\|_2^2
$$
And at this point you have to insert the argument why the result is again of the requested form. Do you know this argument?
Not off the top of my head, no. And yes, I messed up the definition of ##\mathbf{a}_k##. Thanks for noticing that.
 
  • #80
fresh_42
Mentor
Insights Author
2022 Award
Not off the top of my head, no. And yes, I messed up the definition of ##\mathbf{a}_k##. Thanks for noticing that.
The scalar product of a vector with itself is the squared Euclidean norm of this vector, its length squared. Now which properties do norms have?
 
  • #81
fresh_42
Mentor
Insights Author
2022 Award
I know one way to solve the trigonometric problem; it happens to be just etotheipi's approach, but done (I hope) correctly:
Adding the equations, we get an inequality in ##A = x^2 + y^2##, namely
##\cos^4 (x) + \sin^4(x) = A(A-4) + 5 \leq 1##
But the expression in ##A## equals ##(A-2)^2 + 1 \geq 1##, while the left side is at most ##1##, so both sides must equal ##1##; solving ##A^2 - 4A + 4 = 0## gives ##A = 2##.
Now, we also see that ##\cos^4(x) + \sin^4(x) = 1##, which occurs when ##x = 0, \pm\pi/2##,
so that the values of ##y## can be found from the relation ##A = 2## to be ##y = \pm \sqrt{2}, \pm \sqrt{2 - \pi^2/4}##.
Have you checked which of all these numbers is actually a solution?
 
  • #82
Mayhem
The scalar product of a vector with itself is the squared Euclidean norm of this vector, its length squared. Now which properties do norms have?
Ah, so ##\|\mathbf{a}\|## is the norm? In high school we used ##|\mathbf{a}|##, but I should have guessed that it was the same thing.

The norm of a vector ##|\mathbf{a}|##, that is to say its "length", is defined as

$$|\mathbf{a}|=\sqrt{a_1^2+a_2^2}$$

If we square both sides, we get ##|\mathbf{a}|^2 = a_1^2+a_2^2##. Generalize that for ##|\mathbf{a}_k|## and we get what we wanted. I see! Thank you.
 
  • #83
$$([x^2 + y^2] -2)^2 < 0$$
Sorry, this should be a non-strict inequality.
 
  • #84
fresh_42
Mentor
Insights Author
2022 Award
Ah, so ##\|\mathbf{a}\|## is the norm? In high school we used ##|\mathbf{a}|##, but I should have guessed that it was the same thing.

The norm of a vector ##|\mathbf{a}|##, that is to say its "length", is defined as

$$|\mathbf{a}|=\sqrt{a_1^2+a_2^2}$$

If we square both sides, we get ##|\mathbf{a}|^2 = a_1^2+a_2^2##. Generalize that for ##|\mathbf{a}_k|## and we get what we wanted. I see! Thank you.
Yes, but you still need a property of the norm.
 
  • #85
Mayhem
Yes, but you still need a property of the norm.
That it is a scalar?
 
  • #86
ItsukaKitto
Have you checked which of all these numbers is actually a solution?
Ah yes, they're only a list of permissible values; the actual solutions must be a subset of these...
##(0,\sqrt{2})## and ##(0, -\sqrt{2})## work, but the other values don't (indeed ##2 - \pi^2/4 < 0##, so those values of ##y## aren't even real).
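For the record, taking the system as reconstructed in posts #71/#72, i.e. ##\sin^4 x = y^4+x^2y^2-4y^2+4## and ##\cos^4 x = x^4+x^2y^2-4x^2+1##, the pair ##(0,\pm\sqrt 2)## can be checked directly:
$$\sin^4 0 = 0 = 4+0-8+4, \qquad \cos^4 0 = 1 = 0+0-0+1 .$$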
 
  • #87
fresh_42
Mentor
Insights Author
2022 Award
That it is a scalar?
The problem we still have is whether the product of the squared norms is still of the form ##\|\mathbf{a}\|_2^2##, i.e. a sum of two integer squares. I don't see this. So maybe I was wrong and the norm doesn't work as easily as I thought; we still need additional information. I guess the determinant idea is the closest one to that approach.
 
  • #88
Mayhem
The problem we still have is whether the product of the squared norms is still of the form ##\|\mathbf{a}\|_2^2##, i.e. a sum of two integer squares. I don't see this. So maybe I was wrong and the norm doesn't work as easily as I thought; we still need additional information. I guess the determinant idea is the closest one to that approach.
So in conclusion, setting it up as a product of scalar products probably isn't particularly useful. I really ought to brush up on my linear algebra. Unfortunately calculus is more fun.
 
  • #89
fresh_42
Mentor
Insights Author
2022 Award
So in conclusion, setting it up as a product of scalar products probably isn't particularly useful. I really ought to brush up on my linear algebra. Unfortunately calculus is more fun.
13.) and 12.c.) from last month are basically calculus.
 
  • #90
Lament
Given,
\begin{align*}
f(xy)&=f(x)f(y)-f(x)-f(y)+2 ...(1)\\
f(x+y)&=f(x)+f(y)+2xy-1 ...(2)\\
f(1)&=2 ...(3)
\end{align*}
Putting ##y=0## in ##(2)## gives ##f(0)=1##.
Putting ##y=1## in ##(2)## gives
$$f(x+1)=f(x)+f(1)+2x-1$$
or,
\begin{align*}
f(x+1)-f(x)&=2x+1 ...(4)
\end{align*}
Hence,
\begin{align*}
[f(x+1)-f(x)]+[f(x)-f(x-1)]+\dots+[f(3)-f(2)]+[f(2)-f(1)] &= [2x+1]+[2(x-1)+1]+\dots+[2\cdot 2+1]+[2\cdot 1+1]\\
\Rightarrow f(x+1)-f(1) &= 2\cdot\frac{x(x+1)}{2}+x \\
\Rightarrow f(x+1) &= x^2+2x+2=(x+1)^2+1 \qquad ...(5)
\end{align*}
or,
\begin{align*}
f(x)&=x^2+1 ...(6)
\end{align*}
It is easy to check ##(6)## satisfies ##(1)## & ##(2)##.
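For completeness, a quick check that ##(6)## indeed satisfies ##(1)## and ##(2)##:
\begin{align*}
f(x)f(y)-f(x)-f(y)+2 &= (x^2+1)(y^2+1)-(x^2+1)-(y^2+1)+2 = x^2y^2+1 = f(xy),\\
f(x)+f(y)+2xy-1 &= (x^2+1)+(y^2+1)+2xy-1 = (x+y)^2+1 = f(x+y).
\end{align*}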
 
  • #91
Adesh
If you add them, using ##\sin^4{x} + \cos^4{x} \in [\frac{1}{2},1]##, you can get the inequality $$x^4 + y^4 + 2x^2 y^2 -4(x^2 + y^2) < -4$$ $$(x^2 + y^2)^2 -4(x^2 + y^2) + 4 < 0$$ $$([x^2 + y^2] -2)^2 < 0$$That would seem to imply that there are no possible values of ##x^2 + y^2## that satisfy the inequality, and that there are no solutions.
I think, pal, you missed the equality. The first expression should be

$$x^4 + y^4 + 2x^2 y^2 -4(x^2 + y^2) \leq -4$$
because 1 is allowed. Hence, your last expression becomes
$$([x^2 + y^2] -2)^2 \leq 0$$
Since we want only real solutions, equality is forced (a real square cannot be negative). So we have
$$
[(x^2 +y^2)-2]^2= 0$$
$$
x^2 + y^2 = 2 $$
With this value in hand, we can substitute it back into the original equations to get the real solutions.
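Concretely (again taking the system as in posts #71/#72), substituting ##y^2 = 2-x^2## reduces the two equations to
$$\sin^4 x = 2x^2, \qquad \cos^4 x = 1-2x^2,$$
and since ##\sin^4 x \leq \sin^2 x \leq x^2 < 2x^2## for ##x\neq 0##, the only possibility is ##x=0##, ##y=\pm\sqrt 2##.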

(Pal, I'm not alluding to our discussion in that other thread; we are friends and I respect that.)
 
  • #92
I think, pal, you missed the equality. The first expression should be

You're right, see #83 :wink:
 
  • #93
Adesh
You're right, see #83 :wink:
Sorry, I missed that. Actually, there were so many posts, and there was even the tab "there are more posts to display", but I didn't click on it :biggrin:
 
  • #94
mathwonk
Science Advisor
Homework Helper
SPOILER Remarks/hints:



Problem #1 taught me that I did not really know what the weak topology is, i.e. what the basic open sets are. Nice problem.

Problem #9 taught me that you can reverse the inequality and still get the same conclusion, even without assuming the map is onto (essentially the same proof). Another instructive problem.

Problem #2 I don't know much about, but I would start by finding their Lie algebras. (Since, as you probably know, the Lie algebra is the tangent space to the Lie group at the identity, and transversality is related to the relative position of the tangent spaces.)
 
Last edited:
  • #95
Infrared
Science Advisor
Gold Member
@mathwonk Yes, computing their Lie algebras is the right way to go. And in general that should be enough, because if ##G,G'\subset GL_n## are Lie groups, and ##g\in G\cap G'##, then ##G## and ##G'## intersect transversely at ##g## iff they do at ##1## (since ##T_gG=g_* \left(T_1G\right)## and similarly for ##G'##).
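Spelled out: since ##T_1(GL_n)=\mathfrak{gl}_n##, transversality at the identity is exactly the Lie algebra condition
$$\mathfrak g + \mathfrak g' = \mathfrak{gl}_n, \qquad \mathfrak g = T_1G,\ \mathfrak g' = T_1G',$$
which is why computing the Lie algebras is enough.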
 
  • #96
wrobel
Science Advisor
Insights Author
you can reverse the inequality and still get the same conclusion, even without assuming the map is onto (essentially the same proof). Another instructive problem.
Sure; recalling the Poincaré recurrence theorem is also instructive in this regard.
 
  • #97
mathwonk
Science Advisor
Homework Helper
I'm afraid I don't know that theorem of Poincaré. OK, I googled it; yes, nice connection.
Also, one gets from the reverse result that every (necessarily injective) isometry of a compact metric space into itself is also surjective, a possibly useful fact, analogous to results about linear maps of finite-dimensional vector spaces, algebraic maps of irreducible projective varieties, and of course functions on finite sets; another illustration of the principle that "compact" generalizes "finite".


And hopefully the Lie algebra result gives what we want, since then SU(n) has a nice Lie algebra structure compatible with the other two!



Possible spoiler comments on #4.



And in looking at continuous functions on the interval, I was able to find a sequence of constant norm that converges to zero weakly (using the known structure of the dual of C[0,1], due to Riesz in 1909, as functions of bounded variation), but so far that only illustrates problem #1; it apparently does not settle whether C[0,1] is a dual. I thought maybe I could use the theorem that the unit ball in a dual is weak-star compact, but I have not seen how to do that yet. I.e., even if the ball were not weakly compact, which is not at all clear either, it seems it might still be weak-star compact. Some people say the key is to look at "extreme points", i.e. maybe convexity is involved (Krein-Milman as well as Alaoglu)? I haven't written anything down yet, but if the picture in my head is close, among positive-valued functions in the unit ball of C[0,1] there seems to be only one extreme point, hence altogether only two? Just tossing out guesses.
 
Last edited:
  • #99
nuuskur
Science Advisor
Since [itex]\dim V \leq \dim V'[/itex] always holds, the space [itex]V'[/itex] must be infinite-dimensional. Assume for a contradiction that [itex]\sigma (V,V')[/itex] is normable. The dual space of a normed space is a Banach space, so it is enough to show that the vector space [itex]V'[/itex] admits a countable basis, which is impossible for Banach spaces.

Let [itex]\|\cdot\|[/itex] be the inducing norm. I will attempt to justify that we may assume a countable neighborhood basis of [itex]0[/itex] which consists of kernels of functionals in [itex]V'[/itex]. Consider a nh basis of [itex]0[/itex], for instance [itex]T_n = \{x\in V \mid \|x\| < 1/n\},\quad n\in\mathbb N.[/itex]
Fix [itex]n\in\mathbb N[/itex]. By Hahn-Banach (the point separation corollary), take [itex]\varphi _n \in V'[/itex] such that [itex]\overline{T_n}\subseteq \mathrm{Ker}\varphi _n[/itex]. Then the sequence of kernels constitutes a nh basis of [itex]0[/itex]. In fact,
[tex]
\left\{\bigcap _{k=1}^n\varphi _k^{-1}(B(0,\varepsilon)) \mid n\in\mathbb N, \varepsilon >0\right\}
[/tex]
is a nh basis of [itex]0[/itex] w.r.t [itex]\sigma (V,V')[/itex]. Take [itex]\varphi \in V'[/itex]. For every [itex]\varepsilon >0[/itex] we have [itex]N_\varepsilon \in\mathbb N[/itex] s.t
[tex]
\bigcap _{k=1} ^{N_\varepsilon} \varphi _{k}^{-1}(B(0,\varepsilon)) \subseteq \varphi ^{-1}(B(0,\varepsilon))
[/tex]
Put [itex]N := \min _{\varepsilon }N_{\varepsilon}[/itex]. Then [itex]x\in \bigcap _{k=1}^N \mathrm{Ker}\varphi _k[/itex] implies [itex]x\in\mathrm{Ker}\varphi [/itex], thus [itex]\varphi \in \mathrm{span}(\varphi _1,\ldots, \varphi _N)[/itex].
I think Hahn-Banach is overkill, and I also think a similar argument goes through assuming [itex]\sigma (V,V')[/itex] is metrisable. I can't put my finger on it at the moment; it's likely something stupidly simple.
 
Last edited:
  • #100
Since [itex]\dim V \leq \dim V'[/itex] always holds, the space [itex]V'[/itex] must be infinite-dimensional. Assume for a contradiction that [itex]\sigma (V,V')[/itex] is normable. The dual space of a normed space is a Banach space, so it is enough to show that [itex]V'[/itex] admits a countable basis, which is impossible for Banach spaces.

Let [itex]\|\cdot\|[/itex] be the inducing norm. I will attempt to justify that we may assume a countable nh basis of [itex]0[/itex] which consists of kernels of functionals in [itex]V'[/itex]. Consider a neighborhood basis of [itex]0[/itex], for instance [itex]T_n = \{x\in V \mid \|x\| < 1/n\},\quad n\in\mathbb N.[/itex]
Fix [itex]n\in\mathbb N[/itex]. By Hahn-Banach (the point separation corollary), take [itex]\varphi _n \in V'[/itex] such that [itex]\overline{T_n}\subseteq \mathrm{Ker}\varphi _n[/itex]. Then the sequence of kernels constitute a nh basis of [itex]0[/itex]. In fact,
[tex]
\mathcal N := \left\{\bigcap _{k=1}^n\varphi _k^{-1}(B(0,\varepsilon)) \mid n\in\mathbb N, \varepsilon >0\right\}
[/tex]
is a nh basis of [itex]0[/itex] w.r.t [itex]\sigma (V,V')[/itex]. Take [itex]\varphi \in V'[/itex]. For every [itex]\varepsilon >0[/itex] we have [itex]N_\varepsilon \in\mathbb N[/itex] s.t
[tex]
\bigcap _{k=1} ^{N_\varepsilon} \varphi _{k}^{-1}(B(0,\varepsilon)) \subseteq \varphi ^{-1}(B(0,\varepsilon))
[/tex]
Put [itex]N := \min _{\varepsilon }N_{\varepsilon}[/itex]. Then [itex]x\in \bigcap _{k=1}^N \mathrm{Ker}\varphi _n[/itex] implies [itex]x\in\mathrm{Ker}\varphi [/itex], thus [itex]\varphi \in \mathrm{span}(\varphi _1,\ldots, \varphi _N)[/itex].
I think Hahn Banach is overkill and I also think a similar argument passes assuming [itex]\sigma (V,V')[/itex] is metrisable. I can't put my finger on it atm. It's likely something stupidly simple.

I haven't read it in detail yet, but what do you mean by "basis"? A topological basis? In that case ##l^2(\Bbb{N})## is separable and thus has a countable basis, yet it has infinite dimension, so I don't quite see how you would get a contradiction.
 
  • #101
nuuskur
Science Advisor
I haven't read it in detail yet, but what do you mean by "basis"? A topological basis? In that case ##l^2(\Bbb{N})## is separable and thus has a countable basis, yet it has infinite dimension, so I don't quite see how you would get a contradiction.
Now you're confusing me. The [itex]\varphi_n[/itex] make a countable basis of the vector space [itex]V'[/itex], which is impossible for Banach spaces. IIRC it's a consequence of the Baire category theorem. Now that I think about it, maybe P1 more generally can also be shown with the BCT.

I edited the post to emphasise it's a basis in the sense of vector spaces.
 
Last edited:
  • #102
Now you're confusing me. The [itex]\varphi_n[/itex] make a countable basis of the vector space [itex]V'[/itex], which is impossible for Banach spaces. IIRC it's a consequence of the Baire category theorem. Now that I think about it, maybe P1 more generally can also be shown with the BCT.

I edited the post to emphasise it's a basis in the sense of vector spaces.

Ah yes, I see now. Your idea was to show that ##V^*## admits a countable Hamel basis by showing that ##V^*## is spanned by countably many functionals. Interesting! I will look at the details soon and get back to you.
 
Last edited by a moderator:
  • #103
Since [itex]\dim V \leq \dim V'[/itex] always holds, the space [itex]V'[/itex] must be infinite-dimensional. Assume for a contradiction that [itex]\sigma (V,V')[/itex] is normable. The dual space of a normed space is a Banach space, so it is enough to show that the vector space [itex]V'[/itex] admits a countable basis, which is impossible for Banach spaces.

Let [itex]\|\cdot\|[/itex] be the inducing norm. I will attempt to justify that we may assume a countable neighborhood basis of [itex]0[/itex] which consists of kernels of functionals in [itex]V'[/itex]. Consider a nh basis of [itex]0[/itex], for instance [itex]T_n = \{x\in V \mid \|x\| < 1/n\},\quad n\in\mathbb N.[/itex]
Fix [itex]n\in\mathbb N[/itex]. By Hahn-Banach (the point separation corollary), take [itex]\varphi _n \in V'[/itex] such that [itex]\overline{T_n}\subseteq \mathrm{Ker}\varphi _n[/itex]. Then the sequence of kernels constitute a nh basis of [itex]0[/itex]. In fact,
[tex]
\left\{\bigcap _{k=1}^n\varphi _k^{-1}(B(0,\varepsilon)) \mid n\in\mathbb N, \varepsilon >0\right\}
[/tex]
is a nh basis of [itex]0[/itex] w.r.t [itex]\sigma (V,V')[/itex]. Take [itex]\varphi \in V'[/itex]. For every [itex]\varepsilon >0[/itex] we have [itex]N_\varepsilon \in\mathbb N[/itex] s.t
[tex]
\bigcap _{k=1} ^{N_\varepsilon} \varphi _{k}^{-1}(B(0,\varepsilon)) \subseteq \varphi ^{-1}(B(0,\varepsilon))
[/tex]
Put [itex]N := \min _{\varepsilon }N_{\varepsilon}[/itex]. Then [itex]x\in \bigcap _{k=1}^N \mathrm{Ker}\varphi _k[/itex] implies [itex]x\in\mathrm{Ker}\varphi [/itex], thus [itex]\varphi \in \mathrm{span}(\varphi _1,\ldots, \varphi _N)[/itex].
I think Hahn Banach is overkill and I also think a similar argument passes assuming [itex]\sigma (V,V')[/itex] is metrisable. I can't put my finger on it atm. It's likely something stupidly simple.

I'm not sure how you apply Hahn-Banach, but even if its use is justified, there seems to be a problem:

You have ##T_n \subseteq \ker(\varphi_n)## and consequently ##V=\operatorname{span}(T_n) \subseteq \ker(\varphi_n)## (every ##x\in V## is a scalar multiple of a vector in ##T_n##), so ##\varphi_n = 0## for all ##n##, and it is then impossible for these functionals to span the dual space.

Somewhere you must have made a mistake.
 
  • #104
nuuskur
Science Advisor
Oops, I'm an idiot..:oldgrumpy:
Will revise, apologies for wasting your time.
 
  • #105
Will revise, apologies for wasting your time.

No time wasted! I learned something from it :)
 
