"Applied Functional Analysis" by Zeidler
In my book, "Applied Functional Analysis" by Zeidler, there's a question in the first chapter which, unless I've got my concept of density wrong, I can't see to be true: Let X = C[a,b] be the space of continuous functions on [a,b] with the maximum norm. Then the subset S of all functions u in X with u(a) > 0 is open, convex and dense in X.
Open and convex is trivial, but how is this subset dense in X? Take f(x) = -1, which is in X, and suppose S is dense in X. Then there exists a u in S such that max|u(x) - f(x)| < 1/2, by the definition of density. But since u(a) > 0, we get 1 < 1 + u(a) = u(a) - (-1) = u(a) - f(a) = |u(a) - f(a)| ≤ max|u(x) - f(x)| < 1/2, which gives 1 < max|u(x) - f(x)| < 1/2, a contradiction.
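For readability, here is the same estimate written out as a compilable LaTeX display (the sup-norm notation \|u - f\|_\infty is my shorthand for the maximum norm here, not the book's notation):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% The contradiction: assume u \in S, i.e. u(a) > 0, and take f \equiv -1.
% \|\cdot\|_\infty below is shorthand for the maximum norm on C[a,b].
\[
1 < 1 + u(a) = u(a) - f(a) = \lvert u(a) - f(a) \rvert
  \le \max_{x \in [a,b]} \lvert u(x) - f(x) \rvert
  = \lVert u - f \rVert_\infty < \tfrac{1}{2},
\]
% which is impossible, so no u \in S lies within distance 1/2 of f,
% and hence S cannot be dense in X.
\end{document}
```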
What don't I understand? Thanks!