Lebesgue measure in lower dimensional space

Shaji D R
The context is that I am reading the proof that Lebesgue measure is rotation invariant.

Let X be a k-dimensional Euclidean space. T is a linear map whose range is a subspace Y of lower dimension. I want to prove that m(Y) = 0, where m is the Lebesgue measure on X.

How can this be proved?

Consider the special case k = 2, and take the subspace spanned by (1,1) (the line at 45 degrees to the x-axis). Call this subspace Y. How can we prove that m(Y) = 0 without rotating Y?

Also, is there a simple (and rigorous!) proof that a rotation is a linear transformation in k-dimensional Euclidean space?
 
It's unclear to me what you can use. Can you say which book and which theorem you're doing?
 
"Real and Complex Analysis" by Walter Rudin, 3rd edition, Theorem 2.20, pages 50-52.
The sentence troubling me is "If the range of T is a subspace Y of lower dimension, then m(Y) = 0..."
 
Lemma: If ##(X,\mathcal{F},\mu)## is a ##\sigma##-finite measure space and ##\{A_i~\vert~i\in I\}## is a family of sets such that
1) each ##A_i\in \mathcal{F}##,
2) ##\mu(A_i)>0## for each ##i##,
3) the ##A_i## are disjoint,
then ##I## is countable.

Proof: By ##\sigma##-finiteness, we can write
$$X=\bigcup_{j\in J} X_j$$
with ##J## countable, each ##X_j\in \mathcal{F}##, and each ##\mu(X_j)<+\infty##. I claim that the set
$$I_j = \{i\in I~\vert~\mu(X_j\cap A_i)>0\}$$
is countable. Indeed, if ##i_1,\dots,i_n## are distinct indices in ##I## such that ##\mu(X_j\cap A_{i_k})\geq \varepsilon>0##, then, since the ##A_i## are disjoint,
$$n\varepsilon\leq \sum_{k=1}^n \mu(X_j\cap A_{i_k})\leq \mu(X_j),$$
and thus ##n\leq \mu(X_j)/\varepsilon##. So the set
$$\{i\in I~\vert~\mu(X_j\cap A_i)>\varepsilon\}$$
must be finite. Thus
$$I_j = \{i\in I~\vert~\mu(X_j\cap A_i)>0\} = \bigcup_{n>0} \{i\in I~\vert~\mu(X_j\cap A_i)>1/n\}$$
is countable, as a countable union of finite sets. This proves the claim.
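The counting step can be sketched numerically (a hypothetical illustration, not part of the original argument): inside a set of finite measure M, at most floor(M/ε) pairwise disjoint sets of measure at least ε can fit, since their measures add up.

```python
import math

# Illustration of the bound n <= mu(X_j)/epsilon: pairwise disjoint
# sets of measure >= eps inside a set of total measure M satisfy
# n * eps <= M, hence n <= floor(M / eps).
def max_disjoint(M, eps):
    return math.floor(M / eps)

print(max_disjoint(1.0, 0.3))  # at most 3 sets of measure >= 0.3 fit in measure 1
```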

Now, if some ##i\in I## had the property that ##\mu(X_j\cap A_i)=0## for all ##j##, then
$$\mu(A_i)\leq \sum_{j\in J}\mu(X_j\cap A_i) = 0,$$
contradicting (2). Thus ##I = \bigcup_{j\in J} I_j##, which is countable as a countable union of countable sets. This proves the lemma.

Now, to prove the main result. Assume that ##V## is a subspace of ##\mathbb{R}^n## with ##\dim(V)<n##, and assume that ##\mu(V)>0##. Let ##x## be a vector not in ##V##, and consider the family of translates
$$\{\alpha x+V~\vert~\alpha\in \mathbb{R}\}.$$
These are pairwise disjoint: if ##\alpha\neq\beta## and the translates ##\alpha x + V## and ##\beta x + V## shared a point, then ##(\alpha-\beta)x\in V## and hence ##x\in V##. Each translate satisfies the criteria of the lemma (condition (2) holds by translation invariance). Thus ##\mathbb{R}## would be countable, which is a contradiction.
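For the special case of the diagonal line in the plane asked about earlier, there is also a direct covering argument, sketched numerically below (my own illustration, separate from the proof above): the segment of the line from (0,0) to (1,1) lies in N axis-parallel squares of side 1/N, of total area 1/N.

```python
# The diagonal segment from (0, 0) to (1, 1) is covered by the N squares
# [k/N, (k+1)/N] x [k/N, (k+1)/N] for k = 0, ..., N-1, whose total area
# is N * (1/N)^2 = 1/N, which tends to 0 as N grows.
def covering_area(N):
    return N * (1.0 / N) ** 2

for N in (10, 100, 1000):
    print(N, covering_area(N))
```

Repeating this for every unit segment of the line expresses the whole line as a countable union of measure-zero sets.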
 
Your proof is tough!
It looks like there are some small mistakes in the proof. For example, "for all i" should be replaced by "for all j", and one "> epsilon" should be replaced by "> 1/n".

One major doubt: the proof seems to assume that the measure of A is finite.
Consider my example V = {all linear combinations of (1,1)}. If you take the union of all V + alpha.x, the union will be the entire xy-plane, and its measure is infinite.

Actually, I am not sure about what I wrote. That "all V + alpha.x are disjoint" I could prove myself.
I think I should learn LaTeX. Please give me some time.

Even if what I wrote is foolish, please reply to me.
 
Shaji D R said:
Your proof is tough!
It looks like there are some small mistakes in the proof. For example, "for all i" should be replaced by "for all j", and one "> epsilon" should be replaced by "> 1/n".

Please excuse me for errors in my proof. I tried to fix it as well as I could. I did not quite see where to replace the >1/n though.

One major doubt: the proof seems to assume that the measure of A is finite.

Actually, there shouldn't have been an A in the first place. Please read the proof again now.
 
See the phrase "is countable as a countable union of finite sets". In the step above that, I think the epsilon on the right side should be replaced by 1/n, and the union on the right side should be taken over n. (I am not sure.)

Thank you very much for the proof. But how was I supposed to prove this while reading Rudin's text?
Would you please tell me how you got the idea for this proof? Do you suggest any other book?

Once again, thank you very much.
 
Shaji D R said:
See the phrase "is countable as a countable union of finite sets". In the step above that, I think the epsilon on the right side should be replaced by 1/n, and the union on the right side should be taken over n. (I am not sure.)

That's actually correct as it stands, but if you prefer this ##1/n## version, that's correct also. I'll change it to your suggestion.

Thank you very much for the proof. But how was I supposed to prove this while reading Rudin's text?
Would you please tell me how you got the idea for this proof? Do you suggest any other book?

Once again, thank you very much.

To be honest, if you're reading Rudin, then you're basically asking for this. It's not that you're stupid; it's that Rudin is a horrible writer. He will occasionally put in statements like that which are highly nontrivial. I would never recommend Rudin to anybody, except perhaps as a reference book.

Here's a list of much better books:
- "Probability and Measure" by Billingsley. Don't let the title fool you into thinking this is only about probability. He does quite a lot of measure theory in chapters that tend to be quite disjoint from the probability chapters. It's a masterfully written book, and it's the one I used to construct the proof in this thread.

- "Real and Functional Analysis" by Lang. Some Lang books are horrible, others are very good. This is one that I like very much.

- "Real Analysis" by Yeh. Contains a lot of material on measure theory.
 
Shaji D R said:
Thank you very much for the proof. But how was I supposed to prove this while reading Rudin's text?
This is no doubt a frequently asked question when reading Rudin... :smile:

I also highly recommend Real Analysis (2nd edition) by Bruckner, Bruckner, and Thompson. This is a great, very readable book if you want to really understand measure theory rather than treating it like a nuisance as Rudin seems to do.
 
  • #10
Shaji D R:
Regarding the proof that a rotation is a linear map: consider what happens when you have a point (x,y) = (cos θ, sin θ) and you rotate it into the point (x',y') = (cos(θ+ψ), sin(θ+ψ)). Expand cos(θ+ψ) and sin(θ+ψ), and try writing (x',y') in terms of (x,y) using a matrix. You can then use the Jacobian of the matrix... EDIT: Sorry, I misread; this is an argument to show that a rotation preserves the measure, not that the image in lower dimensions has measure zero; the determinant is not even defined here. Let me think it through.
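The angle-addition step can be checked numerically (a quick sketch; the function name is my own):

```python
import math

# cos(t + p) = cos t cos p - sin t sin p and
# sin(t + p) = sin t cos p + cos t sin p, so
# (x', y') = (x cos p - y sin p, x sin p + y cos p): a linear map of (x, y).
def rotate(x, y, psi):
    return (x * math.cos(psi) - y * math.sin(psi),
            x * math.sin(psi) + y * math.cos(psi))

theta, psi = 0.7, 0.4
x, y = math.cos(theta), math.sin(theta)
xp, yp = rotate(x, y, psi)
print(abs(xp - math.cos(theta + psi)) < 1e-12,
      abs(yp - math.sin(theta + psi)) < 1e-12)  # True True
```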
 
  • #11
We can assume some familiarity with matrices and determinants. In the xy-plane, rotation is definitely a linear mapping; the z-axis will be the axis of rotation:
x' = x cos ø - y sin ø, y' = x sin ø + y cos ø, and z' = z. This can be expressed as the matrix of a linear transformation (rotation by an angle ø in the xy-plane about the origin).

Now take k-dimensional Euclidean space. Here we can define a plane of rotation, and we can find 2 orthogonal vectors of unit length in that plane. If we assume the existence of k-2 further orthogonal vectors, the rotation can be defined as a linear transformation in which the other k-2 coordinates remain unchanged. But how do we construct the other k-2 vectors? It may appear to be a matter of solving some linear equations, but I couldn't work it out. Maybe Theorem 9.3(c) of "Principles of Mathematical Analysis" by Rudin can help? I reduced the problem to this: given two orthogonal column vectors of a k×k matrix, how does one construct the other k-2 orthogonal column vectors?

Maybe a solution exists without referring to matrices and determinants. Actually, I am totally confused.
 
  • #12
Shaji D R said:
We can assume some familiarity with matrices and determinants. In the xy-plane, rotation is definitely a linear mapping; the z-axis will be the axis of rotation:
x' = x cos ø - y sin ø, y' = x sin ø + y cos ø, and z' = z. This can be expressed as the matrix of a linear transformation (rotation by an angle ø in the xy-plane about the origin).

Now take k-dimensional Euclidean space. Here we can define a plane of rotation, and we can find 2 orthogonal vectors of unit length in that plane. If we assume the existence of k-2 further orthogonal vectors, the rotation can be defined as a linear transformation in which the other k-2 coordinates remain unchanged. But how do we construct the other k-2 vectors? It may appear to be a matter of solving some linear equations, but I couldn't work it out. Maybe Theorem 9.3(c) of "Principles of Mathematical Analysis" by Rudin can help? I reduced the problem to this: given two orthogonal column vectors of a k×k matrix, how does one construct the other k-2 orthogonal column vectors?

Maybe a solution exists without referring to matrices and determinants. Actually, I am totally confused.

Are you looking for the Gram-Schmidt process? http://en.wikipedia.org/wiki/Gram–Schmidt_process
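A minimal Gram-Schmidt sketch (the function name and tolerance are my own choices): feed in the two given unit vectors followed by the standard basis; orthonormalization drops the dependent vectors and returns the k - 2 missing directions.

```python
def gram_schmidt(vectors, tol=1e-10):
    # Orthonormalize `vectors` in order, dropping (near-)dependent ones.
    basis = []
    for v in vectors:
        w = list(v)
        for b in basis:  # subtract the projection onto each basis vector
            dot = sum(wi * bi for wi, bi in zip(w, b))
            w = [wi - dot * bi for wi, bi in zip(w, b)]
        norm = sum(wi * wi for wi in w) ** 0.5
        if norm > tol:
            basis.append([wi / norm for wi in w])
    return basis

# Extend two orthonormal vectors spanning a plane in R^3 to a full basis:
r = 2 ** -0.5
u, v = (r, r, 0.0), (r, -r, 0.0)
std = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
basis = gram_schmidt([u, v] + std)
print(basis[2])  # the remaining orthogonal direction, here (0, 0, 1)
```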
 
  • #13
You are correct. Thank you very much!
 