Orthogonal complement of gradient field?

  1. Apr 5, 2012 #1
    I am doing research in probability. I have found a probability distribution of a random variable X on the n-dimensional unit sphere. Let b be a smooth, Lipschitz vector field mapping the sphere to [itex]R^n[/itex]. I have also found that for every continuously differentiable function f mapping the sphere to [itex]R[/itex], the expectation of [itex]\nabla f\cdot b[/itex] is zero. I have a strong feeling that this implies b(X)=0 with probability 1, but I am not sure how to prove it.
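    In symbols, the condition is (as I read it)

    [tex]E\big[\nabla f(X)\cdot b(X)\big] = 0 \quad \text{for every continuously differentiable } f,[/tex]

    and the question is whether this forces [itex]P(b(X)=0)=1[/itex].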
     
    Last edited: Apr 5, 2012
  3. Apr 5, 2012 #2

    chiro

    Science Advisor

    Hey liguolong and welcome to the forums.

    The fact that X is a random variable defined over the n-dimensional unit sphere means that X has a pdf that is a function of n variables, and the CDF is the integral over that n-dimensional region, where the limits of integration are coupled (since you're integrating over the surface of the sphere). This implies a continuous distribution, which is important to note later on.

    You then have the constraint involving the gradient: its expectation is zero.

    Now, if you want to prove that b(X) = 0 with probability 1, you will need to prove two things: E[b(X)] = 0 and VAR[b(X)] = 0. If you prove these two things, then you will have shown that the only value b(X) can take (almost surely) is 0.
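    (A quick justification of that implication, applied to each coordinate [itex]b_i[/itex]: if E[b_i(X)] = 0 and VAR[b_i(X)] = 0, then Chebyshev's inequality gives, for every [itex]\varepsilon > 0[/itex],

    [tex]P\big(|b_i(X)| \geq \varepsilon\big) \leq \frac{\mathrm{Var}[b_i(X)]}{\varepsilon^2} = 0,[/tex]

    so [itex]P(b_i(X) = 0) = 1[/itex] for each i, and hence [itex]P(b(X) = 0) = 1[/itex].)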

    Now, your grad(f) . b = 0 implies that b is perpendicular to the gradient of f for every f. I haven't done vector calculus in a long time, but I think that for this to happen we need grad(b) . b = 0, since f includes b. This implies that f = 0. (I could be wrong, so please correct me if I am.)

    So if f is the zero function, then b(X) maps to 0 for every value, and hence E[b(X)] = 0 and VAR[b(X)] = 0, which means that P(b(X) = 0) should be one, as you suggested.

    I'm interested to hear if any of my reasoning is wrong; again, I haven't done vector calculus in a long time.
     
  4. Apr 5, 2012 #3
    The fact that E[b(X)] = 0 is easy: just take f = x_i for each i. However, I am not sure how to prove Var[b(X)] = 0.
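    (Spelling that step out: with f(x) = x_i, the gradient is the standard basis vector [itex]e_i[/itex], so the hypothesis gives

    [tex]0 = E\big[\nabla f(X)\cdot b(X)\big] = E\big[e_i\cdot b(X)\big] = E\big[b_i(X)\big] \quad \text{for each } i,[/tex]

    and hence E[b(X)] = 0.)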

    I am not sure what is meant by grad(b).b=0 since b is a vector field, and grad is only defined for scalar functions.
     
  5. Apr 5, 2012 #4

    chiro

    Science Advisor

    This is what I mean by grad(f)

    http://en.wikipedia.org/wiki/Del#Gradient
     
  6. Apr 5, 2012 #5
    OK, what I meant was that f doesn't include b, because b is a vector field, and grad maps scalar functions to vectors; it doesn't map vector functions to anything. Or do you mean E[grad(b_i) . b] = 0 for each i? In that case I don't see why it follows that b = 0.
     
  7. Apr 5, 2012 #6

    chiro

    Science Advisor

    It's nearly midnight here so I'll respond tomorrow if need be.
     
  8. Apr 5, 2012 #7
    OK, I was doing the wrong thing. I have found a counterexample, and what I was trying to prove makes absolutely no sense.
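    One counterexample of this kind (my own sketch, not necessarily the one found above): take X uniform on the unit circle and b(x, y) = (-y, x), the unit tangential field. For any continuously differentiable f, [itex]\nabla f\cdot b[/itex] is the derivative of f along the circle, so its average over the circle vanishes by periodicity, yet b never vanishes. A quick Monte Carlo check in Python (numpy assumed):

[code]
import numpy as np

# X uniform on the unit circle, b(x, y) = (-y, x).
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, size=1_000_000)
x, y = np.cos(theta), np.sin(theta)   # samples of X on the circle
bx, by = -y, x                        # tangential field b(X)

# A few test functions f, each given by its gradient evaluated at (x, y).
test_gradients = {
    "f = x":   (np.ones_like(x), np.zeros_like(x)),
    "f = y":   (np.zeros_like(x), np.ones_like(x)),
    "f = x*y": (y, x),
    "f = x^2": (2.0 * x, np.zeros_like(x)),
}
for name, (fx, fy) in test_gradients.items():
    print(name, " E[grad f . b] ~", np.mean(fx * bx + fy * by))

# b is clearly not 0 almost surely: its norm is identically 1.
print("E[|b(X)|^2] =", np.mean(bx ** 2 + by ** 2))
[/code]

    The first four printed averages are near zero (up to Monte Carlo error), while E[|b(X)|^2] = 1, so the gradient condition does not force b(X) = 0.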
     
  9. Apr 5, 2012 #8

    chiro

    Science Advisor

    The reason for the statement above is that f must include b, since f ranges over all of your smooth functions, of which b is one. This was the motivation for my argument. From this I conjectured that b must be the zero function.
     