foobster
Greetings, I'm new to the forums and just starting grad school. We recently had a homework problem to estimate the amount of time a pencil could stand on its tip without falling over. I remember an undergrad professor mentioning that he had been asked this problem on his orals so perhaps it is common and some of you have seen it before.
In any case, the desired solution was to first solve the classical equations of motion and then plug in initial conditions set by the uncertainty principle. You assume minimum uncertainty, take the values of \theta and \dot\theta to be approximately equal to their uncertainties, and then optimize the ratio of \dot\theta to \theta to maximize the time before the pencil falls over. This method yields something on the order of a few seconds.
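For concreteness, here's a quick numerical sketch of how I understand that optimization, using made-up pencil numbers (the mass and length are my own assumptions, not from the problem statement). The linearized equation of motion for a uniform rod pivoting on its tip is \ddot\theta = \omega^2 \theta with \omega^2 = 3g/2L, so the solution has a growing mode \propto e^{\omega t}; the fall time is maximized by minimizing the growing-mode coefficient subject to the minimum-uncertainty constraint:

```python
import math

hbar = 1.054571817e-34  # J*s

# Assumed pencil parameters (hypothetical values)
m = 0.01   # mass, kg
L = 0.18   # length, m
g = 9.81   # m/s^2

I = m * L**2 / 3                     # moment of inertia about the tip
omega = math.sqrt(3 * g / (2 * L))   # growth rate: theta'' = omega^2 * theta

# Linearized solution: theta(t) = theta0*cosh(wt) + (thetadot0/w)*sinh(wt),
# which grows like (theta0 + thetadot0/w)/2 * e^{wt} at late times.
# Minimum uncertainty gives I * theta0 * thetadot0 ~ hbar/2 (taking the
# values ~ their uncertainties).  By AM-GM the growing-mode coefficient
# is minimized when thetadot0 = omega * theta0, which fixes:
theta0 = math.sqrt(hbar / (2 * I * omega))

# Time for theta to grow from theta0 to O(1) radian:
t_fall = math.log(1.0 / theta0) / omega
print(f"optimal theta0 ~ {theta0:.2e} rad, fall time ~ {t_fall:.1f} s")
```

With these numbers I get a fall time of a few seconds, consistent with the expected answer.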
The superficial problem that I have with this is that \dot\theta and \theta are both assumed to be positive. I'm alright with approximating it as one-dimensional and saying that \dot\theta and \theta will be on the order of \sigma away from their centers (in this case 0), but shouldn't they be equally likely to have opposite signs as the same sign? If they had opposite signs and you maximized the fall time based on the classical equations, it would be infinite.

The deeper problem I have is that I don't understand how you can just put the uncertainty in the initial conditions. I'm trying to understand how uncertainty affects time evolution, but I'm up against a wall here. I could almost see something like drawing values of p and x from their distributions, evolving them classically for some small time, drawing new values from their distributions, and so on. I know that isn't correct, but is there any way remotely like this to think about it?

Most of the course so far has been devoted to pure math, and we're only just starting to see anything remotely physical. Sorry if my question is naive, but I would really appreciate any insight that anyone has to offer.

Thanks.
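To make my sign question concrete, I tried the closest thing to the sampling picture I could think of: draw the initial conditions once from independent Gaussians of minimum-uncertainty width and evolve each draw classically, with no re-drawing along the way (all the pencil numbers are again my own assumptions). Opposite-sign draws do give longer fall times, but an exact cancellation has probability zero, so no sample balances forever:

```python
import math
import random

random.seed(0)
hbar = 1.054571817e-34  # J*s

# Hypothetical pencil parameters (same assumptions as above)
m, L, g = 0.01, 0.18, 9.81
I = m * L**2 / 3
omega = math.sqrt(3 * g / (2 * L))

# Split the minimum-uncertainty product sigma_theta * sigma_thetadot = hbar/(2I)
# so that both normal modes have equal width.
sigma_theta = math.sqrt(hbar / (2 * I * omega))
sigma_thetadot = omega * sigma_theta

def fall_time(theta0, thetadot0):
    """Time for the linearized solution to reach |theta| ~ 1 rad.

    theta(t) = A e^{wt} + B e^{-wt} with A = (theta0 + thetadot0/w)/2.
    Since A is tiny, the decaying mode is negligible long before
    |theta| ~ 1, so t ~ ln(1/|A|)/w.  Opposite-sign draws make A
    small and the fall time long, never infinite.
    """
    A = 0.5 * (theta0 + thetadot0 / omega)
    return max(math.log(1.0 / abs(A)) / omega, 0.0)

times = sorted(fall_time(random.gauss(0, sigma_theta),
                         random.gauss(0, sigma_thetadot))
               for _ in range(20000))
print(f"median fall time ~ {times[len(times) // 2]:.1f} s")
```

The median over draws comes out at a few seconds, so the signs don't seem to ruin the estimate, but I still don't know whether this sampling picture is a legitimate way to think about quantum uncertainty.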