DIY: Simulate Atoms and Molecules

In summary, the original poster is interested in creating a simplified simulator for molecular modeling based on quantum mechanics. They have been disappointed by the currently available free options and have had difficulties with a particular program called Ghemical. They have a background in EE and have successfully simulated fields using numerical methods. Their goal is to replace traditional QM methods with the pilot-wave interpretation of the Schrodinger equation, and they have been inspired by a mechanical spinning-top model from the 1950s. They hope to be able to simulate effects such as the Bohr magneton and orbital transitions, but have encountered difficulties in understanding the mechanism of spontaneous emission.
  • #1
andrewr
I would like to see how far I can get writing some novel & simplified simulator code for molecular modeling based on QM.

------ Background follows ---- skip to the next post to get to the details of the project & the starter QM question.

My son took chemistry this last semester (high-school level) and, to help him out, I wanted to find a free molecular modeler that would let him visualize VSEPR-like images for molecules (pi/sigma bonds), and also for myself; I was severely disappointed. As I have some QM work that I may later publish (free), I don't want to enter into any kind of IP contract for such a program. E.g., most molecular modelers are either 1. more than $1K per seat or 2. proprietary and want signed papers ... although the methods and implementations have been around for nearly as long as computers.

This became acute a few weeks ago when I succeeded in doing an electrolytic process that I previously thought impossible because water would be decomposed; but it worked in spite of that. I desperately wanted to understand this chemical process, although I am not a chemist but an EE. (Chem was my weak point in college.) After a fairly diligent search, I discovered that the only program that was totally free in a sense usable to me was Ghemical, built on top of Jmol as a visualizer.

Sad to say, the code is extremely buggy in places; the Java version never got beyond the splash screen, and the C/Fortran version could not completely compile -- especially in the area of orbital interaction based on old Fortran code -- so some features were disabled in my build.

I still had the crude "optimize geometry" function, which I had hoped was close to VSEPR (van der Waals outlines...), and that would have been good enough -- but even that has serious bugs. E.g., when I built a test crystal whose bonds I knew, the optimization turned it into a 2D object with atoms that would practically have to undergo nuclear fusion to be in the places they were optimized to...

I am not averse to fixing code -- I have done quite a bit of it, including kernel drivers -- but I would rather spend my time on something other than old algorithms such as Hartree-Fock, Slater determinants, and all the well-known problems of these variational methods, which make my eyes glaze over.
Nobody yet has a flawless simulator...

So I would like to write my own code, both for my son and I -- but do not understand QM sufficiently to complete the project, and would like some feedback on problems I see and perhaps a heads up on any issues where I show myself to be blatantly ignorant of something. (As a BSEE I solve solid state QM problems regularly, but molecular modeling is a bit different ... )

I have figured out how to numerically simulate a field, such as an E-M field, in ways that are reasonably accurate and efficient (sparse matrix). I wrote an electronics simulator using these techniques and am able to simulate circuit functions and nonlinear devices with extreme accuracy. I know this is sufficient as a foundation for building a virtual "space" for watching EM in the classical Maxwell sense from discretely located electrons and protons. So, I would like to experiment with the next possible step -- replacing Hartree-Fock approximations, Slater determinants, etc. with the pilot wave interpretation of the Schrodinger equation and a statistical sampling of a (perhaps modified) Schrodinger equation.
As inspiration for trying this, I came across an old textbook (1950's) showing all of the QM orbitals S,P,D,F, modeled *correctly* using a mechanical spinning top -- and the accuracy was surprising to me -- so I do believe that a semi-classical simulation can produce fairly accurate results.
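For what it's worth, here is a minimal sketch of the kind of sparse-matrix field solve I mean (a 2D electrostatic Poisson solve on a uniform grid using SciPy; the grid size, spacing, and the two "point" charges are made-up example values, not anything from my actual simulator):

```python
# Minimal 2D Poisson solve, del^2 V = -rho/eps0, on a uniform grid with
# V = 0 implied on the boundary.  Illustrative only: the grid size, spacing,
# and the two "point" charges below are made-up values.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

N = 101                      # grid points per side
h = 1e-10                    # grid spacing [m] (0.1 nm, arbitrary)
eps0 = 8.854e-12             # vacuum permittivity [F/m]
e = 1.602e-19                # elementary charge [C]

# Charge density: a +e and a -e charge dropped onto single cells.
rho = np.zeros((N, N))
rho[40, 50] += +e / h**3
rho[60, 50] += -e / h**3

# 5-point finite-difference Laplacian as a sparse matrix.
main = -4.0 * np.ones(N * N)
offs = np.ones(N * N - 1)
offs[np.arange(1, N * N) % N == 0] = 0.0      # no coupling across row ends
offN = np.ones(N * N - N)
A = sp.diags([main, offs, offs, offN, offN],
             [0, 1, -1, N, -N], format="csr") / h**2

V = spla.spsolve(A, -rho.ravel() / eps0).reshape(N, N)
print("potential at the midpoint between the charges:", V[50, 50], "V")
```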
 
  • #2
My first goal is a simple electron simulation, and perhaps atomic simulation of Hydrogen and Helium.
In particular, I would like to be able to simulate effects of the Bohr magneton, orbital transitions, and the Stern-Gerlach experiment, which ought to work for hydrogen but fail for helium.

For now, I am looking at the Bohr magneton as an effect proceeding from a point (or ring) that affects the EM field -- I am simply using the definition of the energy of a dipole magnet to simulate the electron spin's effect using a quasi-ZPE technique, and using the pilot wave idea with QM/Schrodinger's equation to decide where the electron may statistically go.
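As a sanity check on the numbers involved, this is roughly all I mean by "the definition of energy in a dipole magnet" -- the field value is just an example:

```python
# Energy of a magnetic dipole in a field: U = -m . B.  For the electron spin
# the moment is (very nearly) one Bohr magneton; the 1 T field is an example.
import numpy as np

h_bar = 1.0546e-34                       # J s
e     = 1.602e-19                        # C
m_e   = 9.109e-31                        # kg

mu_B = e * h_bar / (2.0 * m_e)           # Bohr magneton [J/T]
B    = np.array([0.0, 0.0, 1.0])         # example field: 1 T along z

for sign in (+1, -1):                    # spin "up" / "down" along z
    m = sign * mu_B * np.array([0.0, 0.0, 1.0])
    U = -np.dot(m, B)
    print(f"spin {'+z' if sign > 0 else '-z'}: U = {U:+.3e} J "
          f"({U / e * 1e6:+.1f} micro-eV)")

print(f"Zeeman splitting 2*mu_B*B = {2 * mu_B / e * 1e6:.1f} micro-eV")
```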

I know how to use the Boltzmann distribution to determine the probability distribution among orbitals, but I am stumped on orbital transitions -- since I am using a dynamic, time-based simulation.
I also hope that the speed-of-light effects included in my EM field may yield results consistent with relativity ... but I'll settle for classical if not.
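For concreteness, the Boltzmann piece looks something like this in my head (hydrogen level energies from the simple -13.6/n^2 formula, degeneracies 2n^2, and an arbitrary example temperature):

```python
# Boltzmann populations of the first few hydrogen levels,
# p_n proportional to g_n * exp(-E_n / kT), with g_n = 2 n^2 and
# E_n = -13.6 eV / n^2.  The temperature is an arbitrary example.
import numpy as np

k_B = 8.617e-5            # Boltzmann constant [eV/K]
T   = 10000.0             # example temperature [K]

n = np.arange(1, 6)
E = -13.6 / n**2          # level energies [eV]
g = 2 * n**2              # degeneracies

w = g * np.exp(-(E - E[0]) / (k_B * T))   # weights, referenced to n=1
p = w / w.sum()

for ni, pi in zip(n, p):
    print(f"n={ni}: population fraction {pi:.3e}")
```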

What I envision is that in the Schrodinger equation, individual "orbital" states can be computed given the immediately surrounding EM field. Ignoring the motion of the proton for discussion (but not in simulation), the electron can be said to be in a state whose probability distribution is known (e.g., at statistical sample time t) for the given EM field, regardless of how many particles created that field.
The self-field of the electron/proton/etc. is easily removed for that calculation.

One can, in temporary memory, vary the total energy of an electron in, say, the 1s state to the inter-orbital state 1/2(1s + 2s) to determine when the EM field would permit the electron to change to the 2s state (although no change in electron position would be recorded). I intend to experiment with various algorithms for conserving energy... and am not concerned about that yet.

I know from semiconductor physics how to compute the probability of a system being in various states, but I have no idea what the mechanism/mathematics of spontaneous emission ought to look like.
I haven't found any mathematics which I understand that explains how to estimate/compute the time an electron will stay in an excited state -- and what dependence that has on the EM field around it.

In reality this may not be a problem, as I expect the EM field will always be changing and the electron may simply fall naturally to lower levels at the appropriate times statistically. But I have no way of verifying whether a repeated experiment is right on average without some kind of estimate.

So, I'll stop here, as I have no idea how complex the answer will be to this first question: what determines the lifetime of an excited state?

--Andrew.
 
  • #3
Probably I am not helping you out with this reply, but I am also interested in developing my own C code for simulating quantum physics. Up to now I have succeeded in performing classical many-body gravitation and classical Ising models, but I would like to jump to the quantum realm. My wife used Gaussian software to perform chemical simulations in her PhD thesis, but I would like to build my own code from scratch.

Just when figuring out what to do in my mind, I found an important problem. I would like to get a dynamic picture as the result of the simulation -- I mean, the position of the electron at different times. But, according to quantum mechanics, getting the position implies collapsing the wave function. That leads me to a Zeno effect, or something like it, because the simulator is constantly performing position measurements on the system. I am concerned about that, but I am not sure if my point is correct or if I am missing something important.
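To make the concern concrete, here is a minimal sketch of what I mean by pulling positions out of an evolving wavefunction (a free 1D Gaussian packet with the split-step Fourier method; the sampling is for display only and is never fed back into the evolution, which is exactly the step I am unsure counts as a "measurement" -- all numbers are arbitrary):

```python
# Free 1D Gaussian wave packet evolved with the split-step Fourier method.
# Positions are drawn from |psi|^2 purely for display; the stored psi is
# never modified, so no "measurement" enters the dynamics.  All numbers
# (units, packet width, step sizes) are arbitrary.
import numpy as np

hbar, m = 1.0, 1.0
N, L = 1024, 200.0
x  = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k  = 2 * np.pi * np.fft.fftfreq(N, d=dx)

sigma, k0 = 2.0, 1.0                      # initial width and mean momentum
psi = np.exp(-(x**2) / (4 * sigma**2) + 1j * k0 * x)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

dt, steps_per_frame, frames = 0.05, 40, 5
kinetic = np.exp(-1j * hbar * k**2 / (2 * m) * dt)   # free particle: V = 0

rng = np.random.default_rng(0)
for frame in range(frames):
    for _ in range(steps_per_frame):
        psi = np.fft.ifft(kinetic * np.fft.fft(psi))
    prob = np.abs(psi)**2 * dx
    prob /= prob.sum()
    samples = rng.choice(x, size=5, p=prob)          # display only
    print(f"t={dt * steps_per_frame * (frame + 1):6.1f}  "
          f"<x>={np.sum(x * prob):6.2f}  sampled x: {np.round(samples, 1)}")
```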

Thanks.
 
  • #4
Heisenburg's principle applies to actual measurements. He himself, in one document I read, admitted that once the experiment was over his principle did not by itself prevent one from determining where the measured item was in the past. (That is not to say that it is possible to always determine it in every kind of experiment.) However, the Heisenburg uncertainty principle does destroy the ability to predict future positions after a measurement is made. There are philosophical issues which I really don't have answers to. But a hand-grenade simulator does not have to be perfect, just close.

I would point you back to the physical top model I mentioned which generates the different orbitals of quantum mechanics. The device used time-lapse photography to take pictures of a light which a specially designed gyrating top manipulated. In this way, the more often the top was in a certain position, the brighter that point was on the film. "Modern College Physics," Fifth edition, Harvey E. White, 1966, (C) Litton Educational Publishing, published by Van Nostrand Reinhold Company, N.Y., N.Y., p. 562. The top produced reasonably accurate pictures all the way to the 6**2F[7/2] orbital. Since the model is macroscopic, Heisenburg does not really apply. Considering the activity of the top faithfully reproduced the orbitals -- and also considering that the uncertainty at the atomic level is high enough to make the precision of individual measurements impossible -- one is left with the conclusion that even if the model is not working identically with nature, it is nonetheless a good enough approximation to solve problems of interest.

I analyzed the Schrodinger equation some years ago, substituting in a classical velocity interpretation for probability, and wanted to see how that would correlate with a periodic oscillator and other such standard fare. I especially wanted to do it because I wanted to see if Schrodinger's equation took into account correlation from repeating orbits (assuming a non-stationary one). The answer was a surprising no. The probability does not include such correlation, but appears to represent the probability of a single-shot experiment through each point. It is a diffusion equation, and I don't know why it doesn't contain any useful information about possible repeating "statistical" orbits -- except that the equation somehow implies that all paths (not speeds) are equally likely. Again, this is philosophical -- not so much practical, for it may be simply an artifact of the equation -- but it would seem that in the 1D particle-in-a-box, it is perfectly legitimate to say that the electron is under one "hump" of probability *only* and never goes through the zero-probability points (infinite speed exceeds c) -- but the odds of the electron being in any one of the humps are equally likely. The alternate interpretation, and the more knee-jerk one, is to assume the electron travels by tunneling from one probability hump to another since they are all there in the solution.
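To make the "humps" picture concrete, the standard textbook particle-in-a-box density is all I mean here (nothing non-standard in this sketch):

```python
# |psi_n(x)|^2 for the 1D infinite square well of width L: state n has n
# equal "humps" separated by n-1 interior nodes where the density is zero.
import numpy as np

L = 1.0
x = np.linspace(0.0, L, 11)               # coarse grid, just to print a table

for n in (1, 2, 3):
    rho = (2.0 / L) * np.sin(n * np.pi * x / L) ** 2
    print(f"n={n}: " + "  ".join(f"{r:4.2f}" for r in rho))
    # Probability of being found in the first hump (0 to L/n) is exactly 1/n.
    xs = np.linspace(0.0, L / n, 2001)
    dens = (2.0 / L) * np.sin(n * np.pi * xs / L) ** 2
    p_first = np.sum(dens) * (xs[1] - xs[0])          # simple Riemann sum
    print(f"     P(first hump) = {p_first:.3f}   (expect {1 / n:.3f})")
```

Each hump carries exactly 1/n of the total probability, which is the equal-likelihood point I was making.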

I am not familiar with Ising models, so I can't really comment on what the changes will be like. In the end, though, the Schrodinger equation looks to me like a 3D version of Bohr's equation -- that is, instead of saying the nodes along a classical trajectory must be exactly a wavelength multiple, Schrodinger's appears to say that the phase loop around *ANY* path must be 360 degrees, with the local "wavelength" varying according to the EM field. Bohr's orbits, being circular and therefore tracing out a constant energy, had only a single value of wavelength; so Schrodinger simply generalized it and perhaps got rid of degenerate cases. (E.g., 1s does not orbit Kepler-like, but hovers based on the wave equation. Don't forget a Bohr magnetic spin effect is also involved in the hovering.) I am interested to see what happens.
 
  • #5
andrewr said:
visualize VSEPR like images for molecules (PI/Sigma bonds)

VESPR doesn't involve pi and sigma bonds, that's valence-bond theory.

most molecular modelers are either 1. $1K> / seat or 2. proprietary, and want signed papers

That doesn't mean others don't exist, e.g. http://pyquante.sourceforge.net/. They require a license, but don't have any particularly draconian restrictions. Mostly they want to keep track of who is using their software, and make sure you cite them if you do.

... although the methods and implementations have been around for nearly as long as computers.

Longer.

This became acute a few weeks ago when I succeeded in doing an electrolytic process that I previously thought impossible because water would be decomposed; but it worked in spite of that. I desperately wanted to understand this chemical process, although I am not a chemist but an EE. (Chem was my weak point in college.)

QC software won't help you if you don't already know a great deal about what you're studying. You need a solid understanding of the chemistry involved before you can do calculations (you also need a solid understanding of the methods). Whether an electrolytic reaction can occur can be determined from standard electrode potentials.

I would rather spend my time on something other than old algorithms such as Hartree-Fock, Slater determinants, and all the well-known problems of these variational methods, which make my eyes glaze over. Nobody yet has a flawless simulator...

Which "well-known problems"?
And what's wrong with Hartree-Fock? It's been the basis of the majority of QC methods for the last 80 years, and there have been good reasons for it. (Nor is it an algorithm, btw; it's a physical/mathematical approximation. SCF is an algorithm.) What do you mean by 'flawless simulator'? There are quantum-chemical methods such as CI and CC (which are based on Hartree-Fock) which are exact, in principle. You can hardly do better than that, as far as accuracy is concerned. (Scaling and speed are another matter.)

So I would like to write my own code, both for my son and I -- but do not understand QM sufficiently to complete the project

The best recommendation I can give you is to get some textbooks and start studying QM and QC, work your way up to the current state-of-the-art in the field, and then you can start thinking about how to improve on the existing methods.
 
  • #6
alxm said:
VESPR doesn't involve pi and sigma bonds, that's valence-bond theory.

That's VSEPR, not VESPR; you just blew it.
Looking in my chem book and several others, either ball-and-stick sketches or overlapping p orbitals are drawn in the VSEPR chapter. What I said still stands: VSEPR-like sketches.
If you wish to quibble, I never said sigma and pi orbitals are officially part of VSEPR theory -- I don't believe you are denying that sigma and pi bonds exist in molecules, and that VSEPR theory is used to sketch molecules. So it seems you are concerned over the accuracy of the sketching or something, or just didn't notice things like:

However, molecular mechanics force fields based on VSEPR have also been developed
http://en.wikipedia.org/wiki/VSEPR_theory

And how does one sketch these force fields? Is it forbidden to use a pi-LIKE bond sketch?
Or are you demanding I dot my i's and cross my t's because my statements might not fit your categories perfectly, and my kind hint that a simile/analogy was meant was missed by you?

That doesn't mean others don't exist, e.g. http://pyquante.sourceforge.net/. They require a license, but don't have any particularly draconian restrictions. Mostly they want to keep track of who is using their software, and make sure you cite them if you do.

Draconian? I don't recall saying that ... So I'll spin that for fun -- I was merely indicating that I didn't want to offer temptation to anyone. The very fact that someone is worried about who is using their software and wanting credit implies that very temptation.
Do you trust me not to use your name and address against you *in any way* if I somehow am upset BY you and have these pieces of information? Do you want to give me your name and address with no strings attached to do with whatever I want? (I'm not Dracula, that's for sure.)

I'll look into it again ... but I think PyQuante was the one my friend tried and it didn't work.
As far as I know, DFT algorithms are probably worse than Hartree-Fock implementations -- they are designed to reduce computational load, not improve accuracy.

Despite recent improvements, there are still difficulties in using density functional theory to properly describe intermolecular interactions, especially van der Waals forces (dispersion);
http://en.wikipedia.org/wiki/Density_functional_theory
Longer.

The algorithms I was speaking of are computer-based, hence they couldn't have been around longer -- there was at least the time for an implementation to be coded after the invention of the computer. Do you not count the Babbage machine as a computer? It does run a crude program.
Oh, am I being unfair?

QC software won't help you if you don't already know a great deal about what you're studying. You need a solid understanding of the chemistry involved before you can do calculations (you also need a solid understanding of the methods). Whether an electrolytic reaction can occur can be determined from standard electrode potentials.

Point sort of well taken. I came up with the generation of hydrogen as the result of electrolysis -- based on the standard electrode potentials, even corrected for temperature and concentration (in water).
What surprised me was a difficult-to-discover chemical reaction that someone else had found which made it happen in spite of this problem. In most experimental attempts I get hydrogen-contaminated product. The success of this method delights me, and I would like to explore why it works; I'm not sure if it has to do with complexing, or the particular acids used -- especially because I had to substitute a chemical in the reaction that wasn't in the original experiments, for environmental reasons.
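For the record, the "corrected for temperature and concentration" part is just the Nernst equation; a minimal sketch with placeholder numbers (a generic two-electron couple, not the actual chemistry I was running):

```python
# Nernst equation: E = E0 - (R*T / (n*F)) * ln(Q).
# The standard potential, n, and the Q values below are placeholders,
# not the reaction I was actually running.
import math

R = 8.314        # J/(mol K)
F = 96485.0      # C/mol

def nernst(E0, n, Q, T=298.15):
    """Half-cell / cell potential [V] for standard potential E0,
    n electrons transferred, reaction quotient Q, temperature T [K]."""
    return E0 - (R * T / (n * F)) * math.log(Q)

E0 = 0.34        # example: the Cu2+/Cu standard reduction potential [V]
for Q in (0.001, 1.0, 1000.0):
    for T in (298.15, 298.15 + 60.0):     # room temperature and a warmer bath
        print(f"Q={Q:8.3f}  T={T:6.1f} K  E={nernst(E0, 2, Q, T):+.3f} V")
```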

Not only did the reaction succeed, but it went beyond even the original experiments in quality. I found that I could produce a product *and* the expected post-processing from the literature was unnecessary in my formulation.

I suppose I could try to figure out what happened using a slide rule, paper, and other easily available methods -- but then I would be likely to make a mistake, and it would take a long time to redo the calculation. I think it would be nicer to have my computer do it for me.

As far as a solid understanding ... how would you know?

Which "well-known problems"?

I rest my case.

And what's wrong with Hartree-Fock? It's been the basis of the majority of QC methods for the last 80 years, and there have been good reasons for it. (Nor is it an algorithm, btw; it's a physical/mathematical approximation. SCF is an algorithm.)

It is an algorithm as well. Many mathematical statements are algorithms -- take Sigma notation for example. If you want to quibble over distinctions which don't make any difference... my brother was a math major, he likes to do it too, sometimes -- but less often now. Perhaps you would like to be introduced?

What do you mean by 'flawless simulator'? There are quantum-chemical methods such as CI and CC (which are based on Hartree-Fock) which are exact, in principle. You can hardly do better than that, as far as accuracy is concerned. (Scaling and speed is another matter)

Oh, they are *exact* ... then either they are not solvable in many cases ... or the people who implement computer programs based on them simply can't get them to work in every single case ... or they are not free and require coding anyway. Am I mistaken? I am not omniscient, so since I get frustrated with other people's garbage, is there a problem with my attempting something new?

Historically, I can say nothing pisses me off more than spending $5000+ on a piece of software, and having to contact tech support only to have them tell me, "the solution is -- just don't do that."
I am speaking from experience. Somehow the "spend $1000 on a gamble" approach just isn't appealing either.

The best recommendation I can give you is to get some textbooks and start studying QM and QC, work your way up to the current state-of-the-art in the field, and then you can start thinking about how to improve on the existing methods.
I have studied variational calculus since the 8th grade -- my original purpose in doing so was to understand Schrodinger's equation.

I have a hard time believing what you say here is anything but double talk.
If these methods worked perfectly or exactly as you seem to be selling them, and if they really covered what I desire to simulate -- in the sense that I am proposing -- then they would definitively predict what happens in, say, Bell-inequality-based experiments. Considering most of the physics community is still arguing over that, including ZPE (with known flaws) as one objection, and no definitive set of experiments has yet buried the competition ... I see no point in arguing over what is better, or what is "perfect".

I'm interested in an experimental simulation method; the one I have chosen is just different -- it is not likely perfect either -- but I would like to try it. Are you interested in actually addressing the thread questions I am proposing, or something else, now that I've cleared the air here?

If you have anything to add concerning how to compute how long an electron stays in an excited state, I'm interested. That is one of two issues which prevent me from finishing the code which I already have; as I said at the start of the thread, I'd like to see how far I get; and I mean after I know the answers to my actual questions.

--Andrew.
 
  • #7
-I do deny sigma and pi bonds exist in molecules. Orbitals as well. These are all abstract concepts derived from various theoretical models, not real physical things.
-Force-field methodology doesn't have anything to do with 'sketching' anything. It's a semi-classical physical model, not a single scalar or vector field or anything of the sort.
-All DFT methods in practical use for molecules are more accurate than the Hartree-Fock method. Obviously you don't know the relative size of dispersion effects if you think that would be a deal-breaker.
-Yes, QC methods were being used prior to electronic computers, in the late 20's and 30's. Newton didn't have any problems developing his algorithm for root-finding without one either.
-Computers WON'T 'do it for you'. That was the point I was trying to make. Most chemists in most situations do not use quantum chemical software at all. Most chemists don't know how to use quantum chemical software.
-You think it's reasonable to demand that they give you software that took them years to write, no strings attached? Crazy.
-The fact that a method is exact does not mean a given implementation of it is necessarily numerically stable in every case, or that code is bug free.
-The fact that a particular implementation might not work (perhaps due to user error), does not invalidate the theory behind it in the slightest.
-A quantum-chemistry program can't and won't tell you about the Bell inequality. It's an entirely different system and problem. They won't calculate scattering cross-sections or do your taxes, for that matter. Also, it's not the calculations themselves which are in dispute.
-It's "Heisenberg" not "burg" (since you seem to think spellings are of great importance)
-The Uncertainty Principle is not a simple experimental limit, it's a theoretical limit. This is explained in every textbook and in a thread in this forum once a month or so.
-You can produce _exact_ pictures of hydrogenic orbitals simply by plotting the spherical harmonics, since that's what they are. But they won't give you a good approximation of any multi-electron system.
-How to calculate lifetimes of excited states is also in every textbook. Fermi's Golden Rule, look it up. (A rough worked example follows this list.)
-The Schrödinger equation does not have even a superficial resemblance to the equations of the Bohr-Sommerfeld models, in my opinion.
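For reference, a minimal worked sketch of what the golden rule gives in the electric-dipole approximation, for the hydrogen 2p -> 1s line; the analytic matrix element <1s|z|2p,m=0> = (128*sqrt(2)/243) a0 is the standard one, and the constants are rounded:

```python
# Spontaneous-emission rate from the golden rule in the electric-dipole
# approximation: A = omega^3 * |d|^2 / (3*pi*eps0*hbar*c^3).
# Worked for hydrogen 2p -> 1s, where <1s|z|2p,m=0> = (128*sqrt(2)/243)*a0.
import math

hbar = 1.0546e-34     # J s
eps0 = 8.854e-12      # F/m
c    = 2.998e8        # m/s
e    = 1.602e-19      # C
a0   = 5.292e-11      # Bohr radius [m]

E_photon = 13.6 * (1.0 - 0.25) * e              # Lyman-alpha energy [J]
omega = E_photon / hbar                         # angular frequency [rad/s]

d = e * (128.0 * math.sqrt(2.0) / 243.0) * a0   # dipole matrix element [C m]
A = omega**3 * d**2 / (3.0 * math.pi * eps0 * hbar * c**3)

print(f"A(2p->1s) = {A:.3e} 1/s")               # ~6.3e8 per second
print(f"lifetime  = {1.0 / A * 1e9:.2f} ns")    # ~1.6 ns
```

The ~1.6 ns it prints is the usual textbook value for the hydrogen 2p lifetime.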

Beyond that, I won't address your 'independent research' since this forum has clear rules about that stuff. And that's pretty much all I have to say on the topic.
 
  • #8
"Nobody yet has a flawless simulator..."

Although I won't dispute this, I write animation software at time/distance scales from microseconds/nanometers to show photon size and movement, attoseconds/femtometers for electron motion, and yoctoseconds/yoctometers for nuclear decay of neutrons under a collection of different physical forces.

You may require several animations specifically to account for distance and time (large scale and/or small scale). Have you given any thought to distance and time scale for your simulation? Some view this type of animation as "speculative" and do not like the discussion here, but I would be happy to discuss it if you would like to send me a message or sample picture of what you have in mind.

Regards
 
  • #9
edguy99 said:
"Nobody yet has a flawless simulator..."

Although I won't dispute this, I write animation software at time/distance scales from microseconds/nanometers to show photon size and movement, attoseconds/femtometers for electron motion, and yoctoseconds/yoctometers for nuclear decay of neutrons under a collection of different physical forces.

Hi;
I think the photon scale and the electron scale are interchanged?

I was planning on using my electronics simulator base as-is for the project at first -- and modifying it later for improvements. There is a saying: "a bird in the hand is worth two in the bush."

The electronics simulator auto-adjusts time-steps depending on interaction values -- that is, the timesteps computed become larger as the path(s) become more predictable / repeating of past history -- and smaller where a small change can cause a large instability.

The algorithm is complicated but, like a video game drawing algorithm, is carefully optimized in the inner loops. Roughly speaking, it allows me to simulate around 1000+ items in a field, and easily 100,000+ nodes of electronic circuitry, on a modest single-core Celeron processor (SSE2/SIMD) -- although the computation slows down roughly exponentially with *randomness (non-repetitive change), and conversely accelerates with the repetition of similar events/changes. (*Randomness within a steady state is not what I mean.)

The details of the algorithm are beyond the scope of Quantum Physics, obviously, and are a programming/data-structures nightmare; but it's extremely powerful and fast compared to alternative solutions used in the electronics simulator industry (eg: typically Laplace transformations).

At heart, I have opted to compute quickly and use a statistical method to do quality control.
If you are familiar with industrial process control, the method is essentially the same kind of thing.

In the electronics realm, I specify the time-step I wish to output images/plots at, and the simulator discards all intermediate states between frames to be drawn. This eliminates the plotting problems caused by the highly variable time-steps required for efficient and accurate calculation of events. In electronic simulation, the time-step typically varies from milliseconds down to picoseconds. A crude estimate for (super)atomic phenomena is in the range of milliseconds down to sub-attosecond, 0.01 as (YUCK).
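To show the bookkeeping I mean (an adaptive internal step that halves or doubles, with output only at fixed frames and intermediate states thrown away), here is a toy version on a damped oscillator; the error test and the 2x/0.5x rule stand in for my real controller, and none of this is the actual simulator code:

```python
# Toy adaptive integrator: the internal step h halves when a local error test
# fails and doubles when it passes comfortably, but output is only recorded
# at a fixed frame interval; intermediate states are thrown away.
def deriv(t, y):
    """Damped oscillator x'' + 0.2 x' + x = 0 as a first-order system."""
    x, v = y
    return (v, -0.2 * v - x)

def rk2(t, y, h):
    k1 = deriv(t, y)
    k2 = deriv(t + h, tuple(yi + h * ki for yi, ki in zip(y, k1)))
    return tuple(yi + 0.5 * h * (a + b) for yi, a, b in zip(y, k1, k2))

def step_adaptive(t, y, h, tol=1e-6):
    """One accepted step: compare a full step against two half steps."""
    while True:
        big  = rk2(t, y, h)
        half = rk2(t + 0.5 * h, rk2(t, y, 0.5 * h), 0.5 * h)
        err  = max(abs(a - b) for a, b in zip(big, half))
        if err < tol:
            new_h = 2.0 * h if err < 0.1 * tol else h   # grow when easy
            return t + h, half, new_h
        h *= 0.5                                        # shrink and retry

t, y, h = 0.0, (1.0, 0.0), 0.1
frame_dt, next_frame = 2.0, 2.0
while t < 20.0:
    t, y, h = step_adaptive(t, y, h)
    if t >= next_frame:                 # record a frame, discard the rest
        print(f"t={t:5.2f}  x={y[0]:+.4f}  internal h={h:.3g}")
        next_frame += frame_dt
```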

I haven't given much thought to the time-steps I will output -- outputting every subtle change doesn't make a whole lot of sense, as a large number of small changes are typically required to make a long-term, macroscopically noticeable change.

I plan, for right now, on just hand-setting the time-steps as I do in electronics simulation for different regions of interest -- e.g., large time-steps which allow me to ignore the development of the initial (typical) state of the system but still do a sanity check, and then focusing the time-step down in regions where interesting things are happening that I am trying to understand better.


You may require several animations specifically to account for distance and time (large scale and/or small scale). Have you given any thought to distance and time scale for your simulation? Some view this type of animation as "speculative" and do not like the discussion here, but I would be happy to discuss it if you would like to send me a message or sample picture of what you have in mind.

Regards

Even though time is involved in my simulation, I was not planning on making time-evolved movies in the end. The reason for this is twofold:

1) The simulator itself determines how to focus CPU power computationally on different regions of space; this is what allows the simulation to proceed at much quicker rates than would be possible if all points were computed for the same time-step everywhere in parallel. Items of circuitry/space which are sufficiently decoupled that reducing the update rate of *changes* in interaction introduces only small errors become time-wasters each time a display of state occurs. Of course, blindly printing every time-step automatically chosen by the simulator is the absolute worst time-waster -- e.g., 1e6+ times slower...

2) In electronics, the time-evolving waveforms are often important -- so much so that they become unintelligible if the time-step varies -- so I am forced to plot at a maximum, fixed time-step rate if I wish to easily extract information about the evolution of the waveforms in time. That is the way my simulator works now; to keep the CPU time down when the simulator has to reduce the time-step for accurate results, intermediate steps are discarded from the plot. To keep the problem tractable once I get to large (65-100 atom) systems, I plan on having the simulator ultimately record changes in state and not changes in time; and ultimately, I would like to be able to specify which states are of interest, to reduce the data even more.

I do have open questions about how much information to save at each plot point; but I can't solve those in my head -- I have to actually experiment to see what is possible.

The space is, unfortunately, limited by the nature of the simulation, cache memory, main memory, etc.
I won't be able to simulate more than a few cubic microns of space -- and only 1000-2000 or so electrons/protons/etc. within that space. (Presuming the questions I am trying to get answered do not significantly degrade the estimates... engineering :) )

Why would some consider it "speculative"? Attosecond laser pulses are regularly used to excite atoms into the wave-packet superposition of states which mimics classical motion. That's experimentally done, and I wasn't aware that the people doing this were violating any principle of quantum physics. I know that when I first heard of the frequencies they were talking about, and attoseconds, the uncertainty principle crossed my mind. For the purposes of simulation, though, these small times replace the differential element in calculus -- and anyone who uses calculus to work with Schrodinger's is automatically guilty of using small times as well, for a mathematical purpose if not a physical one. Hmmm... separation of church and state, or ought it be calculus and state... hmmm...

I wasn't planning to get into that. And from the other response I got, I need to correct the extra words being put in my mouth...

--Andrew.
 
  • #10
Alxm,

I have tried a few times to pen a response, but I simply haven't posted because the response is so much longer than your post. There are overlapping issues in what you are remarking about -- and clearly some confusion about what I actually said versus what I appear to have said. So rather than reply head-on, I hope I am able, in this way, to emulate your very compact response style, which I rather wish came naturally to me.

-How to calculate lifetimes of excited states is also in every textbook. Fermi's Golden Rule, look it up.

I use Fermi's rule for understanding stimulated emission events. If what you are saying isn't nuts (and I do admit you have an IQ) then it appears to imply that there are no such things as truly "spontaneous" emissions. If that's the case -- my simulator already takes care of the issue, and I am wasting time asking about it; If your comment is a mistake, let me know -- otherwise I need to change the question.

-Beyond that, I won't address your 'independent research' since this forum has clear rules about that stuff. And that's pretty much all I have to say on the topic.
-The Uncertainty Principle is not a simple experimental limit, it's a theoretical limit. This is explained in every textbook and in a thread in this forum once a month or so.
-A quantum-chemistry program can't and won't tell you about the Bell inequality. It's an entirely different system and problem. They won't calculate scattering cross-sections or do your taxes, for that matter. Also, it's not the calculations themselves which are in dispute.

If it is all you have to say, then the thread won't be cluttered up with more of this. I haven't taken sides in any absolute way on controversial issues. In the last (and only other) thread I ever posted on the forums -- it took the science advisor several pages to actually come up with an answer that made any sense. I still shake my head that he could have possibly missed that using a 1024+ digit calculator means one is serious about verifying something, and not just 'estimating'. Your response gives me hope that you have more awareness than average. Beyond that, I will simply remark that the research I have cited thus far in the thread -- isn't mine. Secondly, it was searches on the physics forums and recommendations of "science advisors" and more important "physics mentors" which led me to follow the links to these controversial issues in the first place.

* up
-Yes, QC methods were being used prior to electronic computers, in the late 20's and 30's. Newton didn't have any problems developing his algorithm for root-finding without one either.
-Computers WON'T 'do it for you'. That was the point I was trying to make. Most chemists in most situations do not use quantum chemical software at all. Most chemists don't know how to use quantum chemical software.
-The fact that a method is exact does not mean a given implementation of it is necessarily numerically stable in every case, or that code is bug free.
* same
-You can produce _exact_ pictures of hydrogenic orbitals simply by plotting the spherical harmonics, since that's what they are. But they won't give you a good approximation of any multi-electron system.
-The Uncertainty Principle is not a simple experimental limit, it's a theoretical limit. This is explained in every textbook and in a thread in this forum once a month or so.

My respect for you went up a notch or stayed the same with each of these comments. I agree with them, and always did although that may not be obvious at first reading of my past comments. They do not affect why I am doing anything, though.

-The fact that a particular implementation might not work (perhaps due to user error), does not invalidate the theory behind it in the slightest.
-I do deny sigma and pi bonds exist in molecules. Orbitals as well. These are all abstract concepts derived from various theoretical models, not real physical things.
-Force-field methodology doesn't have anything to do with 'sketching' anything. It's a semi-classical physical model, not a single scalar or vector field or anything of the sort.

Gee. In one line: my "hand" is also an abstract concept built upon abstract concepts like sigma and pi bonds, and I can draw my hand with my hand, which does not invalidate my theory of what a hand or a sigma or pi bond is -- in the slightest.

-All DFT methods in practical use for molecules are more accurate than the Hartree-Fock method. Obviously you don't know the relative size of dispersion effects if you think that would be a deal-breaker.
-The Schrödinger equation does not have even a superficial resemblance to the equations of the Bohr-Sommerfeld models, in my opinion.

OK. You're entitled to your opinion even when it has almost nothing to do with what I said. Sommerfeld's modification is not Bohr -- and de Broglie only interpreted, but did not change, Bohr's theory.
Nor can I prove or falsify what you said -- so I'll join it: since I don't know the cause of what I witnessed, better safe to include dispersion than sorry I didn't.
BTW: The experiment I am trying to understand has nothing to do with cold fusion or Blacklight Power, etc. It is purely related to the change in economics over the last 100 years as to what processes are economically viable and environmentally friendly.

-It's "Heisenberg" not "burg" (since you seem to think spellings are of great importance)
! Then you admit we are on par, rather than you being so much superior to me?! Wow. I'm flattered.

-You think it's reasonable to demand that they give you software that took them years to write, no strings attached? Crazy.

! Touché!
! LOL: Say that again with a straight face after buying $50K software to solve a problem which you have only a week to solve before your competitors take your client away; then discover that there are bugs in the software you have purchased, and after you hold hands with tech support to solve their mistake for free (which only they can actually fix in the code) -- and after they turn around and sell your fixes to your competitor and refuse to pay you a dime. (And that is what the nicer of the companies will do. Murphy was an optimist.)

I am laughing at the notion that you might think you will convince anyone that *refusing* to spend money is crazy, or that a refusal to buy is equivalent to *DEMAND*ing something for nothing?

By the way, thank you for giving *some* of your time away free. I am thankful to Dr. Young (no, the name is not a 'simple' coincidence) for selling me a college physics education, and for allowing me to use the knowledge after class as public domain -- including his comments.

If one (esp. you) is crazy for giving things away, or even thinks one is -- I have the names of some very good psychiatrists with multiple degrees and an understanding of other subjects. They do interview before accepting clients, and not many get accepted, but you might get accepted if you are lucky.
 
  • #11
andrewr said:
Hi;
I think the photon scale and the electron are interchanged?

You could be right, I'll try again (feel free to correct):

1. Units of femtoseconds (10^-15 s) and nanometers (10^-9 m) to show photon size and movement - with a viewing width of 10 micrometers and a step time of 10 femtoseconds, a 600 nanometer photon will bounce back and forth on your screen at a comfortable rate when viewed at 30 frames per second. Electrons and protons will not show any motion.

2. Units of attoseconds (10^-18 s) and femtometers (10^-15 m) for electron motion - with a viewing width of 10 angstroms and a step time of 10 attoseconds, an electron with 10 eV of kinetic energy will move through your screen at a comfortable rate of about 2 picometers/attosecond, photons travel too fast to be seen at 300 picometers/attosecond, and protons (heavier) don't really move at all over this timeframe.

3. Units of yoctoseconds (10^-24 s) and attometers (10^-18 m) for nuclear decay of neutrons - with a viewing width of 10 femtometers and a step time of 1 yoctosecond, a down quark changes to an up quark and emits an electron and an anti-electron neutrino. The W boson would be visible for less than a yoctosecond, and the neutrino would float off at a comfortable pace.
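If it helps, here is the quick arithmetic behind those numbers (photon at c, a 10 eV electron treated non-relativistically, step and view sizes as proposed above):

```python
# Distance covered per animation step at the proposed scales.
# The photon moves at c; the 10 eV electron is treated non-relativistically.
import math

c   = 2.998e8        # m/s
m_e = 9.109e-31      # kg
eV  = 1.602e-19      # J

v_e = math.sqrt(2.0 * 10.0 * eV / m_e)       # ~1.9e6 m/s for a 10 eV electron

views = [("photon view:   10 fs step, 10 um wide", 10e-15, 10e-6),
         ("electron view: 10 as step, 10 A wide ", 10e-18, 10e-10)]

for label, step_s, width_m in views:
    d_photon   = c * step_s
    d_electron = v_e * step_s
    print(label)
    print(f"  photon   moves {d_photon * 1e9:9.3f} nm per step "
          f"({d_photon / width_m * 100:7.1f}% of the view)")
    print(f"  electron moves {d_electron * 1e9:9.5f} nm per step "
          f"({d_electron / width_m * 100:7.3f}% of the view)")
```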

You mention 3 animations: the Bohr magneton, orbital transitions, and the Stern-Gerlach experiment.

I will give some thought to time and distance scales and post later. You may well have something in mind already; if so, please post. With regards to changing timescales: I don't see how this is done accurately, as certainly the location of particles will not change when you change the timescale, but the momentum and momentum direction do change, and I don't see how they can be recalculated without going to an n-body problem... I find it saves time to do 2 or more animations, one at the slow rate and a different one at the fast rate.
 
  • #12
edguy99 said:
You could be right, I'll try again (feel free to correct):

1. Units of femtoseconds (10^-15 s) and nanometers (10^-9 m) to show photon size and movement - with a viewing width of 10 micrometers and a step time of 10 femtoseconds, a 600 nanometer photon will bounce back and forth on your screen at a comfortable rate when viewed at 30 frames per second. Electrons and protons will not show any motion.

mmmm... much better.
At this resolution, in free space, c = 3.0x10**8 m/s, so the wavefront advances 300 nm per femtosecond. If the screen distance is 10 μm and each frame covers one femtosecond of simulated time, that means steps = 10/0.3 = 33.3 frames.
So it takes around a second to cross the screen at 30 fps; if a wavefront simulates in dispersive media with a slower rate of travel, this would indeed be comfortable.

#2 I will just let sit, #3 -- wow. At least I don't think I will have to worry about that time-reference...

You mention 3 animations: Bohr Magneton, orbital transitions, and the Stern Gerlach experiment;
Sort of. I want to be able to simulate the effect of these three things -- but that isn't always the same as graphing/animating them directly.

eg:
I don't know what a Bohr magneton would look like up "close". I do know what the electric field would look like a short distance away from an electron/proton emitting a magnetic moment. In general, one is merely graphing the E-M field, or a compressed sketch of it over all space, but by watching the extrema points of the plot one can trace a path -- whether "jumping" or continuous -- to locate the results of an experiment.

In particular, if a possible simulation of the Bohr magneton is occurring -- the Pauli exclusion principle ought to occur automatically without introduction of a second matrix, or any extra data structures. So graphing the magneton itself is not necessary -- just graphing the EM field, and the orbits taken in a helium atom ought to show whether or not the simulation is working correctly.

What is more significant, is that the simulation is going to have to deal with photon/EM disturbances at every step of the simulation (Not all such disturbances are measurable photons.). If one is watching an "electron" probability cloud develop -- then there will be "random" noise due to photons and EM disturbances in transit which are moving too quickly to be watchable in the animations.

I was considering sampling the location (or volume of most probability) for an item (electron/proton/etc.) and accumulating it on a quasi-static probability graph, rather than attempting to animate a probability over all space -- e.g., keeping track of the last 1000 positions of each item and varying intensity with the number of times a volume is entered in the simulation. The amount of data which needs to be output at each time-step is significantly reduced that way, and one is able to get an idea of what the probability field looks like locally with respect to each item.
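The bookkeeping I have in mind for that quasi-static graph is roughly the following (a fixed-length buffer of recent positions binned into a coarse occupancy grid; the sizes and the random-walk "trajectory" are placeholders for real simulator output):

```python
# Quasi-static probability picture: keep the last N positions of an item in a
# ring buffer and bin them into a coarse occupancy grid for display.  The
# random-walk "trajectory" below is a stand-in for real simulator output.
from collections import deque
import numpy as np

rng     = np.random.default_rng(1)
history = deque(maxlen=1000)              # last 1000 sampled positions
edges   = np.linspace(-1.0, 1.0, 21)      # 20 x 20 bins over the region

pos = np.zeros(2)
for step in range(5000):
    pos = 0.95 * pos + rng.normal(scale=0.05, size=2)   # fake dynamics
    history.append(pos.copy())

pts = np.array(history)
grid, _, _ = np.histogram2d(pts[:, 0], pts[:, 1], bins=[edges, edges])
grid /= grid.sum()                        # occupancy fraction per bin

# Crude text rendering: a darker character means a more often visited bin.
shades = " .:*#"
for row in grid:
    print("".join(shades[min(int(v * grid.size * 1.5), 4)] for v in row))
```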

Part of my goal is to be able to output standard ball-and-stick models as well in the future, along with sketches of MOs -- whether done from the probability angle, or from the equivalent charge-density angle (EM field intensity) -- which is subtly different, and perhaps an even more accurate picture than the HF charge/time-average approximation.

An orbital transition will not look like anything in such a simulation. The position will not change except in subsequent time, so the "view" one gets of an orbital will depend on how long an electron remains stable in it. The longer it is there, the more completely the probability will be graphed.


I will give some thought to time and distance scales and post later. You may well have something in mind already, if so please post. With regards to changing timescales. I don't see how this is done accurately as certainly the location of particles will not change when you change the timescale, but the momentum and momentum direction does change and I don't see how they can be recalculated without going to an n-body problem... I find it saves time to do 2 or more animations, one at the slow rate and a different one at the fast rate.

Correct, the location does not change. Time scaling is a mathematical artifact of the simulation. Essentially, the scale is changed by a factor of 1/2 or 2x at each step depending on measured accuracy conditions. The reason for 2x or 1/2 is math co-processor efficiency, round-off error, and similar problems.

Each item has a total energy. When you mention an N-body problem, you are correct. I am making an assumption about how to solve it based on the identical nature of the fields interacting from item to item. There is no separate field for each electron, proton, etc., but one composite EM field for all of them. In order to simulate, I must provide an operator based on the local EM field which causes the item being simulated to *statistically* move to places which will correctly solve the Schrodinger equation in a probability-wise fashion. A single run of the simulator may not provide a complete probability field for graphing the reaction progressing, but the simulator can be rewound and segments run again with a changing random sampler (and again, and ...) to improve the probability field resolution. I think R. Feynman invented/did something similar with path integrals, but I can't remember for certain off the top of my head -- it has been too many years. The path integral was Dr. Young's favorite tool.
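A stripped-down version of what I mean by rewinding and re-running with a changing random sampler, using a target density I already know the answer to (the hydrogen 1s radial density) so the accumulation can be checked -- the rejection sampler here is just a convenient stand-in for the real dynamical rule:

```python
# Accumulate a radial probability profile by repeated short "runs", each with
# a different random seed.  The target density is the known hydrogen 1s
# radial density P(r) = 4 r^2 exp(-2r) (r in Bohr radii) so the accumulation
# can be checked; rejection sampling stands in for the real dynamical rule.
import numpy as np

def sample_run(n_samples, seed, r_max=8.0):
    """One 'run': draw radii distributed as P(r) by rejection sampling."""
    rng = np.random.default_rng(seed)
    p_max = 4.0 * np.exp(-2.0)                 # P(r) peaks at r = 1
    out = []
    while len(out) < n_samples:
        r = rng.uniform(0.0, r_max)
        if rng.uniform(0.0, p_max) < 4.0 * r**2 * np.exp(-2.0 * r):
            out.append(r)
    return np.array(out)

edges = np.linspace(0.0, 8.0, 41)
hist  = np.zeros(len(edges) - 1)

for seed in range(20):                         # 20 re-runs, new seed each time
    h, _ = np.histogram(sample_run(500, seed), bins=edges)
    hist += h                                  # accumulation sharpens the profile

hist /= hist.sum() * np.diff(edges)
centers = 0.5 * (edges[:-1] + edges[1:])
exact   = 4.0 * centers**2 * np.exp(-2.0 * centers)
print("r [a0]   sampled   exact")
for r, s, xv in zip(centers[::5], hist[::5], exact[::5]):
    print(f"{r:5.2f}   {s:7.4f}   {xv:7.4f}")
```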

I might do well to simulate a 1D particle in a box for you to show you what I mean. (It will be a static gnuplot graph for now.) I am not certain if I can leave out the Bohr magneton and still have the model I have in mind predict correctly -- but after vacation next week, I'll give it a try and see what happens. A picture is worth 1000+ words... eh?

--Andrew.
 
  • #13
A couple of questions and comments.

Regarding the Stern Gerlach experiment:

If you wanted to do an animation with a viewing width of 10 meters (to show the equipment), what timestep would show electrons moving across your screen at a reasonable speed? I.e., if I had a laser gun pointed over your shoulder checking the electrons' speed, how fast would they be going?

Clearly the spin up electrons move one way and the spin down electrons move the other way based on the magnetic field, but I don't think they all move "exactly" the same. Is there research that shows the pattern of the distribution of electrons that show up over time or do they indeed all move the same up or down amount?

Regarding orbital transitions:

What is more significant, is that the simulation is going to have to deal with photon/EM disturbances at every step of the simulation (Not all such disturbances are measurable photons.). If one is watching an "electron" probability cloud develop -- then there will be "random" noise due to photons and EM disturbances in transit which are moving too quickly to be watchable in the animations.

I think you may need to consider "units Attoseconds(10^-18) and Femtometers(10^-15) for electron motion - with a viewing width of 10 angstroms and steptime of 10 attoseconds, an electron with 10 evolts of kinetic energy will move through your screen at a comfortable rate of about 2 picometers/attosecond, photons travel too fast to be seen at 300 picometers/attosecond and protons (heavier) don't really move at all over this timeframe."

A viewing width of 10 angstroms produces the kind of picture you would see of "real atoms", like the IBM pictures from scanning tunneling microscopes; what you don't get is the timescale. A timescale of femtoseconds works well to view things like proton/nucleon vibrations, but even low-energy "unbounded" electrons are moving way too fast to see. A timescale of attoseconds allows you to build your electron probability clouds for the frame and deal with a free electron floating by that you know, because of its momentum, will be somewhere else in the next few frames. This timescale has the same problem you talked about, where photons will seem to appear out of nowhere.

Also,

I do know what the electric field would look like a short distance away from an electron/proton emitting a magnetic moment.

I would appreciate if you could expand on this comment.

Regards
 
  • #14
edguy99 said:
A couple of questions and comments.

Regarding the Stern Gerlach experiment:

OK. (SG henceforth in my notes.)

If you wanted to do an animation with a viewing width of 10 meters (to show the equipment), what timestep would show electrons moving across your screen at a reasonable speed? Ie. if I had a laser gun pointed over your shoulder checking the electrons speed, how fast would they be going?

I intend to mimic the MIT reproduction of the experiment using hydrogen and either accepting the factor of 40 mass change, and therefore expected 40x scale change -- or artificially increasing proton mass 40x to simulate a fake "potassium" atom in transit with hydrogen. I haven't calculated the speeds, but the transit time and dimensions are essentially fixed by the experiment. If I modify them arbitrarily -- then verification of the simulation will be difficult at best. If I were to animate a cross section of the experiment -- the atom would be vanishingly small. I was thinking to simply collect the
[x,z] data points at the end of many experiments for a statistical sample equivalent to the SG photograph.

Note the MIT experiment uses a slightly different shape of electromagnet than SG. I believe SG's is more of a triangular wedge near a half-circular socket. MIT's is on p. 17, found from a Google search of "Lab 18, MIT, Stern-Gerlach":

web.mit.edu/8.13/www/JLExperiments/JLExp_18.pdf

I'm going to answer these next two questions out of order, as I think I may be clearer that way.
I would appreciate if you could expand on this comment

I stated the idea poorly. I am less certain of the shape/distribution of the EM field as one approaches an electron's vicinity. An electron is a dynamic magnetic dipole -- for there are no monopoles of magnetic "charge", and thus the field changes around the electron as it "moves"
(despite Wikipedia's present fictional monopole account... which is "mathematically" equivalent, supposedly -- but whoever wrote that makes me groan...).
A dipole's detectable field scales as r**-3, where r is somewhat ill-defined, but as one gets farther away from the source(s), the error in r from variations in dipole shape contributes less and less to the overall values. I think (but am not absolutely certain) that much of the geometric distortion of a dipole shows up primarily in higher terms (r**-4, etc.).
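For the far-field part, the scaling I am leaning on is just the ideal point-dipole field; a quick numerical check of the r**-3 falloff on the dipole axis, taking the moment to be one Bohr magneton:

```python
# On-axis field of an ideal point magnetic dipole: B = mu0 * m / (2*pi*r^3).
# The moment is taken to be one Bohr magneton; doubling r divides B by 8.
import math

mu0  = 4.0e-7 * math.pi           # T m / A
mu_B = 9.274e-24                  # J/T

for r in (1e-10, 2e-10, 4e-10):   # 1, 2, 4 angstroms
    B = mu0 * mu_B / (2.0 * math.pi * r**3)
    print(f"r = {r * 1e10:.0f} A   B = {B:.3e} T")
```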

The primary meaning of my comment is that there are several unknowns affecting the field of an electron which become more important as one computes closer to its location. These unknowns are not necessarily a result of the Heisenberg uncertainty principle.

In general, the idea that an electron is a physical particle capable of "spinning" on an axis is discouraged -- and although I am not certain of what experiment(s) disprove the idea altogether -- I seem to remember someone mentioning historical experiments that disprove any "radius" being assigned to an electron (either detected or not...). But if an electron is not a particle with spin, then it must still at least be a distribution of electric field and time; the latter idea is true whether or not an electron is a physical particle.

Clearly the spin up electrons move one way and the spin down electrons move the other way based on the magnetic field, but I don't think they all move "exactly" the same. Is there research that shows the pattern of the distribution of electrons that show up over time or do they indeed all move the same up or down amount?

In SG, the developed photograph of the silver atoms which hit the target is quite distinct -- so much so that there is little variation from atom to atom (and perhaps even less spread would be realized if the experiment were improved).

See p. 56 for a photo of the actual silver atom pattern on the film target.
http://www.owlnet.rice.edu/~hpu/courses/Phys521_source/Stern-Gerlach.pdf

A couple of things to note about the photo.

The "ideal" silver dipole would be located at the center (x axis in the photo) for one launched non-skew into the magnetic field. It is precisely here where a spike develops in the x direction -- that spike is generally reasoned to have something to do with the non-symmetry of the pole faces on the magnet, although no explanation I have read is really satisfying for a mathematical analysis of SG's actual magnet is not done. The other plots side is flattened, and that is typically described as being because of the silver atoms physically hitting the rounded part of the magnet cavity.

I didn't see a plot for the MIT version of the experiment which uses a magnet shape which ought not have that spike, either. The only obvious clue pointing to the anomalous nature of the spike is that it is asymmetrical -- The x-axis was oriented vertically in the actual experiment, so the spike is even against gravity if I recall correctly. But if the explanation of the other side's thin-ness is truly because of physical clipping, then there is no way to be certain the spike would not have shown up experimentally due to the field shape if the atoms had not hit the surface of the magnet. One really needs a predictable *linear* variation in magnetic field strength over space to be able to effectively correct for physical manufacturing limitations of the slits, etc, and SG doesn't really have that.

Comparing the demagnetized plate to the magnetized one, there is a fairly strong indication that the plates were sent on the postcard willy-nilly, with no definite orientation. (The mirror-image reticle pattern on the right image reinforces that notion...) The demagnetized version ought to be a purely flat line -- but isn't. In the lower half of the line a clear broadening of the pattern can be seen, which, if the magnet is truly off, can only be attributed to the shape of the slits letting the silver out -- or the position of the silver oven behind the slits biasing the source statistically.
In any event, on careful inspection I believe I see trace widening on the upper half of the pattern in the magnetized view, suggesting that the flat-line slide is actually rotated 180 degrees from how it was taken in the other photo, with an optional mirroring on top of that...

Using that information as a corrective measure, and knowing that the slit is horizontal, one can estimate the amount of classical skew in trajectory a silver atom would have (incoming y and x angles with respect to the demagnetized photo, in contradistinction to a perfect 90 degree angle); and since the size of the slit is likely large compared to the wavelength of the silver atom, these classical trajectories ought to be fairly accurate. Each point (y on the photo), then, has a definite angle (source dx/dz, dy/dz) from which the silver atom could have come, and the effect of this distribution needs to be calculated and compensated for to determine what an idealized simulation ought to look like.

Generally, there ought to be fewer angles of approach as one gets closer to the edges of the slit (y on the photo), although dx/dz will be fairly constant across the whole slit except where the demagnetized photo shows thickening.

So, I think the most accurate part of the magnetized photo will be the lower right quadrant of the photo taking the approximate center of symmetry of the shape as the origin.
Using a straight edge, the line widening as one gets closer to y=0 increases very linearly (e.g., rightmost trace / lower right quadrant) until extremely close to the magnet center, where the field of the magnet becomes less known / near the surface of the magnet. So, there are two (ideally) linear effects with approach toward the center: 1) the width of the trace, and 2) the offset of the trace from the picture's y axis.
The offset is explained by the experiment itself -- e.g., the proportionality between the magnetic field intensity gradient and x offset in the photo varies approximately linearly in SG (and linearly in the MIT version, according to the MIT text). The widening of the trace, however, is something I don't know how to predict theoretically, and it is caused by "all other differences" in the electron, including any Heisenberg and QM interference effects.

If only the electron's position [delta x, delta y] were affected, I could more easily separate out what causes the thickening -- but since there is also skew, the problem isn't solvable qualitatively in my head. The variation in trace width, then, is something I don't know whether an idealized experiment would replicate, and I will need to work that out before verifying my simulation against SG.

If I am able to correct (reduce the data in) the photo by mathematically modeling the skew, other features might become visible which presently are not. But to really do this correctly, the best source of information would be a run of the MIT experiment with computer-usable data points to operate on. I'll have to look around and see if anyone has posted a good run, and if not, perhaps I will put some thought into reconstructing the experiment. I have the vacuum equipment, some very pure micro-crystalline, homogeneous iron, and the machining equipment to reconstruct the magnet, along with motorized (robotic) micrometers -- but that would take quite a bit of time and effort for me to put together in my present state... if anyone knows of an online data source, I would appreciate it.
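As a placeholder for that data reduction, here is a rough Python/NumPy sketch of the kind of correction I have in mind, under the simplest possible assumption: that the observed trace is the ideal trace blurred (convolved) by an angular-spread kernel estimated from the demagnetized plate. All array names are hypothetical; nothing here comes from real digitized plate data.

[code]
import numpy as np

# Crude Wiener-style deconvolution: estimate the "ideal" trace i(y) from the
# observed trace o(y), assuming o = i * k, where k is the blur kernel measured
# from the demagnetized plate.  eps damps frequencies where k has little power.
def deconvolve(observed, kernel, eps=1e-3):
    n = len(observed)
    K = np.fft.rfft(kernel, n)
    O = np.fft.rfft(observed, n)
    I = O * np.conj(K) / (np.abs(K) ** 2 + eps)
    return np.fft.irfft(I, n)

# hypothetical usage, once traces are digitized into 1-D arrays:
# kernel = demagnetized_trace / demagnetized_trace.sum()
# ideal_estimate = deconvolve(magnetized_trace, kernel)
[/code]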

I'll probably post a bit more tomorrow... I am still trying to resolve a second question (or questions) concerning the time/intensity distribution that an electron dipole has, by focusing on what a quantized magnetic moment actually means, both classically (in the limit) and quantum mechanically. I will need to think out loud, and perhaps will make some mistakes while doing so -- but hopefully the sharp eyes of others will be helpful there, and I can get enough of an answer to my second question(s) to model it reasonably well for simulation purposes.


Regarding orbital transitions:

I think you may need to consider "units of attoseconds (10^-18 s) and femtometers (10^-15 m) for electron motion -- with a viewing width of 10 angstroms and a step time of 10 attoseconds, an electron with 10 eV of kinetic energy will move through your screen at a comfortable rate of about 2 picometers/attosecond, photons travel too fast to be seen at 300 picometers/attosecond, and protons (heavier) don't really move at all over this timeframe."

We'll just have to see what works once I get there... but I can certainly try it.
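For what it's worth, a quick check of the numbers quoted above (my own arithmetic in Python/NumPy, nothing more):

[code]
import numpy as np

# Speed of a 10 eV electron, and of light, in picometers per attosecond.
eV, m_e, c = 1.602e-19, 9.109e-31, 2.998e8    # SI values
v = np.sqrt(2 * 10 * eV / m_e)                # non-relativistic kinetic energy
to_pm_per_as = 1e12 / 1e18                    # meters->pm divided by seconds->as
print(v * to_pm_per_as)                       # ~1.9, i.e. about 2 pm/attosecond
print(c * to_pm_per_as)                       # ~300 pm/attosecond for a photon
[/code]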

--Andrew.
 
  • #15
Thank you for the post and link: web.mit.edu/8.13/www/JLExperiments/JLExp_18.pdf, this helps the discussion a lot. From the figures, it appears they shoot potassium atoms through about a 0.5 cm channel of a 10 cm long magnet and check whether they hit a 4 mm palladium wire that is moved around. An animation width of 1 meter sounds good, but I remain stuck on the timeframe that would show on your computer screen: atoms emerging from the oven (say one per quarter second) and hitting the wire within a second or two...

The speed of the potassium atom appears to depend on the oven temperature, which drives the vapor pressure, which determines the speed of the atom. I have assumed here that the speed is important, since a slow potassium atom spends a longer time under the magnet than a fast atom and hence would be bent more. This would seem to suggest that even a small adjustment in temperature would cause the atoms to either spread out too much and be undetectable, or not spread out at all. Perhaps I am missing something there, or maybe the temperature at 190 degrees produces a consistent enough speed that you do not see the effect, or something else...

In general, the idea that an electron is a physical particle capable of "spinning" on an axis is discouraged -- and although I am not certain which experiment(s) disprove the idea altogether, I seem to remember someone mentioning historical experiments that rule out assigning any "radius" to an electron (detected or not...). But if an electron is not a particle with spin, then it must still at least be a distribution of electric field over space and time; the latter idea is true whether or not an electron is a physical particle.

I agree completely. There are at least 2 major problems: the size you mention -- I believe radius calculations for the mass involved come out way too high (300 femtometers vs. a Lorentz radius of 3 femtometers) -- and also the problem that the electron needs 720 degrees of rotation to return to its original state, while a spinning disk needs only 360 degrees.

Anyway, thank you for the info. If I can nail down the speed of the atoms, I visualize a pretty nice first animation showing an oven blasting out potassium atoms (with a setting for temperature) at a particular speed and rate, and over a timespan (microseconds?) you would see atoms hit a wire after being bent by a settable magnetic field, with the proper bending of up and down electrons. The nice thing about starting the animation at this level of time and space magnification is that it's pretty clear what happens at this scale, and there is no need to probe the details of what the atoms actually do or look like. Probing a more detailed time or distance scale will be more difficult.
 
  • #16
edguy99 said:
Thank you for the post and link: web.mit.edu/8.13/www/JLExperiments/JLExp_18.pdf, this helps the discussion a lot. From the figures, it appears they shoot potassium atoms through about a 0.5 cm channel of a 10 cm long magnet and check whether they hit a 4 mm palladium wire that is moved around. An animation width of 1 meter sounds good, but I remain stuck on the timeframe that would show on your computer screen: atoms emerging from the oven (say one per quarter second) and hitting the wire within a second or two...
Without checking, that sounds correct from what I remember.

The speed of the potassium atom appears to depend on the oven temperature, which drives the vapor pressure, which determines the speed of the atom. I have assumed here that the speed is important, since a slow potassium atom spends a longer time under the magnet than a fast atom and hence would be bent more. This would seem to suggest that even a small adjustment in temperature would cause the atoms to either spread out too much and be undetectable, or not spread out at all. Perhaps I am missing something there, or maybe the temperature at 190 degrees produces a consistent enough speed that you do not see the effect, or something else...

I agree.
The experiment requires the student to wait a long time for the temperature to stabilize, otherwise the data will be in error -- so I am fairly sure that small variations in temperature affect the speed significantly.

The gradient of the magnetic field is what causes a vertical accelerating force to appear on the magnetic dipole in the first place (a strictly uniform field only produces torque), and the gradient is independent of speed. The traveling speed of the dipole, then, has nothing to do with the vertical acceleration -- e.g., the overall charge is neutral, and the + and - are both moving in the same direction, so not even a Hall effect occurs. However, the + and - charges would want to be deflected in opposite directions horizontally, in proportion to the speed with which the atom travels through the B field, and I am sure (classically) that this would cause the unpaired electron and the remaining nuclear charge to align on a horizontal line with respect to each other, in a plane 90 degrees to the magnetic field direction.

There is, then, going to be a vertical force vector independent of speed, and a horizontal one which is counterbalanced by electrostatic attraction. Since velocity is the time integral of acceleration (on average f/m), and deflection the integral of that, the deflection depends on the time spent in the magnetic field -- roughly as the square of that time. Even at a constant temperature, speeds will vary statistically. These variations will add to the trace-width spread in the photos shown, and also produce a (small) skew angle which does the same thing. The lab sheet may contain enough information to compute the speed variation; one can compute the spread of transit times within which, say, 99.8% of atoms fall, and since the deflection goes as the square of the time in the field, the fractional spread in deflection (not including skew angle) will be roughly twice the fractional spread in transit time.
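To make that scaling explicit (my own aside, under the usual constant-force approximation inside the magnet):

[tex]z=\frac{1}{2}\frac{F}{m}t^{2}\;\Rightarrow\;\frac{\delta z}{z}\approx2\,\frac{\delta t}{t}\approx2\,\frac{\delta v}{v}[/tex]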


I agree completely. There are at least 2 major problems: the size you mention -- I believe radius calculations for the mass involved come out way too high (300 femtometers vs. a Lorentz radius of 3 femtometers) -- and also the problem that the electron needs 720 degrees of rotation to return to its original state, while a spinning disk needs only 360 degrees.

You'll have to excuse my ignorance of the vocabulary; as a BSEE, I was taught primarily about semiconductors, which is a bulk phenomenon. Different, perhaps somewhat simplified, terminology is employed there, because the dominant effect is not always the same as it would be with discrete items. When you refer to a "Lorentz radius," are you speaking about a contraction in length due to motion of some kind? And regardless of that, what calculations did you do to arrive at those particular numbers?

Anyway, thank you for the info. If I can nail down the speed of the atoms, I visualize a pretty nice first animation showing an oven blasting out potassium atoms (with a setting for temperature) at a particular speed and rate, and over a timespan (microseconds?) you would see atoms hit a wire after being bent by a settable magnetic field, with the proper bending of up and down electrons. The nice thing about starting the animation at this level of time and space magnification is that it's pretty clear what happens at this scale, and there is no need to probe the details of what the atoms actually do or look like. Probing a more detailed time or distance scale will be more difficult.

You're welcome. And thanks for just talking about the subject in general; in your own way you have brought up something I had not thought about regarding this experiment -- the fact that the scale must exceed the few cubic microns which I can simulate realistically. In order to simulate the MIT version of SG, I will have to find a way to delete nodes from one side of the space I am simulating and add those nodes to the other side, so as to keep the moving atom inside a window of simulation space (not to mention computing new values for the moved nodes...).
That will, unfortunately, complicate the simulator enough to seriously increase the time it takes me to implement, as my present simulator does not have that capability. :yuck:

+1 to the to-do list.
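Here is a rough Python/NumPy sketch of the moving-window bookkeeping I have in mind (my own guess at an implementation, not existing code): shift every nodal array whenever the atom drifts too far from the window center, dropping nodes off the trailing edge and initializing fresh ones on the leading edge.

[code]
import numpy as np

def shift_window(field, shift_cells, fill_value=0.0):
    # Shift a 1-D nodal array by shift_cells so the tracked atom stays centered.
    # Nodes that scroll off one edge are discarded; the newly exposed nodes on
    # the other edge are (crudely) initialized to fill_value and would need a
    # properly computed boundary value in a real simulator.
    out = np.roll(field, -shift_cells)
    if shift_cells > 0:
        out[-shift_cells:] = fill_value    # new nodes entering on the right
    elif shift_cells < 0:
        out[:-shift_cells] = fill_value    # new nodes entering on the left
    return out

# hypothetical usage: the atom has drifted 5 cells to the right of center
# psi = shift_window(psi, 5)
# V   = shift_window(V, 5, fill_value=V_background)
[/code]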

More will be coming about the "spin" of an electron when I have a chance this evening, along with Q/Qs that I have about how to work with it -- and a short summary of what I am presently considering. (Next steps.)
 
  • #17
OK. Fell asleep on vacation ... and fell asleep last night early... and this morning again .. but I'm back.

A brief review of the ideas I need to build the simulator:

So far I have spoken about orbital transitions -- which have nothing to do with *where* an electron (or other item) is at the instant before and after the state change. Only the probability of where it will be found in the future changes -- the "actual" position, for a single run of the simulation, does not change at the instant of the state change. So one may think of a simulated orbital change (state change) as a change in the statistical rule governing where the item will diffuse to in the future.

Alexm ties "spontaneous" decay of excited orbital states to a "disturbance" in the EM field by invoking Fermi's golden rule. This idea is new to me, for in semiconductor physics the mechanism of spontaneous decay is *NOT* explained; rather, it is pragmatically treated as an empirical property of the semiconductor which is modified by disturbances in the EM field (e.g., in a perfect crystal it would be purely empirical -- but in a doped one, the lifetime of an excited state is the empirical value *decreased* by the probability that a fermion will diffuse into a defect in the crystal, causing a stimulated decay of the excited state). Since no one has contradicted Alexm, his suggestion stands as the method I will attempt to use for simulation.

The remaining issue I have to deal with is the "spin" state of an item. This state is what causes an "anomaly" in emission spectra similar to fine structure (I haven't studied this, so I am speaking in general terms), based on the magnetic state of individual protons in the nucleus coupled to the magnetic state of the light-emitting electron. The anomaly is extremely small, since the magneton scales inversely with the mass of the item -- and protons, being relatively heavy, have roughly 1/1836 times the moment of an electron.

I have no engineering text describing the exact nature of the various experiments on the Bohr magneton, and guidance/input from those who might know of experiments/articles to review would be appreciated -- but I do know that the spin of an electron carries a mechanical moment, from an experiment done by Einstein and a colleague (the Einstein-de Haas effect), and I do know that the orientation of the magnetic dipole in space can change.

Since classical angular momentum is found experimentally when dealing with the Bohr magneton -- along with non-classical quantum effects -- I am simply going to write down what I think I know -- and hope for correction or at least suggestions of what I might be over-simplifying if anything.

A spinning top (mechanical moment/dipole) which has a torque applied to re-orient its axis of rotation will resist the re-orientation by precessing about its rotary axis. Gyroscopic formulas tend to focus on the period of precession -- and I am unaware of how one calculates how much a top will reorient its axis of rotation with respect to time and torque, nor whether such a reorientation is even possible with a frictionless device...

I assume reorientation is possible even in a frictionless environment, since permanent magnetism in non-superconducting material is (as far as I know) a purely spin-based phenomenon, and there is no friction that I am aware of for a spinning electron (there are no electrons that *DON'T* 'spin'). I suppose it is possible that the quasi-Kepler motion of an electron around an atom (P orbital, etc.) might also contribute to the magnetic field, but everything I was taught in chemistry indicates that unpaired electrons are the sole cause of magnetism. So I am assuming that is the case for the moment. (Pun intended.)

Also, I have seen the "orientation" of spin described as what supposedly changes in the NMRI/MRI effect.
I have not seen any mathematics which explicitly shows how, QM-wise, engineers came to predict that the magnetic moment would "FLIP" when the appropriate resonance radiation is applied to the hydrogen nuclei (the typical NMRI target), although the energy required being proportional to a statically applied magnetic field does make sense. The energy required to reorient the magnetic dipole is readily computable from analogous situations in DC/AC motors used for power generation: the magnetic dipole rotor requires work to flip its orientation while in the field of the motor's stator.
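As a sanity check on that proportionality (my own numbers, not from any reference in this thread): flipping a dipole of moment μ from anti-parallel to parallel in a static field B costs ΔE = 2μB, and the matching photon frequency is therefore linear in B; for the proton it works out to roughly 42.6 MHz per tesla.

[tex]\Delta E=2\mu B,\qquad f=\frac{\Delta E}{h}=\frac{\gamma}{2\pi}B\approx42.6\;\mathrm{MHz/T}[/tex]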

I am presuming that a rule akin to Fermi's golden rule needs to be used in this case (spin flip) -- but I have not come across, in commonly available literature, how the NMRI result is derived, nor how to handle the QM states.

Any pointers/example calculations which could be applied in the case of a non-static magnetic field would be appreciated, as large uniform magnetic fields are not what will be found in molecular simulation -- rather, weak dipole moments from various items located near the "electron" or other simulated item of interest, which can dynamically re-orient.

A second issue that comes to mind, and which may or may not affect the discussion, is that Ampere's law and other results obtained with calculus assume a uniform magnetic field when taking the limit as area goes to zero (e.g., differential areas or volumes... whose consistent existence is questionable given the Heisenberg concept...). A magnetic dipole's strength is measured as the current times the area enclosed by the current loop, I x A. However, making a large rectangular planar loop of wire, applying a constant current to it, and then using a compass as a probe of the field's strength has convinced me that the B field magnitude in the plane of the loop, and inside the loop, is stronger as one gets closer to the wire and weaker as one gets closer to the center of the loop.

When replacing distributed currents with individual electrons, protons, etc., the magnitude of the current is replaced by the velocity of the charged atomic item in relation to the path length required to circle the "enclosed" area. The product of area enclosed and velocity of circulating charge (or E-field disturbance) is then constrained once one is given a constant magnetic moment (e.g., the identity of the Bohr magneton for an electron). So the diffusion of the electron in space that creates the magnetic moment must have an average value in an angular sense -- and because of inertial reference frames, helical paths must also be possible, where the path closes only in a moving reference frame.
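To pin that down with the simplest possible loop (a circular orbit of radius r -- my own sanity check, not a derivation from anything above), the invariant comes out as the speed-radius product:

[tex]\mu=IA=\left(\frac{qv}{2\pi r}\right)\pi r^{2}=\frac{qvr}{2}\;\Rightarrow\;vr=\frac{2\mu}{q}=\mathrm{const.}[/tex]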

What I have stated here, is the sum total of what I am certain of regarding magnetic moments; discussion of how these ideas fit with QM/Schrodinger's would be greatly appreciated.

--Andrew.
 
  • #18
A quick side note: the June 10, 2010 issue of Nature has a pretty good article:

"Electron localization following attosecond molecular photoionization," which starts with "For the past several decades, we have been able to probe the motion of atoms that is associated with chemical transformation and which occurs on the femtosecond (10^-15) timescale. However, studying the inner workings of atoms and molecules on the electronic timescale has become possible only with the recent development of isolated attosecond (10^-18) laser pulses."

The attosecond timescale will (I think) prove to be very important in the history of the electron.

Regarding:
A spinning top (mechanical moment/dipole) which has a torque applied to re-orient its axis of rotation will resist the re-orientation by precessing about its rotary axis. Gyroscopic formulas tend to focus on the period of precession -- and I am unaware of how one calculates how much a top will reorient its axis of rotation with respect to time and torque, nor whether such a reorientation is even possible with a frictionless device...

The object you may be thinking of is a Bloch sphere; animations: http://www.animatedphysics.com/videos/larmorfrequency.htm. Although not directly derived from general relativity, it does properly predict most (if not all) of the properties of the proton. Its important features are an axis that points in a particular direction and a spin direction.

and:
Also, I have seen the "orientation" of spin described as what supposedly changes in the NMRI/MRI effect.
Proton MRI is generally modeled with the Bloch sphere; animations: http://www.animatedphysics.com/videos/rabioscillations.htm. For elements with multiple protons and neutrons, you don't just have spin up/down (more commonly termed +1/2 or -1/2). The lithium-7 nucleus, for example, has 4 spin states, labeled +3/2, +1/2, -1/2 and -3/2. It does not matter so much what the picture looks like; it's just important that you label your objects with the right spin so you know how they react to the forces around them.

and:
Any pointers/example calculations which could be applied in the case of a non-static magnetic field would be appreciated, as large uniform magnetic fields are not what will be found in molecular simulation -- but rather weak dipole moments from various items located near the "electron" or other simulated item of interest which dynamically can re-orient.

The electron at this level is more difficult. As I understand it, the Bloch Sphere does not work for it as the magnetic moment is slightly over 2 times too big to work (it takes too much energy for its size to flip it..) and there are many other problems. Electron flipping is the basis of atomic clocks and I think many of the properties have come about to explain this.

The Lorentz radius is: "In simple terms, the classical electron radius is roughly the size the electron would need to have for its mass to be completely due to its electrostatic potential energy -- not taking quantum mechanics into account." It comes out to about 3 femtometers. I think that this much mass in a 3 femtometer radius would have to be spinning at something like 100 times the speed of light. I.e., the electron looks too small to fit that much energy in.

You commented "A dipole's detectable field scales as r**-3, where r is somewhat ill defined" and "the product of area enclosed to velocity of circulating charge (or E-field disturbance), is then constant if one is given a constant magnetic moment "

An important structure to consider is the twistor (one is animated at http://www.animatedphysics.com/videos/electrons.htm; ignore the bottom row as it is a work in progress, and sometimes you have to click a couple of times to get them to run right). It has a defined axis and it has the 720-degree spin (watch carefully: as the blue dot goes through 360 degrees, the electron ends up upside down and must go another 360 degrees to get right side up). It would pack a lot more energy into a lot less space, and it has kind of a "weird" layout of its dipole field...

I have assumed spin flips do not apply to the SG experiment. There is no reason why the valence electron would not simply align itself with the external field and slowly pull the atom in one direction or the other, depending on whether it was up or down.

Don't know if this helps much, but I have to get back to trying to find the speed of the potassium atoms...
 
  • #19
edguy99 said:
The attosecond timescale will (I think) prove to be very important in the history of the electron.

Three years ago I was speaking with a physics prof about the attosecond timescale; funny how it is just "news" now...
With such short pulses, the wavelength/frequency is not very precise. Many experimenters seem to be trying to excite the electron into localized "wave packet" states for study at present. I hope your prediction holds up well over time.

The object you may be thinking of is a Bloch Sphere, animations

Thanks, I'll look at those -- the Bloch sphere is a state-space animation, not so much a physical-space animation, from what I gather so far. I expect to take a few days sorting through the ideas to get a feel for what they are about, and also reviewing a book I bought on introductory QM (2nd ed., by David J. Griffiths) which my college uses in classes for physics majors (e.g., I bought it on a whim to help refresh my memory and extend my BSEE knowledge...).

The electron at this level is more difficult. As I understand it, the Bloch Sphere does not work for it as the magnetic moment is slightly over 2 times too big to work (it takes too much energy for its size to flip it..) and there are many other problems. Electron flipping is the basis of atomic clocks and I think many of the properties have come about to explain this.

OK.

The Lorentz radius is: "In simple terms, the classical electron radius is roughly the size the electron would need to have for its mass to be completely due to its electrostatic potential energy ...
Oh ... m = E/c**2.
but, electrostatic potential energy ? that would be the repulsion/attraction between charges; do you mean the energy inherent in a magnetic dipole moment, or am I overlooking something obvious?
(not that it matters since the result is impractical anyway.)

An important structure to consider is the twistor...

OK, I see. That's going to take a little time to sink in.

I have assumed spin flips do not apply to the SG experiment. There is no reason why the valence electron would not simply align itself with the external field and slowly pull the atom in one direction or the other, depending on whether it was up or down.

That's a problematic assumption.
The angle that the electron's moment makes with respect to the magnetic field is arbitrary when the atom is ejected from the oven. The magnetic field then applies a torque to the magnetic dipole, attempting to align it with minimum energy in the field. The quantized spin must be "chosen"/"observed" by interaction with the magnetic field, and thus "choose" spin up or down. In essence, I envision that it must "FLIP" from a random analog angle to a quantized one, either aligned or anti-aligned (unless it happens to be aligned from the start, which is highly improbable).

look at a classic angular momentum demonstration:
http://www.youtube.com/watch?v=8H98BgRzpOM&feature=related

In the classic example, when the person's hands apply a large amount of torque, the gyro-wheel re-orients its axis of rotation quite rapidly -- however, when a smaller force is applied (e.g., the free-hanging state), the wheel will precess for quite some time with very minimal change in rotation axis.

In the SG experiment, one has to determine which behavior will result from the magnetic field (or to what degree the axis will rotate/flip), given the initial condition in which the orientation of spin is randomly and continuously assigned an angle relative to the magnetic field.

In the demonstration, I am not entirely certain that the wheel would have flipped at all without friction reducing its speed; however, if friction is not the main player in the effect, then a Bohr magneton can also be expected to slowly orient itself toward the magnetic field over time, in addition to precessing.

There is nothing stopping the "valence" electron from orienting itself relative to the nucleus very quickly -- however, during this orientation, I would not expect the rotational axis to change much. That is, the electron will translate easily relative to the nucleus -- but, like a gyroscope, it will not change its rotation "angle" in order to change its position relative to the nucleus.

Don't know if this helps much, but I have to get back to trying to find the speed of the potassium atoms...

It is a good start; thank you; though it does add questions...

BTW:
I think the root-mean-square (non-relativistic) speed, due to temperature, is simply:
v = sqrt( 3*kB*T / m )
kB = Boltzmann constant.
T = temperature (kelvin).
m = mass of a potassium atom.

I am not sure of the statistical spread (variance/deviation) of the speed off the top of my head, but you can at least get an error slope (% speed change vs. temperature change in K or C), which would be useful.
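Plugging in numbers for potassium at the ~190 C oven temperature mentioned earlier (my own quick Python/NumPy calculation, not from the MIT writeup):

[code]
import numpy as np

kB  = 1.381e-23            # Boltzmann constant, J/K
m_K = 39.1 * 1.661e-27     # mass of a potassium atom, kg
T   = 190 + 273.15         # oven temperature, K

v_rms = np.sqrt(3 * kB * T / m_K)
print(v_rms)               # roughly 540 m/s

# error slope: since v ~ sqrt(T), dv/v = dT/(2T); a 5 K drift is ~0.5% in speed
print(0.5 * 5 / T)
[/code]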

--Andrew.
 
  • #20
look at a classic angular momentum demonstration:

I have a gyroscope just like this and I find it much easier to compare with a bloch sphere. I like how it spins with its axis straight up (or slightly askew as it precesses under gravity) whereas the bicycle wheel has a horizontal axis relative to gravity.

BTW:
I think the root-mean-square (non-relativistic) speed, due to temperature, is simply:
v = sqrt( 3*kB*T / m )
kB = Boltzmann constant.
T = temperature (kelvin).
m = mass of a potassium atom.
Many thanks for this, for some reason I thought I had to know much more to calculate the speed.
 
  • #21
DIY: standard deviation of gas molecule speed

edguy99 said:
I have a gyroscope just like this and I find it much easier to compare with a bloch sphere. I like how it spins with its axis straight up (or slightly askew as it precesses under gravity) whereas the bicycle wheel has a horizontal axis relative to gravity.


Many thanks for this, for some reason I thought I had to know much more to calculate the speed.


:smile:

Half way through the video -- they use a circus like stool and have the gyroscope precess with a horizontal orientation...

Hey, edguy -- I just recalled a way to get the variation in molecule speed based on the temperature of a gas:
Doppler broadening theory... the classical treatment will work since the velocities are low.
The Doppler shift based on velocity toward/away from an observer goes like:

[tex]\nu\approx\nu_{0}\left(1+\frac{V}{c}\right)[/tex]

The Doppler-broadened line around any rest emitter frequency (ν0) is approximately Gaussian; written with kB and the atomic mass (to stay consistent with the speed formula earlier), its full width at half maximum (the 1.67 ≈ 2*sqrt(ln 2) factor) goes like:
[tex]\delta_{Hz}\approx\frac{1.67\,\nu_{0}}{c}\sqrt{\frac{2k_{B}T}{m}}[/tex]

For a Gaussian, roughly 68% of atoms fall within 1 standard deviation and about 95% within 2. Plugging into the Doppler shift: the difference in Doppler shift has to equal the broadening, and that reveals the spread in line-of-sight speed for the atoms going through the slit.

[tex]\delta_{Hz}=\nu_{0}\left\{ \left(1+\frac{V_{0}+\Delta V}{c}\right)-\left(1+\frac{V_{0}}{c}\right)\right\} [/tex]
[tex]\delta_{Hz}=\nu_{0}\left(\frac{\Delta V}{c}\right)[/tex]

Which, substituting in the broadening formula above, yields:
[tex]\nu_{0}\left(\frac{\Delta V}{c}\right)=\frac{1.67\,\nu_{0}}{c}\sqrt{\frac{2k_{B}T}{m}}[/tex]
[tex]\Delta V_{FWHM}\approx1.67\sqrt{\frac{2k_{B}T}{m}}\approx2.35\sqrt{\frac{k_{B}T}{m}}[/tex]

Equivalently, the standard deviation of the line-of-sight speed is sqrt(kB*T/m), so about 95% of the atoms lie within:
[tex]\Delta V_{2\sigma}\approx\pm2\sqrt{\frac{k_{B}T}{m}}[/tex]

An anamanipia walah!... if it is correct... I ought to re-read the MIT paper and see if it is in there someplace.
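Plugging potassium at ~190 C into that spread (again my own Python/NumPy numbers, nothing more):

[code]
import numpy as np

kB, m_K, T = 1.381e-23, 39.1 * 1.661e-27, 463.0   # SI; T ~ 190 C
sigma_v = np.sqrt(kB * T / m_K)    # std. deviation of one velocity component
fwhm_v  = 2.355 * sigma_v          # the 1.67*sqrt(2*kB*T/m) full width at half max
print(sigma_v, fwhm_v)             # ~310 m/s and ~740 m/s
[/code]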
 
  • #22
DIY:SAM First try at spin χ(t) state EH or B field only coupling

OK, this post is about what I have learned from edguy's GOOD suggestion of studying Bloch spheres...
If you see any critical mistakes, let me know... please... Fortunately, the information about Bloch spheres was easier to find than the Pauli spin matrix equations, which is what this is really about, I think. MRI etc. use these equations -- and with a little bit of work I think they will apply nicely to time-step based simulation; their form has to be changed to make program code, but that's about it.

The Bloch sphere describes qubits, which are two-state items, just like an electron... hmm, I wonder -- are they using electrons?...
The formulas used for a Bloch sphere also represent the rotational disturbance of an electron. I am just getting used to how these spin computations work, as I don't generally use spin in EE semiconductor calculations... so this is new to me, even if it's reaaaaly simple to everyone else in the thread...

The Bloch spheres are quite helpful for visualization -- edguy -- but looking at the website you linked to, I noticed a couple of things. First, the green point/vector appears to rotate independently and acts somewhat like a time variable; the remaining two dots rotate in a plane defined as the normal to the green line. As a visualization aid, staring at the animations for a while helped me grasp the relationship of the two state variables in a Bloch sphere, and of particular help was the very first animation, which plotted the height of the blue dot, showing that the complicated shape on the 3-D sphere is nothing more than a constant rotation in two planes.

Unfortunately, IMHO, the external-aid-to-precession (Larmor) animations were very unclear to me -- I couldn't, for example, tell the difference between those spheres and the non-precessing examples on the top row of animations; they look the same. The "plane" added to the animation doesn't make any sense to me yet; I'm still reading about the spheres on various sites.

This is what I know about the spin mathematics so far, since no one has answered the questions yet; perhaps it will help explain what I am after and get some people with basic knowledge to give me a little assistance.

There are two possible "stationary" (eigen) states, UP and DOWN, and the general solution before observation is a linear superposition of all possible states, according to differential equation theory. For simplicity, since there is no spatial variable to worry about -- just a "point" -- the functions of the state space can be constants; and since there are two stationary states, a vector is traditionally used to represent them:

[tex]|spin+>=\left[\begin{array}{c}
1\\
0\end{array}\right],|spin->=\left[\begin{array}{c}
0\\
1\end{array}\right][/tex]

There are only two rows in a spin state variable, and coincidentally it takes two angles to make a direction -- θ,φ -- in 3D space.

I think the two-state quantization (HUP) issue -- regardless of the 3D direction of "spin" -- might even make sense classically/relativistically once one tries to use Einstein's concept that a magnetic field is a time-delay effect of the electrostatic field... for in that case there is no such thing as a true magnetic moment: two dimensions of Cartesian space are used to make the equivalent of something moving, in order to generate the E fields representing "spin", and that causes those two dimensions to have time-varying magnetic fields when one tries to measure with respect to those axes (e.g., normal to the spin axis/vector, which is the orientation of the Bohr magneton's N and S poles). Since it varies (predictably, but affected by even the slightest disturbance from an external magnetic field...), the result looks quite random for all practical classical-physics purposes. For simulation purposes I might even make it partially random, to mimic the small magnetic-field fluctuations from atoms that would exist in reality but don't exist in the simulator due to space/computation constraints.

The Bloch sphere accommodates this by defining the state space as the surface of a sphere -- and surface area goes as dimension squared, not cubed. The energy of the electron is fixed in its "spin" sense -- there is no magnitude to change -- and therefore no physical significance to the variable known as radius either. The system is purely represented by the surface of a fixed-radius sphere.

So, glossing over the above issue for now, the two spin states +1/2 and -1/2 are called vectors (orthonormal basis vectors even??) of the spin space, or sometimes "spinors".

... and like the case of the Schrodinger equation, where spatial position has a state ψ(x,y,z,t), there has to be a sub-state for spin (I need a different name, and χ seems to be what people use on the net?) -- χ(φ,θ,t), or χ(Xc,Yc,Zc,t) in a Cartesian version, which is what the Bloch sphere uses. So one needs to construct a Hamiltonian for the spin state (χ), and then operators can be applied to extract information about the probability of detecting/interacting with the spin magnetically.

The functions applied turn out to be simple matrix operators; I learned they are called the Pauli spin matrices (strictly, the matrices below are the spin operators -- the Pauli matrices scaled by ħ/2):
[tex]\varsigma_{x}\equiv\left[\begin{array}{cc}
0 & \frac{\hbar}{2}\\
\frac{\hbar}{2} & 0\end{array}\right],\varsigma_{y}\equiv\left[\begin{array}{cc}
0 & -i\frac{\hbar}{2}\\
i\frac{\hbar}{2} & 0\end{array}\right],\varsigma_{z}\equiv\left[\begin{array}{cc}
\frac{\hbar}{2} & 0\\
0 & -\frac{\hbar}{2}\end{array}\right][/tex]

What makes each one specific to the axis chosen eg: x,y, and z ... I haven't the slightest clue...
The equations look like one axis was picked arbitrarily, (like the green dot in the Bloch sphere diagrams?) and the rest were based on that one.

The only problem remaining in making a state equation is the coupling of electromagnetics to mass so that the EM field will interact properly; in part that turns out to be the classic gyromagnetic ratio of charge to mass, a constant called γ.

So, doing a very simple example -- eg: an electron in a constant magnetic field oriented in Z:

(note: This does not include Bloch sphere representation which changes the variable space of rotation to remove a redundancy in vectors which are 180 degrees from each other. I will have to think about what I learned concerning dividing angles by two in order to remove the redundant half the surface of a unit vector space, sphere... and the concept that 720 degrees is required to come to a TRUE identical position in vector space... not necessarily Bloch vector space, as I haven't figured that out yet.
http://en.wikipedia.org/wiki/Orientation_entanglement )

The energy of an electron in such a field is −γB_z times its spin projection ς_z (the classical −μ·B), so the Hamiltonian is writable by inspection, and the Schrodinger equation for an electron (arbitrary initial state) with the field pointing in the Z direction is:

[tex]\jmath\hbar\frac{\partial\chi}{\partial t}=\left[-\gamma B_{z}\varsigma_{z}\right]\chi[/tex]
(Wow! I love this TeX editor I picked up -- LyX -- so much easier to express things, and so fast to do offline!)

The time-evolving solution for this spin Schrodinger equation is simply:

[tex]\chi(t)=\left[\begin{array}{c}
a\, e^{\frac{\jmath\gamma B_{z}t}{2}}\\
b\, e^{-\frac{\jmath\gamma B_{z}t}{2}}\end{array}\right][/tex]

To compute an observable -- say, the expectation value of the spin along y -- I do:

[tex]\left\langle S_{y}\right\rangle =\chi(t)^{\dagger}\,\varsigma_{y}\,\chi(t)[/tex]

Now this is where I get stumped...

For the change in position, according to the Schrodinger equation, one has to construct a wave packet to determine the speed of the item in question. Yet I haven't seen anyone do that for spin; they generally just equate γB with the Larmor frequency, provided a and b are chosen such that precession occurs. But I don't see how that is arrived at; can anyone point out where to look for this?

Given a uniform magnetic field, the electron will never flip -- nor even align itself with the field.
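To convince myself of that numerically (a minimal Python/NumPy sketch of my own, using arbitrary units and values rather than physical constants), stepping the two-component χ forward with the Hamiltonian above shows ⟨S_x⟩ oscillating at exactly γB_z -- which I take to be the Larmor frequency everyone quotes:

[code]
import numpy as np

hbar = 1.0
gamma, Bz = 1.0, 2.0                          # arbitrary illustration values
Sz = (hbar / 2) * np.array([[1, 0], [0, -1]], dtype=complex)
Sx = (hbar / 2) * np.array([[0, 1], [1, 0]], dtype=complex)
H = -gamma * Bz * Sz                          # the Hamiltonian used above

chi = np.array([1, 1], dtype=complex) / np.sqrt(2)   # spin initially along +x
dt, steps = 1e-3, 20000
sx = []
for _ in range(steps):
    chi = chi + (-1j / hbar) * (H @ chi) * dt  # crude explicit step of the SE
    chi /= np.linalg.norm(chi)                 # renormalize to curb drift
    sx.append(np.real(np.conj(chi) @ (Sx @ chi)))

# <S_x> goes as cos(gamma*Bz*t); estimate the frequency from the zero crossings
t = np.arange(steps) * dt
crossings = np.where(np.diff(np.sign(sx)))[0]
omega = 2 * np.pi / (2 * np.mean(np.diff(t[crossings])))
print(omega, "vs", gamma * Bz)                 # both come out ~2.0
[/code]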

Next:
I can see how the gradient of the B field would allow one to calculate a tendency for the electron to move in x,y,z as a whole. The energy of the system determines the motion, so for the spin I have to treat an inhomogeneous magnetic field like B(z) = Bz + kz, i.e., an energy term γ(Bz + kz); I will ignore the inhomogeneity along the other axes -- they would be treated analogously. In SG, the net z motion is the one of interest.

But then, that means I am going to end up with a z dependence inside the time evolution... hmm, can I do that?
The result would be, if possible:
[tex]\chi(z,t)=\left[\begin{array}{c}
a\, e^{\frac{\jmath\gamma B_{z}t}{2}}\, e^{\frac{\jmath\gamma kzt}{2}}\\
b\, e^{-\frac{\jmath\gamma B_{z}t}{2}}\, e^{-\frac{\jmath\gamma kzt}{2}}\end{array}\right][/tex]

If that is right, I can also do the same for the electric field of the translating electron using ψ, where normally one plugs the entire voltage field in...

[tex]H=-\frac{\hbar^{2}}{2m}\nabla^{2}+V[/tex]

If that is possible, it has the advantage of allowing local calculations of the energy to show how the state will evolve in the short term, and may eliminate the need to know the entire potential field at once; clearly the spin solution could be adapted to integrate E on the fly and compute the changes to the coefficients... that could be an excellent approximation!

And finally, a philosophical/open minded thought/question ... hoping for computational shortcuts...

One of the things I began to think about was the meaning of tunneling. I came across a derivation of the Heisenberg uncertainty relation for time and energy -- and what surprised me is that I can't find any justification for saying that a particle may "borrow" energy it wouldn't classically have so long as it pays it back, which is the explanation of tunneling I often hear.
Rather, the Δτ seems to indicate how long a system/state must be in operation before a significant (50%?) change in the state of the system can be detected -- it doesn't seem to have anything to do with "borrowing" energy for a time, no matter how I try to interpret the parts of the equation.

The question, then, becomes: what does it mean to "tunnel" into classically forbidden areas? I am beginning to wonder whether it has something to do with an idea equivalent to the "width" of an electron or other particle -- e.g., the reason one detects it in a forbidden energy region is simply that not "all" of it was there, so it didn't require that much energy to penetrate as far as it got -- but enough of it was available in the forbidden region to be grabbed by a detector. Sort of a Heisenberg-uncertainty definition of radius. Has anyone come across a similar idea in the literature? I don't intend to invent a new formula, but I was thinking that if such an analogy were useful at all, it might suggest a change of variables that simplifies the integration equations for linear motion... maybe not, though; the approach might complicate them instead.

OK. Anyone care to share some thoughts/enlightenment about these very basic QM issues?
:smile:
 
  • #23


Half way through the video -- they use a circus like stool and have the gyroscope precess with a horizontal orientation...

You are correct. I would like to expand on what I feel are important differences between the two videos: http://www.youtube.com/watch?v=8H98B...eature=related

For a bloch sphere, the axis horizontal to the force (bicycle wheel) showing precession is the most "unstable" form. Notice in both videos, how much motion the entire gyroscope is undergoing in this position (horizontal axis) in order to maintain balance.

What I am trying to say here (in a poor selection of words), is that if you were to stop the horizontal rotation (precession) of the axis on the bicycle wheel, it would compensate by moving the axis to a vertical position, either up or down depending on the direction of spin. These are the natural "up" and "down" states of a bloch sphere when placed in an external force. The bicycle wheel is in a very stable position if you let it fall over and spin on the ground.

The Larmor frequency animations (http://www.animatedphysics.com/videos/larmorfrequency.htm, rows 2 and 3) show how precession works in two different field strengths, with the precession speed (Larmor frequency) being the same no matter the tilt of the sphere against the external field. (It is perhaps not clear that the external field is vertical in the side view of the Bloch sphere; also, it represents a very "strong" field, as normally the precession rate would be in the range of 100k times slower than the spin in an MRI machine -- and you may have to click a couple of times to get them to run right.) Again, the most unstable view is #3-C, where the Bloch sphere has a high tilt relative to a strong field and hence a lot of movement is going on. The plane and the axis pointer represent the plane of the spinning wheel (the green axis and the plane of the spinning green dot); both the plane and the axis are what precess. The toy gyroscope is great (I got mine at the Smithsonian in Washington DC), and it is very helpful to hold it and feel the forces involved. It is important to note that when the Bloch sphere (or toy gyroscope) is flipped from up to down, the spinning motion never stops or reverses.

I think bloch sphere calculations are valid for all proton/neutron MRI calculations (including modeling for t1/t2 relaxation times).

[tex]\Delta V_{FWHM}\approx1.67\sqrt{\frac{2k_{B}T}{m}}\approx2.35\sqrt{\frac{k_{B}T}{m}}[/tex]
Equivalently, about 95% of the atoms lie within:
[tex]\Delta V_{2\sigma}\approx\pm2\sqrt{\frac{k_{B}T}{m}}[/tex]

An anamanipia walah!...

Great connection - thanks, all that's left is the rate (and distribution) that the atoms come out of the oven. Must be some kind of function based on the size of the slits/holes that they have to get through or...
 
  • #24
edguy99 said:
... Notice in both videos, how much motion the entire gyroscope is undergoing in this position (horizontal axis) in order to maintain balance.
Aye.
What I am trying to say here (in a poor selection of words)...
Yes, I can see that there is a natural position. In an EM field the reference direction is magnetic north, not gravity -- but you are using an analogy. I can see a useful convention of taking the field direction as the reference -- but since magnetons never "slow down," flipping has to involve some stimulation...
Also, gravity is purely attractive, whereas magnets are both attractive and repulsive; so there is a maximum-instability orientation along the same vertical axis as the most stable one, just sign reversed.
But I have no problem calling the direction of greatest stability "up," in deference to your convention.

The plane and the axis pointer represent the plane of the spinning wheel (green axis and plane of the spinning green dot). Both the plane and the axis are what precess.
OK. That's what I was missing. The actual visual rotation of the three dots doesn't look distinctive to the eye, so that's my critique of the image. The author of the graphics would do well to leave a trail, as image 1,1 does for the "tip" of the spinning axis, which would make the precession distinctive eye candy.
It was just hard for me to see.

I think bloch sphere calculations are valid for all proton/neutron MRI calculations (including modeling for t1/t2 relaxation times).

A Bloch sphere models any two state quantum effect, eg: spin 1/2 "particles".
t1/t2 reaction times are based on thermal kinetics -- how long it takes to "wiggle" a large number of protons loose from a precessing state, to an aligned one as neither protons nor electrons will stop precessing and align until agitated in the proper way -- eg. in sync with the spin rate to make the flippin force to not cancel out. Thermal agitation does this randomly -- but it can be induced by an EM RF wave; eg: I am sure you are correct -- the frequencies used are in the low MHz which is a proton moment, not electron, for their frequency differ by ~ the mass ratio of electron to proton, and the proton is the more massive and hence lower frequency. The chemical bonds (electrons) would be unreliable for imaging anyway, so using the Nucleus makes sense. Bloch spheres are mentioned in modeling NMRI (Nuclear MRI).

I believe you about t2, although I am not quite certain of what is happening; anyhow, for my present purposes that isn't important. I'll just think of it as something similar to Barkhausen noise...an avalanche triggered by one tiny pop...

Great connection - thanks, all that's left is the rate (and distribution) that the atoms come out of the oven. Must be some kind of function based on the size of the slits/holes that they have to get through or...

Well, yes, that's what's left -- and now checking the MIT paper, p. 6, they have the same form as my velocity, v(T,m) = sqrt(...), with one exception: the constant is 2, not 3. You appear to know more than you let on... :smile: The formula I gave is the root-mean-square speed of a gas in thermal equilibrium; the MIT form, sqrt(2kB*T/m), is the most probable speed -- the single mode of the Maxwell-Boltzmann distribution, "most atoms." There is roughly a 20% difference in speed between the two -- not a show-stopper error. The most probable speed also makes more sense here, since the goal counts the atom distribution, not the speed, once the experiment is over -- so the "2" is more representative of the SG image. For the speed distribution, a Gaussian with the standard deviation is good enough, unless someone wishes to generate a more accurate value.

See p. 6
http://web.mit.edu/8.13/www/JLExperiments/JLExp_18.pdf

Reading forward to p. 8, though, the classical treatment is oversimplified -- it will not produce the graph they show. Or else one has to admit that the results of SG do not demonstrate quantization along the Z axis (Y on the photos) in the sense claimed. As an EE, I computed the B field, and my suspicion was confirmed: these texts dumb down the classical physics. Notice that MIT does NOT predict motion in the X direction -- and one might easily not notice that the solution only predicts Z motion. But if one only plots Z, then one can't KNOW where on X that Z will appear -- and thus ALL Z values are potentially suspect as valid values... (The plot just became practically "classical" in the sense they say it isn't; but a little more analysis is required.)

The magneton does NOT reorient itself into the magnetic field (except when atoms collide) -- it does precess -- but the TRANSLATIONAL motion of the magneton depends on the gradient of the magnetic field, and the gradient must be as large in X as in Z for the conservation laws to hold. There is no place in the magnetic field where the X portion of the gradient is zero except exactly on the Z axis (a local extremum in the X direction).
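The conservation law I am leaning on is just Maxwell's no-monopole condition; with the field essentially uniform along the beam direction (y), it forces the transverse gradients to balance:

[tex]\nabla\cdot\vec{B}=0\;\Rightarrow\;\frac{\partial B_{x}}{\partial x}\approx-\frac{\partial B_{z}}{\partial z}[/tex]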

That correlates with the "spike" in the actual SG experiment -- for the most stable direction is oriented toward the Z axis, and atoms from both X- and X+ coordinates will preferentially drift toward X=0 (even without a collision-caused reorientation), the center, creating a single place with more atoms than anywhere else. (The moment goes from aligned to anti-aligned as the atom crosses X=0, so it will not pass off the other side equally -- the motion is rectified non-linearly.)

IF an atom were oriented off center and anti-aligned to the field, its deflection in the X direction could reasonably be expected to exceed its deflection in Z, so that atoms which classically do not climb as high in Z will go much farther in X. The chance of an atom being exactly on the Z axis is zero, so it is a fallacy to argue that the on-axis solution is the predictor -- it isn't (I was suckered in too). I am not going to bother with an analytical solution right now -- I may do a classical simulation later to verify what I have argued -- but just note that any solution which does not predict X deflection can't claim to fully represent the classical Z distribution; e.g., the higher density at X=0 is not predicted by MIT's lab writeup. Precession does NOT cancel the X effect out in a gradient.
Also, the SG experiment DOES show quantization of the magneton -- it just does not prove that classical predictions would not ALSO predict quantization. To be blunt: we plug the QUANTIZED value μB into the classical equation -- THUS IT HAD BETTER BE quantized! Actual pictures again...

http://www.kcvs.ca/martin/phys/phys243/labs/sglab/stern_gerlach.html
http://plato.stanford.edu/entries/physics-experiment/figure13.html

The Bloch sphere (BS) is a different matter. I don't know how to use it in the Schrodinger equation (SE) as it is formulated -- and I haven't found any examples. The primary benefit of the Bloch sphere is to remove redundant values from the [x,y,z] direction vectors -- vectors which are π radians (180 degrees) apart are the same vector save for sign. The point that this redundancy exists and is undesirable is well taken; and using the Bloch sphere to fix it is intriguing -- but I appear to be redoing someone else's work, which is what I set out to avoid. I want to either have a solution in hand or blaze a new path.

On the X and Y axes, the traditional locations of spin +1/2 and spin -1/2 are formulated with the complex vector. Oddly, the angle division by two used in the BS does not reduce redundant vectors -- rather, it places the spin +1/2 and spin -1/2 vectors 180 degrees apart on the sphere, and extends the range of angles the sphere represents by a factor of 2, e.g., 720 degrees. But that's OK; I like a physical representation... it is more intuitive *if it works*. The orientation-entanglement page I read still seems to hold some promise, so I am open to the BS.

Do any of you know of a worked example where it is used *IN* solving the problem -- and *NOT* just as a representation after the fact? eg: Where one plugs a Bloch sphere variable in for χ or ψ and solves the eqn?

--------------- NOW FOR SOMETHING TOTALLY DIFFERENT --------------

I have a plot of a well to make, so I thought I would lay out my first attempt's math, now a month old.
There is no point in going to all the trouble of working out eigenstates for a numerical simulator.
So, let's start with the SE, raw.

[tex]\frac{\partial}{\partial t}\Psi=-\jmath\hbar\nabla^{2}\Psi+V(x,y,z,t)\Psi[/tex]
The first thing to do is get rid of the complex exponential exp(f(x,y,z,t)). Mathematically, the exponential is costly to compute and often quite inaccurate in C implementations on Intel processors (whatever causes that, I don't care). So we're going for a direct time-step (quadrature) integration to solve the equation from ground zero -- let's see how far I can go with it. Reduce the problem to real functions A and B.
[tex]\Psi=A(x,y,z,t)+\jmath B(x,y,z,t),\Psi^{2}=A(x,y,z,t)^{2}+B(x,y,z,t)^{2}[/tex]
[tex]A+\jmath B=A_{0}+\jmath B_{0}+\intop_{T_{0}}^{t}-\jmath\hbar\sum_{d:xyz}\left\{ A_{dd}+\jmath B_{dd}\right\} +V\left\{ A+\jmath B\right\} [/tex]
Dropping the Σ for brevity, break into two simultaneous equations of time.
[tex]\left[\begin{array}{cc}
A, & B\end{array}\right]=\left[\begin{array}{cc}
\intop_{T_{0}}^{t}-\jmath\hbar\jmath B_{dd}+VA & ,\intop_{T_{0}}^{t}-\hbar A_{dd}+VB\end{array}\right]+\left[\begin{array}{cc}
A_{0} & B_{0}\end{array}\right][/tex]
[tex]\left[\begin{array}{cc}
A, & B\end{array}\right]=\left[\begin{array}{cc}
\intop_{T_{0}}^{t}\hbar B_{dd}+VA & ,\intop_{T_{0}}^{t}-\hbar A_{dd}+VB\end{array}\right]+\left[\begin{array}{cc}
A_{0} & B_{0}\end{array}\right][/tex]

The last equation is suitable for numeric integration. Choosing the initial condition is the only difficulty.
 
  • #25
Hi, I'm a new member. The posters here seem extremely well versed on the subject that I am interested in asking some questions about. My physics knowledge comes mostly from an old Serway book I used when I studied some Engineering. I am very interested in the area of atomic and subatomic interactions involving EMR. I do not even consider my questions to be based on a reasonable theory, that is why I am asking these questions.
I am trying to garner a practical understanding of electron spin. I am thinking that the atomic model really should include photons. Is the electron pairing explained by a reciprocating photon/s that pull/s the electron into an orbital path at the same time as changing (photon) direction? If this is the case, what is the nature of the interaction with the nucleus, and is there photonic polarisation and inversion occurring? Is a photon a record of a quantum event manifested as a moving spatial distortion?
Am I on the right track here? I would value your opinions and any directions towards good texts or sites. Thank you.
 
  • #26
DIY:SAM 1D well, attempt #2, no spin.

Hi Radguy, I can't answer your question(s) fully (yet) -- at least, I would be forced to give more of a Socratic answer than a useful one at this point -- but feel free to do a little web searching and compare notes with what we're talking about. I expect to start posting (numerical) solutions that test what Schrodinger's equation predicts before my kid goes back to chem class this fall, and I will be comparing how the simulator performs against known cases -- once I have that down, then some hypothetical questions are in order.

So;
First, let me correct my total goof in the last post: I wrote down the first modification of Schrodinger's equation wrong (I seem to do a lot of that lately...), and so all that followed is, well, wrong. (My apologies to anyone who was confused; dealing with illness is a pain on my part -- I lose concentration easily, and peer review would be really useful even if you can't answer my questions...)

The probability of locating an item in space, without the spin matrix modification, and including a manipulation to one possible form of time-step integration for simulation is:
(given)

[tex]\jmath\hbar\frac{\partial}{\partial t}\Psi=-\frac{\hbar^{2}}{2m}\nabla^{2}\Psi+V(x,y,z,t)\Psi[/tex]

[tex]\Psi=A(x,y,z,t)+\jmath B(x,y,z,t),\Psi^{2}=A(x,y,z,t)^{2}+B(x,y,z,t)^{2}[/tex]

(Then transforming to a time based integration to solve for future times...)

[tex]\frac{\partial}{\partial t}\Psi=\jmath\frac{\hbar}{2m}\nabla^{2}\Psi-\jmath\frac{V(x,y,z,t)}{\hbar}\Psi[/tex]
[tex]A+\jmath B=A_{0}+\jmath B_{0}+\intop_{T_{0}}^{t}\jmath\frac{\hbar}{2m}\sum_{d:xyz}\left\{ A_{dd}+\jmath B_{dd}\right\} -\jmath\frac{V}{\hbar}\left\{ A+\jmath B\right\} [/tex]

Dropping the sigma for brevity, again,

[tex]\left[\begin{array}{cc}
A, & B\end{array}\right]=\left[\intop_{T_{0}}^{t}\begin{array}{cc}
-\frac{\hbar}{2m}B_{dd} & +\frac{V}{\hbar}B\end{array},\intop_{T_{0}}^{t}\frac{\hbar}{2m}A_{dd}-\frac{V}{\hbar}A\right]+\left[\begin{array}{cc}
A_{0} & B_{0}\end{array}\right][/tex]

Although this is just as integrable by quadratures as the last attempt -- which was wrong, sigh -- the terms' magnitudes are separated by a factor of Planck's constant each, which means the decimal spread between magnitudes is up to (10^-34)^2, i.e., about 68 orders of magnitude. 128 bits / ~3.5 bits per digit = ~36 digits (if it were all mantissa, and it isn't...). I need to put a magnitude on the spatial derivative for common cases, e.g., hydrogen -- and also a magnitude on the typical potential energy V(...) -- to see whether this is recoverable, but it doesn't look good (and it would be worse in Java...).
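To see whether the split integration above behaves at all, here is a minimal 1-D Python/NumPy sketch (my own test harness, not the simulator itself), in units where ħ = m = 1 so the magnitude-spread problem goes away. A and B are updated in a staggered (leapfrog) fashion, which keeps this explicit scheme stable for small enough time steps. The grid size, dt, and the initial Gaussian packet are illustration values only.

[code]
import numpy as np

N, dx, dt = 400, 0.1, 0.002
x = np.arange(N) * dx
V = np.zeros(N)                    # free region...
V[:10] = 50.0                      # ...with crude soft walls at both edges
V[-10:] = 50.0

# initial wave packet: Gaussian with momentum k0, normalized
k0, x0, w = 5.0, N * dx / 3.0, 1.0
psi = np.exp(-((x - x0) ** 2) / (2 * w ** 2)) * np.exp(1j * k0 * x)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)
A, B = psi.real.copy(), psi.imag.copy()

def f_dd(f):
    # second spatial derivative (the "dd" terms in the post); periodic ends
    return (np.roll(f, 1) - 2 * f + np.roll(f, -1)) / dx ** 2

for _ in range(1500):
    A += dt * (-0.5 * f_dd(B) + V * B)   # dA/dt = -(hbar/2m) B_dd + (V/hbar) B
    B += dt * (+0.5 * f_dd(A) - V * A)   # dB/dt = +(hbar/2m) A_dd - (V/hbar) A

prob = A ** 2 + B ** 2
print("norm:", np.sum(prob) * dx)        # stays close to 1
print("<x>:", np.sum(x * prob) * dx)     # packet has drifted to the right
[/code]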

Hmm...
I was looking at some 1-D examples, like the simple harmonic oscillator (SHO) -- and there is definitely a classical modulation which can be factored out of the answer of Schrodinger's equation, which is a composite of interference and momentum effects on probability. If one looks at the classical probability distribution -- which is simply the time an item spends in a differential volume divided by the total time spent over all locations -- the *average* modulation envelope of the Schrodinger solutions matches the classical one. See Stephen Gasiorowicz, "Quantum Physics," John Wiley & Sons, 1974, for an excellent graph of the classical vs. QM probability for |ψ_100|^2. I may be getting that book...

The fact that the classical probability can be recovered merely by computing NormConst/sqrt(2(E-V)/m) suggests that SE really incorporates redundant information. There is a problem in the "forbidden"/"tunneling" energy locations, but in general -- one may divide the solution |ψ|² of Schrodinger's equation by the classical probability -- and one would be left with purely interference modulation. I expect there may be a way to modify Schrodinger's equation to solve for solutions absent the classical probability -- having only interference effects. Well, lots to do ... and little help...
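
Here is a small sketch of that division for the SHO (my own toy check, not from the thread: dimensionless units hbar = m = omega = 1, an arbitrarily chosen n = 10 state, staying inside the classically allowed region). Dividing |ψ_n|² by the classical density 1/(π sqrt(2E_n - x²)) leaves a factor that just oscillates around 1, i.e. the interference part:

[code]
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi, sqrt

n = 10                                   # which SHO eigenstate to inspect
E = n + 0.5                              # E_n = (n + 1/2) in these units
amp = sqrt(2.0 * E)                      # classical turning-point amplitude

x = np.linspace(-0.95 * amp, 0.95 * amp, 2000)   # stay inside the allowed region

# psi_n(x) = pi^(-1/4) / sqrt(2^n n!) * H_n(x) * exp(-x^2 / 2)
coeffs = np.zeros(n + 1)
coeffs[n] = 1.0
psi = (pi ** -0.25) / sqrt(2.0 ** n * factorial(n)) * hermval(x, coeffs) * np.exp(-x ** 2 / 2.0)

qm_density = psi ** 2
cl_density = 1.0 / (pi * np.sqrt(2.0 * E - x ** 2))   # time spent in dx / total period

ratio = qm_density / cl_density          # should oscillate around ~1: the "interference" factor
print("mean of |psi_n|^2 / P_classical:", ratio.mean())
[/code]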
Radguy said:
I am thinking that the atomic model really should include photons. Is the electron pairing explained by a reciprocating photon (or photons) that pulls the electron into an orbital path at the same time as changing (photon) direction? If this is the case, what is the nature of the interaction with the nucleus, and is there photonic polarisation and inversion occurring? Is a photon a record of a quantum event manifested as a moving spatial distortion?
Am I on the right track here? I would value your opinions and any directions towards good texts or sites. Thank you.

Radguy,
I don't yet know much about how electrons and photons interact -- (that is, the more I learn, the less I know...) but the Pauli exclusion principle, and the reason that two electrons may fit in the same orbital, is related to their being two-state particles. What happens is that there are two states, spin up and spin down, and one can make a wave-function which is a sum of these two states -- so there at least *is* a state with both spin up and spin down in it. However, if one tries to put two electrons into exactly the same state, calculating the probability will yield exactly "0". In other words, the Pauli exclusion principle comes directly from trying to compute an expectation value. Unfortunately, all information about what on the EM scale might be involved in this is "swept under the rug" (if there is one).
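
A tiny toy example of that "probability is exactly 0" statement (my own illustration, not from the thread): antisymmetrize a two-particle state out of two single-particle states; the amplitude vanishes identically when the two states are the same.

[code]
import numpy as np

def antisymmetrize(a, b):
    # two-particle amplitude psi(1,2) = a(1) b(2) - b(1) a(2), i.e. a 2x2 Slater-style determinant
    return np.outer(a, b) - np.outer(b, a)

up   = np.array([1.0, 0.0])   # spin-up basis state
down = np.array([0.0, 1.0])   # spin-down basis state

print("norm, different spins :", np.linalg.norm(antisymmetrize(up, down)))  # nonzero: allowed
print("norm, identical states:", np.linalg.norm(antisymmetrize(up, up)))    # exactly 0: excluded
[/code]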

The acceleration of an electron with spin into translational motion was handled just a few posts back (Jul 11 2010, 08:31 AM) -- I haven't had anyone verify whether my conjecture is right or not (if it isn't clear, ask specific Q's and I will clarify as best I can...) -- but it looks to me as if the effect of spin is to cause momentum based on a magnetic gradient. My flash of insight was that the Schrodinger equation is based on the Energy of the particle -- so if one plugs in the Energy variation, one computes the change in momentum. A homogeneous magnetic field changes the Energy by a fixed amount -- so it will not cause translational motion -- but a gradient will.
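
To put a number on that (my own back-of-envelope; the field and gradient values below are just assumed for scale, and I use a hydrogen-atom mass): a uniform field only shifts the energy by μ·B, while a gradient gives a force μ_z·dB/dz.

[code]
mu_B  = 9.274e-24      # J/T, Bohr magneton (scale of the spin magnetic moment)
B0    = 1.0            # T,  uniform field strength (assumed)
dBdz  = 10.0           # T/m, field gradient (assumed, purely illustrative)
m_H   = 1.67e-27       # kg, roughly a hydrogen atom

energy_shift = mu_B * B0       # J  -- the same everywhere, so no net force
force        = mu_B * dBdz     # N  -- only the gradient pushes the particle
accel        = force / m_H     # m/s^2

print(f"energy shift in the uniform field: {energy_shift:.2e} J (no translation)")
print(f"force from the gradient:           {force:.2e} N")
print(f"acceleration of an H atom:         {accel:.2e} m/s^2")
[/code]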

So, since this is the only interaction spin has with the EM field (it is explicitly missing V(x,y,z,t)) -- I would assume that the spin pairing is because one electron, by moving, creates a magnetic gradient -- and the other one is affected by it -- and vice versa. I know this isn't much of an answer -- but it's the best I have, given the lack of guidance from more advanced physics/engineering students... This stuff was never used in my semiconductors classes...

In classical physics, particles which accelerate radiate energy (photons). How it is that two electrons can be accelerated by each other and *NOT* radiate energy is something I have never learned. In the case of the 1S hydrogen state, there is no momentum from "orbiting" the nucleus -- the electron essentially "hovers" in defiance of the electric field. The same ought to be true of the Helium atom with two electrons -- that is one of the atoms I wish to simulate; hopefully when I get there, we can both get a glimpse of what this mystery might mean in terms of classical EM fields...
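
For reference, the classical numbers behind that puzzle are easy to estimate (my own back-of-envelope, SI units): Larmor's formula P = q²a²/(6πε0c³), with the acceleration an electron would classically have at one Bohr radius from a proton, radiates the 13.6 eV binding energy away in a tiny fraction of a second -- which is exactly why the non-radiating atom is such a mystery classically.

[code]
from math import pi

e    = 1.602e-19      # C
m_e  = 9.109e-31      # kg
a0   = 5.292e-11      # m
eps0 = 8.854e-12      # F/m
c    = 2.998e8        # m/s

accel = e ** 2 / (4 * pi * eps0 * m_e * a0 ** 2)        # Coulomb force / mass at r = a0
power = e ** 2 * accel ** 2 / (6 * pi * eps0 * c ** 3)  # classical Larmor radiated power

binding = 13.6 * 1.602e-19                              # J, hydrogen ground-state binding energy
print(f"classical acceleration  : {accel:.2e} m/s^2")
print(f"Larmor radiated power   : {power:.2e} W")
print(f"time to radiate 13.6 eV : {binding / power:.1e} s")
[/code]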

--Andrew.
 
Last edited:
  • #27
Thanks Andrew, that is a great reply! I think you are onto something. Geometric modeling of molecules is important, and I would expect regularity and predictability of shape, despite not being able to specifically locate any electron. An electron pathway around a molecule should be predictable if externally applied fields are ignored. I think that your nutting-out here is capable of some very meaningful conclusions. You could be onto the best physics code written.
My amateur interpretation is this:
I would say that orbiting electrons do not emit radiation externally (i.e. outside the pair and nucleus) unless externally influenced. The Universe's glue wouldn't work if it were lossy.
I would expect the case of two electrons in the same place to be an uncertain mathematical limit like 0/0, different from 0. Before they get to this point, obviously some pretty strong countering of the EM repulsion would be required to make two electrons touch, let alone merge. I would guess that a merge would probably require infinite energy.
A constant magnetic field is of course impossible. The variable nature of individual photons means that they should operate variably, with a sine gradient. We know that photons are fully absorbed in collisions, although lower-frequency photons can be created as a reaction. I am suggesting that within the orbiting pair, the photon is nearly perfectly reflected back and forth by the electrons, and possibly inverted by the nucleus, which is probably always intersected by the photon's trajectory manifold (assuming a photon is at least 2D).
The photon (maybe a polarised one?) is the smallest EMR and does seem to be inherently wavelike internally as well as when grouped. If an electron-photon interaction takes place, it will occur completely, even if only the edge of the photon manifold is encountered. The diameter of the photon's manifold could explain the governor-like stability of the electron shells. Random radiation should not perturb the shells very much, but other photons will be exchanged between electrons in the shells, which distributes the electrons away from each other.
This is just a quick response Andrew, I need to look up a few references through your posts. I am useless at matrix methods (I was taught this badly and never retaught). In my job as a specialised pattern maker I deal with lemniscate-like geometry which I have found has some connection with orbital geometry. I will do my best to catch up and keep up.
I'm interested to see how you go with the simulation. Don't let me distract you too much.
 
Last edited:
  • #28
Hmm. Good point about the hydrogen, Andrew. Maybe the small nucleus can deflect/transform photons enough for a stable orbit. If this is the case, the nucleus seems to be constantly trying to grab the slippery little electron(s), usually without success!

Jeremy
 
  • #29
Historical Electron

Radguy said:
...I am trying to garner a practical understanding of electron spin...

It's helpful, I think, to remember how these ideas came about historically; some terms are remnants of older days. Controlling electrons with magnetic fields is pretty important to a lot of things.


This is a good background reference: http://hyperphysics.phy-astr.gsu.edu/Hbase/magnetic/magmom.html#c1


This picture from http://home.tiscali.nl/physis/deHaasPapers/DiracEPR/DiracEPR.html#Jimenez
[URL]http://home.tiscali.nl/physis/deHaasPapers/DiracEPR/magneton.jpg[/URL]

is the picture that people like Lorentz had in mind in the early 1900's. The advantage of using something spinning is that it has an up and a down state.

The difficulty is that if you try to animate this picture, there are measured values for the electron's mass, the electron's energy, and the speed of light that have to be respected. If you rotate the e- line above to represent the energy, you will have to rotate it very, very fast. Other people, like http://www.astrosciences.info/Ring.html [Broken], include the closed B lines and feel that they have accounted for the energy.

String theory tries many different structures, e.g.:
http://members.chello.nl/~n.benschop/electron.pdf [Broken]
 
Last edited by a moderator:
  • #30
A little more on spinning Bloch spheres: http://www.animatedphysics.com/videos/spinningblochsphere.htm [Broken]. Using the shell theorem (objects inside an empty shell do not feel force from the shell), you get a derived Pauli exclusion principle that allows 2 electrons into the proton shell. If they wander outside the proton shell, they get pulled back in by the normal Coulomb force, but once inside they are no longer attracted to the center. A single electron, once inside this type of sphere, will not gain any more kinetic energy or need to radiate energy, as it is no longer seeing any force. Also don't forget that the Bloch sphere can represent anti-particles. If you have a Bloch sphere spinning in one direction, its axis points either up or down. The antiparticle will be the same Bloch sphere spinning in the opposite direction, but its axis is pointed the same way as the particle's.

http://www.kcvs.ca/martin/phys/phys243/labs/sglab/stern_gerlach.html
http://plato.stanford.edu/entries/physics-experiment/figure13.html [Broken]

I may not understand this correctly, but is the extra-wide scatter at x=0 something odd that happened in the 1922 experiment, or is it seen all the time?
 
Last edited by a moderator:
  • #31


edguy99 said:
A little more on spinning Bloch spheres: http://www.animatedphysics.com/videos/spinningblochsphere.htm [Broken]. Using the shell theorem (objects inside an empty shell do not feel force from the shell), you get a derived Pauli exclusion principle that allows 2 electrons into the proton shell. If they wander outside the proton shell, they get pulled back in by the normal Coulomb force, but once inside they are no longer attracted to the center. A single electron, once inside this type of sphere, will not gain any more kinetic energy or need to radiate energy, as it is no longer seeing any force. Also don't forget that the Bloch sphere can represent anti-particles. If you have a Bloch sphere spinning in one direction, its axis points either up or down. The antiparticle will be the same Bloch sphere spinning in the opposite direction, but its axis is pointed the same way as the particle's.

I don't follow this idea. A Bloch sphere is a state-vector tool, as far as I know. It has a fixed radius because a global phase in QM has no meaning. I think you spoke earlier about the nucleus having multiple spin states, and in the literature on Bloch spheres (eg: what I have found) there is mention that the benefits don't extend to higher spin orders -- although I find that puzzling, as I would expect each proton to be represented by a single Bloch sphere, with a second equation handling the interaction/entanglement issues.

Protons also have orbits with each other -- I think a brilliant woman, Maria Goeppert-Mayer, was involved in that deduction -- and the mathematics is quite similar to that of electron orbitals. The so-called "strong" force trumps the EM field, but otherwise plays a similar role. I don't see, then, if a Bloch sphere models a spin-1/2 particle, why several of them would not correctly model the nucleus -- although for my purposes, I don't intend to simulate these orbitals, as the concepts are too difficult for a first try.
The Bloch sphere, as far as I know, isn't intended to model the angular momentum of orbits -- so I am not sure why 7/2 momentum, etc., has anything to do with this. I expect the electron, when orbiting, has lower energy when its intrinsic spin cancels against the orbital motion, which would automatically lead to a reduction in the overall magnetic moment. Since the Bohr magneton is a measure of the "orbital" effect, I find it no surprise that the electron has a moment nearly double. The exact value of the ratio is something of a clue as to the interaction between electron spin and orbital angular momentum.

The shell theorem is based on the concept of uniformly distributed charge; The nucleus is several point charges in orbits. I know that as an "approximation" one can consider the charge of an electron spread out in the shape of the probability -- but I have seen mention made that this approximation predicts wrongly in many cases. Slater determinants basically assume that interactions are based on the "average" electric field... that is one problem with them; but then so is determining "where" the charge needs to emanate from as a replacement.
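
For reference, the classical shell theorem itself is easy to check numerically -- the potential of a *uniformly* charged spherical shell really is constant inside, so the interior field is zero. A rough sketch (my own, in arbitrary units with k = Q = R = 1):

[code]
import numpy as np

R = 1.0                                   # shell radius
theta = np.linspace(0.0, np.pi, 20000)    # polar angle over the shell
dtheta = theta[1] - theta[0]

def potential(r):
    # integrate sigma * dA / distance over the shell, with total charge Q = 1
    dist  = np.sqrt(R ** 2 + r ** 2 - 2.0 * R * r * np.cos(theta))
    dA    = 2.0 * np.pi * R ** 2 * np.sin(theta) * dtheta
    sigma = 1.0 / (4.0 * np.pi * R ** 2)
    return np.sum(sigma * dA / dist)

for r in (0.0, 0.3, 0.6, 0.9):
    print(f"V(r = {r}) = {potential(r):.5f}")   # all ~1.0 inside: constant potential, zero force
[/code]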

My view is that an electron's field might indeed be spread out because of HUP -- but not over an entire atom for the distances are too large. Thus a true shell is not possible to make. I suppose it could be possible for two electrons of similar wavelength to intertwine, for in that case a single electron might produce a shell around the other electron -- but, the energy required to get them that close would seem to be *nuclear!* and I have no idea how they could be gotten apart again either...

edguy99 said:
I may not understand this correctly, but is the extra-wide scatter at x=0 something odd that happened in the 1922 experiment, or is it seen all the time?

I have the same question. In the original analysis I read before studying the actual experiment, the author indicated that the spike was to be ignored as an artifact of the *original* SG magnet shape. However, my analysis is based solely on noticing that the gradient is symmetric about the photo's center x axis (x,+y = x,-y in photo coordinates). Atoms from both sides of the slit would be attracted toward the center -- and any atoms colliding would stop precessing and re-orient their trajectory toward the center because of the gradient. Thus, a higher density of atoms along the center line would be unavoidable in any experiment which has that symmetry -- which is all that I have seen -- and so I would expect a spike. Perhaps only the exact shape of the spike is unique to SG, and later experiments have a different distribution; the author I read did not explain in detail. This is one of the reasons I am looking for data from reproductions of the experiment, like MIT's... I'll let you know if/when I find any -- but it is important that I reconstruct an actual experiment, and not a dumbed-down analysis.
I have in mind that precession changes the angle of the magnetic loop with time, so that even classically the tilt will always sample the horizontal as well as the vertical gradient on all sides of the magnetic field and that modeling it as a static loop of wire at a fixed angle is surely going to give the wrong result.

In my own analysis, I simplified the QM spin in order to extract only the Z motion in the experiment (Z in the experiment = x in the photo), but I know that the equation I wrote down was unacceptable in reality -- I just didn't want to do the mathematics for the complete gradient, so I ignored part of it. OTOH, I was just hoping to get an opinion on whether the principle of my method is anywhere near "standard". I will find it in the literature eventually -- but I had hoped to shorten the time by asking questions and sharing information.

I went back to look at Rice.edu's paper, and they removed it -- I guess they didn't like the traffic -- so I can't read that analysis again either. But I noticed one thing at the Stanford site (besides the really grainy pictures): they rotated the oven slit / magnet-OFF photo 180 degrees from the way it was on the postcard (LH side photo) -- so perhaps someone there noticed the same thing I did concerning the density not matching the other photo.
 
Last edited by a moderator:
  • #32


andrewr said:
Since the Bohr magneton is a measure of the "orbital" effect

The Bohr magneton is the measure of intrinsic, spin-related magnetic moment, not orbital magnetic moment.

I know that as an "approximation" one can consider the charge of an electron spread out in the shape of the probability -- but I have seen mention made that this approximation predicts wrongly in many cases.

No, you just misinterpreted what was written then.

Slater determinants basically assume that interactions are based on the "average" electric field... that is one problem with them

It makes no such assumptions whatsoever, and as I already told you in this thread, there are exact methods which use Slater determinants. Again you're saying that existing methods don't work when you clearly don't know anything about them.

My view is that an electron's field might indeed be spread out because of HUP -- but not over an entire atom for the distances are too large.

False. Do the math. In fact, try the Feynman lectures on physics where he derives the Bohr radius using the HUP.
 
  • #33
Radguy said:
The posters here seem extremely well versed on the subject that I am interested in asking some questions about.

Not the ones in this thread. Which is why it was moved to the crackpot section for "Independent Research".

I am trying to garner a practical understanding of electron spin. I am thinking that the atomic model really should include photons.

So study QED and its applications in this.

Is the electron pairing explained by a reciprocating photon (or photons) that pulls the electron into an orbital path at the same time as changing (photon) direction?

No, electron pairing is due to the Pauli Principle, which is due to the spin statistics theorem, and ultimately Special Relativity.

If this is the case, what is the nature of the interaction with the nucleus, and is there photonic polarisation and inversion occurring? Is a photon a record of a quantum event manifested as a moving spatial distortion?

This sounds like Star Trek-type technobabble.
The nature of an electron's interaction with the nucleus is electrostatic attraction and, to a limited extent, a magnetic interaction.
 
  • #34


alxm said:
The Bohr magneton is the measure of intrinsic, spin-related magnetic moment, not orbital magnetic moment.

Yes, you're right. I stated that wrong. I stated it correctly earlier.

No, you just misinterpreted what was written then.

And you have come into the thread *just* to be negative again -- and aren't giving useful information to solve the problems I am interested in.

It makes no such assumptions whatsoever, and as I already told you in this thread, there are exact methods which use Slater determinants. Again you're saying that existing methods don't work when you clearly don't know anything about them.

Oh, "nothing" is a serious exaggeration -- and "don't work" is a black-and-white critique -- perhaps you want others to believe I said that? Wishful thinking?

Again -- I already told you, I don't believe you; do you feel better that I can repeat what I said too?
Oooh ... I also said in reply to you, that if you had no more to say, there wouldn't be any more of this stuff in my thread -- but here you are. You didn't just leave it at a correction on the Bohr Magneton...No.
You had to attack my person.

False. Do the math. In fact, try the Feynman lectures on physics where he derives the Bohr radius using the HUP.

Being able to numerically solve for a "classical" Bohr radius -- which doesn't have any meaning, since the electron doesn't "orbit" -- proves nothing. Why do you bring that up? Feynman's point is good for a bedtime story ... it's comforting for those who like circular orbits and can't see them any more; but it is also irrelevant.

I believe Schrodinger originally tried to say that ψψ* and EM intensity were directly related -- and they aren't. HUP says "I don't know, and perhaps I even can't know", not "it is PERFECTLY uniform."
Besides, even if what you say about HF/SD were "perfectly" true in some strange sense -- I really don't care. I am not interested in doing them.
 
  • #35
edguy,
Thanks for all the help. I think I'm going to move on to a different site; whether or not alxm is right is irrelevant -- the lack of tactfulness is not going to end.

radguy -- best wishes.

--Andrewr.
 

1. How can I simulate atoms and molecules at home?

There are several ways to simulate atoms and molecules at home, depending on the materials and equipment you have available. One method is to use modeling clay or playdough to create models of atoms and then connect them together with toothpicks to represent molecules. Another option is to use small magnets to represent atoms and connect them together to form molecules.

2. What materials do I need to simulate atoms and molecules?

The materials needed to simulate atoms and molecules will vary depending on the method you choose. Some common materials include modeling clay, toothpicks, small magnets, and a periodic table for reference. You may also need a ruler or measuring tape if you want to create accurate representations of the size of atoms and molecules.

3. Can I simulate chemical reactions with this DIY method?

Yes, you can simulate chemical reactions with this DIY method. By using different colors of modeling clay or different types of magnets, you can represent different elements and see how they react and bond together to form molecules. You can also use this method to demonstrate the Law of Conservation of Mass by showing how the total number of atoms remains the same before and after a reaction.

4. Is this DIY method accurate?

This DIY method is a simplified way to simulate atoms and molecules and may not be completely accurate. It is meant to provide a basic understanding of atomic structure and chemical bonding. For more accurate simulations, specialized software or laboratory equipment should be used.

5. How can I use this DIY method to teach others about atoms and molecules?

This DIY method is a great hands-on activity to teach others about atoms and molecules. You can use it in a classroom setting, at home with friends or family, or even in a science fair project. By creating models and physically manipulating them, it can help individuals better understand the concepts of atomic structure and chemical bonding.
