Resources on Simulating Charges in Magnetic Fields

Resources on simulating charged particles in magnetic fields are widely available: simulation software, online demonstrations, and instructional material covering the interaction between charges and electromagnetic fields. The thread below collects pointers to such resources, along with practical advice on numerical integrators, floating-point precision, and CPU/GPU performance gathered while building a small simulation.
  • #1
Drakkith
Mentor
TL;DR Summary
Looking for resources about simulating charged particles moving in magnetic fields.
Hey all. I started messing around with making a simulation of charged particles moving in magnetic and electric fields, and I was wondering if anyone had any good resources on the subject. I should be fine on equations, as I already have a book that covers the fundamental formulas I need. I'm looking for tips and tricks on things like how to efficiently implement the magnetic field of something like an electromagnet in code, or how to simplify things to avoid absurd scaling as you add more particles.

Thanks!
 
  • #2
Are you modelling in a vacuum, or in an atmosphere where the mean free path becomes critical?
 
  • #3
Drakkith said:
TL;DR Summary: Looking for resources about simulating charged particles moving in magnetic fields.
In general this is an "n-body problem"; however, most of the easily available resources for these problems are aimed at gravitational systems, which have different issues to solve. Resources for charged particles are harder to track down, but they are out there, often under the topic of "plasma simulation" or "plasma modelling"; you will see that this is an area of active research with centres at many of the top institutions. You could have a look at https://en.wikipedia.org/wiki/Particle-in-cell and the links from there, or you could get a flying start with some code in Python from here: https://github.com/pmocz/pic-python.

As you can probably tell from this list, particle-in-cell (PIC) methods have become the go-to solution for dealing with scaling to large systems in this field (as opposed to Barnes Hut and other tree-like methods for gravitational systems).
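To make the PIC idea concrete, here is a minimal sketch of the two grid operations at its core: depositing particle charge onto a grid and gathering the grid field back to the particles, both with linear (cloud-in-cell) weights. It is written in plain C# to match the tooling used later in the thread rather than the Python of the linked repo; the field solve on the grid is omitted, and every value in it is illustrative. The point is only that the force evaluation touches each particle and each grid node once, rather than every particle pair.

```csharp
// Minimal sketch of the two grid operations at the heart of a PIC code:
// deposit particle charge onto a 1D periodic grid and gather the grid field
// back to the particles, both with linear (cloud-in-cell) weights.
// The field solve on the grid (e.g. a Poisson solve) is intentionally omitted;
// all sizes and values here are illustrative only.
using System;

class PicSketch
{
    static void Main()
    {
        int nCells = 16;
        double length = 1.0, dx = length / nCells;
        var rng = new Random(1);

        int nParticles = 1000;
        double chargePerParticle = 1.0 / nParticles;   // total charge normalized to 1
        double[] xs = new double[nParticles];
        for (int p = 0; p < nParticles; p++) xs[p] = rng.NextDouble() * length;

        // 1) Deposit: each particle spreads its charge over its two nearest grid nodes.
        double[] rho = new double[nCells];
        foreach (double x in xs)
        {
            double s = x / dx;
            int i = (int)Math.Floor(s);
            double w = s - i;                          // fractional position inside cell i
            rho[i % nCells] += chargePerParticle * (1 - w) / dx;
            rho[(i + 1) % nCells] += chargePerParticle * w / dx;
        }

        // (A real PIC step would now solve for the field on the grid from rho.)
        double[] eGrid = new double[nCells];           // placeholder field, left at zero

        // 2) Gather: interpolate the grid field back to each particle, same weights.
        double[] eAtParticle = new double[nParticles];
        for (int p = 0; p < nParticles; p++)
        {
            double s = xs[p] / dx;
            int i = (int)Math.Floor(s);
            double w = s - i;
            eAtParticle[p] = (1 - w) * eGrid[i % nCells] + w * eGrid[(i + 1) % nCells];
        }

        // Sanity check: the deposited charge density integrates back to the total charge.
        double total = 0;
        foreach (double r in rho) total += r;
        Console.WriteLine($"total deposited charge: {total * dx:F6} (should be 1.000000)");
    }
}
```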
 
  • #4
Is this a project where the goal is to build it, or is the goal to get an answer? If the latter, Geant can handle this. It may be overkill.
 
  • #6
jedishrfu said:
The best algorithm is usually Runge-Kutta for any kind of complex ODE/PDE simulation.
I agree, but significant nonlinearities in the system can change the best choice, so the statement should be limited to linear ODEs/PDEs.
 
  • #7
jedishrfu said:
The best algorithm is usually Runge-Kutta for any kind of complex ODE/PDE simulation. The simpler algorithms will usually introduce error/energy into the sim
I disagree: for instance, explicit methods (which include the Runge-Kutta methods usually used for non-stiff systems) will always introduce energy into a gravitational sim; even the simplest symplectic methods (like velocity Verlet) do better over long time spans.

What (some members of) the RK family are good at is adaptive step control in general situations, which, I agree, makes them good general-purpose algorithms. But for any particular problem they are usually not the best.
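As a concrete illustration, here is a minimal sketch (plain C#, with arbitrary units, step size and step count chosen purely for illustration) comparing forward Euler with velocity Verlet on a circular Kepler orbit. The symplectic Verlet step keeps the energy error bounded, while Euler's energy drifts steadily.

```csharp
// Minimal comparison of forward Euler and velocity Verlet on a circular Kepler
// orbit (mu = 1, unit radius), in plain C# with double precision. The choice of
// mu, time step and step count is purely illustrative. Verlet's energy error
// stays bounded; Euler's grows steadily.
using System;

class VerletVsEuler
{
    const double Mu = 1.0;   // gravitational parameter GM, arbitrary units

    static (double ax, double ay) Accel(double x, double y)
    {
        double r = Math.Sqrt(x * x + y * y);
        return (-Mu * x / (r * r * r), -Mu * y / (r * r * r));   // a = -mu r_vec / |r|^3
    }

    static double Energy(double x, double y, double vx, double vy)
        => 0.5 * (vx * vx + vy * vy) - Mu / Math.Sqrt(x * x + y * y);

    static void Main()
    {
        double dt = 0.01;
        int steps = 100_000;

        // Unit circular orbit: r = 1, speed = sqrt(mu / r) = 1.
        double x1 = 1, y1 = 0, vx1 = 0, vy1 = 1;   // forward Euler state
        double x2 = 1, y2 = 0, vx2 = 0, vy2 = 1;   // velocity Verlet state
        double e0 = Energy(x1, y1, vx1, vy1);

        for (int i = 0; i < steps; i++)
        {
            // Forward Euler: position from the old velocity, velocity from the old acceleration.
            var (ax, ay) = Accel(x1, y1);
            x1 += vx1 * dt; y1 += vy1 * dt;
            vx1 += ax * dt; vy1 += ay * dt;

            // Velocity Verlet: half kick, drift, half kick with the new acceleration.
            var (bx, by) = Accel(x2, y2);
            vx2 += 0.5 * bx * dt; vy2 += 0.5 * by * dt;
            x2 += vx2 * dt; y2 += vy2 * dt;
            var (cx, cy) = Accel(x2, y2);
            vx2 += 0.5 * cx * dt; vy2 += 0.5 * cy * dt;
        }

        Console.WriteLine($"Euler  energy drift: {Energy(x1, y1, vx1, vy1) - e0:E2}");
        Console.WriteLine($"Verlet energy drift: {Energy(x2, y2, vx2, vy2) - e0:E2}");
    }
}
```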
 
  • #8
My apologies for not responding in the past day or two. I developed a case of food poisoning mere hours after posting and was laid out (on the bathroom floor in a pallet of towels and a blanket at one point) for almost two days.

Baluncore said:
Are you modelling in a vacuum, or in an atmosphere where the mean free path becomes critical?
Vacuum.
 
  • #9
I've barely been able to look into this, thanks to a combination of plumbing repair work I've been struggling with and a lack of sleep from either getting up early or being unable to fall asleep. I had to replace a leaking lead pipe that was INSIDE the slab of my house, during which I discovered an 8'x8' void underneath the slab that was nearly a foot deep, with about six inches of water filling it. But that's all mostly fixed now. Thankfully.

Anyways, I put together a very simple simulation of 2N particles (N electrons + N ions) moving in a homogeneous magnetic field using what could be called the particle-particle (P-P) method, which is just to say that each and every particle is simulated without any shortcuts or simplifications. Not really ideal for what I want, but I needed somewhere to start just to dip my toes in. I discovered a few things though:

1. My simulation causes my particles to accrue ever increasing amounts of velocity, quickly leading to FTL velocities for the electrons and then NaN errors in my program. I attribute this to the extremely simple solver I made which adds velocity at discrete timesteps with nothing to correct for the fact that real particles moving in real magnetic fields have a constantly acting force, not a discrete one. I need something to correct this, perhaps a scale factor each step that keeps the magnitude of the velocity vector or the kinetic energy the same.

2. There are many different ways of simulating plasmas. Particle in cell, particle mesh, particle-particle, gyrokinetic-ion fluid electron, particle-ion fluid-electron, particle-ion fluid-ion fluid-electron, guiding-center-ion fluid-electron, fluid-electrons-in-nougat topped with creamy particle-ion-whipped-chocolate... the list goes on and on. The last sounds delicious, but I'm worried about texture issues between the nougat and the chocolate.

3. Good resources are expensive. Or perhaps just hidden in the vast depths of the internet where my tired brain can't find them.

Anyways, still got some work to do in the bathroom. Learning how to pour concrete and replace tile, and then I can focus more on this.
 
  • #10
Drakkith said:
1. My simulation causes my particles to accrue ever increasing amounts of velocity, quickly leading to FTL velocities for the electrons and then NaN errors in my program. I attribute this to the extremely simple solver I made which adds velocity at discrete timesteps with nothing to correct for the fact that real particles moving in real magnetic fields have a constantly acting force, not a discrete one. I need something to correct this, perhaps a scale factor each step that keeps the magnitude of the velocity vector or the kinetic energy the same.
It sounds like you need a symplectic solver. I may have mentioned this before.

Drakkith said:
2. There are many different ways of simulating plasmas. Particle in cell, particle mesh, particle-particle, gyrokinetic-ion fluid electron, particle-ion fluid-electron, particle-ion fluid-ion fluid-electron, guiding-center-ion fluid-electron, fluid-electrons-in-nougat topped with creamy particle-ion-whipped-chocolate... the list goes on and on. The last sounds delicious, but I'm worried about texture issues between the nougat and the chocolate.
You just need to emulsify them into an OpenFOAM :wink:

Drakkith said:
3. Good resources are expensive. Or perhaps just hidden in the vast depths of the internet where my tired brain can't find them.
The latter. As I say I can't help you because charged particles are not my thing, but you should be able to find quite a lot out there as there are LOTS of groups researching this.
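For charged particles specifically, the de facto standard in plasma particle codes is the Boris push. It is not strictly symplectic, but its magnetic update is an exact rotation of the velocity, so it cannot pump energy into a particle the way a naive explicit v × B kick does. A minimal sketch in plain C#, assuming a uniform B along z and normalized charge and mass (all values illustrative, not taken from the thread):

```csharp
// Minimal sketch of the Boris particle push in plain C# (double precision).
// Assumptions: uniform B along z, no E field, normalized charge and mass;
// none of these values come from the thread. In a pure magnetic field the
// Boris velocity update is an exact rotation, so |v| (and kinetic energy) is
// preserved to round-off, unlike a naive explicit v += (q/m) (v x B) dt kick.
using System;

readonly record struct Vec3(double X, double Y, double Z)
{
    public static Vec3 operator +(Vec3 a, Vec3 b) => new(a.X + b.X, a.Y + b.Y, a.Z + b.Z);
    public static Vec3 operator *(double s, Vec3 a) => new(s * a.X, s * a.Y, s * a.Z);
    public static Vec3 Cross(Vec3 a, Vec3 b) =>
        new(a.Y * b.Z - a.Z * b.Y, a.Z * b.X - a.X * b.Z, a.X * b.Y - a.Y * b.X);
    public double Length() => Math.Sqrt(X * X + Y * Y + Z * Z);
}

class BorisDemo
{
    static void Main()
    {
        double q = -1.0, m = 1.0, dt = 0.01;       // normalized electron-like charge/mass
        Vec3 E = new(0, 0, 0), B = new(0, 0, 1);   // uniform field along z
        Vec3 x = new(0, 0, 0), v = new(1, 0, 0.1);
        double speed0 = v.Length();

        for (int i = 0; i < 100_000; i++)
        {
            // Half electric kick.
            Vec3 vMinus = v + (q * dt / (2 * m)) * E;

            // Magnetic rotation built from B (the Boris t and s vectors).
            Vec3 t = (q * dt / (2 * m)) * B;
            double t2 = t.X * t.X + t.Y * t.Y + t.Z * t.Z;
            Vec3 s = (2.0 / (1.0 + t2)) * t;
            Vec3 vPrime = vMinus + Vec3.Cross(vMinus, t);
            Vec3 vPlus = vMinus + Vec3.Cross(vPrime, s);

            // Second half electric kick, then drift the position.
            v = vPlus + (q * dt / (2 * m)) * E;
            x = x + dt * v;
        }

        Console.WriteLine($"relative speed change after 100k steps: {(v.Length() - speed0) / speed0:E2}");
    }
}
```

With E = 0 the printed relative speed change stays at round-off level even after many gyro-periods, which is exactly the behaviour the simple discrete-kick solver described in post #9 lacks.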
 
  • #11
pbuk said:
It sounds like you need a symplectic solver. I may have mentioned this before.
If you did, it must have been in a now deleted post, as I don't see it in your previous posts here. I never got the chance to see what was here before the thread was edited, as I was down with food poisoning for several days right after making the thread.
pbuk said:
You just need to emulsify them into an OpenFOAM :wink:
Indeed. I'd ask my fiancée to help, as she's wonderful at baking, but I don't think our oven goes up to 10^7 kelvin.
pbuk said:
The latter. As I say I can't help you because charged particles are not my thing, but you should be able to find quite a lot out there as there are LOTS of groups researching this.
Yep. I'm sure I just haven't put enough time into it. It's rather difficult when you're running on 3 consecutive days of 4-5 hours of sleep and you've just come back from the hardware store for the 3rd time in a day.
 
  • #12
I did go ahead and buy a used copy of The Hybrid Multiscale Simulation Technology by Alexander S. Lipatov. I browsed through a portion of it on Amazon and it looked to be exactly what I need. Unfortunately it'll be about a month before it can arrive, as it has to ship from the U.K. over to me.
 
  • #13
Drakkith said:
Unfortunately it'll be about a month before it can arrive, as it has to ship from the U.K. over to me.
And people thought boats from China were slow! :wink:
 
  • #14
My book came in a few weeks early, plus I found an online copy a few weeks before that, so I've been reading and putting a small program together in Unity while not doing holiday things or house repairs (about to rip up the whole floor in the bathroom to replace it). I spent most of today optimizing some code and managed to get huge improvements in performance.

100k non-interacting particles moving in a homogeneous magnetic field. Times are per master loop of the code.

Single thread: 15ms per loop.
Single thread w/ Burst Compilation of calculation methods: 5.6ms, almost 3x gain in performance.
Multi-Thread on my AMD Ryzen 5600X: 5 ms per loop, a 3x increase in performance.
Multi-Thread using Unity's Burst Compilation and some other optimization: Around 0.17ms per loop, an 88x increase in performance over single-thread, unoptimized code.
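For anyone curious what the multi-threaded version of such a per-particle loop looks like structurally, here is a rough sketch using plain .NET's Parallel.For rather than Unity's job system or Burst (so the absolute timings will not match the numbers above); the field, charge-to-mass ratio and particle count are made up for illustration. The point is only that each particle's update is independent, so the work splits cleanly across threads.

```csharp
// Rough sketch of a per-particle update loop split across CPU threads.
// This uses plain .NET (System.Threading.Tasks.Parallel.For) rather than
// Unity's job system or Burst, so absolute timings will not match the numbers
// above; the field, charge-to-mass ratio and particle count are made up.
using System;
using System.Diagnostics;
using System.Threading.Tasks;

class ParallelPush
{
    static void Main()
    {
        int n = 100_000;
        float dt = 1e-3f, qOverM = -1f;
        float bx = 0f, by = 0f, bz = 1f;   // uniform magnetic field along z

        var px = new float[n]; var py = new float[n]; var pz = new float[n];
        var vx = new float[n]; var vy = new float[n]; var vz = new float[n];
        var rng = new Random(1);
        for (int i = 0; i < n; i++) { vx[i] = (float)rng.NextDouble(); vy[i] = (float)rng.NextDouble(); }

        var sw = Stopwatch.StartNew();
        for (int step = 0; step < 100; step++)
        {
            // Each particle's update depends only on its own state, so the
            // iterations are independent and can run on different threads.
            Parallel.For(0, n, i =>
            {
                // Naive explicit v x B kick; fine for a threading illustration,
                // though a Boris-style push is the better integrator.
                float ax = qOverM * (vy[i] * bz - vz[i] * by);
                float ay = qOverM * (vz[i] * bx - vx[i] * bz);
                float az = qOverM * (vx[i] * by - vy[i] * bx);
                vx[i] += ax * dt; vy[i] += ay * dt; vz[i] += az * dt;
                px[i] += vx[i] * dt; py[i] += vy[i] * dt; pz[i] += vz[i] * dt;
            });
        }
        sw.Stop();
        Console.WriteLine($"{sw.Elapsed.TotalMilliseconds / 100.0:F3} ms per step for {n} particles");
    }
}
```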
 
  • #15
Are you using C# for programming in Unity?

What frameworks are you using for handling the physics of the EM field?

I had students use an earlier, VR-supported version of Unity a few years ago for an underwater flight simulation of the Marianas Trench. They used a combination of C# and JavaScript, although it seems that Unity now focuses on C#.
 
  • #16
jedishrfu said:
Are you using C# for programming in Unity?
Yes, with Unity's 'flavor' tacked on.
jedishrfu said:
What frameworks are you using for handling the physics of the EM field?
I haven't been able to dig into this yet, except to use a homogeneous field and to lay a bit of groundwork for the future.
 
  • #17
Put this pet project on the backburner for a while and got back into it last night.
After getting CPU multithreading working, I decided to figure out how to use the GPU for potentially massive parallelization gains. There are several different ways of doing this, but I decided the best way was to use compute shaders, as Unity already has built-in support for them and will compile them to work on many different GPUs. After some bumpy starts I managed to get it to work, but the performance gain was barely 2x at best. After some investigation I learned two things:

1. GPUs REALLY don't like loops.
2. There are some weird truncation or precision errors that I have to deal with.

I first had my particle-mover loop in a C# script, but I think I was getting bottlenecked by buffer reads and writes. I then moved the loop inside the compute shader. I saw little to no performance gain even doing this. After a bit of research, I discovered I could 'unroll' the loop inside the shader (basically removing the loop and duplicating each statement inside it once for each iteration). Unrolling the loop to 100 iterations gave me significant performance gains, but any larger and the shader became too large to compile quickly. So I added another loop and nested the unrolled loop inside this one, which remains 'rolled'.

Wow... with 100 external loop iterations and an internal loop 'unrolled' to 100 iterations for a combined 10,000 calculation steps per frame, total calculation time was about 0.004 ms, or 4 microseconds. Compare this to my best CPU multithread time of 0.3 ms per step. This is roughly a 75x gain.

Now for something weirdly funny. I guess the GPU does things a bit differently from the CPU when it comes to precision. I ran into the issue that a population of electrons moving in a homogeneous magnetic field will quickly sort themselves into 'sheets' or 'layers' whose distances along the field lines are roughly in powers-of-two ratios to one another, with a handful of outliers that don't fit the ratio. See the images below:

[Screenshot 2023-02-26 19.27.51.png: electrons in their initial positions]

[Screenshot 2023-02-26 19.27.17.png: electrons sorted into layers along the z-axis]
The first image shows the electrons in their initial positions. All electrons are within 1 unit (meter) of the origin along each axis. A homogeneous magnetic field with a vector of [0,0,1] is applied, which points in the direction of the blue arrow in the Unity editor. The electrons quickly move along the z-axis until they sort themselves into the layers shown in the second image. They don't stop moving completely; they still gyrate around the magnetic field lines as if they were moving along the z-axis, but they don't actually move along that axis any longer.

The layers have z-values of roughly ±4, ±2, ±1, ±0.5, and ±0.25, with a few in between close to the origin.

I haven't really even begun to investigate this phenomenon, as I thought it was too funny and interesting to wait to share.
 
  • #18
Drakkith said:
1. GPUs REALLY don't like loops.
2. There are some weird truncation or precision errors that I have to deal with.

As for 1, GPUs don't mind loops at all. What they hate is branching.
GPUs are not fast. They are wide. Remember that, and it will be clear where they do well and where they do poorly.

Which gets us to the second point. If you have, say, a 512-bit-wide GPU, you can use it to do 16 simultaneous 32-bit calculations, or 32 16-bit calculations, or 64 8-bit calculations. So there is a premium on lower precision, and the defaults usually take this into account.
 
  • #19
Well, after some experimenting it turns out it's not an issue with the GPU, as it happens when I use the CPU as well. I just hadn't let the simulation run long enough to notice the issue (because it takes MUCH longer for the simulation to advance far enough on the CPU than on the GPU). I believe it comes about when multiplying the velocity of a particle by a very small time step and then trying to add that result to the particle's position while using single-precision floating-point numbers. Simply increasing the time step improved the issue, but didn't fix it entirely. Unfortunately, GPUs all use single-precision floats, not doubles.
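A small self-contained demonstration of that effect (plain C#, assuming the runtime performs float arithmetic at true single precision, as current .NET does): once the position is a few units in magnitude, a per-step increment much smaller than the position's ulp is rounded away entirely, and compensated (Kahan) summation of the increments is one cheap way to recover the motion without switching to doubles. The numbers are illustrative only.

```csharp
// Demonstration of the precision issue described above: once a particle's
// coordinate is a few units in size, a single-precision position cannot resolve
// a per-step v*dt increment much smaller than the coordinate's ulp, so the
// particle "freezes" along that axis. Compensated (Kahan) summation of the
// increments recovers most of the lost motion without switching to doubles.
using System;

class FloatAbsorption
{
    static void Main()
    {
        float position = 4.0f;          // the ulp of 4.0f is about 4.8e-7
        float increment = 1e-7f;        // a per-step v*dt smaller than that ulp

        // Naive accumulation: every addition is rounded away, the position never moves.
        float naive = position;
        for (int i = 0; i < 1_000_000; i++) naive += increment;

        // Kahan (compensated) summation: carry the rounding error in a separate term.
        float kahan = position, comp = 0f;
        for (int i = 0; i < 1_000_000; i++)
        {
            float y = increment - comp;
            float t = kahan + y;
            comp = (t - kahan) - y;     // the part of y that was lost in the addition
            kahan = t;
        }

        Console.WriteLine($"expected : {4.0 + 1e-7 * 1_000_000}");   // 4.1 in exact arithmetic
        Console.WriteLine($"naive    : {naive}");                    // stays at 4 (increments absorbed)
        Console.WriteLine($"kahan    : {kahan}");                    // close to 4.1
    }
}
```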
 
  • #20
Well, I have good news and bad news.

The good news is that many, possibly most, GPUs do double precision. The bad news is that there is a performance penalty of at least a factor of 2, and often more. (See above re: wide not fast).

In general, the "pros" try to avoid double precision, often by using some algebra so you don't need to handle both very big and very small numbers. This improves numerical stability as well as enabling faster calculation.
 
  • #21
Vanadium 50 said:
The good news is that many, possibly most, GPUs do double precision.
Can you elaborate? The little bit of reading I have done has said that they only accept single precision floats, not doubles. Did I miss something or do you mean that there is some sort of workaround using singles?
 
  • #22
Depends on the GPU. But be careful: just because the code compiles and runs does not mean it will be efficient.

But let me say this more clearly: double precision is usually a band-aid placed on numerically unstable code. You are much better off fixing the fundamental issue. ("You measured it with an 8-bit ADC. How can you tell me 32 bits is not enough?")

In particular, try to remove all cases where small numbers arise from the subtraction of two big numbers.
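A small example of the kind of rearrangement being suggested, with made-up values: computing 1 - cos(x) for small x subtracts two nearly equal numbers and loses most of its significant digits in single precision, while the algebraically identical 2 sin^2(x/2) does not.

```csharp
// Catastrophic cancellation example: 1 - cos(x) for small x subtracts two
// nearly equal numbers and keeps only a digit or so in single precision,
// while the algebraically identical 2*sin^2(x/2) stays accurate.
// The same idea applies to differences of nearby particle positions.
using System;

class Cancellation
{
    static void Main()
    {
        float x = 1e-3f;

        float direct = 1.0f - MathF.Cos(x);                    // catastrophic cancellation
        float rearranged = 2.0f * MathF.Sin(x / 2) * MathF.Sin(x / 2);
        double reference = 1.0 - Math.Cos(1e-3);               // double-precision reference

        Console.WriteLine($"reference : {reference:E9}");      // about 5.0e-07
        Console.WriteLine($"direct    : {direct:E9}");         // only a digit or so correct
        Console.WriteLine($"rearranged: {rearranged:E9}");     // accurate to float precision
    }
}
```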
 
  • #23
Just for fun, I compared two pieces of code on an Nvidia Turing vs. an AMD Ryzen. Remember when I said GPUs are wide, not fast? The code was optimized to take advantage of width and especially to minimize branching.

Code #1 was 68x faster on the GPU than the CPU.
Code #2 was 150,000x faster on the GPU than the CPU.

In both cases, the CPU code was single-threaded, so there is potentially a factor of 24. I don't think it's worth arguing about what is and is not a "fair" comparison. The lesson is that there are pieces of code that GPUs can run very, very fast, and this can be taken advantage of, up until you hit the limit of Amdahl's Law.
 
  • #24
Indeed. I was able to further improve things by eliminating some repeated buffer reads in the GPU code. Compute time per timestep went from 4 μs to 1.4 μs. That's about 10,000x better than my earlier unoptimized single-thread CPU code and 164x better than my optimized multi-thread CPU code.

Implementing a custom set of functions to use two single floats in place of a double for all the calculations brought my compute time up to 1.9 μs per timestep. For anyone interested, here's a link to the paper I used: https://andrewthall.org/papers/df64_qf128.pdf

*Note that there is no difference function in the paper, even though the division function calls for one. You just negate one of the numbers passed into the addition function.
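For reference, the core of that two-float ("double-single") representation is a pair of error-free transformations. The sketch below is a generic textbook version in C#, not code copied from the paper, and it assumes the runtime performs float arithmetic at true single precision without reassociation; the subtraction-by-negation note above is reflected in the Negate helper.

```csharp
// Sketch of the idea behind storing one value as two floats (hi + lo): the
// basic operations are built from error-free transformations such as Knuth's
// TwoSum. Generic textbook version, not code from the linked paper.
using System;

struct Df64
{
    public float Hi, Lo;
    public Df64(float hi, float lo) { Hi = hi; Lo = lo; }

    // Knuth's TwoSum: s = fl(a + b), err = the exact rounding error of that addition.
    static (float s, float err) TwoSum(float a, float b)
    {
        float s = a + b;
        float v = s - a;
        float err = (a - (s - v)) + (b - v);
        return (s, err);
    }

    // Quick-TwoSum: same result, valid when |a| >= |b|.
    static (float s, float err) QuickTwoSum(float a, float b)
    {
        float s = a + b;
        float err = b - (s - a);
        return (s, err);
    }

    public static Df64 Add(Df64 a, Df64 b)
    {
        var (s, e) = TwoSum(a.Hi, b.Hi);
        e += a.Lo + b.Lo;
        var (hi, lo) = QuickTwoSum(s, e);
        return new Df64(hi, lo);
    }

    // As noted above: subtraction is just addition of the negated value.
    public static Df64 Negate(Df64 a) => new Df64(-a.Hi, -a.Lo);

    public double ToDouble() => (double)Hi + Lo;
}

class Df64Demo
{
    static void Main()
    {
        // Repeatedly add a tiny v*dt-style increment to a position near 4.0:
        // a plain float absorbs it (see the earlier example), the hi/lo pair does not.
        float plain = 4.0f;
        var pair = new Df64(4.0f, 0f);
        var step = new Df64(1e-7f, 0f);
        for (int i = 0; i < 1_000_000; i++) { plain += 1e-7f; pair = Df64.Add(pair, step); }

        Console.WriteLine($"plain float : {plain}");              // stays at 4
        Console.WriteLine($"float pair  : {pair.ToDouble():F6}"); // about 4.100000
    }
}
```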
 

1. What are the main resources for learning about simulating charges in magnetic fields?

The main resources for learning about simulating charges in magnetic fields are textbooks, scientific articles, online tutorials, and simulation software.

2. What is the purpose of simulating charges in magnetic fields?

The purpose of simulating charges in magnetic fields is to understand and predict the behavior of charged particles in magnetic environments, such as in particle accelerators or magnetic resonance imaging (MRI) machines.

3. What are the key principles behind simulating charges in magnetic fields?

The key principles behind simulating charges in magnetic fields are the Lorentz force law, which describes the force on a charged particle in a magnetic field, and the equations of motion, which govern the trajectory of a charged particle in a magnetic field.
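For reference, the Lorentz force law is F = q(E + v × B), so the equation of motion for a particle of charge q and mass m is m dv/dt = q(E + v × B), where v is the particle's velocity and E and B are the electric and magnetic fields.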

4. What are some common simulation techniques for simulating charges in magnetic fields?

Some common simulation techniques for simulating charges in magnetic fields include the particle-in-cell (PIC) method, the Monte Carlo method, and the finite element method.

5. How can simulating charges in magnetic fields be applied in real-world situations?

Simulating charges in magnetic fields can be applied in a wide range of real-world situations, including in the design and optimization of particle accelerators, the development of new medical imaging techniques, and the study of space weather and its effects on satellites and spacecraft.
