Question about LQCD and parallelization

In summary: the original poster is running lattice simulations of pure SU(2) gauge theory and using OpenMP to parallelize the code. Their question concerns thermalizing the lattice: updating even and odd layers in parallel gives slightly different results from a purely sequential update. The reply suggests using a parallelization scheme that keeps the order of updates consistent and making sure the code is thread-safe.
  • #1
korialstasz
Hi there, I'm currently working on a relatively simple code to do some lattice simulations. I have access to a computing cluster at school and have been learning how to use OpenMP to parallelize my code (each node has 16 cores). I'm currently not planning to use MPI.

My main question is regarding the process of thermalizing the lattice. I'm currently using a method due to Creutz to perform updates on the links (note I'm simulating pure SU(2), not SU(3); no fermions or anything). My code is written in Fortran 90 and looks something like

do t = 1, L
   do z = 1, L
      do y = 1, L
         do x = 1, L
            do d = 1, 4   ! each of the four links at point (x,y,z,t)
               call update_link(t, x, y, z, d)   ! update_link stands for the Creutz update of this link
            end do
         end do
      end do
   end do
end do
Updating a link depends on the links that make up the plaquettes containing it, so if I want to parallelize the thermalization I have to be sure that no two threads are updating links that share a plaquette at the same time. I thought the simplest way to do that would be to split up the updating so that each thread updates a layer (say at constant t) that isn't adjacent to any other layer being updated. So I wrote my code to update all the even layers in parallel, then all the odd layers in parallel.

The problem is that the results I'm getting now don't agree exactly with the results I get when I update sequentially. Simple observables like Wilson loops don't show any difference, but when I measure correlators of spacelike-separated timelike links, they disagree slightly at large distances. The parallel updating seems to yield results that are incorrect at large distances (comparing to results from a paper I've been given to reproduce).

Can anyone explain why updating even layers and then odd layers would yield different results from updating lattice sites one by one? Should it make a difference? I can't see any reason it would, and it seems to me a fairly obvious, simple parallelization method. Everything I've found refers to breaking the lattice into chunks and controlling the parallel updating of dependent links at the boundaries between chunks, which is more involved than I want to deal with at the moment. Any help appreciated.
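For concreteness, the parallel sweep I'm describing looks roughly like this (update_link is the same placeholder as above for the Creutz heat-bath update of a single link, and I'm assuming L is even so that two layers updated in the same pass are never adjacent):

subroutine sweep_even_odd(L)
   implicit none
   integer, intent(in) :: L
   integer :: parity, t, z, y, x, d

   do parity = 0, 1                      ! first the even t-layers, then the odd ones
      !$omp parallel do private(t, z, y, x, d)
      do t = 2 - parity, L, 2            ! t = 2,4,...,L   then   t = 1,3,...,L-1
         do z = 1, L
            do y = 1, L
               do x = 1, L
                  do d = 1, 4            ! the four links at site (x,y,z,t)
                     call update_link(t, x, y, z, d)   ! placeholder: Creutz update of one link
                  end do
               end do
            end do
         end do
      end do
      !$omp end parallel do
   end do
end subroutine sweep_even_odd

Within one pass each thread handles whole t-layers, and the neighbouring links it reads sit either in its own layers or in layers of the opposite parity, which aren't being written during that pass, so as far as I can tell no two threads ever touch links that share a plaquette.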
 
  • #2


Hi there,

Thanks for sharing where you are so far. It sounds like you are making good progress with your lattice simulations and with using OpenMP to parallelize the code. As for your question about thermalizing the lattice, the important thing is to make sure the parallelization does not introduce any errors or discrepancies into the results.

One potential reason for the difference when updating even and odd layers in parallel is the order in which the updates are performed. In a sequential sweep the order is fixed and each link is updated exactly once per sweep, always using fully up-to-date neighbouring links. In a parallel sweep the order can vary from run to run, and if two threads ever work on links that share a plaquette, a link may be updated while its neighbours are changing underneath it, which can lead to discrepancies in the results.

To avoid this, you may want to use a parallelization scheme in which the order of updates is well defined, such as breaking the lattice into chunks as you mentioned. That makes it easier to control the updating of dependent links at the boundaries between chunks and to ensure that each link is updated exactly once per sweep.

Additionally, it's important to make sure that your code is thread-safe, meaning that no two threads read and write the same data at the same time without synchronization (in a Monte Carlo code this includes the state of the random-number generator, not just the link variables). This may require synchronization mechanisms such as mutexes, locks, or OpenMP critical sections to serialize access to shared data.
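For example, in OpenMP the critical construct plays the role of such a lock. The following is only an illustration with made-up names (fake_measurement stands in for whatever per-site quantity you are accumulating), not taken from your code:

program critical_example
   implicit none
   integer :: i
   real(8) :: total, contrib

   total = 0.0d0
   !$omp parallel do private(contrib)
   do i = 1, 1000
      contrib = fake_measurement(i)   ! hypothetical per-site measurement
      !$omp critical
      total = total + contrib         ! only one thread updates the shared total at a time
      !$omp end critical
   end do
   !$omp end parallel do
   print *, 'total =', total

contains

   function fake_measurement(i) result(v)   ! dummy stand-in for a real observable
      integer, intent(in) :: i
      real(8) :: v
      v = 1.0d0 / dble(i)
   end function fake_measurement

end program critical_example

For a plain sum like this an OpenMP reduction(+:total) clause would be the more idiomatic and faster choice; the critical section is shown only because it is the direct analogue of the mutex/lock idea.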

I hope this helps and good luck with your simulations!
 

1. What is LQCD?

LQCD stands for Lattice Quantum Chromodynamics, a numerical formulation of quantum chromodynamics, the quantum field theory of the strong nuclear force, on a discrete spacetime grid. Putting the theory on a lattice makes it possible to compute its predictions by computer simulation.

2. What is the purpose of parallelization in LQCD?

Parallelization is used in LQCD to speed up the calculations involved in these simulations. By dividing the workload among multiple processors, the same calculation finishes faster, which in practice allows larger lattices and better statistics for a given amount of computer time.

3. How is parallelization implemented in LQCD?

Parallelization in LQCD is typically implemented using a technique called domain decomposition, where the simulation volume is divided into smaller sub-volumes or domains. Each domain is then assigned to a different processor, allowing for simultaneous calculations to be performed.
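As a very simplified illustration (in Fortran, with made-up names, and only a 1-D split along the time direction; production LQCD codes decompose all four dimensions, usually with MPI), each worker can be assigned a contiguous range of time slices:

! Split the time extent L among nproc workers; worker me
! (numbered 0..nproc-1) gets the slice t_lo..t_hi.
subroutine local_range(L, nproc, me, t_lo, t_hi)
   implicit none
   integer, intent(in)  :: L, nproc, me
   integer, intent(out) :: t_lo, t_hi
   integer :: base, extra

   base  = L / nproc        ! minimum number of time slices per worker
   extra = mod(L, nproc)    ! leftover slices go to the first 'extra' workers
   t_lo  = me * base + min(me, extra) + 1
   t_hi  = t_lo + base - 1
   if (me < extra) t_hi = t_hi + 1
end subroutine local_range

Worker me then updates only the links whose time coordinate lies in its own slice; the layers at the slice boundaries are the dependent links that have to be communicated or updated with extra care, as discussed in the thread above.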

4. What are the advantages of using parallelization in LQCD?

Parallelization allows for faster calculations, which in practice means more statistics and therefore smaller errors on the results. It also allows larger simulation volumes to be used, since the workload is divided among multiple processors.

5. Are there any limitations to parallelization in LQCD?

While parallelization can greatly improve the speed of LQCD calculations, there are limitations. How well a calculation parallelizes depends on the specific problem and on the hardware being used. In addition, scaling does not continue indefinitely: beyond a certain number of processors, communication and synchronization between them become the bottleneck and adding more processors yields little further speed-up.
