Timo said:
So I had a look at the code. I've never seen Fortran code before, but here are a few comments based on what I understood:
- Most importantly, the line DO while ((ln_f > eps).and.(istep < MCtot)) seems to terminate the sampling once MCtot steps are reached. Your configuration file sets it to 10k steps. That is far too few for proper sampling (see the sketch after this list for the loop structure I have in mind).
- You seem to check the histogram of visits every N steps, where N is the number of spins. That is technically okay, but your code will spend most of its time checking the histogram. Do that every few thousand steps instead (I check every 20k steps, I think).
- I remember the flatness check as min(H) >= tol * max(H). You seem to implement min(H) >= tol * mean(H). I think the tol*max(H) version is more robust on a theoretical level: it removes the possibility that one state has been visited extraordinarily often, causing the finer-grained stages to never reach flatness again. Not sure if that matters in practice, though.
- I don't feel good about your line do n1 = 0, int(N-n0)/2. You probably want to exploit the symmetry of your Hamiltonian there. But the naming of your variables (n0, n1 and n_1) makes my head hurt, and I cannot see whether that really works (and whether you properly account for all rounding effects). Since you have an explicit cleanup stage to remove degenerate states later, I recommend just running the loop over all possible states.
- You seem to model each spin explicitly in your MC state. That is fine. But if you are a bit careful about your MC steps, it is sufficient to model the microstate as a tuple of three integers (N+, N-, Na). I originally thought that would make a big performance difference, but I am not sure anymore.
- I am a bit worried about your checks for rounding errors, and whether you shoot yourself in the foot with them (what happens when eps == alpha2?). If you have exactly one function E(N+, N-, Na) that takes integer arguments, and you use that function exclusively to calculate energies, then you do not need these checks. Just mentioning it; for now, I recommend leaving the checks in.
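To make the first three points concrete, here is a minimal sketch of the outer loop structure I have in mind. It is Python rather than Fortran, and every name in it (propose_and_accept, flat_tol, check_every, ...) is made up for illustration, not taken from your code:

```python
import numpy as np

def wang_landau(n_states, propose_and_accept, ln_f_final=1e-8,
                flat_tol=0.8, check_every=20_000):
    """Sketch of a Wang-Landau outer loop.

    propose_and_accept(ln_g) must perform one MC step (using ln_g for
    the acceptance ratio) and return the index of the resulting state.
    """
    ln_g = np.zeros(n_states)  # running estimate of log g(E)
    ln_f = 1.0                 # log of the modification factor f
    while ln_f > ln_f_final:   # terminate on ln_f, NOT on a step budget
        hist = np.zeros(n_states)
        flat = False
        while not flat:
            # check flatness only every few thousand steps, not every N steps
            for _ in range(check_every):
                i = propose_and_accept(ln_g)
                ln_g[i] += ln_f
                hist[i] += 1
            # flatness relative to the maximum, not the mean
            flat = hist.min() >= flat_tol * hist.max()
        ln_f /= 2.0            # refine f and start the next stage
    return ln_g
```

The point is that MCtot never appears: each flatness stage simply runs for as long as it needs to.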
Hello again:
First of all, thank you so much for your advice. It took me a while to implement all your observations in my work, but I finally did it. However, I'm still struggling to find the right solution. I'm going to answer your comments and then describe the problems I am facing now.
As you said, I don't need Wang-Landau sampling to calculate the density of states for a fully connected network; I actually want to apply it to other kinds of networks, but I'm starting with the fully connected case because it is the simplest, so I can gain some confidence in my code first. I have also written a small code in Python which calculates the exact density of states, so I can compare the results and be sure they are OK.
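In case it is useful for the discussion: for the fully connected case the exact count boils down to multinomial coefficients. A stripped-down sketch of that calculation (not my literal script; energy here is a placeholder for the actual alpha3states Hamiltonian) looks like:

```python
from math import comb
from collections import defaultdict

def exact_dos(N, energy):
    """Exact density of states for N fully connected 3-state spins.

    energy(n1, n0, n_1) takes the integer occupation numbers and returns
    the energy of that macrostate (placeholder for the real model).
    """
    dos = defaultdict(int)
    for n1 in range(N + 1):
        for n0 in range(N - n1 + 1):
            n_1 = N - n1 - n0
            # multinomial degeneracy N! / (n1! * n0! * n_1!)
            g = comb(N, n1) * comb(N - n1, n0)
            dos[energy(n1, n0, n_1)] += g
    return dict(dos)
```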
1. About the number of steps used to reach the selected threshold for the modifier ln_f: you are right, it is too low. It only works this way for N=14; I change it when I try to calculate larger sizes. The problem is that even with a huge number of steps the program fails for N>=16: there are unvisited configurations and the histogram never becomes flat, not even once, yet for N=14 it works just fine.
2. Same as before: I am aware that the check should be done less frequently; I just adjust it for each run. Sometimes it is useful to set it to '1', because that way I can follow every step of the calculation by printing to the screen. But you are right, it makes the code awfully slow.
3. Good to know. However, I have just tried it and the problem persists.
4. I have followed your advice in the new code I have written, but I have not tried to change it in the old one. However, I have checked carefully that this was not the reason it failed, using a Python program that compares the arrays obtained with the two methods and tests that they match (checked up to N=100).
5. This was also suggested by my professor, so I ended up doing it. The second code is now visible in the GitHub repository (https://github.com/IreneFerri/alpha3states.git). It achieves flat histograms for any size N, but the results are wrong. I attach here a comparison of the output of my 3 programs: the exact calculation in Python, the first Wang-Landau code we already spoke about, and the new program which uses the tuple (n1, n0, n_1) instead of the explicit spins array. The problem is described in detail in the GitHub issues, but it is easy to see that the results of the first code did match the exact solution, so maybe the problem for larger N was related to the rounding errors you mentioned in comment 6. That is not a problem anymore, since I can now find the index of each state with an exact function (see the sketch after this list).
6. Already explained in 5.
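To show what I mean by an exact index function, the idea is along these lines (a sketch only; the function in the repository may be organized differently):

```python
def state_index(n1, n0, N):
    """Map the macrostate (n1, n0, n_1 = N - n1 - n0) to a unique integer
    index using integer arithmetic only, so no rounding can creep in."""
    assert 0 <= n1 <= N and 0 <= n0 <= N - n1
    # All states with a smaller n1 come first; for each k < n1 there are
    # (N - k + 1) choices of n0, which gives a triangular offset.
    return n1 * (N + 1) - n1 * (n1 - 1) // 2 + n0
```

With this mapping the histogram and the ln(g) array have exactly (N+1)*(N+2)/2 entries, one per macrostate.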
So, basically, I now have the old code, which I know works for N=14 but cannot perform calculations for bigger sizes, and a second code that achieves flat histograms for any size but delivers crazy results. I am stranded again and don't know what to try next.
Thank you for reading the code so carefully. I know I am a super-newbie and it must be pretty hard to read; I tried to make it clearer this second time.
One last thing: I also attach a table with the execution times for every program, just to note that for N=14 the first code is faster than the second one. Not sure whether that remains true for larger N's.