edpell said:
So the physical universe has some structure, some lumps and bumps (or more correctly, voids, walls, and filaments), and this means that at some level of accuracy, simple calculations based on uniform distributions are not accurate enough.
Or maybe they are. Not clear right now.
Understandably, the folks doing the computations do not want the harder work, and so they resist the idea.
Utter and total nonsense.
The first thing you do when you have a problem like this is a quick "is this a totally nutty idea or not?" calculation, which is what I was planning to do when I read Wiltshire's paper. However, Teppo Mattsson already did the calculation I was planning on doing, on pages 13 and 14 of the paper I referenced earlier. What he shows is that if you are sitting in a big empty bubble 300 megaparsecs wide, then yes, clocks can slow down enough to make it look like the universe is accelerating. Now, this probably *isn't* anything like the real universe. But it's a quick toy calculation that says this is a half-decent idea that we need to look into further.
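If you want to see what that kind of quick sanity check looks like, here is a minimal back-of-the-envelope sketch in Python. To be clear, this is my own toy version, not Mattsson's calculation: it assumes a completely empty uniform sphere 300 Mpc across sitting in the mean matter density, and asks how big the weak-field clock-rate shift at its center is.

```python
import math

# Back-of-the-envelope: fractional clock-rate shift at the center of
# an empty sphere 300 Mpc in diameter, relative to the cosmic mean.
# (My toy numbers and setup, not Mattsson's calculation.)

G     = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c     = 2.998e8     # speed of light, m/s
Mpc   = 3.086e22    # one megaparsec in meters
rho_m = 2.7e-27     # mean matter density, kg/m^3 (~0.3 * critical)

R = 150 * Mpc       # void radius

# The potential difference at the center of a uniform sphere with
# density deficit rho_m is 2*pi*G*rho_m*R^2; the fractional clock-rate
# shift is that over c^2 (weak-field time dilation).
dphi = 2 * math.pi * G * rho_m * R**2
print(f"delta(clock rate)/(clock rate) ~ {dphi / c**2:.1e}")  # ~3e-4
```

The instantaneous shift is only a few parts in ten thousand, which by itself settles nothing; a real calculation has to track how the effect builds up over the whole expansion history. But that is exactly what this kind of estimate is for: it tells you whether the idea lands in the "worth a real calculation" range.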
What Wiltshire is trying to do is take things from a "toy model" to something you can actually compare to real experiments. Now that I understand what he is trying to do, it's a decent idea. One problem with the way Wiltshire is going about it is that he is using math that's great for human number crunchers but totally awful for computers.
edpell said:
Until some hungry young guy/gal thinks, "Hey, if I do the work and it is important, I will be a winner." Then they do it and either receive acclaim or find they wasted five years of effort.
If someone goes through the effort of figuring out whether or not it works, and it doesn't, it's not a wasted effort. If nothing else, you understand how inhomogeneities in GR work. If someone spends about five years and then comes up with an airtight argument for why none of this will work, that's worth a Ph.D. Also, the cool thing is that while you are looking for X, you invariably stumble onto Y.
edpell said:
Why is this viewed as such a complex calculation? You make a series of Monte Carlo model universes, do the integration at several points, and compare. It is the computer that is doing the work.
Well, computers need programmers. We are talking about ten coupled nonlinear equations *just for the gravity* on a 10,000 x 10,000 x 10,000 grid with maybe 100,000 time steps. If you run the full simulation, it's just not doable with current technology. So you end up with clever ways of reducing computer time, which you hope ("cross your fingers") don't actually destroy the calculation.
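To put rough numbers on "not doable," here is the arithmetic for a brute-force run at those sizes. The grid and step counts are the ones above; the flops-per-cell figure and the sustained machine speed are my own placeholder guesses.

```python
# Brute-force cost of 10 coupled fields on a 10,000^3 grid
# for 100,000 time steps.  (Flops-per-cell and machine speed
# are illustrative guesses, not measurements.)

fields, steps = 10, 100_000
cells = 10_000 ** 3                      # 1e12 grid cells

mem_tb = fields * cells * 8 / 1e12       # doubles, one time slice
print(f"one copy of the state: {mem_tb:,.0f} TB")   # 80 TB

flops = fields * cells * steps * 100     # assume ~100 flops/field/cell/step
rate  = 1e13                             # assume 10 Tflop/s sustained
months = flops / rate / 86_400 / 30
print(f"total work: {flops:.0e} flops ~ {months:.0f} months")
```

Eighty terabytes just to hold one copy of the state, and months of sustained machine time under friendly assumptions. And that's before you worry about whether the scheme is even stable.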
These simulations can eat up a month of supercomputing time. If you just dump the equations into a computer, chances are that it will just spit out "I can't do this calculation" and give you random noise. The first time you do a test run, the simulation will invariably not work. So you spend a few months debugging, and debugging, and finally you come up with something that looks reasonable. But is it?
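The standard way to attack the "but is it?" question is to first run the code on a problem whose answer you already know, and check that the error shrinks the way the scheme promises. A minimal sketch of the idea on a toy equation (nothing here comes from an actual cosmology code):

```python
import math

# Convergence test: solve dy/dt = -y with forward Euler and compare
# against the exact answer e^(-t).  Real codes do the same thing with,
# say, a homogeneous universe, where the solution is known.

def run(dt, t_end=1.0):
    y, t = 1.0, 0.0
    while t < t_end - 1e-12:
        y += dt * (-y)          # one forward-Euler step
        t += dt
    return y

exact = math.exp(-1.0)
for dt in (0.1, 0.01, 0.001):
    print(f"dt={dt:<6} error={abs(run(dt) - exact):.1e}")
# First-order scheme: the error should drop ~10x for each 10x in dt.
# If it doesn't, your 'reasonable looking' output means nothing.
```

If the output merely looks plausible but fails a test like this, you have a pretty picture, not a calculation.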
And even getting to the point where you can code it is a challenge.
For example, one problem with the way Wiltshire sets up the problem is that he splits things into "calculations you do at the voids" and "calculations you do in the non-voids." If you try to put that into a computer program, chances are the computer will go nuts at the boundary conditions.

Also, you don't want if statements in a numerical code. Computer chips like to add arrays of numbers; if you have branching statements, the chip has to go down two different code paths, your pipelines get trashed, your L1 caches get overwritten, and a calculation that would have taken two weeks now takes a year and can't be done (see the sketch below).

And he does a lot of averaging. Averaging is bad. What do you average? How do you average?
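To make the if-statement point concrete, here is a sketch of the same toy grid update written both ways (NumPy, with invented update rules, not anything from a real code): once with a per-cell branch, once branch-free by computing both rules everywhere and selecting with a mask.

```python
import numpy as np

rho  = np.random.rand(256, 256)   # toy density field
void = rho < 0.2                  # mask: which cells count as 'voids'

# Branchy version: a per-cell 'if' that trashes pipelines.
out_slow = np.empty_like(rho)
for i in range(rho.shape[0]):
    for j in range(rho.shape[1]):
        if void[i, j]:
            out_slow[i, j] = 2.0 * rho[i, j]   # made-up 'void' rule
        else:
            out_slow[i, j] = rho[i, j] ** 2    # made-up 'wall' rule

# Branch-free version: do both updates everywhere, select by mask.
# More arithmetic, but it's straight-line array math that vector
# units and pipelines can chew through.
out_fast = np.where(void, 2.0 * rho, rho ** 2)

assert np.allclose(out_slow, out_fast)
```

On real hardware the same trick shows up as masked SIMD or predicated instructions; the point is the cost model, not NumPy specifically.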