Here is what I have figured out regarding Lad's analysis. Why he believes it debunks Mermin's version of the mystery of Bell-state entanglement, I cannot say; as you will see, Lad's analysis is in perfect agreement with Mermin's.
I'm going to assume you have looked at one of Mermin's papers, so you are familiar with the Mermin device, its data collection method, and its mystery. I'll refer to "Quantum Mysteries for Anyone" linked above unless specifically noted otherwise.
Figure 4 shows a sample of the data collected, e.g., 13RG, 22RR, 32GG, 31GR, ... . To align the data with Lad's presentation of it, we need the number of data points to be divisible by 9, with each possible setting pair (11, 12, 13, 21, 22, 23, 31, 32, 33) occurring in equal number. Lad then records 13RG as 13(-1), since the outcomes are different colors. Likewise, we would have 22(+1), 32(+1), 31(-1) from the example data I just gave. Now one can partition the data into 9-dimensional vectors (Lad's "G9 vectors") with each component giving the +1 or -1 result for a setting pair, e.g.,
11 12 13 21 22 23 31 32 33
+1 -1 +1 -1 +1 +1 -1 -1 +1
If we organize these vectors into the rows of a table, then the first column would be all of the 11 results, the second column would be all of the 12 results, etc. Suppose we have 1,000,000 such rows. Then Fact 1 about case (a) says the number of +1 entries in columns 1, 5, and 9 would be 1,000,000 each (same outcomes occur 100% of the time for same settings), while the number of +1 entries in any other column would be ~250,000 (same outcomes occur 25% of the time for different settings, Fact 2 about case (b)).
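For concreteness, here is a minimal Python sketch of that bookkeeping; the trial strings and variable names are just my own illustration of the Figure 4 data format, not anything taken from Lad's or Mermin's papers.

```python
from collections import defaultdict

# Hypothetical trial records in the Figure 4 style: Alice's setting, Bob's setting,
# Alice's color, Bob's color (e.g., "13RG" = settings 1 and 3, outcomes R and G).
trials = ["13RG", "22RR", "32GG", "31GR"]

counts = defaultdict(lambda: [0, 0])  # setting pair -> [number of +1, number of -1]
for t in trials:
    pair, colors = t[:2], t[2:]
    result = +1 if colors[0] == colors[1] else -1  # +1 = same colors, -1 = different colors
    counts[pair][0 if result == +1 else 1] += 1

for pair in ["11", "12", "13", "21", "22", "23", "31", "32", "33"]:
    plus, minus = counts[pair]
    print(pair, "+1 count:", plus, " -1 count:", minus)
```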
To account for Fact 1, Mermin introduces eight possible "instruction sets" dictating the R or G outcome for Alice or Bob for any of their three settings (1,2,3). Since Alice and Bob always get the same outcome for the same settings, the instruction sets for Alice and Bob in each trial are always the same, e.g., RGG meaning they will both obtain R for setting 1, G for setting 2, and G for setting 3. Here are the four unique G9 vectors for the eight instruction sets, with their names at the far right (each row lists the two complementary instruction sets that produce it, since swapping R and G everywhere does not change the agreement pattern):
11 12 13 21 22 23 31 32 33
GGR RRG +1 +1 -1 +1 +1 -1 -1 -1 +1 G9-1
GRR RGG +1 -1 -1 -1 +1 +1 -1 +1 +1 G9-2
GRG RGR +1 -1 +1 -1 +1 -1 +1 -1 +1 G9-3
GGG RRR +1 +1 +1 +1 +1 +1 +1 +1 +1 G9-4
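As a sanity check, here is a short Python sketch (my own, under the assumptions just stated) that generates the G9 vector for every instruction set and confirms that only these four vectors occur:

```python
from itertools import product

# Setting pairs in the column order used above.
pairs = ["11", "12", "13", "21", "22", "23", "31", "32", "33"]

def g9(instruction):
    # instruction is a string like "GGR": the color produced for settings 1, 2, 3.
    # Both detectors follow the same instruction set, so for setting pair "ab"
    # Alice shows instruction[a-1] and Bob shows instruction[b-1].
    return tuple(+1 if instruction[int(a) - 1] == instruction[int(b) - 1] else -1
                 for a, b in pairs)

vectors = {}
for inst in ("".join(p) for p in product("GR", repeat=3)):
    vectors.setdefault(g9(inst), []).append(inst)

for vec, insts in vectors.items():
    print(insts, vec)  # four distinct vectors, each shared by a complementary pair
```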
Notice that we can get pretty close to the QM results (Facts 1 and 2) if we use only G9-1, G9-2, and G9-3 in a ratio of 1:2:1. With 1,000,000 rows in that ratio (250,000 G9-1, 500,000 G9-2, 250,000 G9-3), we get the following number of +1 results for each column:
11 12 13 21 22 23 31 32 33
1,000,000 250,000 250,000 250,000 1,000,000 500,000 250,000 500,000 1,000,000
You can see that four of the six case (b) settings reproduce QM. [Of course, all three of the case (a) settings reproduce QM by design.] This result holds for any 1:2:1 combination of these three G9 vectors, with the "outliers" of 500,000 just changing columns. If you add up all six case (b) columns, you get 2,000,000 of the +1 outcomes in 6,000,000 entries, i.e., an overall case (b) agreement of 1/3. As Mermin explains (but not using G9 vectors), the instruction sets with two R(G) and one G(R) will always produce this 1/3 agreement for case (b), regardless of their distribution. Adding any occurrences of the RRR or GGG instruction sets just increases that fraction. This "must be at least 1/3 agreement for case (b)" is the Bell inequality for the Mermin device, and QM violates it (producing 1/4 agreement for case (b)).
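That bound is easy to check numerically. Below is a rough Python sketch (again my own; the G9 vectors are just the four tabulated above) that computes the case (b) agreement for any mixture of the four vectors:

```python
# The four unique G9 vectors from the table above (columns 11,12,13,21,22,23,31,32,33).
G9 = {
    "G9-1": (+1, +1, -1, +1, +1, -1, -1, -1, +1),
    "G9-2": (+1, -1, -1, -1, +1, +1, -1, +1, +1),
    "G9-3": (+1, -1, +1, -1, +1, -1, +1, -1, +1),
    "G9-4": (+1, +1, +1, +1, +1, +1, +1, +1, +1),
}
CASE_B = [1, 2, 3, 5, 6, 7]  # zero-based indices of columns 12, 13, 21, 23, 31, 32

def case_b_agreement(counts):
    # counts: dict mapping G9 vector name -> number of rows using that vector
    total = sum(counts.values())
    plus = sum(n for name, n in counts.items()
               for i in CASE_B if G9[name][i] == +1)
    return plus / (6 * total)

print(case_b_agreement({"G9-1": 250_000, "G9-2": 500_000, "G9-3": 250_000}))  # 1/3
print(case_b_agreement({"G9-1": 250_000, "G9-2": 500_000, "G9-3": 250_000,
                        "G9-4": 100_000}))  # > 1/3
```

The second call illustrates Mermin's point that mixing in GGG/RRR (G9-4) can only push the case (b) agreement above 1/3.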
Now Lad points out that the entries in columns 2 and 3 of our four-row G9 table run through exactly the four possible pairings of +1 and -1. If we consider those two columns to be the domain of a function to the other seven columns, we can write that function as 23 --> 1456789. There are eleven other such functions, but they're all the same idea, so let's just look at what he did with 23 --> 1456789 (or "23" for short).
He writes a Monte-Carlo simulation generating 1,000,000 G9 vectors such that columns 2 and 3 each produce +1 outcomes with a frequency of 1/4 (the QM prediction for those setting pairs). He obtains the following +1 counts per column:
11 12 13 21 22 23 31 32 33
1000000 250191 250332 250191 1000000 625225 250332 625225 1000000
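I don't have Lad's actual code, but the following Python sketch is a plausible reconstruction under one assumption of mine: the column 2 and column 3 entries are drawn independently, each +1 with probability 1/4, and the rest of each G9 vector is then the unique one fixed by that pairing. It reproduces numbers very close to his:

```python
import random

# Map the (column 2, column 3) pairing to the unique G9 vector it determines.
G9_BY_PAIR = {
    (+1, -1): (+1, +1, -1, +1, +1, -1, -1, -1, +1),  # G9-1
    (-1, -1): (+1, -1, -1, -1, +1, +1, -1, +1, +1),  # G9-2
    (-1, +1): (+1, -1, +1, -1, +1, -1, +1, -1, +1),  # G9-3
    (+1, +1): (+1, +1, +1, +1, +1, +1, +1, +1, +1),  # G9-4
}

random.seed(0)
column_plus_counts = [0] * 9
for _ in range(1_000_000):
    c2 = +1 if random.random() < 0.25 else -1  # column 2: +1 with frequency 1/4
    c3 = +1 if random.random() < 0.25 else -1  # column 3: +1 with frequency 1/4
    vec = G9_BY_PAIR[(c2, c3)]
    for i, v in enumerate(vec):
        if v == +1:
            column_plus_counts[i] += 1

print(column_plus_counts)
# Roughly [1000000, 250000, 250000, 250000, 1000000, 625000, 250000, 625000, 1000000]
```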
Let N1 denote the number of G9-1 vectors in his distribution, N2 the number of G9-2 vectors, N3 the number of G9-3 vectors, and N4 the number of G9-4 vectors. Reading off which G9 vectors have a +1 in columns 2, 3, and 6, we can deduce exactly what his Monte-Carlo simulation produced from the following four equations:
N1 + N4 = 250,191 (column 2)
N3 + N4 = 250,332 (column 3)
N2 + N4 = 625,225 (column 6)
N1 + N2 + N3 + N4 = 1,000,000 (total rows)
The answers are:
N1 = 187,317
N2 = 562,351
N3 = 187,458
N4 = 62,874
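A quick Python check of that solution (nothing more than the closed-form elimination):

```python
# Column counts of +1 from Lad's simulation (columns 2, 3, 6) and the total rows.
col2, col3, col6, total = 250_191, 250_332, 625_225, 1_000_000

# Adding the first three equations gives N1 + N2 + N3 + 3*N4, so subtracting the
# total (N1 + N2 + N3 + N4) and halving isolates N4.
N4 = (col2 + col3 + col6 - total) // 2
N1 = col2 - N4
N3 = col3 - N4
N2 = col6 - N4

print(N1, N2, N3, N4)     # 187317 562351 187458 62874
print(N1 + N2 + N3 + N4)  # 1000000 (consistency check)
```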
Notice that his case (b) outcomes exceed the 1/3 lower limit: they give +1 outcomes in 2,251,496/6,000,000 ≈ 0.375 of the case (b) entries. This is exactly in accord with what Mermin said, since Lad has added 62,874 GGG/RRR instruction sets to the distribution. The other eleven functions, in his other Monte-Carlo simulations, produce virtually the same distributions, so that, averaged over his simulations, each case (b) column produces like outcomes in about 0.375 of the trials.
Conclusion: Contrary to his claim to have debunked Mermin's analysis, he has just substantiated it.
Question: Why does he claim otherwise?