Ok, let's see how we go.
Let's assume we're using the standard convention from this thread for the N's, P(x)'s and so on.
First, let's look at the N = 2 case.
If we have N_A, N_B, P_A(x), P_B(x), and we fix a particular row, let T = N_A + N_B. Now we have
w_1 = N_A/(N_A + N_B) and w_2 = N_B/(N_A + N_B).
But we know that N_A/(N_A + N_B) + N_B/(N_A + N_B) = 1 by simple algebra.
This means that w_1 = 1 - N_B/(N_A + N_B) and w_2 = 1 - N_A/(N_A + N_B), which gives our formula for the N = 2 case. The original derivation took a slightly different route and landed in a different form, but this is why it was the case.
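As a quick numeric sanity check (a minimal sketch; the counts N_A and N_B are made up, and chosen so the fractions are exact in floating point), the direct form and the complementary form give identical weights:

```python
# Hypothetical counts for one row of the two matrices A and B.
N_A, N_B = 25, 75
T = N_A + N_B

# Direct form: each weight is the fraction of the row's data in that matrix.
w1_direct = N_A / T
w2_direct = N_B / T

# Complementary form: 1 minus the other weight, as in the first derivation.
w1_comp = 1 - N_B / T
w2_comp = 1 - N_A / T

print(w1_direct, w1_comp)  # both 0.25
print(w2_direct, w2_comp)  # both 0.75
```

Either way the two weights sum to 1, which is the whole trick.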
Now for N = 3.
Let's define N_A, N_B, and N_C in the usual way, P_A(x), P_B(x), and P_C(x) in the usual way, and T = N_A + N_B + N_C.
w_1 = N_A/T, w_2 = N_B/T, w_3 = N_C/T
Now if you want the same 'form' as the earlier derivation, we do the same thing: write each weight as 1 minus something, just like for N = 2.
We know N_A/T + N_B/T + N_C/T = 1, which means we can use the following substitutions:
w_1 = 1 - (N_B + N_C)/T, w_2 = 1 - (N_A + N_C)/T, w_3 = 1 - (N_A + N_B)/T and then use those weights like we did in the very first derivation of N = 2 case.
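Same sanity check for N = 3 (again with made-up counts, picked so the fractions are exact in floating point): each w_i equals 1 minus the sum of the other two weights.

```python
# Hypothetical row counts for matrices A, B, and C.
N_A, N_B, N_C = 1, 3, 4
T = N_A + N_B + N_C

# Direct form.
w1, w2, w3 = N_A / T, N_B / T, N_C / T

# Complementary form from the substitutions above.
w1_alt = 1 - (N_B + N_C) / T
w2_alt = 1 - (N_A + N_C) / T
w3_alt = 1 - (N_A + N_B) / T

print(w1, w1_alt)  # both 0.125
print(w2, w2_alt)  # both 0.375
print(w3, w3_alt)  # both 0.5
```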
So basically the link is this: in the first derivation we looked at things in terms of complementary weights, using the fact that all the weights sum to 1. In the second case we just use the simple idea that a weight is the fraction of the data in one matrix for a given row, relative to the total data across all matrices for that row. By the algebraic trick of writing w_i = 1 - (sum of the other weights), we get expressions that look different but mean the same thing, both mathematically and intuitively.
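The same identity holds for any N. A hypothetical sketch (function names are my own) computing both forms from an arbitrary list of row counts:

```python
def row_weights(counts):
    """Direct form: fraction of the row's data in each matrix."""
    T = sum(counts)
    return [n / T for n in counts]

def row_weights_complementary(counts):
    """Same weights written as 1 - (sum of the other weights)."""
    T = sum(counts)
    return [1 - (T - n) / T for n in counts]

counts = [1, 3, 4]  # made-up counts, exact in floating point
print(row_weights(counts))                # [0.125, 0.375, 0.5]
print(row_weights_complementary(counts))  # [0.125, 0.375, 0.5]
```

Both functions return the same list because (T - n)/T is exactly the sum of the other weights, so 1 minus it recovers n/T.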