PeterDonis said:
Formally, ##d \alpha = \beta \wedge \omega## is another possibility to make the first term vanish, yes. However, I believe it is ruled out because continuing to apply ##dd = 0## leads to an infinite regress.
I went back and looked at this again, and I don't think my previous comment, quoted above, was correct.
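For context (a sketch, assuming the relation ##d\omega = \alpha \wedge \omega## from earlier in the thread), the possibility in question comes from applying ##dd = 0## to ##\omega## itself:
$$0 = dd\omega = d \left( \alpha \wedge \omega \right) = d\alpha \wedge \omega - \alpha \wedge d\omega = d\alpha \wedge \omega - \alpha \wedge \alpha \wedge \omega = d\alpha \wedge \omega ,$$
so ##d\alpha \wedge \omega = 0##, which holds either because ##d\alpha = 0## or because ##d\alpha = \beta \wedge \omega## for some 1-form ##\beta##.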
If ##d\alpha = \beta \wedge \omega \neq 0##, then ##dd\alpha = 0## gives ##d\beta \wedge \omega - \beta \wedge d\omega = 0## (with a minus sign from the graded Leibniz rule, since ##\beta## is a 1-form). Substituting ##d\omega = \alpha \wedge \omega## then gives ##\left( d\beta - \beta \wedge \alpha \right) \wedge \omega = 0##.
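Spelling that computation out in full (using ##d\omega = \alpha \wedge \omega## again):
$$0 = dd\alpha = d \left( \beta \wedge \omega \right) = d\beta \wedge \omega - \beta \wedge d\omega = d\beta \wedge \omega - \beta \wedge \alpha \wedge \omega = \left( d\beta - \beta \wedge \alpha \right) \wedge \omega .$$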
There are two ways to satisfy that last equality. First, we could have ##d\beta = 0## and ##\beta \wedge \alpha = 0##. But ##\beta \wedge \alpha = 0## means ##\beta = k \alpha## for some function ##k##, so our hypothesis becomes ##d\alpha = k \, \alpha \wedge \omega \neq 0##, which requires ##k \neq 0## and ##\alpha## and ##\omega## to be linearly independent. Then ##d\beta = 0## means ##dk \wedge \alpha + k \, d\alpha = dk \wedge \alpha + k \, \beta \wedge \omega = 0##, and the only way to satisfy that last equality is to have ##dk \wedge \alpha = 0## and ##d\alpha = 0##. But ##d\alpha = 0## contradicts our hypothesis, so this possibility cannot be correct.
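Written out with ##\beta = k \alpha## substituted everywhere, the first case reads:
$$0 = d\beta = d \left( k \alpha \right) = dk \wedge \alpha + k \, d\alpha = dk \wedge \alpha + k \, \beta \wedge \omega = dk \wedge \alpha + k^2 \, \alpha \wedge \omega .$$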
The second way to satisfy ##\left( d\beta - \beta \wedge \alpha \right) \wedge \omega = 0## is to have ##d\beta = \beta \wedge \alpha##. Up to sign, this is the same equation that is satisfied by ##d\omega## and ##\omega##, which points to ##\beta## being proportional to ##\omega##. But for ##\beta = f \omega## (with ##f## some function) we get, once again, ##d\alpha = \beta \wedge \omega = f \, \omega \wedge \omega = 0##, which contradicts our hypothesis. So this possibility cannot be correct either.
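Incidentally, applying ##dd = 0## once more in this second case terminates rather than regressing. A quick sketch of that check, under the case assumption ##d\beta = \beta \wedge \alpha##:
$$0 = dd\beta = d \left( \beta \wedge \alpha \right) = d\beta \wedge \alpha - \beta \wedge d\alpha = \beta \wedge \alpha \wedge \alpha - \beta \wedge \beta \wedge \omega = 0 .$$
Both terms vanish identically, so differentiating again yields no new conditions.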
In short, there is no way to have ##d\alpha = \beta \wedge \omega \neq 0## without leading to a contradiction.