chiropter

This sort of explanation is at least understandable and self-consistent, if not mathematically rigorous: "Degrees of freedom are a way of keeping score. A data set contains a number of observations, say, n. They constitute n individual pieces of information. These pieces of information can be used to estimate either parameters or variability. In general, each item being estimated costs one degree of freedom. The remaining degrees of freedom are used to estimate variability. All we have to do is count properly."

However, I don't like doing relatively simple mathematical operations without understanding their mathematical justification. If I learned enough matrix algebra to understand the 'quadratic form' sense of d.f., would it make more sense to me why we use a denominator of, e.g., n - 1 when estimating variance, or would I just know a more complicated way of deriving degrees of freedom?

(I have very limited exposure to matrix algebra, and it didn't seem very intuitive, i.e. it was hard to translate into non-matrix terms, but maybe that would change if I studied it more seriously and at least got used to its rules.)
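For what it's worth, one thing that helped me see *that* (if not *why*) the n - 1 denominator matters is a quick simulation: dividing by n systematically underestimates the true variance by a factor of (n - 1)/n, while dividing by n - 1 hits the true value on average. A minimal Python sketch of that check (my own illustration, not from any particular textbook):

```python
import random

random.seed(0)

def sample_variance(xs, ddof):
    """Sum of squared deviations from the sample mean, divided by n - ddof."""
    n = len(xs)
    mean = sum(xs) / n
    return sum((x - mean) ** 2 for x in xs) / (n - ddof)

# Draw many small samples from a distribution with known variance 1,
# and average each estimator over all the samples.
n, trials = 5, 200_000
biased_total = unbiased_total = 0.0
for _ in range(trials):
    xs = [random.gauss(0, 1) for _ in range(n)]
    biased_total += sample_variance(xs, ddof=0)    # divide by n
    unbiased_total += sample_variance(xs, ddof=1)  # divide by n - 1

print(round(biased_total / trials, 2))    # close to (n-1)/n = 0.8: biased low
print(round(unbiased_total / trials, 2))  # close to 1.0: unbiased
```

Of course this only demonstrates the fact empirically; it doesn't explain it, which is exactly my question.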

Thanks in advance!