etotheipi
The leading significant digits of numbers in sets of numerical data are said to follow "Benford's Law", which asserts that the probability that the first digit of a given data point is ##D## is about ##\log_{10}(1+ \frac{1}{D})##. An upshot is that we expect the leading digit to be ##1## roughly 30% of the time.
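As a quick sanity check of that formula, here is a minimal Python sketch (illustrative only) that just evaluates ##\log_{10}(1+\frac{1}{D})## for ##D = 1, \dots, 9##:

```python
import math

# Evaluate Benford's law probability for each possible leading digit D = 1..9.
for d in range(1, 10):
    p = math.log10(1 + 1 / d)
    print(f"P(leading digit = {d}) = {p:.3f}")

# Digit 1 comes out at about 0.301, i.e. roughly 30% of leading digits,
# and the probabilities decrease monotonically down to about 0.046 for digit 9.
```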
The proof is outlined here, and I can follow their reasoning, but I can't understand the very first step. They say:
Benford's law applies to data that are not dimensionless, so the numerical values of the data depend on the units. If there exists a universal probability distribution ##P(x)## over such numbers, then it must be invariant under a change of scale, so ##P(kx) = f(k)P(x)##
If you take that to be true, you can show ##f(k) = \frac{1}{k}##, but I wondered how you come up with the above assertion in the first place. What do we mean by scaling? I thought ##P(x)## was just supposed to model a PDF over the digits from 1 to 9.
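For reference, here is a rough numerical sketch (my own illustration, not from the linked proof) of what "invariance under a change of scale" might mean in practice: multiply every data point by a constant ##k## (as if converting units) and check that the leading-digit frequencies don't change. The choice of a log-uniform sample as a stand-in for "real" broadly ranging data is an assumption for the demo.

```python
import random
from collections import Counter

def leading_digit(x):
    """Return the first significant digit of a positive number."""
    s = f"{abs(x):e}"          # scientific notation, e.g. '3.140000e+02'
    return int(s[0])

random.seed(0)
# Sample values whose logarithms are uniform over several decades; such data
# follow Benford's law and stand in for "real" data spanning many orders of magnitude.
data = [10 ** random.uniform(0, 6) for _ in range(100_000)]

for k in (1, 2.54, 1000):      # e.g. rescaling inches -> cm, km -> m, etc.
    counts = Counter(leading_digit(k * x) for x in data)
    freqs = {d: round(counts[d] / len(data), 3) for d in range(1, 10)}
    print(f"k = {k}: {freqs}")

# The frequencies stay close to log10(1 + 1/d) for every k, which is the kind of
# invariance the quoted assumption P(kx) = f(k) P(x) is trying to capture.
```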