Let's define D1(n) to be the first digit of a natural number n, e.g. D1(571) = 5, D1(119) = 1, ... Naturally one would expect all nine possible first digits to occur with the same probability 1/9, but looking at it in more detail one finds rather counter-intuitive results: if one counts these digits and calculates the probability on an interval [1,N], one finds that the limit as N goes to infinity does not exist!

The probability can be defined using a counting function C'1'(n), which counts the numbers in [1,n] starting with the digit '1'. Applied to n = 1, 2, 3, ..., 9, 10, 11, ..., 19, 20, ..., 99, 100, 101, ..., this counting function yields 1, 1, 1, ..., 1, 2, 3, ..., 11, 11, ..., 11, 12, 13, ... The probability of finding '1' as the first digit can then be defined as P'1'(n) = C'1'(n)/n: one counts the numbers in [1,n] starting with '1' and divides by n.

One immediately sees that this probability attains its minimum value 1/9 whenever n = 9, 99, 999, ..., where C'1' takes the values 1, 11, 111, ..., and it attains local maxima (which approach 5/9 from above) whenever n is of the form 19, 199, 1999, ... That means that P'1'(n) = C'1'(n)/n does not converge to a fixed value p as n goes to infinity!

How can one resolve this issue? Is there a unique way to define p differently, with the "correct" result 1/9? Or is this probability not defined at all?

Let me note that I think Benford's law does not apply here. There is no reason for scale invariance; we are not talking about random numbers, or about measurements of sizes, areas, or the like, but just about the natural numbers.
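For concreteness, here is a minimal Python sketch of the quantities above (the function names first_digit, count_leading_one, and prob_leading_one are my own, chosen to mirror D1, C'1', and P'1'); it reproduces the counts by brute force and makes the oscillation visible numerically:

```python
def first_digit(n):
    """D1(n): the leading decimal digit of the natural number n."""
    while n >= 10:
        n //= 10
    return n

def count_leading_one(n):
    """C'1'(n): how many numbers in [1, n] have first digit '1'."""
    return sum(1 for k in range(1, n + 1) if first_digit(k) == 1)

def prob_leading_one(n):
    """P'1'(n) = C'1'(n) / n."""
    return count_leading_one(n) / n

# Minima: P'1'(n) equals 1/9 exactly at n = 9, 99, 999, ...
for n in (9, 99, 999, 9999):
    print(n, prob_leading_one(n))   # 0.1111... each time

# Local maxima at n = 19, 199, 1999, ...: the values decrease toward 5/9
for n in (19, 199, 1999, 19999):
    print(n, prob_leading_one(n))   # 0.5789..., 0.5578..., 0.5558..., 0.5556...
```

Running this confirms the claim: P'1'(n) keeps swinging between 1/9 and values just above 5/9, so the limit of P'1'(n) as n goes to infinity does not exist.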