Excluding mean in variance calculation

I'm reading a finance book in which the author proposes excluding the mean when calculating the variance of returns, on the grounds that it is difficult to separate the drift of the price series from its variance. In effect, he estimates the sample variance as $\frac{1}{N}\sum_{i=1}^N x_i^2$, i.e. the second moment about zero rather than about the sample mean. I haven't seen variance calculated this way before, so my question is: is this a common estimator, and could it yield problematic results?
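For concreteness, here is a small NumPy sketch (the synthetic returns and variable names are my own, not from the book) illustrating that the mean-free estimator equals the usual $1/N$ variance plus the squared sample mean, so the two differ only when the drift is nonzero:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic daily returns with a small positive drift
returns = rng.normal(loc=0.001, scale=0.01, size=1000)

# The book's estimator: second moment about zero (mean not subtracted)
second_moment = np.mean(returns**2)

# The usual variance with 1/N normalization (np.var default, ddof=0)
variance = np.var(returns)

# Algebraic identity: E[x^2] = Var(x) + (E[x])^2
print(second_moment)
print(variance + np.mean(returns)**2)  # matches second_moment up to rounding
```

So for daily returns, where the drift is tiny relative to the volatility, the two estimates are nearly identical; the difference grows with the squared mean return.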