# What's the standard deviation of values in a histogram bar

1. Nov 17, 2017

### whatisreality

1. The problem statement, all variables and given/known data
If one bar of a histogram has been generated with $n$ entries from a total of $x$ measurements, i.e. the event occurs randomly $n$ times in the $x$ event interval, then what is the standard deviation of values in this bar? Let $k$ be the range of values that could have been measured for this particular bar of the histogram, and assume that the expectation value of $k$ is $n$.

2. Relevant equations

3. The attempt at a solution
I'm finding the wording tricky to understand. $n$ is the number of events in the interval, and the average value of $k$ is also $n$? The variance is the mean of the squared distances from the mean, and if all the events were equally likely, I'd divide by $n$ and have something like
$\frac{1}{n}\sum_i (k_i - n)^2$
But I'm not sure that's what it means when it says the event occurs randomly a total of $n$ times. I'd really appreciate any hints on how to tackle this, thanks for any help!

2. Nov 17, 2017

### Staff: Mentor

It is strange to define k as a range. The range should be 0 to x.

I guess you have to assume that the x measurements are independent.
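If the $x$ measurements really are independent and each one lands in this particular bar with some fixed probability $p$, then the bar's count $n$ follows a binomial distribution, whose standard deviation is $\sqrt{xp(1-p)}$ (close to $\sqrt{n}$ when $p$ is small). A minimal simulation sketch under that assumption; the values of `x`, `p`, and `trials` below are made up for illustration, not taken from the problem:

```python
import math
import random

random.seed(0)

# Hypothetical values for illustration -- not from the problem statement.
x = 500        # total measurements per experiment
p = 0.1        # probability a single measurement lands in this bar
trials = 4000  # number of simulated experiments

# Repeat the whole experiment many times, recording this bar's count each time.
counts = []
for _ in range(trials):
    n = sum(1 for _ in range(x) if random.random() < p)
    counts.append(n)

mean = sum(counts) / trials
var = sum((c - mean) ** 2 for c in counts) / trials
std = math.sqrt(var)

print("mean count:", mean)                          # should be near x * p
print("simulated std:", std)                        # should be near the binomial value
print("binomial std:", math.sqrt(x * p * (1 - p)))
```

The simulated spread of the counts should match the binomial formula closely, which is the usual justification for quoting roughly $\sqrt{n}$ as the uncertainty on a histogram bin.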

3. Nov 17, 2017

### whatisreality

Would you say then that $k$ is the class width of the single histogram bar being considered? That's how I've interpreted it. I also thought it would be quite a strange coincidence if the number of events $n$ recorded, which corresponds to the area of the bar, also happened to be the average value of $k$, which is what I take the statement to mean.

I think the wording is very confusing, but I've written it as it was given to us.