# Minimum time window needed to capture frequencies

I'm pretty sure there are theorems out there covering this, but I'm probably not using the right search terms to find them. Here's the problem:

I have a signal composed of a finite sum of standing-wave sinusoids (well, there's some DC and other background, but let's ignore those). Say my sampling period is $NT$ seconds, so my sample rate is $1/NT$ and my highest frequency is below $1/(2NT)$ (so I'm good on Nyquist). However, let's also say that I can only watch this signal for some time $\tau$, so I'm really only observing for $\tau = pNT$ seconds overall (where $p$ is the number of samples).

So let's actually ignore the discrete-time samples for a second; in continuous time I would see
$$g(t)=\sum_{n=1}^{p} \cos\left(\frac{2\pi \nu t}{n}\right)$$
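To make the setup concrete, here's a minimal sketch of the model, with illustrative values for $\nu$, $p$, and the sampling period (none are specified in the question):

```python
import numpy as np

nu = 5.0   # base frequency in Hz (illustrative; the question leaves nu unspecified)
p = 4      # number of summed components (illustrative)
NT = 0.01  # sampling period in seconds, i.e. sample rate 1/NT = 100 Hz

def g(t):
    """Continuous-time model: sum of cosines at frequencies nu/n for n = 1..p."""
    return sum(np.cos(2 * np.pi * nu * t / n) for n in range(1, p + 1))

# Observe for tau = 2 seconds, i.e. p_samples = tau / NT sample-and-hold values
t = np.arange(0.0, 2.0, NT)
samples = g(t)
```

The finite-window question then becomes: how long does `t` have to run before the DFT of `samples` cleanly separates the $p$ frequencies $\nu/n$?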

So, on one hand: how long do I have to sample to pick out all the correct frequencies? And additionally, given that I am actually only measuring in steps of $NT$ seconds (sample and hold), does that change the consequences of having a finite time window for measuring all these beats correctly? Spectral leakage is pretty close to what I'm looking for, but not quite.
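For the first question, the relevant rule of thumb is the Fourier (Rayleigh) resolution limit: with a rectangular window of length $\tau$, two tones separated by $\Delta f$ only show up as distinct spectral peaks once $\tau \gtrsim 1/\Delta f$. A quick numerical check (frequencies and rates chosen purely for illustration, not taken from the question):

```python
import numpy as np

fs = 100.0           # sample rate in Hz (illustrative)
f1, f2 = 10.0, 10.5  # two tones 0.5 Hz apart, so 1/df = 2 seconds

def n_peaks(tau):
    """Count distinct spectral peaks seen over a window of tau seconds."""
    t = np.arange(0.0, tau, 1.0 / fs)
    g = np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)
    # Zero-pad heavily so the count reflects true resolution, not DFT grid spacing
    spec = np.abs(np.fft.rfft(g, n=16 * len(g)))
    # A "peak" here is a local maximum above half the global maximum
    peaks = (spec[1:-1] > spec[:-2]) & (spec[1:-1] > spec[2:]) \
            & (spec[1:-1] > 0.5 * spec.max())
    return int(peaks.sum())

print(n_peaks(1.0))  # tau * df = 0.5: the tones merge into a single peak
print(n_peaks(4.0))  # tau * df = 2.0: two cleanly resolved peaks
```

Note that zero-padding only interpolates the spectrum; it cannot substitute for a longer observation window, which is exactly the point of the $1/\tau$ limit.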