ThereIam
I'm doing an experiment using a super fancy oscilloscope. I'm measuring a tiny output voltage that varies with time. I was instructed to make my sampling rate as small as possible (~500 S/s).
Now, in other applications where the signal is sent to a computer instead of my scope, it gets converted to a table of about 1 million values, generated roughly once a second. So I'm literally getting 500 out of a million values, and I can't wrap my head around why.
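Just for scale, here's a rough back-of-the-envelope sketch of the ratio I mean (the numbers are my own estimates of my setup, not anything from the scope's manual):

```python
# Illustrative only: how much of the signal the scope keeps at ~500 S/s
# if the source really produces ~1 million values per second.
source_rate = 1_000_000   # values per second in the table the computer gets (my estimate)
scope_rate = 500          # samples per second on my scope (~500 S/s)

decimation = source_rate / scope_rate
print(f"The scope keeps roughly 1 in every {decimation:.0f} values")  # ~1 in 2000
```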
Can you guys think of reasons why the sampling rate ought to be so low? (I do trust this guy; he's very knowledgeable, but he's on vacation.) I get the impression that he did NOT think we were needlessly tossing away data, even though having more data would obviously be better for what we're doing. It was a practical limitation of the way the scope was receiving the signal, or something like that.
I'm an extreme novice when it comes to electronics.
Thanks.