fluidistic
Hello people,
I am looking for a website or resource on building a high-resolution DIY spectrometer. If you know a good one, please let me know.
I've watched several YouTube videos about it and searched with both Google and DuckDuckGo, but I am left unsatisfied.
There are several things I do not understand, both about the physics itself and about the setups I've seen.
In one tutorial, they made a dark tube with a narrow slit formed by two razor blades at one end and, at the opposite end, a hole covered by a piece of a CD. IMO both ends play the same role, i.e. they diffract light, so I do not understand why both are needed.
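For reference, this is the relation I have in mind for the grating part (the standard grating equation; here d would be the CD's track spacing and θ_m the diffraction angle of order m, assuming normal incidence):

```latex
% Constructive interference condition for a diffraction grating
% (normal incidence), giving a different angle for each wavelength:
d \sin\theta_m = m\,\lambda, \qquad m = 0, \pm 1, \pm 2, \dots
```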
Another thing I don't understand is the photosensor. Does it need to be moved (or the prism rotated, if a prism is used to disperse the light of interest) so that every part of the incoming light's spectrum interacts with it? Or would a lens that focuses the whole spectrum onto the photosensor do the same job?
I have seen many DIY projects using either a webcam or a regular camera sensor. But as far as I know, these sensors are biased in the sense that they try to mimic the human eye, so they are about 3 times more sensitive to green light than to red. Is the software used to display the result as a spectrum aware of this bias? If each photosensor is biased differently, how does the software know how to remove the bias from the raw data?
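My naive guess is that one would calibrate this away by recording a source of known spectrum once, and then dividing every later measurement by the resulting relative response curve. A minimal sketch of that guess (the file names and array layout are assumptions, not anything from a real project):

```python
import numpy as np

# Assumed one-time calibration product: the sensor's relative spectral
# response, obtained by recording a source of known spectrum (e.g. an
# incandescent bulb treated as a blackbody) and dividing the measured
# counts by the expected spectrum, bin by bin.
response = np.load("sensor_response.npy")  # hypothetical file, one value per wavelength bin

raw = np.load("raw_counts.npy")            # hypothetical uncorrected spectrum, same bins
corrected = raw / np.clip(response, 1e-6, None)  # clip to avoid division by zero
```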
How does the software translate a picture, or image, into a spectrum that shows the intensity/count of each part of the EM spectrum?
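To make this last question concrete, here is my rough picture of what such software might do, sketched in Python (the file name, pixel positions and reference lines are made-up placeholders): the grating spreads wavelengths along one image axis, so one averages the pixel rows into a single column profile and maps pixel columns to wavelengths via known reference lines.

```python
import numpy as np
from PIL import Image

# Hypothetical photo of the diffraction pattern seen through the grating.
img = np.asarray(Image.open("spectrum_photo.png").convert("RGB"), dtype=float)

# Sum the color channels, then average over rows: each pixel column
# corresponds to one diffraction angle and hence one wavelength.
intensity = img.sum(axis=2).mean(axis=0)

# Wavelength calibration from two known emission lines (placeholder
# pixel positions; e.g. mercury lines of a fluorescent lamp), with
# linear interpolation in between.
px_known = np.array([212.0, 655.0])   # pixel columns of the reference lines
wl_known = np.array([435.8, 546.1])   # mercury lines in nm
wavelengths = np.interp(np.arange(intensity.size), px_known, wl_known)
# 'wavelengths[i]' and 'intensity[i]' together form the plotted spectrum.
```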