BetterSense
This is not homework. I just need help tackling an analytical problem that is apparently beyond my skills.
I am modeling an imaginary physical situation: a photo sensor in a dark room, pointed squarely at a window some distance away. The "window" is actually a diffuse hemispheric dome, bulging outward from the room. It is perfectly diffuse, so every part of it appears equally bright. I need to figure out how close to the window I should hold the sensor to achieve the maximum reading.*
I consider the effect on the sensor of an infinitesimal area element of the dome, so that I can integrate over the whole dome later. The light falling on the sensor from such an element falls off with the distance to that element and with the angle between the element and the sensor's center axis (the sensor's own sensitivity is a function of acceptance angle). I have already worked all of this out; this is not the problem.
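For what it's worth, here is one way to write down the element contribution described above, so the later steps have something concrete to refer to. I am assuming (my assumptions, since the post leaves them general) a Lambertian dome of uniform radiance $L$ and a sensor sensitivity $f(\theta_s)$ of the off-axis angle $\theta_s$:

$$\mathrm{d}S \;=\; L\, f(\theta_s)\,\frac{\cos\theta_e}{r^{2}}\,\mathrm{d}A,$$

where $\mathrm{d}A$ is the area element on the dome, $r$ is the distance from the sensor to that element, and $\theta_e$ is the angle between the element's inward normal and the line from the element to the sensor.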
The problem is that I can integrate over the whole dome at any given distance from the window and obtain a "brightness factor" for that distance. Doing so involves integrating over a range of distances equal to the radius of the hemispherical window, though. What I actually need is that brightness factor (calculable for any single distance!) as a function of distance from the window. Essentially, I need to evaluate that surface integral for every distance, from touching the inside of the hemispheric window out to infinity, and I don't know how to do that.
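One way to get a brightness factor as an explicit function of distance is to pick coordinates up front. In my parametrization (with a cosine sensitivity law assumed purely for illustration): center the hemisphere of radius $R$ at the origin, bulging toward $+z$, with the sensor on the axis at height $z$ pointing along $+z$ (so $z<0$ is in the room, $0<z<R$ intrudes into the dome). Writing $u=\cos\varphi$ for an element's polar angle, the azimuthal integral contributes $2\pi$ by symmetry, leaving

$$S(z) \;=\; 2\pi R^{2}\int_{0}^{1} \frac{\max(Ru - z,\,0)\,\bigl(R - z u\bigr)}{\bigl(R^{2} - 2Rzu + z^{2}\bigr)^{2}}\,\mathrm{d}u,$$

where the numerator factors are $r\cos\theta_s$ and $r\cos\theta_e$, the denominator is $r^{4}$, and the $\max(\cdot,0)$ drops elements behind the sensor's plane. This is a single ordinary integral in $u$ for each $z$, which is what makes an analytic treatment plausible.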
In real life what I would do is use a spreadsheet to calculate the integral for a 'lot' of 'closely-spaced' distances, make an XY plot, and pick what looks like the peak. However, I want to be able to do this analytically. Help?
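The spreadsheet approach can be sketched in a few lines of stdlib-only Python. This is a minimal sketch under my own assumptions, not the poster's actual model: a cosine sensitivity law, a Lambertian dome of uniform (unit) radiance, radius R = 1, and a sensor on the dome's axis at height z (hemisphere centered at the origin, bulging toward +z, so z < 0 is in the room and 0 < z < R intrudes into the dome).

```python
import math

def signal(z, R=1.0, n=2000):
    """Relative sensor reading for a sensor on the dome's axis at height z.

    Assumes (my assumptions, not from the original post) a cosine
    sensitivity law and a Lambertian dome of uniform radiance, with the
    radiance set to 1 so readings are relative.
    """
    # One-dimensional integral in u = cos(phi); the azimuthal integral
    # contributes 2*pi by symmetry.  Midpoint rule with n panels.
    du = 1.0 / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * du
        num_s = R * u - z            # r * cos(theta_sensor)
        if num_s <= 0.0:
            continue                 # element behind the sensor's plane
        num_e = R - z * u            # r * cos(theta_element)
        r2 = R * R - 2.0 * R * z * u + z * z   # r squared
        total += num_s * num_e / (r2 * r2) * du
    return 2.0 * math.pi * R * R * total

# The spreadsheet-style scan: evaluate the reading at closely spaced
# sensor positions and look for where it peaks or levels off.
for z in (-4.0, -2.0, -1.0, -0.5, 0.0, 0.25, 0.5, 0.75):
    print(f"z = {z:5.2f} R   reading = {signal(z):.4f}")
```

In my runs the reading climbs as the sensor approaches and flattens out near π once the sensor passes the dome's rim plane, which is the classic uniform-radiance behavior for a cosine-response detector; substituting your actual sensitivity function for the cosine factor turns this into the exact scan you describe.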
*The answer is not "as close as possible", because the sensor's sensitivity depends on angle: as the sensor approaches and intrudes into the hemisphere, less of the hemisphere's surface is "visible" to it.