sophiecentaur said:
That sounds to me the sort of argument that would tell you that Stacking is not worth doing. A single star that is hardly visible becomes visible when you stack many images. There is precisely the same advantage (in a different context) from looking at all the available point sources and adding their effects together to locate the position of the whole image.
No it's not. The problem is that if you can't find a single star to guide off of in your FOV, you don't have any usable point sources at all! None you can guide off of, at least. They're all buried in noise! Your software (and your eyeball) would not be able to reliably tell where the centroid of each star is between successive exposures, because the noise causes the pixel values to fluctuate too much. In most cases you wouldn't even be able to see that there's a star there at all.
Stacking exposures to produce a high-SNR image works because the process is essentially identical to taking one long exposure, except that each sub-exposure contributes its own thermal and readout noise. But stacking only works if you know that your exposures are all aligned to the same point on the sky to within a pixel or two. Looking at many different sources within the same exposure doesn't help if those sources are all buried in noise. And you can't sum different sources together to somehow increase the SNR of the image either (well, you can, but only by binning pixels together and losing most of your resolution).
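To make the stacking point concrete, here's a rough Python sketch (the star position, flux, and noise level are all made-up illustrative numbers, not anyone's real data): a star with a per-frame SNR of 0.2 is invisible in any single frame, but averaging 1600 frames cuts the noise by √1600 = 40, and the star pops out of the stack:

```python
import numpy as np

rng = np.random.default_rng(42)

n_pix, n_frames = 64, 1600
star_pos, star_flux = 30, 3.0    # hypothetical faint star: 3 e-/frame peak
noise_sigma = 15.0               # assumed per-frame read+thermal noise, 15 e- RMS

# Each frame: pure noise plus the star's tiny signal (per-frame SNR = 0.2)
frames = rng.normal(0.0, noise_sigma, size=(n_frames, n_pix))
frames[:, star_pos] += star_flux

# Stacking = averaging; noise in the stack drops to 15/sqrt(1600) = 0.375 e- RMS,
# so the stacked SNR is 3.0/0.375 = 8
stack = frames.mean(axis=0)

print("brightest pixel in one frame:", int(frames[0].argmax()))  # essentially random
print("brightest pixel in the stack:", int(stack.argmax()))      # should be pixel 30
```

The point is the √N law: the noise of the mean falls as σ/√N while the signal stays fixed, so SNR grows as √N. That's the "same as a long exposure" equivalence, minus the extra per-frame readout noise, and it only holds because every simulated frame here is perfectly registered: the star lands on the same pixel each time.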
sophiecentaur said:
The relative positions of the stars are not likely to change over the period of the feedback intervals so you compare one image with the identical image - shifted by a very small number of pixels (the seeing would be a problem as always).
The problem is that you don't know the position of the stars in the first place. Pixel 5436 might read highest in one exposure, and then, because of noise, pixel 5442 might read highest in the next, regardless of where the centroid of the star actually falls on the sensor. It might be at pixel 5436, it might be at 5442, or it might be at another nearby pixel. You can't tell, because the noise causes such a large fluctuation at each pixel (large relative to the star's signal) between exposures.
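Here's a quick numerical illustration of that fluctuation (a Python sketch with made-up numbers: the Gaussian PSF, sub-pixel star position, and noise level are all assumptions for illustration). For a bright star, the brightest pixel sits in the same place frame after frame; for a star whose peak is far below the noise floor, the brightest pixel lands essentially anywhere on the sensor:

```python
import numpy as np

rng = np.random.default_rng(1)

n_pix = 32
true_pos = 15.3                  # assumed sub-pixel star position
sigma_psf = 1.5                  # assumed seeing-blurred PSF width (pixels)
noise_sigma = 15.0               # assumed per-frame noise, 15 e- RMS
x = np.arange(n_pix)

def peak_pixel(peak_flux):
    """Brightest pixel in one noisy exposure of a Gaussian star."""
    psf = peak_flux * np.exp(-0.5 * ((x - true_pos) / sigma_psf) ** 2)
    frame = psf + rng.normal(0.0, noise_sigma, size=n_pix)
    return int(frame.argmax())

print("bright star (SNR 20):  ", [peak_pixel(300.0) for _ in range(5)])
print("faint star  (SNR 1/3): ", [peak_pixel(5.0) for _ in range(5)])
```

In a typical run the bright-star peaks should cluster at pixels 15-16, while the faint-star peaks scatter across the whole strip. A proper centroid fit fails in exactly the same way, since it's driven by the same noise-dominated pixel values, which is why there's no reliable position to feed back to the mount.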
sophiecentaur said:
SNR improvements can be very significant in threshold conditions. This is a process that is bordering on the trivial in the context of compression of moving pictures. It is the sort of thing that allows very good slo-mo to be taken from normal frame rate TV pictures.
What SNR improvements? What are they actually doing in those slow-mo videos to increase the SNR? What techniques are they using?
sophiecentaur said:
I am not qualified to comment on how 'worth while' improved tracking over what one guide star will give you in the context of today's equipment and practices but I can comment on the fact that the whole image (which you guys refer to as 'many guide stars' for only historical reasons) contains more positional information than the image of a single guide star.
Okay. Provide some evidence for this please. What exact method would be used? How does it work? How does it apply to auto-guiding? And I'm not asking for a hand-wavy explanation, I'm asking for something more concrete, preferably with some formulas to back it up if you have them.
sophiecentaur said:
The human brain does processing along precisely the same lines as the system I am suggesting. When you are involved in sport, hunting, fighting etc., you assess the whole of your visual image to guide your motions. If you were playing football in the dark and players and ball were lit with a single bright LED (which is the equivalent of single guide stars) then performance would be much worse. That is pretty obvious.
I'm sorry but I don't see any connection between your analogy and autoguiding. You start out by saying that you're evaluating the entire FOV for positional information, which to me implies that the players and ball are "stars", but then you switch and talk about illuminating them all with a single LED, with the LED being a single guide star. Or, as just occurred to me, did you mean to say that each player and the ball had a single LED on them, so that you'd see two dozen points of light when looking out over the field?
sophiecentaur said:
You make an important point there and I am aware of it. However, the justification for not considering something should never be 'because we have always done it this way'. I am not suggesting just "guiding off multiple stars".
I cannot fathom how you came to the conclusion that our arguments were based on the notion that we've always done autoguiding this way and it shouldn't be changed. No. Our arguments are based on our own understanding of how autoguiding and imaging work. Now, it's always possible that we might somehow be wrong in the end, but that does not mean we don't have valid reasons for arguing against your idea.
As for your idea not being about guiding off of multiple guide stars, I'm afraid I can't see any other way to describe it. You're looking at multiple stars in the image and getting positional information from them to calculate correction signals for the mount, right? And this is done by comparing the positions of multiple stars in the FOV between successive exposures, right? But that's exactly how autoguiding off of a single star works. You might be thinking of bringing some other processing techniques into play, but the core idea is still guiding off of multiple guide stars, is it not?
sophiecentaur said:
I accept your doubts about the present viability of my idea but I would really like to think you appreciate what's actually involved and the advantages that could be gained, which are significant.
We can't appreciate what's involved if we don't think it's a valid idea. That doesn't mean we're not listening to you, or that we aren't open to your idea being valid; we just don't think it is.