Yes. That's what I meant. Identifying a player and following him is not as easy if there's just one LED on him; the total player image is what your eye makes best use of. This is particularly relevant, of course, when the lighting is low (low SNR). That is precisely what the processing in my idea would do.

I am disappointed that the reaction to this idea has been so negative. Stacking is just a way of using information gathered over an extended time; my idea is merely gathering more relevant information by looking over an extended spatial region. There is a direct correspondence. The process is not as obvious as the method a single-star guider uses, and a conventional guider can only work on a star that is bright enough for a simple algorithm. Is it not obvious that there is far more information available in the total field than in the position of just one star? You do not need to identify the positions of individual stars and work on each individually; it is the whole field that would be processed.

Here's one method that would work. Take the 2D correlation function between successive captured frames. That will give a peak whose position corresponds to the displacement vector. For a totally random distribution there will be a single dominant peak; subsidiary peaks only arise where there are regularly repeating features, and the sky has no repeating patterns, so the main correlation peak would stand out clearly. You can take frames at as high a rate as your correlation processing time and camera noise will allow. Noise and hot pixels would be dealt with in the same way as in normal imaging.
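A minimal numpy sketch of that correlation step, to show how little is involved (the function and the synthetic star field are mine, purely for illustration): correlate two frames via the FFT, find the peak, and read off the displacement vector.

```python
import numpy as np

def field_shift(frame_a, frame_b):
    """Estimate the (dy, dx) displacement of frame_a relative to frame_b
    from the peak of their 2D cross-correlation (computed via FFT)."""
    F = np.fft.fft2(frame_a)
    G = np.fft.fft2(frame_b)
    # Inverse FFT of the cross-power spectrum is the correlation surface.
    corr = np.fft.ifft2(F * np.conj(G)).real
    # The brightest point of the surface is the displacement vector.
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # FFT correlation is circular: indices past N/2 mean negative shifts.
    return tuple(int(p - n) if p > n // 2 else int(p)
                 for p, n in zip(peak, corr.shape))

# Synthetic "random sky": 50 stars at random positions and brightnesses.
rng = np.random.default_rng(0)
field = np.zeros((128, 128))
ys, xs = rng.integers(0, 128, 50), rng.integers(0, 128, 50)
field[ys, xs] = rng.uniform(0.5, 1.0, 50)

# Second frame: the same field displaced by a known (3, -5) pixels.
shifted = np.roll(field, (3, -5), axis=(0, 1))
print(field_shift(shifted, field))  # recovers the (3, -5) displacement
```

No star is ever identified individually: every pixel of both frames contributes to the correlation peak, which is exactly the point about using the whole field. In a real guider you would work on dark-subtracted frames and interpolate around the peak for sub-pixel accuracy.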