Is there a way to improve CRT raster scan efficiency by scanning both ways?

  • Thread starter: artis
  • Tags: CRT
Summary
The discussion centers on the potential for improving CRT raster scan efficiency by scanning in both directions, which could reduce flicker and increase frame rates. The original method requires the electron beam to reset after each line, causing delays that could be eliminated by alternating scan directions. However, this approach raises concerns about non-parallel scan lines and the complexity of vertical deflection circuitry. Additionally, while modern displays like LCDs and OLEDs do not face these issues, implementing such a system in CRTs could lead to technical challenges, including maintaining image quality and synchronization. Overall, the concept presents intriguing possibilities but is fraught with practical difficulties.
  • #61
Well, as far as I'm aware we also don't have camera sensors that could catch all of the light passed onto them at every instant at once.
As for screens, I'm not sure whether that is technically doable, because in a TFT screen each subpixel transistor would need individual access wires to its gate and drain, with only the source left common to all the pixel transistors. That would take up much more of the space otherwise reserved for light to pass through from behind.
But then I got thinking about OLED displays, and maybe one could implement this for them, because in an OLED the subpixel itself is the light-emission source, so there is no need for transparency as in a TFT; technically you can cover the backside of the pixel matrix with as many wires or as much wire mesh as you like, or as is technically possible.
Then there is the question of how fast you could possibly drive them in a demanding video.
Although, on second thought, the driving speed shouldn't need to be that high, because in any video the actual rate at which pixels change their color is not that high.
As of now an average TFT panel is driven in the MHz range, as far as I know, and that is because you have to assemble, say, 50 frames per second, and for every frame you have to drive through the panel row by row. In a rasterless, scanless scheme I think an average video could actually get by with lower overall panel drive frequencies than we use now, if you had access to each individual pixel - then you don't need to rush to reach every pixel in time. Would I be correct in saying that the main problem here is not the speed/frequency, but getting a physical layout where every pixel is individually controllable and then having a drive circuit capable of managing so many pixels at once?
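A rough back-of-envelope sketch in Python (all the numbers are assumed, just for illustration) of why per-pixel addressing could relax the drive rate compared with rewriting every row of every frame:

```python
# Rough comparison (illustrative numbers only): row-by-row refresh of a
# conventional panel vs. the average update rate needed if every pixel were
# individually addressable and only changed pixels had to be driven.

width, height = 1920, 1080          # assumed pixel grid
frame_rate = 50                     # frames per second
changed_fraction = 0.05             # assume ~5% of pixels change per frame (video is mostly static)

# Conventional raster-style refresh: every row is rewritten every frame.
row_rate = height * frame_rate                      # rows driven per second
pixel_writes_scanned = width * height * frame_rate  # pixel writes per second

# Hypothetical per-pixel ("scanless") drive: only changed pixels are updated.
pixel_writes_addressed = int(width * height * frame_rate * changed_fraction)

print(f"Row drive rate (scanned):         {row_rate:,} rows/s")
print(f"Pixel writes/s (scanned):         {pixel_writes_scanned:,}")
print(f"Pixel writes/s (per-pixel drive): {pixel_writes_addressed:,}")
```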
 
  • #62
As far as I'm concerned, this kind of image capture and processing will likely come down to interfacing with some kind of neural network: either biological or artificial.
And as long as the inbuilt 'instruments' we are born with give satisfactory performance in interacting with the usual 2X2D image we perceive, I do not expect the current way we capture, transmit and display images to change.

The special areas/cases would be sufficient to drive this direction of development forward, though.
 
  • #63
entropivore said:
if there's even a there there.
I'll let you know, when we get there.
entropivore said:
if the pixels are sent in apparently random order, then the overhead of sending addresses along with the pixel data becomes a heavy burden.
Every quantity of data is accompanied by its coding rules. The simplest system (e.g. old TV) uses coding rules that only change when the system is changed; that's bad value but easy to engineer. Beyond that there is always some header information which deals with the coding and error reduction. Any system that's worth its salt will use less overhead data than it saves.
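A toy estimate (Python, with assumed bit widths rather than any real standard) of when sending (address, value) pairs for only the changed pixels costs less than resending the whole frame:

```python
# Illustrative overhead estimate: sending (address, value) pairs for changed
# pixels vs. resending the whole frame. The address overhead only pays off
# below a break-even fraction of changed pixels.

width, height = 1920, 1080
pixels = width * height
value_bits = 24                        # assumed bits per pixel value
addr_bits = (pixels - 1).bit_length()  # bits needed to address any pixel

full_frame_bits = pixels * value_bits

def addressed_bits(changed_fraction):
    changed = int(pixels * changed_fraction)
    return changed * (addr_bits + value_bits)

break_even = value_bits / (addr_bits + value_bits)
print(f"Address width: {addr_bits} bits, break-even at ~{break_even:.0%} changed pixels")
for frac in (0.01, 0.1, 0.5):
    print(f"{frac:5.0%} changed: {addressed_bits(frac)/full_frame_bits:.2f}x of a full frame")
```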
Rive said:
the usual 2X2D image we perceive
We may be shown a 2D image (in fact, it's a sequence of images) but we perceive much more than that. You are ignoring the motion portrayal and the way our brains 'remember' what's behind that car which just parked there. Also, we used to watch many "satisfying" performances on 405 line B/W telly and people often say the pictures are better on Radio.
 
  • #64
sophiecentaur said:
people often say the pictures are better on Radio.
Even better in books. The thing that's similar between picture books and those with words is that when you start reading them, they're all picture books.

Anyway, data-wise I don't think transmitting a 4K image where each pixel is sent the instant it changes in the camera (sort of like in old-school TV) is that difficult. Given that we now have AI software that, from a big enough database, can basically recover object forms, sizes and colors from a blurred-out or faulty image, we could essentially have instant image transmission, just in "bad quality": instead of sending each subpixel you send a larger block of the screen as one data unit, and at the end the software sees the "square bullet" and rounds it off. So in a sense, even though you are sending the live image as it changes, all at once, you are just making it have a lower resolution and then regaining the missing resolution at the end. Sort of like recording a live concert, transmitting it in MP3 or some other "squeezed" format, then restoring it at the end.

Although I'm not sure how much you can "chop off" at the originating end and still recover at the user end.
But given that the user end would have sophisticated software, I assume quite a lot, and it would still qualify as a real-time, no-raster, no-scan image.
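A very rough sketch (Python/NumPy, nothing like a real codec) of the "send coarse blocks, round them off at the receiver" idea - block averaging at the sender, naive upscaling at the receiver:

```python
# Toy sketch: average NxN blocks at the sender, repeat them back out at the
# receiver. Real systems would use a proper codec and learned reconstruction,
# not this naive nearest-neighbour fill.

import numpy as np

def downsample_blocks(frame, n):
    """Average each n x n block into a single value (the 'coarse' transmission)."""
    h, w = frame.shape
    return frame.reshape(h // n, n, w // n, n).mean(axis=(1, 3))

def upscale_nearest(coarse, n):
    """Receiver-side 'rounding off': repeat each coarse sample back to n x n pixels."""
    return np.repeat(np.repeat(coarse, n, axis=0), n, axis=1)

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(64, 64)).astype(float)   # stand-in 64x64 grey frame
coarse = downsample_blocks(frame, 8)                         # 8x8 blocks -> 64x fewer samples
restored = upscale_nearest(coarse, 8)

print("coarse shape:", coarse.shape)
print("mean abs error after naive restore:", np.abs(frame - restored).mean())
```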
 
  • #65
artis said:
Well as far as I'm aware we also don't have camera sensors that could catch all of the light passed onto them at every instant at once.
I don"t know what you you mean by this. Can you elucidate your meaning? Do you mean different colors?
 
  • #66
artis said:
Anyway, data-wise I don't think transmitting a 4K image where each pixel is sent the instant it changes in the camera (sort of like in old-school TV) is that difficult,
I can't make sense of this. Even if you had 4K x 16-bit data transmitted in parallel, the time occupied by each clock pulse and the detection time would be significant. And I think you are forgetting that still images are very often of very limited use, so you would still need to take some time (even if compressed) to send a few seconds' worth of a movie.
Wherever you turn, the fundamental limitations of bandwidth and noise are always there. In the camera, each exposure takes a finite time, so the output rate is limited - for each pixel, however the data is arranged and transmitted. "Old school" TV is incredibly wasteful because it sends the same information every frame. You could sometimes save a lot of capacity by sending the message "one hour of test card F" (23 ASCII characters per hour). To make good use of a channel, the sequence of scanned images needs to be analysed to find the amount of actual information that's needed. Doing this on the fly (one frame at a time) is fast but wasteful. If you are prepared to introduce a delay in transmission, you could examine a sliding time window of, say, 1 s or 100 s and make sure you send as little repeated data as possible.
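As a crude illustration (Python/NumPy sketch, not how MPEG or any real broadcast codec works), removing temporal redundancy can be as simple as comparing against the last state the receiver has and sending only what changed:

```python
# Crude sketch of removing temporal redundancy: keep the last state the
# receiver has, and transmit only pixels that actually changed since then.
# Real codecs do far more, e.g. motion compensation over a time window.

import numpy as np

def changed_pixels(previous, current, threshold=0):
    """Return (row, col, value) triples for pixels that differ from the last sent state."""
    rows, cols = np.where(np.abs(current - previous) > threshold)
    return [(r, c, current[r, c]) for r, c in zip(rows, cols)]

rng = np.random.default_rng(1)
prev = rng.integers(0, 256, size=(32, 32))
curr = prev.copy()
curr[5, 5:10] += 40        # pretend a small object moved: only a few pixels change

updates = changed_pixels(prev, curr)
print(f"pixels in frame: {curr.size}, pixels actually sent: {len(updates)}")
```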
None of the above depends on whether the imaging is scanned or random-access; if you can extract the actual picture content then you can send it and display it in any way you want. We already standards-convert to suit our device screens and resolutions.
 
  • #67
hutchphd said:
I don"t know what you you mean by this. Can you elucidate your meaning? Do you mean different colors?
sophiecentaur said:
I can't make sense of this. Even if you had 4K x 16-bit data transmitted in parallel, the time occupied by each clock pulse and the detection time would be significant. And I think you are forgetting that still images are very often of very limited use, so you would still need to take some time (even if compressed) to send a few seconds' worth of a movie.
Wherever you turn, the fundamental limitations of bandwidth and noise are always there. In the camera, each exposure takes a finite time so the output rate is limited - for each pixel,
My meaning was this. All known digital image sensors, like CCDs for example, are not continuous; they instead have an exposure time and a transfer/shutter time. The faster ones, like "frame transfer" and "interline transfer", are simply very fast, but they still have a "dead time", i.e. a time when the MOS structure doesn't accept photoelectrons and is instead configured by the gate electrodes to move the lines of pixel charges into the serial register for output.

Now, not considering crosstalk, bandwidth, or any other problems, what I was thinking of was more like an analog capture frame, where each pixel, instead of being charged/moved/discharged, is constantly on. Much like a diode, but with no voltage drop. So the moment a pixel is hit by photons it outputs a signal proportional to the number of photons landing on it. The same is true for CCDs, for example, where each pixel's charge represents the number of photons that hit the pixel during the exposure time; the difference would be that instead of that charge then being moved and read out, it is read out continually with no delay. This would most likely necessitate an analog approach.

The problem of course is that even if one managed to build a pixel frame where each pixel can be continually read out, it would still most likely call for some sort of data management and possibly an ADC down the road, as I cannot imagine how one could transmit 4k pixels' worth of data continuously.

Or you could still read the entire pixel matrix continually and then ADC each pixel, ending up with a gigantic amount of continuous digital data representing the matrix at each instant; so in the end it would still not be continuous, but the frame rate could be kicked sky high, as the screen would not be built from one scan line at a time but from all lines at once, as fast as one could remake those lines each second. Another way to do this would be to represent each pixel with a photodiode and then transmit the whole frame optically, but that again leads to pretty much the previously mentioned problems.
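A rough data-rate estimate (Python, with an assumed sample rate and ADC resolution, not any real sensor) of what continuously digitising a whole 4K matrix would mean:

```python
# Rough data-rate estimate for reading out every pixel of a 4K sensor
# "all at once" through an ADC. All the numbers are assumptions.

width, height = 3840, 2160     # UHD "4K" pixel count
bits_per_sample = 12           # assumed ADC resolution
samples_per_second = 1000      # assume each pixel is sampled 1000 times a second

pixels = width * height
bits_per_second = pixels * bits_per_sample * samples_per_second

print(f"pixels: {pixels:,}")
print(f"raw data rate: {bits_per_second/1e9:.1f} Gbit/s "
      f"({bits_per_second/8/1e9:.1f} GB/s)")
```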
But it is an interesting thought puzzle.
 
  • #68
Comparing this idea to the old-school CRT method: it would be like having a tube not with one electron gun, high voltage and deflection, but with a multitude of small guns close to each pixel/subpixel, working not in a raster-scan fashion but continually illuminating each pixel (or not illuminating it, during dark moments). In this scheme the whole screen would not be divided into frames but lit continually, while the brightness of individual subpixels/pixels changes with the motion in the video. I think the closest we have ever come to anything like this is plasma screens, but IIRC they too are scanned rather than continually changing each pixel's brightness, although in theory they could, if the control circuitry and signal processing were up to the task.

This approach, I believe, would make a video seem 100% natural, as that is how we perceive things in nature and how light/vision works naturally: the light source (the sun, for example) is continuous, and any change in the light reflected by a moving body is also continuous, not framed or scanned.
Best things in life are all analog I guess...
 
  • #69
I don't understand your point. The front end of most existing solid state image sensors is essentially one analog photodiode (sometimes a phototransistor) per pixel. The quantum efficiency can be quite high (so in that sense it is "digital") and the "down time" is minimal.
How the resultant electrons are then handled depends upon the device but typically they ~continuously charge a capacitor that is read out in a variety of clever ways. Light energy is not squandered.
So what are you attempting to improve?
 
  • #70
artis said:
My meaning was this. All known digital image sensors, like CCDs for example, are not continuous; they instead have an exposure time and a transfer/shutter time. The faster ones, like "frame transfer" and "interline transfer", are simply very fast, but they still have a "dead time", i.e. a time when the MOS structure doesn't accept photoelectrons and is instead configured by the gate electrodes to move the lines of pixel charges into the serial register for output.
I interpret this as meaning that you appreciate that all such systems are sampled. Yes they are, because, with the exception of analogue sound, every form of transmission or recording involves sampling - whether explicit or not.
artis said:
Another way to do this would be to represent each pixel with a photodiode and then transmit the whole frame optically, but that again leads to pretty much the previously mentioned problems.
But it is an interesting thought puzzle.
But even this is a sampled system (spatially, in pixels). Transmitting 4k separate analogue channels would be a pointless extreme. Every information channel is limited in power, bandwidth and space. By space, I mean available signal pathways. Bandwidth and space can be described together in terms of bandwidth - i.e. two channels of a given data rate are equivalent to a single channel of twice the data rate - and the bottleneck is always the channel bandwidth (or the equivalent for recording media).
It's already been found that choosing an efficient coding always involves finding out as much as possible about the psychology of human perception. The research that produced MPEG, in all its versions, involved a lot of subjective testing to get the most possible juice out of the lemon, but such systems are stuck with backwards-compatibility problems. One advantage we do have is that processing power is still going up and up, so we can look forward to better and better experiences of image and sound transmission.
 
  • #71
All this stuff about color is beyond the original question, as none of that was known when the standards were set.
artis said:
Were there any ideas to have the raster scan pattern from left to right and then in the next row from right to left back and then again from left to right etc. This way instead of moving the beam back and starting a new line a new line would simply be drawn from the side where the beam left the previous line.
The quick answer: What you are describing is technically possible, and might be a more efficient use of bandwidth. However, vacuum tubes in the 1940s were not up to the task. (TVs needed to be affordable by ordinary people, so the number of vacuum tubes had to be kept to a minimum.) In all the analog standards, the horizontal and vertical positioning are achieved by simple sawtooth waves, one at 50/60 Hz, the other at about 15,000 Hz. The back-and-forth pattern you describe would be easy: just replace the 15,000 Hz sawtooth wave with a triangle wave at the same frequency. But the consequences for vertical positioning would be surprisingly complex, as other posters have mentioned. It would require subtle voltage control that those tubes could not achieve.
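A toy sketch (Python/NumPy, illustrative numbers only) of the difference between the two deflection waveforms - the sawtooth has a large instantaneous flyback jump, while the triangle simply reverses direction:

```python
# Compare the two horizontal-deflection waveforms: a sawtooth (sweep, then
# snap back) vs. a triangle wave (sweep right, then sweep left). Illustrative only.

import numpy as np

line_freq = 15_000          # roughly the order of an analog-TV line frequency
samples = 1000
t = np.linspace(0, 2 / line_freq, samples, endpoint=False)   # two periods

phase = (t * line_freq) % 1.0
sawtooth = phase                         # ramps 0 -> 1, then instant flyback
triangle = 1 - np.abs(2 * phase - 1)     # ramps 0 -> 1 -> 0, no flyback

max_jump_saw = np.abs(np.diff(sawtooth)).max()
max_jump_tri = np.abs(np.diff(triangle)).max()
print(f"largest sample-to-sample jump, sawtooth: {max_jump_saw:.3f} (the flyback)")
print(f"largest sample-to-sample jump, triangle: {max_jump_tri:.3f} (smooth reversal)")
```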

Once the standards were set they could not be changed without some massive benefit to offset the cost. The transition to Digital TV is when that happened.

Edit: Oops, I didn't see the other page. Sorry if this is redundant.
 
  • #72
Algr said:
The transition to Digital TV is when that happened.
Absolutely. You only need to look at the results of analogue standards conversion between US and European TV signals (and Telecine) to see the limitations of the methods available. To be fair, though, they got some very reasonable and watchable results, which allowed us access to all that rich vein of US TV culture when we had been starved for years, after the war.

Digital processing that's sufficiently powerful allows you to accept TV images of any standard, extract the best 'quality' information about the original moving scene, and replay it in any other standard. 'Legacy' becomes less and less of a problem.
 
  • #73
sophiecentaur said:
To be fair, though, they got some very reasonable and watchable results, which allowed us access to all that rich vein of US TV culture when we had been starved for years, after the war.
True. When PAL was converted to NTSC in the '70s through the '90s, the result gained a subtle motion shimmer that resembled a film transfer. IMHO this could improve the video sequences and make them rather more cinematic. It made a mess of sports coverage, though - I remember the Olympics not looking so good when it was PAL-to-NTSC. We still have this issue with 50 Hz to 60 Hz conversions. But I suppose motion interpolation can eliminate converter lag most of the time.
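A minimal sketch (Python/NumPy, assuming the simplest possible blend-based converter rather than real motion interpolation) of how a 50 Hz to 60 Hz conversion smears neighbouring frames together, which is where the shimmer and lag come from:

```python
# Simplest possible 50 Hz -> 60 Hz conversion: blend the two nearest source
# frames in time. Proper motion interpolation estimates motion vectors instead.

import numpy as np

def convert_rate(frames_50, out_count):
    """Blend neighbouring 50 Hz frames to make out_count evenly spaced 60 Hz frames."""
    n = len(frames_50)
    out = []
    for k in range(out_count):
        pos = min(k / 60.0 * 50.0, n - 1)   # output time expressed in source-frame units
        i = int(pos)
        frac = pos - i
        nxt = frames_50[min(i + 1, n - 1)]
        out.append((1 - frac) * frames_50[i] + frac * nxt)
    return out

src = [np.full((4, 4), float(i)) for i in range(50)]   # one second of dummy 50 Hz frames
dst = convert_rate(src, 60)
print(len(dst), "output frames; output frame 31 blends source frames 25 and 26:", dst[31][0, 0])
```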
 
  • #74
Algr said:
But I suppose motion interpolation can eliminate converter lag most of the time.
I wouldn't say it "eliminated" the lag - perhaps it 'mitigated' the problem a bit. I only ever saw NTSC pictures that had been converted to PAL, so your experience of conversion in the other direction may imply something about the relative qualities of the two source systems.
But the way it had to handle the field-rate difference was necessarily crude, due to the limited technology of the ultrasonic field delay lines. The old systems were seriously struggling, whereas the newest systems have greater capability than is actually needed.
 
