Is there a way to improve CRT raster scan efficiency by scanning both ways?

  • Thread starter: artis
  • Tags: CRT
AI Thread Summary
The discussion centers on the potential for improving CRT raster scan efficiency by scanning in both directions, which could reduce flicker and increase frame rates. The original method requires the electron beam to reset after each line, causing delays that could be eliminated by alternating scan directions. However, this approach raises concerns about non-parallel scan lines and the complexity of vertical deflection circuitry. Additionally, while modern displays like LCDs and OLEDs do not face these issues, implementing such a system in CRTs could lead to technical challenges, including maintaining image quality and synchronization. Overall, the concept presents intriguing possibilities but is fraught with practical difficulties.
artis
Someone has surely thought of this before, but I still wonder.
The original electron-gun raster scan pattern went from top left to the right; the gun was then switched off while the deflection coils reset, so that the electron beam could start again at the left side, but now a pixel row or two (as in interlaced scanning) lower.
It takes time for the magnetic field to change, so the electron beam cannot do useful work at that moment.
Were there any ideas to have the raster scan go from left to right, then in the next row from right to left, then again from left to right, and so on? That way, instead of moving the beam back and starting a new line, each new line would simply be drawn from the side where the beam left the previous one. The magnetic deflection coils would not have to be reset; they could just sweep continually from left to right and back, and resetting would only need to happen at the end of each full frame, from bottom to top.

This would mean that each frame could be drawn more quickly, so more frames could be packed into a given time, with less flicker. Increased bandwidth would probably also be needed.

In modern flat-panel technologies like LCD or OLED this is probably not a problem, because there is no beam to reset and the display is driven electronically from a driver chip, so each new scan line starts as soon as the last one ends, probably with some small delay.
 
I looked at this a long time ago, and there is a big issue with it. With the current system the scan lines are parallel. If you just turn the beam around and head back the other way while the vertical deflection is continuing, you do not have parallel lines anymore.

You could try to come up with a vertical deflection system that paused at each line while it was traced out and then jumped down one line spacing and paused again, etc. But that vertical scanning circuitry would be pretty complicated to do with magnetic deflection.
 
@berkeman you are, I think, referring to the CRT scan?
Why would the next line not be parallel to the previous one? Surely the scan line that goes from left to right is horizontal, so why couldn't a scan line that goes from right to left also be horizontal?

Although not to do with CRTs, in theory a flat-panel TFT system could be made where, in each frame, instead of scan lines the whole frame was lit up at the same time; one would then need to control each pixel individually, as is done now, but control all of them at the same time. In theory, done this way, even with current frame/refresh rates of a couple of hundred hertz the picture would probably look even more natural, although it already does, given the fast frame rates and the limitations of human vision.
The control circuitry would then probably need to be much more complicated.
Maybe @sophiecentaur has something to say about this whole deal; I've noticed you are knowledgeable about radio and TV issues.
 
artis said:
Why would the next line not be parallel to the previous one? Surely the scan line that goes from left to right is horizontal, so why couldn't a scan line that goes from right to left also be horizontal?
The raster scan lines are tilted slightly, due to the downward scan rate at the same time the horizontal scan is happening. If you turn around and scan back the other way, you will end up with a zig-zag pattern.

And please don't call me "Shirley". :wink:

[Attached image: raster scan pattern showing the slight downward tilt of each line.]

https://retrocomputing.stackexchang...t-a-parallelogram-instead-of-a-true-rectangle
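For anyone who wants to see the geometry, here is a minimal Python sketch of the two beam paths; the frame size, line count and sample count are arbitrary illustrative values. With a conventional sawtooth scan every line carries the same small downward tilt, so the lines stay parallel; reversing alternate lines while the vertical deflection keeps ramping gives alternating slopes, i.e. a zig-zag.

Python:
import numpy as np

LINES = 5                     # a handful of lines, just for illustration
SAMPLES = 100                 # horizontal samples per line
WIDTH, HEIGHT = 4.0, 3.0      # arbitrary 4:3 screen units

def beam_path(zigzag):
    """Trace the spot position while the vertical deflection ramps
    continuously; zigzag=True reverses the direction of alternate lines."""
    segments = []
    for line in range(LINES):
        x = np.linspace(0.0, WIDTH, SAMPLES)
        if zigzag and line % 2:
            x = x[::-1]                     # scan back the other way
        # vertical position keeps increasing during the line
        y = (line + np.linspace(0.0, 1.0, SAMPLES)) * HEIGHT / LINES
        segments.append(np.column_stack([x, y]))
    return segments

def slopes(segments):
    """Slope of each drawn line (rise over run from start to end of line)."""
    return [round((s[-1, 1] - s[0, 1]) / (s[-1, 0] - s[0, 0]), 3) for s in segments]

print("conventional line slopes:", slopes(beam_path(zigzag=False)))  # all equal
print("zig-zag line slopes:     ", slopes(beam_path(zigzag=True)))   # alternating sign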
 
When John Logie Baird made the first demonstration of television in 1926 he found that scanning left to right gave less flicker. This is thought to be because we read in that direction and the eye is trained accordingly.
Another objection to the proposed double scan is that the slight non-linearity of scan caused by the charge and discharge of the timebase capacitor would shift the pixels on alternate lines sideways, giving a ragged appearance.
 
There may be other issues but non-parallel lines to me should not matter. If the camera tube is scanned in the same manner, it should reproduce the image the same way. That is not to say there weren't other technical/practical issues. I worked in the NTSC video test equipment field for a while and such a thing had not occurred to me.
-
That system would have made the comb filter that used a delay line to delay the luminance signal by one horizontal line worthless.
 
Averagesupernova said:
There may be other issues but non-parallel lines to me should not matter. If the camera tube is scanned in the same manner, it should reproduce the image the same way. That is not to say there weren't other technical/practical issues. I worked in the NTSC video test equipment field for a while and such a thing had not occurred to me.
-
That system would have made the comb filter that used a delay line to delay the luminance signal by one horizontal line worthless.
Please correct me if I am wrong but I thought the delay line was part of PAL and SECAM but not used in NTSC.
 
tech99 said:
Please correct me if I am wrong but I thought the delay line was part of PAL and SECAM but not used in NTSC.
The phase of the chroma signal is 180° opposite from one line to the next. Since all information changes very little from one line to the next, the video is delayed by one line and added to the undelayed signal to get the chroma and luma separated.
 
Averagesupernova said:
The phase of the chroma signal is 180° opposite from one line to the next. Since all information changes very little from one line to the next, the video is delayed by one line and added to the undelayed signal to get the chroma and luma separated.
Is that not the PAL (Phase Alternate Line) system?
 
  • #10
The "just move the electron beam down a pixel and scan it the other way" is nontrivial to implemaent. The existing interlace scan (developed for analog tubes) requires only a single vertical top to bottom sweep for each even frame (at 60 Hz) and an equivalent one for each odd frame.
The stairstep approach required by this proposed scheme is much more difficult to implement and requires vertical steps synched at the horizontal scan rate. Not easy to do. And this is to "improve beam efficiency"?
The folks who did TV (much of it invented at RCA a mile from my present abode) wrung pretty much every last bit out of that system.
 
  • #11
tech99 said:
Is that not the PAL (Phase Alternate Line) system?
No idea. If you do the math of NTSC you will find what I say to be true. The chroma subcarrier is 3.579545 MHz. The horizontal scan rate is 15,734.266 Hz.
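A quick numerical check of those figures, together with a toy one-line-delay comb filter, may help (a Python sketch; the "video" here is a synthetic luma-plus-chroma waveform and it assumes the picture content is identical on two successive lines, which is exactly the assumption the comb filter relies on):

Python:
import numpy as np

F_SC = 3_579_545.0      # NTSC chroma subcarrier, Hz
F_H  = 15_734.266       # NTSC horizontal line rate, Hz

# The subcarrier is an odd multiple (455) of half the line rate,
# so its phase flips 180 degrees from one line to the next.
print("subcarrier / line rate =", F_SC / F_H)        # ~227.5

# Toy 1H comb filter: picture content assumed identical on two lines.
t = np.arange(0.0, 1.0 / F_H, 1.0 / (4.0 * F_SC))    # samples along one line
luma   = 0.5 + 0.2 * np.sin(2 * np.pi * 0.5e6 * t)   # low-frequency picture detail
chroma = 0.1 * np.sin(2 * np.pi * F_SC * t)          # colour subcarrier
line_n  = luma + chroma        # subcarrier phase on this line
line_n1 = luma - chroma        # 180 degrees opposite on the next line

recovered_luma   = (line_n + line_n1) / 2    # delay line + adder
recovered_chroma = (line_n - line_n1) / 2    # delay line + subtractor

print("worst luma error  :", np.max(np.abs(recovered_luma - luma)))
print("worst chroma error:", np.max(np.abs(recovered_chroma - chroma)))

With a zig-zag scan, successive lines in the transmitted signal would come from mirrored positions in the picture, so the line-to-line similarity this trick depends on would be lost.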
 
  • #12
tech99 said:
Is that not the PAL (Phase Alternate Line) system?
Another objection I can see is that at the edges of the picture the lines converge, giving only half the vertical resolution. Ideally the scan lines should be as wide as the spacing, so they disappear.
 
  • #13
Electromagnetic fields perform many tasks inside a cathode-ray tube (CRT), including the deflection described above. Electrostatic deflection of the beam provides precise placement and rapid recovery, depending on the application and the design specifications, including the phosphor coating and the expected display.

Early radar screens such as the UPA series combined electrostatic and electromagnetic circuits to collimate, direct and 'fire the gun'; that is, pulse the CRT.

[Attached image: UPA-35 CRT mounted for radar display (note the orange phosphor).]

CRTs that display data from rotating antennae often draw a radius-line cursor from the origin that rotates in step with the receiver antenna, placing the operator at a simulated center. The rotating cursor appears to paint a series of pixels representing processed target returns, symbols, numbers, characters and related information such as identity and time stamps.

NTSC provided a compromise vision system, balancing the raster scan rate against broadcast television limitations, the physics and technology of the day, and safety standards. I have worked with displays somewhat as @artis describes, most not suitable for long-term home use. Children sat directly before the screen staring into the CRT. Safety concerns may have discouraged manufacturers from employing otherwise sound ideas.

Electromagnetic theory, as applied to NTSC television and to home and citizen-band (CB) radio transmitters/receivers (rx/tx), combines electric and magnetic fields, usually visualized as normal (orthogonal) to each other, into an EM field. An EM field propagates normal to the combined electric and magnetic components even as each periodically passes through zero.

EM-field response is generally much faster than the information update rate. For instance, persistence of vision is roughly a seventh of a second, and a frame rate of 28-30 hertz provides the appearance of continuous motion, well within the time allowed by raster flyback and EM-field recovery.
 
  • #14
hutchphd said:
The stairstep approach required by this proposed scheme is much more difficult to implement and requires vertical steps synched at the horizontal scan rate. Not easy to do.
It might require more than 17 vacuum tubes 😲 !

Also
In very early vacuum tube television sets, the EHT was derived directly from a high voltage winding on the mains transformer using a half wave rectifier. In later television sets, the EHT supply was invariably generated by rectifying the flyback pulses from the scanning circuitry rather than directly from the mains supply (a practice that survived the transition to transistor circuits). Although this provided a greater degree of safety, the reason for the change was that the mains transformer had been eliminated from sets produced from the 1940s onwards. Wikipedia
 
  • #15
I have always wanted to set up a Lissajous figure scan pattern. The advantage there is that the beam deflections are fundamental sinewaves. The system would be more efficient because the deflection coils could be resonant. Each pixel would be scanned twice, sort of like interlaced pictures, but once on each diagonal. The outside edge would be darkened.

Select two mutually prime integers with a ratio close to the aspect ratio. Multiply those by 25 or 30 Hz depending on the 50 or 60 Hz supply frequency, so the frames repeat. The centre of the screen will then be diagonally cross-hatched.
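A minimal matplotlib sketch of that recipe, using the 101 and 139 that appear in the figure posted later in the thread (which axis gets which frequency, and the 25 Hz frame rate, are just assumptions for the illustration):

Python:
import numpy as np
import matplotlib.pyplot as plt

NX, NY = 139, 101                 # mutually prime, ratio close to 4:3
FRAME_RATE = 25                   # Hz, so the whole pattern repeats each frame
fx, fy = NX * FRAME_RATE, NY * FRAME_RATE   # deflection frequencies (could be resonant)

t = np.linspace(0.0, 1.0 / FRAME_RATE, 200_000)   # one complete frame
x = np.sin(2 * np.pi * fx * t)    # pure sine deflections, so the coils
y = np.cos(2 * np.pi * fy * t)    # could be driven at resonance

plt.figure(figsize=(6, 4.5), facecolor="black")
plt.plot(x, y, color="yellow", linewidth=0.2)
plt.axis("off")
plt.show()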

Klystron said:
CRTs that display data from rotating antennae often draw a radius-line cursor from the origin that rotates in step with the receiver antenna, placing the operator at a simulated center. The rotating cursor appears to paint a series of pixels representing processed target returns, symbols, numbers, characters and related information such as identity and time stamps.
Early small-boat radar had only one deflection coil; it rotated about the neck of the tube, synchronously with the antenna.
 
  • #16
berkeman said:
And please don't call me "Shirley".
Not quite sure what you mean by that.
But I got the idea after the picture you posted: right, so the original scan lines are not horizontal either. Well, in that case zig-zagging them indeed would not produce a good result.
 
  • #17
Keith_McClary said:
It might require more than 17 vacuum tubes 😲 !
Yeah, it's easy to forget that the development of TV sets was a process, and it started with minimal electronics (and a constant requirement for backward compatibility).

Ps.: I guess zig-zag scanning would be easier with electrostatic deflection, since the voltage has less constraint against sudden changes than current. So this tube could actually do such a trick (no wonder: it came from oscilloscopes). As the story goes, electrostatic deflection was inconvenient due to the long neck of the tube and the low deflection angle, so by the time things settled (the '40s, I believe) it was out of the question.
 
  • #18
artis said:
Not quite sure what you mean by that.
Shirley, surely that joke is from "Airplane" = "Flying High".
 
  • #19
Here is a Lissajous Raster with frequencies of 101 and 139, drawn as a yellow line on black.
[Attached image: Lissajous raster, frequencies 101 and 139, drawn as a yellow line on black.]
 
  • #20
Not sure why, but looking at that picture, @Baluncore, is painful for my eyes, even on a flat-panel screen.
 
  • #21
tech99 said:
scanning left to right gave less flicker
Why did we have top to bottom?
 
  • #22
Keith_McClary said:
Why did we have top to bottom?
Because that is the way we read an English language book.
 
  • #23
artis said:
Not sure why, but looking at that picture, @Baluncore, is painful for my eyes, even on a flat-panel screen.
There are several reasons for the crazy visual effects. A low-resolution original was drawn in pixels. That is displayed on your screen pixel grid, which shows many different sampling beats. To focus your eyes, your brain then tries to identify range and to sort out the parallax. That is why you can see many layers in 3D.

But none of that is relevant to the scan in a grid. There is only one line in that plot, with the x and y deflection being fundamental sine and cosine waves. Higher mutually prime scan frequencies would fill in the black bits, just like a normal raster.

Modulation of the line brightness would produce an image without the optical confusion in the parts of your visual brain that still remain after viewing the image.
 
  • #24
artis said:
Not sure why, but looking at that picture, @Baluncore, is painful for my eyes, even on a flat-panel screen.
Tuned Lissajous patterns tend to be easy on the eyes. Moiré patterns baffle the eye; the perceived pain is likely from muscles in the eye attempting to focus on an 'object' composed of interference.

Though difficult to make out, the 'hourglass' shape in @Baluncore 's example is a typical Lissajous pattern. Adjusting one or both input sources leads to an 'eggbeater' pattern until the next tuned state. Mike's raster patterns in post #4 would produce moiré patterns if printed with the lines closer together.
 
  • #25
There's one word which I can't yet find in this thread and that's 'Compatibility'. (I lied about that - just found it mentioned higher up.) The first TV systems had problems enough in producing the first low-resolution monochrome systems. The move to the higher-resolution monochrome systems (525 and 625 lines) was a significant jump and pushed the technology to the limit in many ways. The really clever advance available was to use interlace, which helped motion portrayal because it doubled the rate at which pictures could be transmitted while the vertical resolution stayed the same for stationary pictures. This was 'doable'.
The monochrome system was then pretty much set in stone and there were many existing TVs (around the US, at least) but no extra transmitters or spectrum space for a brand-new colour system. So moving to colour required a system that was (almost) undetectable. NTSC, PAL and, don't forget, SECAM did their best to deal with the limited channel space; phase distortion was mitigated by moving from NTSC to PAL, and early PAL didn't even use a delay line - the eye averaged out colour errors from line to line. SECAM was basically an FM colour-subcarrier system (I believe that was for the benefit of viewers in mountainous regions, who had to deal with worse phase distortion).

I don't think any of the ideas aired in this thread hadn't been considered, but it was basically too late to implement any of them. I wouldn't mind betting that any of the alternative scanning systems would have made it even harder to squeeze in the extra colour information. Anything other than a regular scan rate and consistent video frequencies would have made it impossible to place the spectral components of the colour subcarrier into the spaces in the comb of the luminance spectrum, which is what makes it possible to shoe-horn the colour information in amongst the luminance information without too much distortion and crosstalk.
This thread could go on for many pages before we manage to deal with all the clevernesses of (particularly) PAL.
 
  • #26
Scanning in two directions: another problem with this is that there would be a strong spectral component at line frequency (plus its sidebands) because each pixel would be followed (on the subsequent line) by a pixel from a different part of the image. So what could have been a low detail image with fuzzy bits of near white on one side and fuzzy bits of near black on the other side would generate strongly alternating values on the transmitted lines - a high level of line frequency. To get that effect from a normal scan, you'd need to be scanning an area with precisely black/white/black/white horizontal stripes, coincident with the scan lines. A serious problem if you want to slot in a whole set of spectra from a colour subcarrier.
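A rough numerical sketch of how alternating the scan direction moves picture energy around in the spectrum (toy numbers, not broadcast parameters): a low-detail picture that is dark on one side and bright on the other keeps its energy on exact multiples of the line rate when scanned conventionally, but puts it on odd multiples of half the line rate when scanned zig-zag, which is exactly where the interleaved colour-subcarrier spectra are supposed to sit.

Python:
import numpy as np

LINES, PIXELS = 100, 200                   # toy frame
# Low-detail picture: near black on one side, near white on the other.
frame = np.tile(np.linspace(0.05, 0.95, PIXELS), (LINES, 1))

conventional = frame.reshape(-1)           # every line scanned left -> right
zigzag_frame = frame.copy()
zigzag_frame[1::2] = frame[1::2, ::-1]     # alternate lines scanned right -> left
zigzag = zigzag_frame.reshape(-1)

def component(video, cycles_per_frame):
    """|FFT| magnitude at an exact number of cycles per frame."""
    spectrum = np.fft.rfft(video - video.mean())
    return abs(spectrum[cycles_per_frame])

line_rate_bin      = LINES        # one cycle per line
half_line_rate_bin = LINES // 2   # one cycle per two lines

for name, video in [("conventional", conventional), ("zig-zag", zigzag)]:
    print(f"{name:12s} at line rate: {component(video, line_rate_bin):9.1f}"
          f"   at half the line rate: {component(video, half_line_rate_bin):9.1f}")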
 
  • #27
sophiecentaur said:
Scanning in two directions: another problem with this is that there would be a strong spectral component at line frequency (plus its sidebands) because each pixel would be followed (on the subsequent line) by a pixel from a different part of the image. So what could have been a low detail image with fuzzy bits of near white on one side and fuzzy bits of near black on the other side would generate strongly alternating values on the transmitted lines - a high level of line frequency. To get that effect from a normal scan, you'd need to be scanning an area with precisely black/white/black/white horizontal stripes, coincident with the scan lines. A serious problem if you want to slot in a whole set of spectra from a colour subcarrier.
With or without interlace, RGB (red, green, blue) 'guns' improved the representation of color and motion information, at some loss of clarity compared with B&W (black and white, 'silver') CRTs, until demand from computer users and home theater pushed carriers and monitor manufacturers to adopt new technology.

The current ubiquity of color information makes B&W sources appear anemic, if not ethereal. In the broader picture, audio capture, storage and broadcast also influenced bandwidth allocations as color went mainstream.
 
  • #28
Radio. It's like television, but the pictures are better.
 
  • #29
That image you posted, @berkeman, also makes sense of why they had an uneven line count, with that half line: I think it is because each horizontal line was tilted a little downwards to the right (looked at from the front of the screen), so in theory, in order to fill the upper-right and lower-left corners of the screen, you needed half a line; with a full line, half of it would hit the invisible part of the inside of the CRT tube.

So while we are at it: it seems that when introducing color TV (apart from the shadow mask, aperture grille, and three guns) they introduced the color signal (info) on top of the luminance signal of each line (which makes sense, given that each line now also needs to carry color).
So is it correct to think that the base luminance signal amplitude dictated the brightness of each pixel along the horizontal line, while the added (on top) color carrier dictated how strongly each gun had to fire so that each subpixel had the correct brightness and the whole pixel the necessary color, with the brightness of that color still set by the aforementioned luminance signal?
I wonder how the signals were mixed before transmission, given that the luminance signal has a much larger amplitude and the color signal "rides" on top of it with a lower amplitude; at least it seems so from the pictures I could find.
I'm sure @sophiecentaur or @Klystron or anyone else has something to say about this.
 
  • #30
There is a very good explanation (RGB vs YUV color encoding) in the Analog Television article on Wikipedia. Start there: many very clever folks made television.
 
  • #31
[Attached image: diagram of the NTSC/PAL 'colour bars' waveform, showing the luminance level and the superimposed subcarrier for each bar.]

Here is a basic diagram of the NTSC/PAL waveform for the 'colour bars' test picture. The white level is the highest luminance and, of course, has a chrominance value of zero. The colours in the diagram are false - just to show you which bar is which - but they show the maximum level of the subcarrier for each of the (saturated) coloured bars. A monochrome receiver sees a pretty good 'grey scale' but it looks different from the normal monochrome signal. I think that the 'compatible' signal is in fact more in agreement with the original scene but people didn't like it (we don't like change, do we, dear?)

The colour subcarrier has two components, in quadrature, so varying the phase and amplitude of each component allows you to carry two colour coordinates for each point on the line. The frequency used for the subcarrier is very carefully chosen to have sidebands that fit between the components of the luminance signal. Because of the basic line frequency of the TV signal, static pictures have a comb-like spectrum with a spacing of the line frequency. The interleaving of the chrominance and luminance components allows the signals to occupy the same spectrum space. (Dead clever - eh?) Once there's motion, this interleaving fails but the eye doesn't spot the crosstalk between the signals.
However, there is a real nasty when high-frequency luminance patterns interfere with the chrominance signals and produce nasty patterns. This was avoided by making sure that TV presenters wore appropriate clothing without fine check patterns.
There were many advances in colour coding and decoding and, by the time digital TV came along, they'd got it pretty good. PAL coding was a significant advance on NTSC but we (UK) had to wait many years for it.
artis said:
I wonder how the signals were mixed before transmission, given that the luminance signal has a much larger amplitude and the color signal "rides" on top of it with a lower amplitude; at least it seems so from the pictures I could find.
The picture shows that the "riding on top" is not too big a problem because the brightest parts of the picture do not have saturated colours so the colour subcarrier level is low. Headroom is not too much affected.
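A small Python check of that headroom remark, for the fully saturated colour bars (the luma weights and the U/V scaling factors below are the standard ones, an assumption of this sketch rather than something quoted in the thread):

Python:
# Composite excursion for the classic colour bars: luminance plus the peak
# of the quadrature subcarrier riding on top of it.
BARS = {
    "white":   (1.0, 1.0, 1.0),
    "yellow":  (1.0, 1.0, 0.0),
    "cyan":    (0.0, 1.0, 1.0),
    "green":   (0.0, 1.0, 0.0),
    "magenta": (1.0, 0.0, 1.0),
    "red":     (1.0, 0.0, 0.0),
    "blue":    (0.0, 0.0, 1.0),
    "black":   (0.0, 0.0, 0.0),
}

def composite_excursion(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance
    u = 0.492 * (b - y)                     # scaled colour-difference signals
    v = 0.877 * (r - y)
    chroma = (u * u + v * v) ** 0.5         # peak subcarrier amplitude
    return y, chroma, y + chroma            # top of the composite waveform

print(f"{'bar':8s} {'Y':>6s} {'chroma':>7s} {'Y+chroma':>9s}")
for name, rgb in BARS.items():
    y, c, peak = composite_excursion(*rgb)
    print(f"{name:8s} {y:6.3f} {c:7.3f} {peak:9.3f}")

The white bar carries no subcarrier at all, and fully saturated yellow and cyan are the worst offenders (their excursion goes well above white level), which is one reason test bars are often transmitted at 75% amplitude and why real scenes, whose bright areas are rarely saturated, seldom run out of headroom.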
I grew up on PAL TV and the various clever bits gradually got absorbed into my brain. But it's now little more than history and it's really not worth anyone's while getting too deep into the details (imo), although it really was a magnificent piece of Engineering. The basics of colorimetry and imaging in general are all the same for DTV as for analogue TV so the vast amount of work that was done is still helping with DTV development. Many of the members of the MPEG group cut their teeth on PAL and NTSC.
 
  • #32
Rive said:
Yeah, it's easy to forget that the development of TV sets was a process, and it started with minimal electronics (and a constant requirement for backward compatibility).

Ps.: I guess zig-zag scanning would be easier with electrostatic deflection, since the voltage has less constraint against sudden changes than current. So this tube could actually do such a trick (no wonder: it came from oscilloscopes). As the story goes, electrostatic deflection was inconvenient due to the long neck of the tube and the low deflection angle, so by the time things settled (the '40s, I believe) it was out of the question.
Here is an interesting report from GE in 1957 describing in detail the past and projected developments that were undertaken to reduce the depth of tubes as the size goals kept climbing.

In the end a CRT was limited by physical constraints and the largest CRT displays moved to projection systems before the Plasma display and LCD display era started.

http://www.one-electron.com/Archive... Evolution of Picture Tube Size and Shape.pdf

 
  • #33
The pre-war developments also included mechanical scan for larger-screen projection, and this would not lend itself to zig-zag scan. For instance, at RadiOlympia in 1938 a large-screen mechanical-scan 405-line receiver made by Scophony was displayed. It utilised small, high-speed mirror drums and used an opto-acoustic light modulator called the Jeffree Cell. https://blog.scienceandmediamuseum....cophony-tv-receiver-high-speed-scanner-motor/
Another serious objection to zig-zag scan is that the scan must be perfectly linear to a precision of one pixel, or vertical edges will be ragged, and such linearity was impracticable.
 
  • #34
The great thing about these 'free-running' topics on PF is the variety of viewpoints and, often, the historical and personal content, which is so hard to come by on the internet just by searching.

Thank you all for this.

Ps.: I've still seen what's not here o0) Was some good stuff.
 
  • #35
artis said:
Someone has surely thought of this before, but I still wonder.
The original electron-gun raster scan pattern went from top left to the right; the gun was then switched off while the deflection coils reset, so that the electron beam could start again at the left side, but now a pixel row or two (as in interlaced scanning) lower.
It takes time for the magnetic field to change, so the electron beam cannot do useful work at that moment.
Were there any ideas to have the raster scan go from left to right, then in the next row from right to left, then again from left to right, and so on? That way, instead of moving the beam back and starting a new line, each new line would simply be drawn from the side where the beam left the previous one. The magnetic deflection coils would not have to be reset; they could just sweep continually from left to right and back, and resetting would only need to happen at the end of each full frame, from bottom to top.

This would mean that each frame could be drawn more quickly, so more frames could be packed into a given time, with less flicker. Increased bandwidth would probably also be needed.

In modern flat-panel technologies like LCD or OLED this is probably not a problem, because there is no beam to reset and the display is driven electronically from a driver chip, so each new scan line starts as soon as the last one ends, probably with some small delay.
I have been pondering the same idea, using a low-power resonant circuit for the deflection coils, and wondered about Lissajous scanning.

Found these interesting papers.
https://www.mdpi.com/2072-666X/10/1/67/htm Scanning MEMS Mirror for High Definition and High Frame Rate Lissajous Patterns

https://www.nature.com/articles/s41598-017-13634-3 Frequency selection rule for high definition and high frame rate Lissajous scanning

 
  • #36
tech99 said:
The pre-war developments also included mechanical scan for larger-screen projection, and this would not lend itself to zig-zag scan. For instance, at RadiOlympia in 1938 a large-screen mechanical-scan 405-line receiver made by Scophony was displayed. It utilised small, high-speed mirror drums and used an opto-acoustic light modulator called the Jeffree Cell. https://blog.scienceandmediamuseum....cophony-tv-receiver-high-speed-scanner-motor/
Another serious objection to zig-zag scan is that the scan must be perfectly linear to a precision of one pixel, or vertical edges will be ragged, and such linearity was impracticable.
The point about scanning, however you do it, is that it is a sampling process. Any sampling introduces artefacts which can make re-construction problematic. Conventional scanning was selected because it was convenient (a spinning disc, initially) and that led to a relatively simple sawtooth horizontal and vertical scan. The artefacts are 'predictable'. If you try to scan in other ways, the vertical and horizontal sample frequencies are no longer uniform so the nice, friendly 'comb' spectrum of a PAL signal is destroyed. The timing suddenly may need to be much better (the 'pixel accuracy' of @tech99 may come in).
The downside of the conventional scan for TV tubes is, as people have mentioned, the enormous power needed for sawtooth deflection, and the scan linearity with wide angle tubes. But that was more or less sorted out with large sweaty power circuits.
Repeat scanning is terrible value for bandwidth use, if the picture is actually transmitted in the same form that it's detected and displayed, as in conventional transmission.
Once you have modern digital signal processing, the same basic scanned picture can be compressed into a tiny channel compared with the 7MHz or whatever for old fashioned TV. In that situation, a picture is a picture and can be imaged or displayed in any way you choose. Transmission becomes a different issue.
 
  • #37
sophiecentaur said:
The point about scanning, however you do it, is that it is a sampling process. Any sampling introduces artefacts which can make re-construction problematic. Conventional scanning was selected because it was convenient (a spinning disc, initially) and that led to a relatively simple sawtooth horizontal and vertical scan. The artefacts are 'predictable'. If you try to scan in other ways, the vertical and horizontal sample frequencies are no longer uniform so the nice, friendly 'comb' spectrum of a PAL signal is destroyed. The timing suddenly may need to be much better (the 'pixel accuracy' of @tech99 may come in).
The downside of the conventional scan for TV tubes is, as people have mentioned, the enormous power needed for sawtooth deflection, and the scan linearity with wide angle tubes. But that was more or less sorted out with large sweaty power circuits.
Repeat scanning is terrible value for bandwidth use, if the picture is actually transmitted in the same form that it's detected and displayed, as in conventional transmission.
Once you have modern digital signal processing, the same basic scanned picture can be compressed into a tiny channel compared with the 7MHz or whatever for old fashioned TV. In that situation, a picture is a picture and can be imaged or displayed in any way you choose. Transmission becomes a different issue.
I think that along a scan line we do not use sampling; there is no fundamental limit to resolution other than spot size. Although that creates a low pass filter action, that is not a sampling process. It is an analogue system in that respect. In the vertical direction we do have sampling and the max spatial frequency is then restricted to half the number of lines. As I have mentioned before, sampling doubles the required bandwidth.
Compression of TV signals relies on exploiting limitations of human vision, so we are actually robbing the recipient of information or placing constraints on what may be displayed. It cannot defeat the laws of Nature, such as Shannon. I am a bit out of date here, but I think a full quality digital TV picture when transmitted on the air will require about the same bandwidth as an analogue transmission.
 
  • #38
In the case of conventional TV you are ‘nearly’ right about there being no sampling on a line but the vertical dimension certainly is sampled. Any other form of scanning would involve both H and V explicit sampling.
Forget Shannon as analogue TV is way short of that limit. For a start, it wastes most of its bandwidth sending most of most pictures again and again. The law at work there is the ‘getting it done somehow’ law. There are many lossless methods of compression which, given an appropriate processing delay, can give moving pictures with the only shortcomings being in the original analogue signal (or sensor limitations). Shannon does not specify processing time either.

The vertical / horizontal bandwidth issue is not straightforward. The line rate is inversely related to the horizontal resolution for a given channel bandwidth. The choices in existing systems are only approximately optimal.
 
  • #39
tech99 said:
I think a full quality digital TV picture when transmitted on the air will require about the same bandwidth as an analogue transmission.
The bandwidth required for uncompressed 'raw' data at the nominal resolution of the usual analog signal would be far bigger. For practical reasons the actual bandwidth requirement is kept about the same, but the data is digitally compressed. In the end it still comes with higher quality.
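Back-of-envelope numbers for that comparison (a sketch with nominal, illustrative figures; the exact rates depend on the sampling standard and on the multiplex configuration):

Python:
# Uncompressed SD video versus what one ~8 MHz UHF channel actually carries.
active_w, active_h = 720, 576        # BT.601-style SD active picture
frame_rate = 25                      # frames per second
bits_per_pixel = 8 + 4 + 4           # 4:2:2 sampling, 8 bits per sample

raw_bps = active_w * active_h * frame_rate * bits_per_pixel
print(f"uncompressed SD picture : {raw_bps / 1e6:6.1f} Mb/s")

dvbt_mux_bps = 24e6                  # ~24 Mb/s for a typical 64-QAM DVB-T multiplex
programmes_per_mux = 6               # several SD services share the multiplex
per_programme = dvbt_mux_bps / programmes_per_mux
print(f"one DVB-T multiplex     : {dvbt_mux_bps / 1e6:6.1f} Mb/s")
print(f"per SD programme        : {per_programme / 1e6:6.1f} Mb/s"
      f"  (~{raw_bps / per_programme:.0f}:1 compression)")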
 
  • #40
Rive said:
The bandwidth required for uncompressed 'raw' data at the nominal resolution of the usual analog signal would be far bigger. For practical reasons the actual bandwidth requirement is kept about the same, but the data is digitally compressed. In the end it still comes with higher quality.
It's not really valid to compare analogue and digital TV because, firstly, no one needs to transmit the amount of repeated information in most TV programme material. Our brains could actually not cope with separate full-res 625-line pictures appearing 25 times a second. Secondly, the data rate needed is not really comparable with an analogue bandwidth. You need to do the whole calculation, which has to compare minimal channel loss / signal strength and then relate the analogue picture-to-noise ratio to error rates and to how the coding and transmission methods deal with them.
The proof of the pudding is that the four/five-channel TV service on UHF in the UK has been replaced by some 70 standard-definition and 15 HD channels on Freeview in the same UHF spectrum space.
 
  • #41
sophiecentaur said:
The proof of the pudding is that the four/five-channel TV service on UHF in the UK has been replaced by some 70 standard-definition and 15 HD channels on Freeview in the same UHF spectrum space.
At the same time the multi-path ghosts have been exorcised.
I wonder what a zig-zag scan would look like with multi-path ghosting.
 
  • #42
Two artefacts for the price of one, probably. But echoes would tend to be broken up. One plus point for zig-zag, but several minuses, I expect.
 
  • #43
@tech99 I think that if digital radio-wave broadcasting were done the way analogue was, with each frame sent as a separate "new" frame of information (where the information largely overlapped, as @sophiecentaur already pointed out), then probably the digital bandwidth would compare to the analogue. But if I'm not mistaken, since the large-scale introduction of digital broadcasting and equipment we rely on the fact that more modern TVs have built-in circuitry with memory and signal processing, so we can now send just the pixels that have changed for a new frame while the previous ones remain displayed from memory.

As I read, even newer technology uses so-called "AI" software which can essentially "fill in" missing parts of the frame and its pixels by continually using a powerful processor to sample the incoming frames and find the most appropriate fill-ins for the missing parts from its memory.

I guess modern flat screens are more like computers with a digital radio-wave input than actual TVs in the classical sense.
They have all the computer parts, like RAM, CPU and GPU; the only difference from a desktop PC or a laptop is the medium through which the info is sent - internet cable vs radio waves - but then again, for laptops Wi-Fi is just another form of radio-wave broadcast.
 
  • #44
artis said:
probably the digital bandwidth would compare to the analogue
That would be a pretty inefficient digital transmission system for these days. You are assuming what would be a basic binary signalling system, which has a bandwidth of the order of the 'baud rate'. Take WiFi, for instance, which is a pretty representative system. It uses an RF channel bandwidth of around 20 MHz but can give data rates of over 200 Mb/s with QAM.

I know it's more complicated than that because range and coverage need to be considered, but there is no simple rule of thumb these days, like there was when we sent a binary data stream down a wire. It amazes me that I can usually get 70 Mb/s along a couple of hundred metres of what was installed as telephone (naff audio quality) cable.
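The arithmetic behind a figure like "over 200 Mb/s in roughly 20 MHz" goes something like this (an OFDM sketch with 802.11n-flavoured values; the subcarrier count, guard interval, coding rate and stream count are assumptions of the illustration, not a spec quote):

Python:
# OFDM throughput arithmetic with 802.11n-style illustrative parameters.
data_subcarriers = 52       # data-carrying subcarriers in a ~20 MHz channel
bits_per_subcarrier = 6     # 64-QAM -> 6 bits per subcarrier per symbol
code_rate = 5 / 6           # forward-error-correction coding rate
symbol_time = 3.6e-6        # seconds per OFDM symbol (short guard interval)
spatial_streams = 3         # MIMO spatial multiplexing

per_stream = data_subcarriers * bits_per_subcarrier * code_rate / symbol_time
total = per_stream * spatial_streams
print(f"per stream: {per_stream / 1e6:5.1f} Mb/s;"
      f" with {spatial_streams} streams: {total / 1e6:5.1f} Mb/s in ~20 MHz")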
 
  • #45
I built a display like this back in the early 1980's. (Probably still have the prototype somewhere...) It was intended for a computer graphics system, so all of the transmission and compatibility issues others have mentioned were not a concern in this case. The primary motivation was, as you suggest, to not waste something like 10% of the frame time in retrace.

Since there was no compensating scan in an image tube originating the video, the line pairing at the edges was indeed an issue. To address this, we stepped the vertical scan. In order to get adequate step response, we had to have a relatively low inductance vertical deflection yoke driven by a high-bandwidth amplifier. There was an issue with horizontal errors, as someone mentioned. The primary cause of this was magnetic hysteresis in the ferrite core of the horizontal deflection yoke. Since I was generating the video electronically I could compensate for this fairly well, though it might still have been something of a problem in production due to temperature and part-to-part variations. It never made it into production, though. Ultimately, the reduction in required video bandwidth was not worth the extra power in the amplifiers and other drawbacks. Still, it was a great experiment.
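To get a feel for why a stepped vertical scan pushes you towards a low-inductance yoke and a high-bandwidth amplifier, here is a tiny V = L·dI/dt estimate; every number in it is a made-up illustration, not a figure from the post, and it ignores resistance, settling and ringing entirely.

Python:
# Drive voltage needed to step the vertical yoke current by one scan line
# within a small slice of the horizontal line time (illustrative values only).
line_time = 63.5e-6              # NTSC horizontal line period, seconds
step_fraction = 0.1              # allow 10% of the line time for the step
lines_per_field = 240            # visible lines covered by the full deflection
full_deflection_current = 1.0    # amps peak-to-peak through the yoke (assumed)

delta_i = full_deflection_current / lines_per_field   # current step per line
delta_t = line_time * step_fraction                   # time allowed for the step

for inductance_mh in (0.5, 5.0, 20.0):
    v = inductance_mh * 1e-3 * delta_i / delta_t      # V = L * dI/dt
    print(f"L = {inductance_mh:4.1f} mH -> about {v:5.2f} V "
          f"to step one line in {delta_t * 1e6:.1f} us")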

An interesting related issue was that at the time, horizontal output transistors in standard horizontal scan circuits were NPN bipolar transistors operated as saturated switches, and woe betide you if your HOT ever came out of saturation during the flyback pulse! To ensure this never happened, and because of large variations in the transistor's beta due to temperature and lot variations, you had to make sure you had lots of base drive to cover the worst case with plenty of margin. This, of course, means not only a large storage time, but a wide variation in storage time across temperature and parts. This uncertainty meant you had to allocate extra time for horizontal retrace. To reduce this, I built a circuit that servoed the retrace pulse to the H sync pulse. This worked quite well, though again, I don't think we ever put it into production.

By the way, I suspect just about every variation on CRT scanning has been tried at some point. I remember an article on a system that used a spiral scan, from the center outward I think. I don't know why. I also had an alphanumeric display that did a sort of mini-raster along each row of characters (fast vertical, slow horizontal as I recall), but only as far as the characters went in each row. Seems like a small jump from there to a full XY vector display.

Fun times.
 
  • #46
Thank you, entropivore. I also remember an interesting slow-scan format from before the real digital era, where there was a long-persistence tube and the pixels were updated randomly.
 
  • #47
Using a scanned electron beam must limit the possibilities of reading out and displaying the image pixels. That beam and the scanning circuits effectively have a large amount of 'momentum' due to the inductance of the coils. That's no longer a necessary factor in the way the picture elements are transmitted.

CCD imaging is currently based on sequential access to the elements in picture lines but the vertical sequence in which the lines are read out need not, I think, require a particular order of data. In fact, how important is it even that the charge coupled elements would actually need to be in a line? Ever since an electron beam has no longer been used, the whole notion of scanning 'as such' is less relevant to reading out the image information.

The discussions in this thread will be less and less relevant once random access to the pixels of images is possible. I have a feeling that CMOS would allow more flexibility of the data flow from the sensor elements.
 
  • #48
entropivore said:
I built a display like this back in the early 1980's. (Probably still have the prototype somewhere...) It was intended for a computer graphics system, so all of the transmission and compatibility issues others have mentioned were not a concern in this case. The primary motivation was, as you suggest, to not waste something like 10% of the frame time in retrace.

Since there was no compensating scan in an image tube originating the video, the line pairing at the edges was indeed an issue. To address this, we stepped the vertical scan. In order to get adequate step response, we had to have a relatively low inductance vertical deflection yoke driven by a high-bandwidth amplifier. There was an issue with horizontal errors, as someone mentioned. The primary cause of this was magnetic hysteresis in the ferrite core of the horizontal deflection yoke. Since I was generating the video electronically I could compensate for this fairly well, though it might still have been something of a problem in production due to temperature and part-to-part variations. It never made it into production, though. Ultimately, the reduction in required video bandwidth was not worth the extra power in the amplifiers and other drawbacks. Still, it was a great experiment.

An interesting related issue was that at the time, horizontal output transistors in standard horizontal scan circuits were NPN bipolar transistors operated as saturated switches, and woe betide you if your HOT ever came out of saturation during the flyback pulse! To ensure this never happened, and because of large variations in the transistor's beta due to temperature and lot variations, you had to make sure you had lots of base drive to cover the worst case with plenty of margin. This, of course, means not only a large storage time, but a wide variation in storage time across temperature and parts. This uncertainty meant you had to allocate extra time for horizontal retrace. To reduce this, I built a circuit that servoed the retrace pulse to the H sync pulse. This worked quite well, though again, I don't think we ever put it into production.

By the way, I suspect just about every variation on CRT scanning has been tried at some point. I remember an article on a system that used a spiral scan, from the center outward I think. I don't know why. I also had an alphanumeric display that did a sort of mini-raster along each row of characters (fast vertical, slow horizontal as I recall), but only as far as the characters went in each row. Seems like a small jump from there to a full XY vector display.

Fun times.
I misspoke slightly regarding blowing up horizontal output transistors. The flyback actually occurs when you turn off the transistor, and the real problem is running out of base drive when you reach the peak current at the end of the sweep. Of course, trying to turn the output transistor back on during the flyback pulse will also cause grief, but that's not so likely to happen. Except... I think it was the Commodore PET that had a monitor where rather than having a horizontal oscillator that was synchronized to pulses from the video circuitry, in essence software directly drove the horizontal output stage, so you could in fact smoke the hardware through a programming error. Clever, eh?
 
  • #49
sophiecentaur said:
Using a scanned electron beam must limit the possibilities of reading out and displaying the image pixels. That beam and the scanning circuits effectively have a large amount of 'momentum' due to the inductance of the coils. That's no longer a necessary factor in the way the picture elements are transmitted.

CCD imaging is currently based on sequential access to the elements in picture lines but the vertical sequence in which the lines are read out need not, I think, require a particular order of data. In fact, how important is it even that the charge coupled elements would actually need to be in a line? Ever since an electron beam has no longer been used, the whole notion of scanning 'as such' is less relevant to reading out the image information.

The discussions in this thread will be less and less relevant once random access to the pixels of images is possible. I have a feeling that CMOS would allow more flexibility of the data flow from the sensor elements.
It's true that the display scanning need no longer define the imager scanning (or vice versa), but there are other architectural issues at play. Interconnect limitations dictate that imager data is going to have to be serialized in some way, and if you want the full imager frame you might as well do that in a simple pattern. Some existing CMOS sensors provide the ability to shift out only a sub-region of the full array, which can be useful for machine vision applications such as target tracking but is less interesting for general photography and video applications.

Also, our compression standards (e.g., MPEG, H.264) have evolved around processing pixels in a predictable sequence. You could in principle build data compression into the imager itself, but there are architectural and economic arguments against that as well, so it probably only makes sense in a limited set of conditions.

Re your question about whether the imager elements need to even be in a line, consider that even though imaging and display are now greatly decoupled, in order to make sense of an image one still has to know the spatial organization of the original sampling points. Given this, it makes sense to standardize on a single pattern so as to provide interoperability across sensors and systems. (If you've ever had to deal with "non-square" pixels in an imager or display you'll probably know what a headache it can be.) I'm not sure to what degree the evolution of fabrication technologies influences this, but I suspect it may also come into play. That is to say, they seem to be optimized for rectilinear structures, so uniform XY grids are a natural choice. Ultimately, economics is a major driver of the evolution of technology.
 
  • #50
sophiecentaur said:
The discussions in this thread will be less and less relevant once random access to the pixels of images is possible. I have a feeling that CMOS would allow more flexibility of the data flow from the sensor elements.
Well, if what I understand about active-matrix TFT is correct, the vertical scan rate is the rate at which each pixel row's gates are switched on/off (each row's gates are connected together), and this is matched by the vertical data bus (MOSFET drain line, individual for each column) that then drives each pixel capacitance on or off based on its square wave.
So with this, all pixels within a single horizontal row can be controlled simultaneously and independently. I can't see how one could do this for the whole screen instead of a single row at a time, as that would require additional layers of wires on the TFT matrix so that any sub-pixel of the whole matrix could be controlled independently. It seems unrealistic, and the driving circuitry would probably have to be orders of magnitude more complex. Is there something I'm missing?
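A toy model of that row-at-a-time addressing, plus the wiring comparison behind the "whole screen at once" question (names like 'gate' and 'data' here just stand for the row-select and column-drive signals described above; the sizes are arbitrary):

Python:
import numpy as np

ROWS, COLS = 4, 6
target_frame = np.random.rand(ROWS, COLS)   # the image we want to display
panel = np.zeros((ROWS, COLS))              # voltage held on each pixel capacitor

def refresh(panel, frame):
    """Active-matrix style refresh: open one gate (row) line at a time,
    with all column data lines driven in parallel while that row is open."""
    for row in range(ROWS):
        data_lines = frame[row]      # column drivers set in parallel
        panel[row, :] = data_lines   # the open row's pixels sample and hold
        # all other rows keep their stored charge untouched
    return panel

refresh(panel, target_frame)
print("panel matches target frame:", np.allclose(panel, target_frame))
print("drive lines with row/column addressing:", ROWS + COLS)
print("drive lines if every pixel had its own wire:", ROWS * COLS)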
 