Is there a way to improve CRT raster scan efficiency by scanning both ways?

  • Thread starter: artis
  • Tags: CRT

Summary
The discussion centers on the potential for improving CRT raster scan efficiency by scanning in both directions, which could reduce flicker and increase frame rates. The original method requires the electron beam to reset after each line, causing delays that could be eliminated by alternating scan directions. However, this approach raises concerns about non-parallel scan lines and the complexity of vertical deflection circuitry. Additionally, while modern displays like LCDs and OLEDs do not face these issues, implementing such a system in CRTs could lead to technical challenges, including maintaining image quality and synchronization. Overall, the concept presents intriguing possibilities but is fraught with practical difficulties.
  • #31
[Figure: NTSC / PAL 'colour bars' test signal waveform]

Here is a basic diagram of the NTSC / PAL colour waveform for the 'colour bars' test picture. The white level is the highest luminance and, of course, has a chrominance value of zero. The colours in the diagram are false - added just to show you which bar is which - but they do show the maximum level of the subcarrier for each of the (saturated) coloured bars. A monochrome receiver sees a pretty good 'grey scale' but it looks different from the normal monochrome signal. I think that the 'compatible' signal is in fact more in agreement with the original scene but people didn't like it (we don't like change, do we, dear?)

The colour subcarrier has two components, in quadrature, so varying the phase and amplitude of each component allows you to carry two colour coordinates for each point on the line. The frequency used for the subcarrier is very carefully chosen to have sidebands that fit between the components of the luminance signal. Because of the fixed line frequency of the TV signal, the spectrum of a static picture has a comb-like structure with a spacing of the line frequency. The interleaving of the chrominance and luminance components allows the signals to occupy the same spectrum space. (Dead clever - eh?) Once there's motion, this interleaving fails but the eye doesn't spot the crosstalk between the signals.
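To put rough numbers on the quadrature idea, here is a minimal Python sketch. It is illustrative only: the subcarrier frequency is approximately PAL's, while the sampling rate and the U/V values are assumptions, not broadcast-accurate figures.

```python
import numpy as np

# Two colour components (U, V) carried in quadrature on one subcarrier.
fsc = 4.43e6        # approx. PAL subcarrier frequency, Hz
fs = 20e6           # assumed sampling rate for this illustration
t = np.arange(0, 64e-6, 1 / fs)   # one 64 us scan line

U, V = 0.3, -0.2    # colour-difference values for a uniformly coloured line
chroma = U * np.sin(2 * np.pi * fsc * t) + V * np.cos(2 * np.pi * fsc * t)

# The subcarrier's amplitude carries the saturation, its phase the hue.
print(f"saturation = {np.hypot(U, V):.3f}, "
      f"hue = {np.degrees(np.arctan2(V, U)):.1f} deg")
```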
However, there is a real nasty when high frequency luminance patterns interfere with the chrominance signal and produce spurious colour patterns. This was avoided by making sure that TV presenters wore appropriate clothing without fine check patterns.
There were many advances in colour coding and decoding and, by the time digital TV came along, they'd got it pretty good. PAL coding was a significant advance on NTSC but we (UK) had to wait many years for it.
artis said:
I wonder how the signals were mixed before transmitting, given the luminance signal has a much larger amplitude and the color signal "rides" on top of it with a lower amplitude; at least it seems so from the pictures I could find.
The picture shows that the "riding on top" is not too big a problem because the brightest parts of the picture do not have saturated colours so the colour subcarrier level is low. Headroom is not too much affected.
I grew up on PAL TV and the various clever bits gradually got absorbed into my brain. But it's now little more than history and it's really not worth anyone's while getting too deep into the details (imo), although it really was a magnificent piece of Engineering. The basics of colorimetry and imaging in general are all the same for DTV as for analogue TV so the vast amount of work that was done is still helping with DTV development. Many of the members of the MPEG group cut their teeth on PAL and NTSC.
 
  • Like
Likes KalleMP
  • #32
Rive said:
Yeah, it's easy to forget that the development of TV sets was a process, and it started with minimal electronics (and a constant requirement for backward compatibility).

Ps.: I guess zig-zag scanning would be easier with electrostatic deflection, since the voltage has fewer constraints against sudden changes than the current. So this tube actually could have done such a trick (no wonder: it was coming from oscilloscopes). As the story went, electrostatic deflection was inconvenient due to the long neck of the tube and the low deflection angle, so by the time things settled (the 40's, I believe) it was out of the question.
Here is an interesting report from GE in 1957 describing in detail the past and projected developments undertaken to reduce the depth of tubes as the size goals kept climbing.

In the end a CRT was limited by physical constraints and the largest CRT displays moved to projection systems before the Plasma display and LCD display era started.

http://www.one-electron.com/Archive... Evolution of Picture Tube Size and Shape.pdf

Kalle
--
Kalle Pihlajasaari
Lahti, Finland
"Achieve 25(OH)D3 > 125nmol/l (50ng/ml) with DAILY supplementation." - Kalle Pihlajasaari, 2021
 
  • Like
  • Informative
Likes Rive, Keith_McClary and anorlunda
  • #33
The pre-War developments also included mechanical scanning for larger screen projection, and this would not lend itself to zig-zag scan. For instance, at Radiolympia in 1938 a large screen mechanical-scan 405-line receiver made by Scophony was displayed. It utilised small, high speed mirror drums and used an opto-acoustic light modulator, called the Jeffree Cell. https://blog.scienceandmediamuseum....cophony-tv-receiver-high-speed-scanner-motor/
Another serious objection to zig-zag scan is that the scan must be perfectly linear to a precision of one pixel, or vertical edges will be ragged, and such linearity was impracticable.
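A toy sketch of that objection, under the crude assumption of a constant one-pixel position error whose sign flips with the scan direction (real errors would be more complicated, but the raggedness shows up the same way):

```python
import numpy as np

# With zig-zag scan, a small position error displaces odd and even lines in
# opposite directions, so a straight vertical edge comes out ragged.
width, height = 16, 8
error_px = 1            # assumed scan position error, in pixels
edge_col = 8            # column where the vertical edge should sit

img = np.zeros((height, width), dtype=int)
for row in range(height):
    shift = error_px if row % 2 else -error_px   # direction alternates per line
    img[row, edge_col + shift:] = 1

for row in img:
    print("".join(".#"[v] for v in row))   # the edge zig-zags by 2 px
```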
 
  • Informative
  • Like
Likes sophiecentaur, artis, Klystron and 1 other person
  • #34
The great thing about these 'free-running' topics on PF is the variety of viewpoints, and often the historical and personal content which is so hard to come by on the internet by searching alone.

Thank you all for this.

Ps.: I've still seen what's not here o0) Was some good stuff.
 
  • Like
Likes sophiecentaur, artis and Klystron
  • #35
artis said:
Someone surely has thought of this before but I still wonder.
The original electron gun raster scan pattern went from top left to the right, and then the gun was switched off while the deflection coils reset so that the electron beam could start again at the left side, but now a pixel row or two (as in interlaced) lower.
It takes time for the magnetic field to change and so the electron beam cannot do useful work at that moment.
Were there any ideas to have the raster scan pattern go from left to right and then, in the next row, from right to left, and then again from left to right, etc.? This way, instead of moving the beam back and starting a new line, a new line would simply be drawn from the side where the beam left the previous line. The magnetic deflection coils would not have to be reset; instead they could just go continually from left to right and back, and the resetting would only need to happen at the end of each full frame, from bottom to top.

This would mean that each frame could be drawn more quickly, so more frames could be packed into a given time, with less flicker. Increased bandwidth would probably also be needed.

In modern flat panel technologies like LCD or OLED this is probably not a problem because there is no beam to reset and the display is driven electronically from a driver chip, so each next pixel scan line starts as soon as the last one ends, with some small delay probably.
I have been pondering the same idea, using a low-power resonant circuit for the deflection coils, and wondered about Lissajous scanning.

Found these interesting papers.
https://www.mdpi.com/2072-666X/10/1/67/htm Scanning MEMS Mirror for High Definition and High Frame Rate Lissajous Patterns

https://www.nature.com/articles/s41598-017-13634-3 Frequency selection rule for high definition and high frame rate Lissajous scanning
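For a feel of what a Lissajous trajectory looks like, here is a minimal Python sketch in the spirit of those papers. The drive frequencies and frame time are illustrative assumptions (real resonant MEMS scanners run in the kHz range); the papers give the actual frequency selection rules.

```python
import numpy as np
import matplotlib.pyplot as plt

# Two resonant axes at fixed frequencies; the pattern repeats every
# 1/gcd(fx, fy) seconds, and nearly-equal coprime frequencies fill densely.
fx, fy = 57.0, 56.0                 # illustrative drive frequencies, Hz
t = np.linspace(0.0, 1.0, 200_000)  # one full repeat period (gcd = 1 Hz)

x = np.sin(2 * np.pi * fx * t)
y = np.sin(2 * np.pi * fy * t + np.pi / 2)

plt.plot(x, y, linewidth=0.2)
plt.title("Lissajous scan, fx = 57 Hz, fy = 56 Hz")
plt.show()
```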

Kalle
--
Kalle Pihlajasaari
Lahti, Finland
"Achieve 25(OH)D3 > 125nmol/l (50ng/ml) with DAILY supplementation." - Kalle Pihlajasaari, 2021
 
  • #36
tech99 said:
The pre-War developments also included mechanical scanning for larger screen projection, and this would not lend itself to zig-zag scan. For instance, at Radiolympia in 1938 a large screen mechanical-scan 405-line receiver made by Scophony was displayed. It utilised small, high speed mirror drums and used an opto-acoustic light modulator, called the Jeffree Cell. https://blog.scienceandmediamuseum....cophony-tv-receiver-high-speed-scanner-motor/
Another serious objection to zig-zag scan is that the scan must be perfectly linear to a precision of one pixel, or vertical edges will be ragged, and such linearity was impracticable.
The point about scanning, however you do it, is that it is a sampling process. Any sampling introduces artefacts which can make reconstruction problematic. Conventional scanning was selected because it was convenient (a spinning disc, initially) and that led to a relatively simple sawtooth horizontal and vertical scan. The artefacts are 'predictable'. If you try to scan in other ways, the vertical and horizontal sample frequencies are no longer uniform so the nice, friendly 'comb' spectrum of a PAL signal is destroyed. The timing may suddenly need to be much better (the 'pixel accuracy' of @tech99 may come in).
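The 'comb' is easy to demonstrate numerically. A minimal sketch, assuming a PAL-ish line frequency and an arbitrary static line waveform (illustrative values throughout):

```python
import numpy as np

# A static raster picture repeats every scan line, so its spectrum has
# energy only at integer multiples of the line frequency - the 'comb'.
fs = 2_000_000                       # assumed sampling rate, Hz
f_line = 15_625                      # PAL line frequency, Hz
t = np.arange(0, 0.0128, 1 / fs)     # exactly 200 lines of signal

# Same arbitrary waveform on every line (period 1 / f_line):
phase = (t * f_line) % 1.0
video = 0.5 + 0.3 * np.sin(2 * np.pi * 3 * phase) + 0.1 * (phase > 0.7)

spectrum = np.abs(np.fft.rfft(video))
freqs = np.fft.rfftfreq(len(video), 1 / fs)
peaks = freqs[spectrum > 0.02 * spectrum.max()]
print(np.round(peaks / f_line, 3))   # all at integer multiples of f_line
```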
The downside of the conventional scan for TV tubes is, as people have mentioned, the enormous power needed for sawtooth deflection, and the scan linearity with wide angle tubes. But that was more or less sorted out with large sweaty power circuits.
Repeat scanning is terrible value for bandwidth use, if the picture is actually transmitted in the same form that it's detected and displayed, as in conventional transmission.
Once you have modern digital signal processing, the same basic scanned picture can be compressed into a tiny channel compared with the 7MHz or whatever for old fashioned TV. In that situation, a picture is a picture and can be imaged or displayed in any way you choose. Transmission becomes a different issue.
 
  • #37
sophiecentaur said:
The point about scanning, however you do it, is that it is a sampling process. Any sampling introduces artefacts which can make reconstruction problematic. Conventional scanning was selected because it was convenient (a spinning disc, initially) and that led to a relatively simple sawtooth horizontal and vertical scan. The artefacts are 'predictable'. If you try to scan in other ways, the vertical and horizontal sample frequencies are no longer uniform so the nice, friendly 'comb' spectrum of a PAL signal is destroyed. The timing may suddenly need to be much better (the 'pixel accuracy' of @tech99 may come in).
The downside of the conventional scan for TV tubes is, as people have mentioned, the enormous power needed for sawtooth deflection, and the scan linearity with wide angle tubes. But that was more or less sorted out with large sweaty power circuits.
Repeat scanning is terrible value for bandwidth use, if the picture is actually transmitted in the same form that it's detected and displayed, as in conventional transmission.
Once you have modern digital signal processing, the same basic scanned picture can be compressed into a tiny channel compared with the 7MHz or whatever for old fashioned TV. In that situation, a picture is a picture and can be imaged or displayed in any way you choose. Transmission becomes a different issue.
I think that along a scan line we do not use sampling; there is no fundamental limit to resolution other than spot size. Although that creates a low pass filter action, that is not a sampling process. It is an analogue system in that respect. In the vertical direction we do have sampling and the maximum spatial frequency is then restricted to half the number of lines (the Nyquist limit). As I have mentioned before, sampling doubles the required bandwidth.
Compression of TV signals relies on exploiting limitations of human vision, so we are actually robbing the recipient of information or placing constraints on what may be displayed. It cannot defeat the laws of Nature, such as Shannon. I am a bit out of date here, but I think a full quality digital TV picture when transmitted on the air will require about the same bandwidth as an analogue transmission.
 
  • #38
In the case of conventional TV you are ‘nearly’ right about there being no sampling on a line but the vertical dimension certainly is sampled. Any other form of scanning would involve both H and V explicit sampling.
Forget Shannon as analogue TV is way short of that limit. For a start, it wastes most of its bandwidth sending most of most pictures again and again. The law at work there is the ‘getting it done somehow’ law. There are many lossless methods of compression which, given an appropriate processing delay, can give moving pictures with the only shortcomings being in the original analogue signal (or sensor limitations). Shannon does not specify processing time either.
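As a toy example of the lossless compression mentioned above, a run-length coder exploits exactly the repetition that analogue TV keeps retransmitting (real broadcast codecs are far more sophisticated, of course):

```python
# Collapse runs of identical samples into (value, count) pairs - lossless,
# and very effective on the flat areas that dominate typical pictures.
def rle_encode(samples):
    out = []
    for s in samples:
        if out and out[-1][0] == s:
            out[-1][1] += 1
        else:
            out.append([s, 1])
    return out

def rle_decode(pairs):
    return [v for v, n in pairs for _ in range(n)]

line = [0] * 20 + [255] * 5 + [0] * 20    # a mostly-flat scan line
coded = rle_encode(line)
assert rle_decode(coded) == line          # perfectly reversible
print(f"{len(line)} samples -> {len(coded)} runs")
```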

The vertical / horizontal bandwidth issue is not straightforward. The line rate is inversely related to the horizontal resolution for a given channel bandwidth. The choices in existing systems are only approximately optimal.
 
  • #39
tech99 said:
I think a full quality digital TV picture when transmitted on the air will require about the same bandwidth as an analogue transmission.
The bandwidth required for the uncompressed 'raw' data at the nominal resolution of the usual analogue signal would be far bigger. For practical reasons the actual bandwidth requirement is kept about the same, but the data gets digitally compressed. In the end it still comes with higher quality.
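Rough numbers behind that claim, with everything assumed for illustration (SD 'PAL-resolution' video, 4:2:2 8-bit sampling, and a typical DVB-T multiplex payload):

```python
# Uncompressed bit rate of a digitized SD picture vs. a broadcast channel.
width, height, fps = 720, 576, 25      # active SD raster, 25 frames/s
bits_per_pixel = 16                    # 8-bit luma + shared 8-bit chroma (4:2:2)

raw_bps = width * height * fps * bits_per_pixel
print(f"uncompressed: {raw_bps / 1e6:.0f} Mb/s")        # ~166 Mb/s

dvbt_payload_bps = 24e6                # typical 8 MHz DVB-T multiplex payload
print(f"needs ~{raw_bps / dvbt_payload_bps:.0f}x compression just to fill "
      f"one whole multiplex - more if several programmes share it")
```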
 
  • #40
Rive said:
The bandwidth required for the uncompressed 'raw' data at the nominal resolution of the usual analogue signal would be far bigger. For practical reasons the actual bandwidth requirement is kept about the same, but the data gets digitally compressed. In the end it still comes with higher quality.
It's not really valid to compare analogue and digital TV because, firstly, no one needs to transmit the amount of repeated information in most TV programme material. Our brains could actually not cope with separate full-res 625-line pictures appearing 25 times a second. Secondly, the data rate needed is not really comparable with an analogue bandwidth. You need to do the whole calculation, which has to compare minimal channel loss / signal strength and then relate the analogue picture signal-to-noise ratio to error rates and how the coding and transmission methods deal with them.
The proof of the pudding is that the four / five channel TV service on UHF in the UK has been replaced by the 70 standard-definition and 15 HD channels on Freeview in the same UHF spectrum space.
 
  • #41
sophiecentaur said:
The proof of the pudding is that the four / five channel TV service on UHF in the UK has been replaced by the 70 standard-definition and 15 HD channels on Freeview in the same UHF spectrum space.
At the same time the multi-path ghosts have been exorcised.
I wonder what a zig-zag scan would look like with multi-path ghosting.
 
  • #42
Two artefacts for the price of one, probably. But echoes would tend to be broken up. One plus point for zig-zag but several minuses, I expect.
 
Last edited:
  • #43
@tech99 I think if digital radio-wave broadcast were done like analogue was - where each frame was sent as a separate "new" frame of information (with information that largely overlapped, as @sophiecentaur already pointed out) - then the digital bandwidth would probably compare to the analogue. But if I'm not mistaken, since the large-scale introduction of digital broadcasting and equipment we rely on the fact that more modern TVs have built-in circuitry with memory and signal processing, so we can now send just the pixels that have changed for a new frame while the previous ones remain displayed from memory.
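A toy sketch of that 'send only what changed' idea (bare concept only - real codecs like MPEG use motion-compensated prediction and transforms rather than raw pixel deltas):

```python
import numpy as np

# Encoder and decoder each hold the previous frame; only differing pixels
# are transmitted as (row, col, value) updates.
rng = np.random.default_rng(0)
prev = rng.integers(0, 256, (8, 8), dtype=np.uint8)
curr = prev.copy()
curr[2, 3] ^= 0xFF                    # pretend two pixels changed
curr[5, 5] ^= 0xFF

changed = np.argwhere(curr != prev)   # encoder finds the changed pixels
updates = [(r, c, int(curr[r, c])) for r, c in changed]
print(f"sent {len(updates)} of {curr.size} pixels")

decoded = prev.copy()                 # decoder patches its stored frame
for r, c, v in updates:
    decoded[r, c] = v
assert np.array_equal(decoded, curr)
```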

From what I've read, even newer technology uses so-called "AI" software which can essentially "fill in" missing parts of the frame and pixels by continually using a powerful processor to sample the incoming frames and find the most appropriate fill-ins for the missing parts from its memory.

I guess modern flat screens are more like computers with a digital radio-wave input than actual TVs in the classical sense.
They have all the computer parts, like RAM, CPU and GPU; the only difference from a desktop PC or a laptop is the medium through which the info is sent, internet cable vs radio waves - but then again, for laptops WiFi is just another form of radio-wave broadcast.
 
  • #44
artis said:
the digital bandwidth would probably compare to the analogue
That would be a pretty inefficient digital transmission system for these days. You are assuming what would be a basic binary signalling system, which has a bandwidth of the order of the baud rate. Take WiFi, for instance, which is a pretty representative system. It uses an RF channel bandwidth of around 20 MHz but can give data rates of over 200 Mb/s with QAM.

I know it's more complicated than that because range and coverage need to be considered, but there is no simple rule of thumb these days, like there was when we sent a binary data stream down a wire. It amazes me that I can usually get 70 Mb/s along a couple of hundred metres of what was installed as telephone (naff audio quality) cable.
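A back-of-envelope check on that WiFi figure (all values here are illustrative assumptions; real 802.11 rates depend on OFDM overheads, guard intervals and coding rates):

```python
import math

# Spectral efficiency of QAM: log2(M) bits per symbol, roughly one symbol
# per second per hertz of channel bandwidth before overheads.
bandwidth_hz = 20e6
bits_per_symbol = math.log2(256)     # 256-QAM -> 8 bits/symbol
coding_rate = 5 / 6                  # a typical forward-error-correction rate

rate = bandwidth_hz * bits_per_symbol * coding_rate
print(f"~{rate / 1e6:.0f} Mb/s per spatial stream")   # ~133 Mb/s

# Two MIMO spatial streams roughly double that, in line with the
# '200+ Mb/s in 20 MHz' figure quoted above.
```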
 
  • #45
I built a display like this back in the early 1980's. (Probably still have the prototype somewhere...) It was intended for a computer graphics system, so all of the transmission and compatibility issues others have mentioned were not a concern in this case. The primary motivation was, as you suggest, to not waste something like 10% of the frame time in retrace.

Since there was no compensating scan in an image tube originating the video, the line pairing at the edges was indeed an issue. To address this, we stepped the vertical scan. In order to get adequate step response, we had to have a relatively low inductance vertical deflection yoke driven by a high-bandwidth amplifier. There was an issue with horizontal errors, as someone mentioned. The primary cause of this was magnetic hysteresis in the ferrite core of the horizontal deflection yoke. Since I was generating the video electronically I could compensate for this fairly well, though it might still have been something of a problem in production due to temperature and part-to-part variations. It never made it into production, though. Ultimately, the reduction in required video bandwidth was not worth the extra power in the amplifiers and other drawbacks. Still, it was a great experiment.

An interesting related issue was that at the time, horizontal output transistors in standard horizontal scan circuits were NPN bipolar transistors operated as saturated switches, and woe betide you if your HOT ever came out of saturation during the flyback pulse! To ensure this never happened, and because of large variations in the transistor's beta due to temperature and lot variations, you had to make sure you had lots of base drive to cover the worst case with plenty of margin. This, of course, means not only a large storage time, but a wide variation in storage time across temperature and parts. This uncertainty meant you had to allocate extra time for horizontal retrace. To reduce this, I built a circuit that servoed the retrace pulse to the H sync pulse. This worked quite well, though again, I don't think we ever put it into production.

By the way, I suspect just about every variation on CRT scanning has been tried at some point. I remember an article on a system that used a spiral scan, from the center outward I think. I don't know why. I also had an alphanumeric display that did a sort of mini-raster along each row of characters (fast vertical, slow horizontal as I recall), but only as far as the characters went in each row. Seems like a small jump from there to a full XY vector display.

Fun times.
 
  • Like
  • Informative
Likes KalleMP, DaveE, Keith_McClary and 1 other person
  • #46
Thank you Entropivore. I also remember an interesting slow scan format before the real digital era where there was a long persistence tube and the pixels were updated randomly.
 
  • #47
Using a scanned electron beam must limit the possibilities of reading out and displaying the image pixels. That beam and the scanning circuits effectively have a large amount of 'momentum' due to the inductance of the coils. That's no longer a necessary factor in the way the picture elements are transmitted.

CCD imaging is currently based on sequential access to the elements in picture lines, but the vertical sequence in which the lines are read out need not, I think, follow a particular order. In fact, how important is it even that the charge coupled elements actually be in a line? Now that an electron beam is no longer used, the whole notion of scanning 'as such' is less relevant to reading out the image information.

The discussions in this thread will be less and less relevant once random access to the pixels of images is possible. I have a feeling that CMOS would allow more flexibility of the data flow from the sensor elements.
 
  • #48
entropivore said:
I built a display like this back in the early 1980's. (Probably still have the prototype somewhere...) It was intended for a computer graphics system, so all of the transmission and compatibility issues others have mentioned were not a concern in this case. The primary motivation was, as you suggest, to not waste something like 10% of the frame time in retrace.

Since there was no compensating scan in an image tube originating the video, the line pairing at the edges was indeed an issue. To address this, we stepped the vertical scan. In order to get adequate step response, we had to have a relatively low inductance vertical deflection yoke driven by a high-bandwidth amplifier. There was an issue with horizontal errors, as someone mentioned. The primary cause of this was magnetic hysteresis in the ferrite core of the horizontal deflection yoke. Since I was generating the video electronically I could compensate for this fairly well, though it might still have been something of a problem in production due to temperature and part-to-part variations. It never made it into production, though. Ultimately, the reduction in required video bandwidth was not worth the extra power in the amplifiers and other drawbacks. Still, it was a great experiment.

An interesting related issue was that at the time, horizontal output transistors in standard horizontal scan circuits were NPN bipolar transistors operated as saturated switches, and woe betide you if your HOT ever came out of saturation during the flyback pulse! To ensure this never happened, and because of large variations in the transistor's beta due to temperature and lot variations, you had to make sure you had lots of base drive to cover the worst case with plenty of margin. This, of course, means not only a large storage time, but a wide variation in storage time across temperature and parts. This uncertainty meant you had to allocate extra time for horizontal retrace. To reduce this, I built a circuit that servoed the retrace pulse to the H sync pulse. This worked quite well, though again, I don't think we ever put it into production.

By the way, I suspect just about every variation on CRT scanning has been tried at some point. I remember an article on a system that used a spiral scan, from the center outward I think. I don't know why. I also had an alphanumeric display that did a sort of mini-raster along each row of characters (fast vertical, slow horizontal as I recall), but only as far as the characters went in each row. Seems like a small jump from there to a full XY vector display.

Fun times.
I misspoke slightly regarding blowing up horizontal output transistors. The flyback actually occurs when you turn off the transistor, and the real problem is running out of base drive when you reach the peak current at the end of the sweep. Of course, trying to turn the output transistor back on during the flyback pulse will also cause grief, but that's not so likely to happen. Except... I think it was the Commodore PET that had a monitor where rather than having a horizontal oscillator that was synchronized to pulses from the video circuitry, in essence software directly drove the horizontal output stage, so you could in fact smoke the hardware through a programming error. Clever, eh?
 
  • #49
sophiecentaur said:
Using a scanned electron beam must limit the possibilities of reading out and displaying the image pixels. That beam and the scanning circuits effectively have a large amount of 'momentum' due to the inductance of the coils. That's no longer a necessary factor in the way the picture elements are transmitted.

CCD imaging is currently based on sequential access to the elements in picture lines, but the vertical sequence in which the lines are read out need not, I think, follow a particular order. In fact, how important is it even that the charge coupled elements actually be in a line? Now that an electron beam is no longer used, the whole notion of scanning 'as such' is less relevant to reading out the image information.

The discussions in this thread will be less and less relevant once random access to the pixels of images is possible. I have a feeling that CMOS would allow more flexibility of the data flow from the sensor elements.
It's true that the display scanning need no longer define the imager scanning (or vice versa), but there are other architectural issues at play. Interconnect limitations dictate that imager data is going to have to be serialized in some way, and if you want the full imager frame you might as well do that in a simple pattern. Some existing CMOS sensors provide the ability to shift out only a sub-region of the full array, which can be useful for machine vision applications such as target tracking but is less interesting for general photography and video applications.

Also, our compression standards (e.g., MPEG, H.264) have evolved around processing pixels in a predictable sequence. You could in principle build data compression into the imager itself, but there are architectural and economic arguments against that as well, so it probably only makes sense in a limited set of conditions.

Re your question about whether the imager elements need to even be in a line, consider that even though imaging and display are now greatly decoupled, in order to make sense of an image one still has to know the spatial organization of the original sampling points. Given this, it makes sense to standardize on a single pattern so as to provide interoperability across sensors and systems. (If you've ever had to deal with "non-square" pixels in an imager or display you'll probably know what a headache it can be.) I'm not sure to what degree the evolution of fabrication technologies influences this, but I suspect it may also come into play. That is to say, they seem to be optimized for rectilinear structures, so uniform XY grids are a natural choice. Ultimately, economics is a major driver of the evolution of technology.
 
  • #50
sophiecentaur said:
The discussions in this thread will be less and less relevant once random access to the pixels of images is possible. I have a feeling that CMOS would allow more flexibility of the data flow from the sensor elements.
Well, if what I understand about active-matrix TFT is correct, then the vertical scan rate is the rate at which each pixel row's gates are switched on/off (each row's gates are connected together), and this is matched by the vertical data bus (the MOSFET drain line, individual for each column) that then drives each pixel capacitance either on or off based on its square wave.
So with this, one can have all pixels within a single horizontal row controlled simultaneously and independently. I can't see how one could do this for the whole screen instead of a single row at a time, as that would require additional layers of wires on the TFT matrix so that any sub-pixel of the whole matrix could be controlled independently. It seems unrealistic; also, the driving circuitry would probably have to be orders of magnitude more complex. Is there something I'm missing?
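That row-at-a-time addressing is easy to mimic in a few lines. A minimal sketch (sizes and the instantaneous-charging assumption are illustrative, not panel-accurate):

```python
import numpy as np

# Active-matrix refresh: assert one row's gate line, let all column drivers
# write that row's pixel capacitors in parallel, then move to the next row.
rows, cols = 4, 6
frame = np.random.default_rng(1).random((rows, cols))  # target pixel values
panel = np.zeros((rows, cols))                         # pixel capacitor states

for r in range(rows):          # the vertical scan selects one gate line
    panel[r, :] = frame[r, :]  # whole row updates simultaneously

assert np.allclose(panel, frame)
print(f"one full refresh = {rows} row-select periods")
```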
 
  • #51
sophiecentaur said:
In fact, how important is it even that the charge coupled elements would actually need to be in a line?
We have a huge pile of image math based on the line & raster system. As a storage and transfer method, I don't think it'll ever fundamentally change.

sophiecentaur said:
The discussions in this thread will be less and less relevant once random access to the pixels of images is possible. I have a feeling that CMOS would allow more flexibility of the data flow from the sensor elements.
With the rapid development of IC technology, it is indeed possible to assign sophisticated circuitry to every pixel, and this opens up many new possibilities regarding (colour and movement) dynamics and on-the-fly processing.
On the other hand, if you take a photo it is still expected to capture the whole field of vision.
 
  • #52
entropivore said:
I think it was the Commodore PET that had a monitor where rather than having a horizontal oscillator that was synchronized to pulses from the video circuitry, in essence software directly drove the horizontal output stage, so you could in fact smoke the hardware through a programming error. Clever, eh?
That was implemented in the early IBM PC desktop computers in the early 1980's. There was no possibility of troubleshooting the display without it being connected to their computer. With no horizontal oscillator, all the magic smoke would escape from the horizontal output stage! :rolleyes:
 
  • #53
entropivore said:
Some existing CMOS sensors provide the ability to shift out only a sub-region of the full array, which can be useful for machine vision applications such as target tracking but is less interesting for general photography and video applications.
I am really not up to date with details like that but it sort of makes my point that line scanning is not a 'given'.
entropivore said:
Given this, it makes sense to standardize on a single pattern so as to provide interoperability across sensors and systems.
With the present state of things you are right. But the 'readout' sequence could be varied to suit the particular image (image sequence) in an intelligent way. That sequence could be sent to the decoder.
artis said:
Well, if what I understand about active-matrix TFT is correct, then the vertical scan rate is the rate at which each pixel row's gates are switched on/off (each row's gates are connected together), and this is matched by the vertical data bus (the MOSFET drain line, individual for each column) that then drives each pixel capacitance either on or off based on its square wave.
In the end, it's a matter of achievable data handling speeds and we will surely do a lot better than what you are describing. If the image sensor readout were pixel orientated, it could be treated as a random access memory and a more intelligent coding processor could assemble the optimum image (sequence) data. The possibility of getting better quality images will rely on intelligent systems which present the data best to the human eye / brain.

The motion aspect of image sensing tends to be ignored in many of these discussions. Even going back to 'old fashioned TV', the practice of having interlaced fields was all about reducing the jerkiness of the 25 / 30 Hz frame rate by doubling the temporal sampling rate.
 
  • #54
Well, I agree @sophiecentaur that if one had the option of changing individual pixels all across the screen at once, and doing so at the same rate that the real camera's captured pixels change due to light changes, then I guess we could forget the term "frame rate". The picture would be much smoother, but then again the question is how fast we can change individual pixels at once, and whether that matches up to the most challenging videos captured.
 
  • #55
artis said:
whether that matches up to the most challenging videos captured
That would take us into the same problems that the developers of MPEG have encountered: matching the channel to the subject material and psychology. It would be more along the lines of the way human vision works. But there can be no doubt that a scanning system based on revolving drums or deflecting electron beams can be improved on significantly.

I would say we're not even half way there.
 
  • #56
sophiecentaur said:
It would be more along the lines of the way human vision works.
Yes, that also came to my mind: how exactly human vision works with regard to this. I'm pretty sure we don't have horizontal row scan and vertical column scan rates, but I don't want to derail this otherwise good thread, so I guess one would need to make another one for that.
 
  • Like
Likes sophiecentaur
  • #57
artis said:
I'm pretty sure we don't have horizontal row scan and vertical column scan rates,
It was very much the tail wagging the dog. The tail was what we could do at the time and the rest of the TV system followed.

The way we see things is so strange. I remember the head of our School Art Department covering my Science lesson on the last (fun) day of Christmas term. She was indulging the kids by drawing portraits of a few of them. Her method was to start top left of the A4 paper and more or less produce the picture as if she was writing / scanning it in large type. When she got to the bottom right, the picture was done. I was gobsmacked that the sketches were all good likenesses and it set me thinking about what she was actually doing in producing the likenesses in that way.
 
Last edited:
  • Informative
Likes hutchphd
  • #58
sophiecentaur said:
That would take us into the same problems that the developers of MPEG have encountered: matching the channel to the subject material and psychology. It would be more along the lines of the way human vision works. But there can be no doubt that a scanning system based on revolving drums or deflecting electron beams can be improved on significantly.

I would say we're not even half way there.
When you say we're not even half way there, it's not clear to me where "there" is, or (with apologies to Gertrude Stein) if there's even a there there. To put it another way, what problem are you really trying to solve?

One can certainly imagine an imager built on top of some sort of neuromorphic substrate with a massively parallel interconnect. This might be very useful for applications like missile guidance or other machine vision problems, or in building something like an artificial retina. For the larger set of applications, though, data acquired by an imager need to be conveyed in fairly raw form to a physically distinct entity, and that degree of parallelism isn't an option. So, we're stuck with some form of serialized transmission. If the addressing sequence is not defined in advance, that is, if the pixels are sent in apparently random order, then the overhead of sending addresses along with the pixel data becomes a heavy burden. If you had a 4k x 4k imager you'd need 24 bits for each address. If you have 24 bits of pixel data, then you've just doubled the bandwidth requirement, but to what end?
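The addressing arithmetic in that paragraph, worked through:

```python
import math

# Cost of sending an explicit address with every pixel, for a 4k x 4k imager.
pixels = 4096 * 4096
addr_bits = math.ceil(math.log2(pixels))   # bits needed to name one pixel
data_bits = 24                             # e.g. 8 bits per RGB component

print(f"{addr_bits} address bits + {data_bits} data bits "
      f"= {(addr_bits + data_bits) / data_bits:.1f}x the raw pixel traffic")
# 24 + 24 bits -> exactly 2.0x: random-order transmission doubles the
# bandwidth requirement, as stated above.
```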

One of the benefits of raster scanning is that it preserves locality of reference. This means that you can do on-the-fly processing without assembling full frames, which is advantageous in terms of latency and storage requirements. Running a filter over a rasterized pixel sequence requires storing only a few lines' worth of pixel data. On a random sequence, it would require storing the entire frame to be sure you had all of the pixels for each iteration of the filter.
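A minimal sketch of that line-buffer point, assuming a toy 3-line vertical filter over a raster-ordered stream (illustrative, not from any real codec):

```python
import numpy as np
from collections import deque

width = 8
lines = deque(maxlen=3)        # rolling buffer: only the last 3 scan lines

def on_line_received(line, out):
    """Called as each raster line arrives; emits filtered output on the fly."""
    lines.append(line)
    if len(lines) == 3:
        out.append(np.vstack(list(lines)).mean(axis=0))  # toy 3x1 box filter

out = []
rng = np.random.default_rng(2)
for _ in range(6):             # stream six lines of a frame
    on_line_received(rng.random(width), out)

# Six lines in, four filtered lines out - while storing only three lines,
# never the whole frame.
print(f"{len(out)} filtered lines produced with a 3-line buffer")
```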

Unlike the early days of television, it is a rare case nowadays that images are conveyed from a sensor to a display without alteration. Let's say you have some non-raster sequence that you've determined is optimal for extracting data from the sensor. How would you composite that stream with another one, which I guess would have a completely different address sequence? Even if the two streams had the same address sequence, would that still be the optimal sequence for the composite result? What happens after you've applied scaling or other transforms?

Perhaps I'm missing your point. Is your idea to do away with the entire notion of video as a sequence of frames? I can imagine this in some sort of special case with a one-to-one mapping between a sensor and a display, analogous to a coherent fiber optic bundle, for example. How this would work in a more general case is much less clear to me. Disregarding the rather onerous addressing overhead mentioned above, I can sort of see how you might do compositing and spatial transforms, but it would seem to break anything that relies on locality of reference, such as spatial filtering. (I haven't even begun to try to get my head around temporal filtering in such a system.)

Bear in mind that of all the pixels in the universe, a significant (and rapidly increasing, I expect) portion of those captured by imagers are never displayed for human eyes, and likewise many of those displayed for human viewing never originated from real-world image capture. Coming up with an entirely new video paradigm that is optimized for direct sensor-to-display architectures seems like a misdirected effort.
 
  • Informative
  • Like
Likes hutchphd and DaveE
  • #59
entropivore said:
When you say we're not even half way there, it's not clear to me where "there" is, or (with apologies to Gertrude Stein) if there's even a there there. To put it another way, what problem are you really trying to solve?

One can certainly imagine an imager built on top of some sort of neuromorphic substrate with a massively parallel interconnect. This might be very useful for applications like missile guidance or other machine vision problems, or in building something like an artificial retina. For the larger set of applications, though, data acquired by an imager need to be conveyed in fairly raw form to a physically distinct entity, and that degree of parallelism isn't an option. So, we're stuck with some form of serialized transmission. If the addressing sequence is not defined in advance, that is, if the pixels are sent in apparently random order, then the overhead of sending addresses along with the pixel data becomes a heavy burden. If you had a 4k x 4k imager you'd need 24 bits for each address. If you have 24 bits of pixel data, then you've just doubled the bandwidth requirement, but to what end?

One of the benefits of raster scanning is that it preserves locality of reference. This means that you can do on-the-fly processing without assembling full frames, which is advantageous in terms of latency and storage requirements. Running a filter over a rasterized pixel sequence requires storing only a few lines' worth of pixel data. On a random sequence, it would require storing the entire frame to be sure you had all of the pixels for each iteration of the filter.

Unlike the early days of television, it is a rare case nowadays that images are conveyed from a sensor to a display without alteration. Let's say you have some non-raster sequence that you've determined is optimal for extracting data from the sensor. How would you composite that stream with another one, which I guess would have a completely different address sequence? Even if the two streams had the same address sequence, would that still be the optimal sequence for the composite result? What happens after you've applied scaling or other transforms?

Perhaps I'm missing your point. Is your idea to do away with the entire notion of video as a sequence of frames? I can imagine this in some sort of special case with a one-to-one mapping between a sensor and a display, analogous to a coherent fiber optic bundle, for example. How this would work in a more general case is much less clear to me. Disregarding the rather onerous addressing overhead mentioned above, I can sort of see how you might do compositing and spatial transforms, but it would seem to break anything that relies on locality of reference, such as spatial filtering. (I haven't even begun to try to get my head around temporal filtering in such a system.)

Bear in mind that of all the pixels in the universe, a significant (and rapidly increasing, I expect) portion of those captured by imagers are never displayed for human eyes, and likewise many of those displayed for human viewing never originated from real-world image capture. Coming up with an entirely new video paradigm that is optimized for direct sensor-to-display architectures seems like a misdirected effort.
As I mentioned previously, a long persistence CRT can do individual pixel scanning by driving X and Y plates with suitable noise-like waveforms.
 
  • #60
tech99 said:
As I mentioned previously, a long persistence CRT can do individual pixel scanning by driving X and Y plates with suitable noise-like waveforms.
Certainly. The DEC 338 and 339 displays were classic examples of this. (Remember light pens?) Genisco even built a 3D display by combining a vector display and an oscillating mirror. (Look up the Genisco "Spacegraph".) But there's a reason why vector displays are now essentially historical artifacts and computer display technology and video display technology have converged.
 
