What are the limitations of CRT display technology? Resolution, nits, ...

AI Thread Summary
CRT display technology faces significant limitations, particularly in resolution and brightness. Achieving high resolutions like 3840x2160 on a 34-inch screen would require increased beam current, which could lead to issues with electron scattering and mask heating. The inherent inefficiency of CRT phosphors, converting only about 30% of energy into visible light, poses another challenge compared to OLED technology. Additionally, while using thick strontium glass could theoretically allow for higher brightness levels, it raises concerns about x-ray emission and potential image quality degradation due to glass imperfections. Overall, the complexity and engineering demands of building a CRT make it a daunting project for enthusiasts.
jman5000
TL;DR Summary
What were the limits of CRT displays in terms of resolution, brightness, frequency, etc.?
I've been reading about how color CRT displays work because I've become interested in trying to make one, even if it isn't going to be very good compared to the ones of the past. (Yes, I know they use high voltage that can arc through material/air, and that the voltage drop and electron collisions emit x-rays.)
The main attributes I'm interested in are resolution and brightness. I know that a higher resolution requires less beam current, since fewer electrons means less mutual repulsion, but say I wanted to fit a resolution of 3840x2160 into a 34-inch screen: that is a much higher beam current per unit area than any CRT I've heard of. Do you think a strong enough focusing apparatus could stop the electrons from scattering and keep the brightness of your average CRT, say 150 nits, while still maintaining that high resolution?
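As a quick sanity check on what that spec implies geometrically, here is a back-of-the-envelope pixel-pitch calculation (the 16:9 aspect ratio is an assumption; the numbers are pure geometry):

```python
import math

# Hypothetical 34-inch 16:9 CRT running 3840x2160: what dot pitch is implied?
diag_in = 34.0
h_px, v_px = 3840, 2160
aspect = math.hypot(16, 9)          # 16:9 diagonal in "aspect units"

width_in = diag_in * 16 / aspect    # ~29.6 in wide
height_in = diag_in * 9 / aspect    # ~16.7 in tall
pitch_mm = width_in / h_px * 25.4   # horizontal pixel pitch in mm

print(f"screen ~{width_in:.1f} x {height_in:.1f} in")
print(f"implied pixel pitch ~{pitch_mm:.3f} mm")
```

That works out to roughly 0.20 mm per pixel triad, which is at or slightly beyond the finest dot pitches ever put into production CRT monitors, so the shadow mask itself, not just the beam, becomes the limiting factor.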

Also, I've read that CRT phosphors only convert ~30% of the beam energy into visible light, which seems really low, but I don't know how that compares to the emitters used in OLED displays. What makes CRT phosphors so inefficient, and could a phosphor be made that converts the beam's electrons more efficiently? What is different about how an OLED pixel lights up compared to a CRT phosphor? Don't they both have electrons exciting them to emit light? A more efficient phosphor seems like the easiest way to overcome CRT limitations, since you wouldn't have to increase the voltage, which would wear out the cathode and require even thicker screens to stop x-rays.

Now this next part is going into super-hypothetical territory that I'd never want to actually test. Is there any reason why I couldn't have super-high brightness, like 600+ nits, if I was okay with having multiple feet of strontium glass? The way we talk about x-ray blocking, saying things like "the strontium glass screen blocks 99.8% of x-rays," means that some amount gets through, right? So a 600-nit CRT would be emitting much higher-energy x-rays regardless of the screen used. I'd think the glass would reduce the energy of the x-rays a lot, but the potential for an x-ray to pass through the glass completely unimpeded exists, right? Additionally, do you think that would degrade the image quality in any way? I'd think the glass would have slight imperfections that aren't frequent enough to affect the image at lower thicknesses but would accumulate at greater ones.
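On the "some amount gets through" point: attenuation in a uniform absorber is exponential in thickness (Beer-Lambert), so extra glass helps multiplicatively, not linearly. A toy calculation, assuming a faceplate that transmits 0.2% at its nominal thickness and ignoring the strong energy dependence of real attenuation coefficients:

```python
import math

# Toy Beer-Lambert attenuation: suppose the faceplate transmits 0.2% of the
# x-ray flux at its nominal thickness (i.e. it "blocks 99.8%").  What does
# multiplying the thickness do?  Real attenuation coefficients are strongly
# energy-dependent; this shows only the shape of the effect.
t_nominal = 0.002              # transmitted fraction at 1x thickness
mu_x = -math.log(t_nominal)    # mu * x for the nominal faceplate

for factor in (1, 2, 3, 5):
    transmitted = math.exp(-mu_x * factor)
    print(f"{factor}x thickness -> {transmitted:.1e} of the flux transmitted")
```

Each extra multiple of the nominal thickness multiplies the transmission by another factor of 0.002: it never reaches exactly zero, but it collapses so fast that "multiple feet" of glass would be enormous overkill for shielding alone.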

Let me know of any other limitations you know of that would make this a dead-in-the-water idea.
 
Do you have the capability to make a CRT, with the immense amount of engineering involved? How will you position the shadow mask, and make the electron guns?
 
Try the search terms "high resolution CRT display" and "phosphor efficiency". All of your questions will be answered in the first page of hits for each search.
 
Don't forget that the large beam current will heat the heck out of the shadow mask in your proposed setup unless you are very, very precise.
 
tech99 said:
Do you have the capability to make a CRT, with the immense amount of engineering involved? How will you position the shadow mask, and make the electron guns?
No, I only have a bachelor's in pure mathematics. I'm currently learning electricity through YouTube, which is all algebra-based; that feels weird, since I feel like I should be learning the calculus-based version, but I can't really find any resources that are digestible. I don't mean to trivialize how hard a task building a CRT would be. I know it requires extensive knowledge of electrical, chemical, and mechanical/thermal physics. But when I look up YouTube videos of DIY cathode ray tube displays, I see a few barebones ones that don't have a really stable image yet look completely within the realm of garage building. I mention high-resolution CRTs, but I don't plan on making a high-fidelity one. At least not until I make a simpler one and understand its intricacies before I try to make something better. I'm just asking because I'm curious about the technology.

Speaking of which, do you have any analog circuit textbook recommendations for beginners? It can use calculus, but it still needs to assume I know nothing about electricity. I'm not going to actually touch CRT stuff until I feel competent enough in electricity to know I won't kill myself.

Also, I actually have a book on electromagnetism, but it is more focused on theory than application. It's by Griffiths. Do you think it would be important to read that, or would a book on more applied material suffice? I feel like regardless of whether I read the Griffiths book, I'm going to have to read a more applied book, so I'd rather not have to read two.
 
jman5000 said:
Let me know of any other limitations you know of that would make this a dead-in-the-water idea.
The resolution will be limited by the manufacture of the "invar" shadow mask. The shadow mask will need to be mounted repeatably and very precisely against the face of the tube.
The brightness will be limited by the beam dimensions, heating of the mask, and burning of the phosphor.

The best way to understand a technology is to imagine making it, as a reverse engineer. I have heard it said, that you cannot be a truly good person, if you do not have wicked thoughts, and resist them.

Once you realise the multiple-lifetime cost of making one colour CRT, evaluate the cost of making yourself an electron microscope. That will be easier, cheaper, and a more productive investment. You can use it to make nanomachines and quantum computers for your children.
 
jman5000 said:
I only have a bachelors in pure mathematics.
With that, it is only a few years of serious electronics practice and you may attempt to DIY something based on an old oscilloscope tube.
link

There are people who can DIY something like that tube itself too (usually in awful quality, but still: it's at least possible).

CRT displays were the products of a long multi-disciplinary evolution. The last generations were so complex and sophisticated that (in comparison) a DIY particle accelerator would look easy and boring.

Not that I would recommend either of them o0)
 
Rive said:
With that, it is only a few years of serious electronics practice and you may attempt to DIY something based on an old oscilloscope tube.
That is not a raster display like a TV, it is a vector plotter. You can just see the trace of the beam, as it moves from one symbol to another on the screen.
 
Baluncore said:
That is not a raster display like a TV, it is a vector plotter. You can just see the trace of the beam, as it moves from one symbol to another on the screen.
Yes. More math, less electronics. A nice fit here, I think.
 
  • #10
Rive said:
With that, it is only a few years of serious electronics practice and you may attempt to DIY something based on an old oscilloscope tube.
link
Pretty cool. :smile:

BTW, did he mention why the display moves slightly every few seconds? I only quick-skimmed the middle of the video and skipped to the end result. At first I thought there was a glitch in the deflection circuitry, but then I realized that he probably did it on purpose. :wink:
 
  • #11
Baluncore said:
That is not a raster display like a TV, it is a vector plotter. You can just see the trace of the beam, as it moves from one symbol to another on the screen.
The trace of the beam shows that the drive waveform doesn't go far enough to get below a believable black level. A fairly good CRT driver won't allow visible frame flyback lines to be seen, although I have seen the effect on a faulty CRT. If the bandwidth of the drive is high enough to give those crisp characters, is it fundamental that the trace between characters would be seen? I don't think so.
Edit: the scanning system must have been a nightmare - electrostatic - but they used it in many specialist vector displays.
Rive said:
CRT displays were the products of a long multi-disciplinary evolution. The last generations were so complex and sophisticated that (in comparison) a DIY particle accelerator would look easy and boring.
I feel privileged to have been around in the days of colour crts and analogue colour TV coding. And it's all water under the bridge now. All that tech was driven by the fact that 'everyone' wanted / needed a colour TV and were prepared to pay money for one.
 
  • #12
sophiecentaur said:
The trace of the beam shows that the drive waveform doesn't go far enough to get below a believable black level.
Maybe Z-mod is not being used, to keep the sparks out of the microcontroller. The differential deflection plates for X and for Y, would be about ground potential, within the grounded aquadag.
An opto-isolator might be used to cross the gap between the grounded microcontroller and the negative EHT of the CRT cathode and grid 1, maybe 1.5 kV?

Notice how the brightness of the trace indicates the direction of travel, since the outputs of the X and Y axis D-A converters settle together, exponentially, towards the next symbol.

It would look better if the point of departure from a previous symbol was close to the next symbol, and the start point of the next symbol was on the far side of its widest part. That would cross the gap quickly, then leave time to settle in the brighter area of the next symbol.
 
  • #13
Baluncore said:
Maybe Z-mod is not being used, to keep the sparks out of the microcontroller.
A suitable driver would do that. I'm surprised that the z mod seems not to be good enough to suppress visible flyback lines when in normal 'scope' mode. Or maybe the demo was considered to work well enough without specific flyback suppression. It can hardly be fundamental to the tube. OR . . . . they deliberately left the flyback lines visible to make it obvious that it was a vector image and not a raster.
It would have been an impressive demo - for a one off. What generation of electronics did the image processing, I wonder? Trivial, these days but it could have involved a lot of sweaty transistors in the 60s.
 
  • #14
sophiecentaur said:
Trivial, these days but it could have involved a lot of sweaty transistors in the 60s.
Well, by the '90s, drawing 7-digit numbers (measurements) on the screen with logic ICs was available even on cheap, portable Soviet tech (that's where I first saw it).
It looked awful on the 5 cm x 4 cm screen I had, but it worked...
I guess it came earlier on the luckier half of the world :smile:
 
  • #15
I don't see the issue of coupling logic signals to the deflection plates. Low-voltage signals eventually ended up on the deflection plates of any CRT scope many years ago. Dual-trace scopes in chop mode deflected the beam without blanking.
 
  • #16
Averagesupernova said:
I don't see the issue of coupling logic signals to the deflection plates.
That is not the problem. The problem is that z-mod is done at the grid 1 voltage, relative to the cathode, both of which are near the negative EHT voltage.
 
  • #17
Baluncore said:
That is not the problem. The problem is that z-mod is done at the grid 1 voltage, relative to the cathode, both of which are near the negative EHT voltage.
The z axis had low voltage control in some cases also. And apparently you didn't catch my implication that the clock kit is probably not blanked at all, ever. CRT scopes that I've worked on didn't blank the trace in chop mode. If the beam is deflected fast enough, it doesn't write on the phosphor face of the CRT. Or at least it writes quite faintly.
 
  • #18
Averagesupernova said:
The z axis had low voltage control in some cases also.
Yes. It was often capacitor-coupled to grid 1, but EHT noise on the cathode then modulated the brightness. Expensive CROs had DC coupling of the Z-mod, but that required transistors rated at 2 kV, until the advent of opto-couplers.

Averagesupernova said:
And apparently you didn't catch my implication that the clock kit is probably not blanked at all, ever.
Baluncore said:
Maybe Z-mod is not being used, to keep the sparks out of the microcontroller.

Averagesupernova said:
CRT scopes that I've worked on didn't blank the trace in chop mode.
Those I have worked on usually AC-couple the clock edge that advances the channel-select flip-flop to G1. That dims the trace during the channel transition.
 
  • #19
I hunted around and found this link about building a clock display like the one shown above. As I thought, the actual CRT deflection plates need some very high voltages and require amplification even for the lowest x and y sensitivity. 'Eric' obviously had fun with his kit.

Life was hard in those days but the TV race was as urgent as the Space Race and more actual profit was involved.

BTW RIP the latest Russian moon landing. Hard to resist a bit of Schadenfreude but I feel sorry for all those Engineers. I hope none of them find themselves in Siberia. Mr P can be very vindictive.
 
  • #20
So, I thought of a mechanism to get increased brightness in a CRT without driving the cathode harder and emitting more x-rays. Let me know of any obvious flaws with it, as I don't actually know much physics.
Imagine three electron guns, like a traditional color CRT. The shadow mask is a traditional three circular holes per pixel shadow mask, ie no aperture grille, completely smushed flat in contact with all the phosphors. The reason for this is that the shadow mask is actually an open circuit with a voltage applied to it, and the electron beam temporarily closes the circuit at a spot on the mask only while it passes over that pixel. At that point the phosphor would receive more electricity from the closed circuit on the mask and get even brighter.

The "air" gaps (it's in a vacuum, but I don't know how else to word it) in the shadow mask, where the electron beam would pass through, would have enough resistance to not allow electricity to flow unless the electron beam is directly over them.

I have some concerns over this: namely, I'm not really sure how a negative electron beam would interact with a mask that also carries a charge. It might not land correctly, or it might distort the image as the beam moves along the screen.

Also, since the gaps have a minimum requirement to even close, to have a very good color range the mask would need to be extremely precise, allowing very fine gradations of voltage to close one circuit without closing neighboring circuits as well. Seems extremely complex.
 
  • #21
jman5000 said:
The shadow mask is a traditional three circular holes per pixel shadow mask, ie no aperture grille, completely smushed flat in contact with all the phosphors.

The electron beam must go to ground through the mask, or it will build up a negative charge that repels/reflects the beam from the mask. The shadow mask is set back from the phosphor, so one hole in the mask is used by all three of the electron beams from the three guns. Each beam passes through the aperture at a slightly different angle, then hits the appropriate colour phosphor dot. The area about the phosphor must also be conductive, to prevent a buildup of charge, and secondary emission.

The phosphor is placed by photo-sensitising the screen with a radiation source, placed where the appropriate gun for that colour will be. So, it takes three cycles of sensitise, expose, etch and deposit, to place the phosphors. That requires the shadow mask be placed accurately three times, then finally fixed, before the back half of the tube is attached, and the guns installed.

I cannot see how you can photo-print three different phosphors with one mask having three holes for each pixel.

A brighter phosphor will overheat and damage the phosphor chemistry.
 
  • #22
jman5000 said:
Let me know of any obvious flaws with it
Actually, the whole thing is full of flaws. @Baluncore has pointed out some of them.
Also, it's a bit late to be trying to improve the shadow mask design. That ship has sailed.
 
  • #23
Baluncore said:
The electron beam must go to ground through the mask, or it will build up a negative charge that repels/reflects the beam from the mask. The shadow mask is set back from the phosphor, so one hole in the mask is used by all three of the electron beams from the three guns. Each beam passes through the aperture at a slightly different angle, then hits the appropriate colour phosphor dot. The area about the phosphor must also be conductive, to prevent a buildup of charge, and secondary emission.

The phosphor is placed by photo-sensitising the screen with a radiation source, placed where the appropriate gun for that colour will be. So, it takes three cycles of sensitise, expose, etch and deposit, to place the phosphors. That requires the shadow mask be placed accurately three times, then finally fixed, before the back half of the tube is attached, and the guns installed.

I cannot see how you can photo-print three different phosphors with one mask having three holes for each pixel.

A brighter phosphor will overheat and damage the phosphor chemistry.
Oh, I thought each pixel had three holes, one per RGB color. In any case, I think I see a problem: the circuit would still be closed even with the air gaps, because the mask is in contact with the phosphors. They kind of have to be for my idea to work, otherwise the phosphors wouldn't get additional energy from the closed circuit.
 
  • #24
sophiecentaur said:
Actually, the whole thing is full of flaws. @Baluncore has pointed out some of them.
Also, it's a bit late to be trying to improve the shadow mask design. That ship has sailed.
CRTs still have superb motion handling compared to other displays. There are some displays that match the CRT in motion quality using a strobing backlight, but they have dim, muted colors and some other problems.
Sure, CRTs might not have OLED levels of color, but they beat out LCDs while innately having great motion.
 
  • #25
Baluncore said:
The electron beam must go to ground through the mask, or it will build up a negative charge that repels/reflects the beam from the mask. The shadow mask is set back from the phosphor, so one hole in the mask is used by all three of the electron beams from the three guns. Each beam passes through the aperture at a slightly different angle, then hits the appropriate colour phosphor dot. The area about the phosphor must also be conductive, to prevent a buildup of charge, and secondary emission.
How did you learn how CRTs work? I cannot find decent information on the internet beyond layman descriptions that aren't very thorough in the intricacies of it. I'd really like to get a more thorough understanding of all the parts of it.
 
  • #26
jman5000 said:
How did you learn how CRTs work? I cannot find decent information on the internet beyond layman descriptions that aren't very thorough in the intricacies of it. I'd really like to get a more thorough understanding of all the parts of it.
I did a Google search on Society for Information Display: CRT design and got some good hits that may help you. Check out the hit list or do your own similar search:

https://www.google.com/search?client=firefox-b-1-e&q=society+for+informaion+display:+CRT+design
 
  • #27
jman5000 said:
How did you learn how CRTs work?
I became interested in electronics in 1965, built my first CRT oscilloscope in 1969, and have spent most of my life surrounded by CRT based instruments.
If you stay with it for long enough, it will soak in.
 
  • #28
jman5000 said:
CRTs still have superb motion handling compared to other displays.
I can't believe that. Are you taking the display only, or are you including the coding and compression that is used in order to get more than one HD channel into the spectrum space used for conventional 625-line TV?

Is it the same argument that makes vinyl better to listen to than MP3? (Was I being too cheeky there?)
 
  • #29
sophiecentaur said:
I can't believe that. Are you taking the display only, or are you including the coding and compression that is used in order to get more than one HD channel into the spectrum space used for conventional 625-line TV?

Is it the same argument that makes vinyl better to listen to than MP3? (Was I being too cheeky there?)
It is measurable quality. Check out Blurbusters.com for more info; the guy who runs the site and figured this out can answer more focused questions. I'll try to describe it, though. This assumes a progressive signal to the CRT, not interlaced.

How blurry something looks to the human eye is tied to how long each frame is visible to your eye. Modern displays use sample-and-hold, where a frame is held until a new one is swapped in, so a 60 Hz display holds a frame for about 16.7 ms. A CRT's image visibility is set by the phosphor decay time, since it doesn't hold the frame, and the decay time of the phosphors is ~1 ms. For an LCD to match a CRT in motion clarity, it would need to hold each frame for only 1 ms, i.e. run at about 1000 Hz.
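The persistence argument above reduces to one line of arithmetic: perceived blur width during eye tracking is roughly pan speed times frame visibility time (this is the Blur Busters rule of thumb; the 960 px/s pan speed is just an example figure):

```python
# Rule-of-thumb persistence blur: while the eye tracks a moving object, the
# perceived blur width ~ pan speed x time each frame stays lit on screen.
pan_speed = 960  # pixels per second (example pan speed)

displays = [
    ("60 Hz sample-and-hold",   1000 / 60),  # frame held ~16.7 ms
    ("CRT phosphor flash",      1.0),        # ~1 ms decay
    ("1000 Hz sample-and-hold", 1.0),        # 1 ms hold per frame
]
for name, persistence_ms in displays:
    blur_px = pan_speed * persistence_ms / 1000
    print(f"{name}: ~{blur_px:.1f} px of tracking blur")
```

A 60 Hz hold-type display smears ~16 px at this pan speed while a CRT smears ~1 px, which is exactly the "1 ms persistence needs ~1000 Hz" equivalence described above.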

I will point out that a sample-and-hold display still retains its full resolution when slower motion is on screen. How fast things can pan across the screen before blurring is tied to the refresh rate. So an OLED will actually show higher resolution during a slow pan, but it doesn't take much speed before it switches to looking blurry.

https://www.testufo.com/framerates#count=3&background=stars&pps=120

This test shows what I am talking about. Use Ctrl+mouse wheel to set scaling to 100%. Change the speed dropdown and notice that the UFO gets blurrier the higher the speed. A CRT decouples refresh rate from motion blur, so you could even run a CRT at 30 Hz (with horrible flickering) and get crystal-clear images at any speed you set. Focusing on these tests might give you a headache, but I assure you, you don't have to focus to the point of nausea to notice the enhanced motion on a CRT.

I'll also point out that if you have a slower, cheaper LCD it might even look blurry at the lower speeds, in which case you might think what I'm saying is baloney. However, these cheaper LCDs, especially older ones, have longer pixel transition times that can bleed into the next frame and look blurry no matter what.

https://www.testufo.com/photo#photo...ursuit=0&height=0&stutterfreq=0&stuttersize=0

You can see that it can actually look completely in focus. Again, on a CRT it is crystal clear at any speed setting.

Also, even if you had a 1000 Hz display, you would have to have content that is natively 1000 fps to get that motion clarity; otherwise you get image duplication, like you see with the other refresh rates on those sites. Most video is 24 fps, so you actually get this image duplication. You may not notice it outside of extreme panning speeds, though, because the duplications fall within the blur that LCD/OLED produce. There are solutions to these problems, but they basically amount to having insanely high resolutions and refresh rates, to the point of absurdity, to make the duplications unnoticeable. Realistically, even a CRT isn't the perfect solution for film: you get flickering at lower refresh rates, so running it at a watchable rate would mean 48 Hz minimum IMO, which would still duplicate images, but it still looks cleaner than the blur of LCD/OLED. You can also interpolate frames on either display. On a CRT the number of fake frames would be kept to a minimum, since you can use a lower refresh rate and still retain motion clarity, which should look better than a higher ratio of fake frames.
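The duplication effect can be sketched numerically: each 24 fps source frame is repeated display_hz/24 times, and while the eye tracks a pan those repeats land at distinct retinal positions. The 960 px/s pan speed is an arbitrary example figure:

```python
# Why 24 fps content "ghosts" on a fixed-rate hold display: each source frame
# is repeated display_hz / 24 times, and while the eye tracks a pan those
# repeats land pan_speed / display_hz pixels apart on the retina.
pan_speed = 960    # px/s, example pan speed
content_fps = 24

for display_hz in (60, 120, 240):
    repeats = display_hz / content_fps
    gap_px = pan_speed / display_hz
    print(f"{display_hz:3d} Hz: {repeats:4.1f} repeats per film frame, "
          f"~{gap_px:.1f} px between ghost images")
# Non-integer repeat counts (60 / 24 = 2.5) are the classic 3:2 pulldown judder.
```

Higher refresh rates don't remove the duplicates; they only pack them closer together, which is why "insanely high" rates are needed before the ghosts merge below visibility.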
 
  • #30
jman5000 said:
It is measurable quality.
I see what you mean now (a rave from the grave for me). It's a Sampling Phenomenon. In order to get the brightness as high as possible, the digital screen pixels are on for as long as possible. This introduces a sinc distortion in the spatial frequency response when an object moves. I remember it was given a name Boxcar Distortion(?). This is avoided with zero width samples. I found this link which explains what I remember of sampling theory.
CRT phosphors are given a short decay time to avoid blurring- overlap into the next pixel (even with stationary scenes, iirc) and they have very high peak brightness. It is a very savage way of producing light from the little phosphors. That link can't actually demo a CRT, of course - not on my monitors at least.
Afair, the practical resolution of a CRT also depends on the bandwidth of the 'Z-mod' drive chain, a limit that parallel digital circuitry sidesteps in another way.

BTW, I looked at the link and yes, it made me feel sick. But I am a serious wimp on fairground rides and with strobe lighting.
 
  • #31
sophiecentaur said:
It's a Sampling Phenomenon. In order to get the brightness as high as possible, the digital screen pixels are on for as long as possible. This introduces a sinc distortion in the spatial frequency response when an object moves. I remember it was given a name Boxcar Distortion(?). This is avoided with zero width samples. I found this link which explains what I remember of sampling theory.
Wait, are you saying the phosphor decay time doesn't matter for visible blur and the blur on modern displays is purely from these samples they take? So a CRT with much higher decay times wouldn't look any blurrier? (I know they would have other problems, like longer phosphor trails.) I thought the blur had something to do with the eye interacting with light on a flat stationary surface and trying to predict the motion or something, but you are saying it's purely because of finite-width samples being taken?

Additionally, this article written by the author of that website goes into much better detail than me on persistence blur vs. motion blur in sample-and-hold displays.
https://forums.blurbusters.com/viewtopic.php?t=177

This quote I pulled from it is why I thought the blur was tied to pixel visibility:

"As you track eyes on moving objects, your eyes are in different positions at the beginning of a refresh than at the end of a refresh. Flickerfree displays means that the static frames are blurred across your retinas. So high persistence (long frame visibility time) means there is more opportunity for the frame to be blurred across your vision. That's why you still see motion blur on 1ms and 2ms LCDs, and this is why strobing hugely helps them (make them behave more like CRT persistence)."
 
  • #32
jman5000 said:
Wait, are you saying the phosphor decay time doesn't matter for visible blur and the blur on modern displays is purely from these samples they take?
jman5000 said:
So a CRT with much higher decay times wouldn't look any blurrier?
I'm glad you started this conversation because it's revived my interest.

The phosphor decay time has no effect with stationary pictures. A long decay can 'leave behind' elements of the previous frame and leave streaks or blurred back edges. (I said it the wrong way round in my previous post.)
sophiecentaur said:
CRT phosphors are given a short decay time to avoid blurring- overlap into the next pixel (even with stationary scenes, iirc)
The blurring is into the trailing part of a moving object. That bit about stationary scenes is nonsense. Long phosphor delay just gives a brighter stationary image.

The rectangular pixels are samples of the image. In many sample-reconstruction operations, the result of 'boxcar' distortion can be compensated by post-filtering (e.g. in audio DACs), but the eye can't do that, so it will cause blurred edges when there's movement. Temporal effects are difficult to deal with before the camera sensor; we can't Nyquist pre-filter the original scene, so it would need to be heavily oversampled to avoid those sampling products.
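The boxcar roll-off can be made concrete: holding each temporal sample for a time T multiplies the signal spectrum by |sinc(fT)|. A short sketch comparing a full-frame 60 Hz hold against a ~1 ms CRT-like flash (the specific frequencies are arbitrary examples of moving-detail content):

```python
import math

# Zero-order-hold ("boxcar") response: holding each temporal sample for a
# time T multiplies the spectrum by |sinc(f*T)| = |sin(pi*f*T) / (pi*f*T)|.
def sinc_mag(f_hz: float, hold_s: float) -> float:
    x = math.pi * f_hz * hold_s
    return 1.0 if x == 0 else abs(math.sin(x) / x)

hold_full = 1 / 60   # full-frame hold on a 60 Hz display
hold_crt = 1e-3      # ~1 ms CRT-like phosphor flash

for f in (5, 15, 30):  # example temporal frequencies of moving detail, Hz
    print(f"{f:2d} Hz: full hold {sinc_mag(f, hold_full):.2f}, "
          f"short flash {sinc_mag(f, hold_crt):.3f}")
```

The ~1 ms flash approximates an impulse, so its sinc is nearly flat and moving detail is barely attenuated, while the full-frame hold has already rolled off to 2/pi (about 0.64) at 30 Hz.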

There's no essential difference between seeing a moving object on a screen or seeing the background when we track a moving object but I am sure that our brain video processing is different for the two cases. (tracking a moving object can make us feel unwell, for instance)

But I have reservations about the methodology in the link because the display can't (?) change the time that the pixels are switched on for. Also, the analogue decay of a phosphor can result in more than one past pixel being affected (in the extreme) but each LED pixel has no memory of its previous value. (If it does, the effect can be filtered out.)
 
  • #33
sophiecentaur said:
There's no essential difference between seeing a moving object on a screen or seeing the background when we track a moving object but I am sure that our brain video processing is different for the two cases. (tracking a moving object can make us feel unwell, for instance)
https://www.testufo.com/persistence

This actually shows how eye tracking can change how an image looks.
sophiecentaur said:
I'm glad you started this conversation because it's revived my interest.
In CRTs, or just display tech in general? I feel disappointed that the CRT is dead. The best monitors had a 140 kHz horizontal frequency, and other models could run 2300x1440 at like 80 Hz, which is beyond full HD on a small monitor. Some models even had 0.22 mm dot pitch.

Yeah, the size is big, but as someone who doesn't care about the size, I'd rather have the motion handling than the brightness of modern displays. In a near-black room the colors still blow my really expensive LCD out of the water. I play more games than I watch video, so that motion quality matters massively when games are changing the entire screen quickly most of the time.

I wish I'd gotten into them a decade ago when people were offloading them for cheap. Nowadays all that's left are mostly lower-quality small ones. I feel like display tech downgraded; that's not how technology is supposed to work. It isn't supposed to get worse. (Yeah, I know modern displays do some things better, like ANSI contrast.)
 
  • #34
jman5000 said:
This actually shows how eye tracking can change how an image looks.
But he doesn't do a comparison with a CRT display, and I think you may be extrapolating in that direction.
Thing is that a CRT has to use a sequential (raster) scan and the only cleverness that's available is to have interlace, which gives better motion portrayal and reduces flicker. There's nothing to say that a TV picture has to paint the pixels in that way; they can be randomly accessed and areas with motion can have higher refresh rate than still backgrounds. It's only a matter of time before this is available and standardised. But broadcasting media have an enormous lag in their development because of the need for 'compatibility' with existing equipment in the home.
Data compression is essential when people want access to multiple channels, and the CRT, with its rigid scanning mode, would require extra image processing to make it compatible with digital standards.
CRTs with big images have a certain charm but they are so expensive, take up room (as big as a grand piano!), and use loads of electrical power. Plus, they are too dim to watch under near-daylight conditions (see the afternoon football game from a chair out in the garden), so they don't really have a place in today's world, where digital displays can have amazing resolution and the possibility of upscaling scan rates. Digital displays are almost throw-away items these days, and they tick virtually all the boxes in an engineering and commercial context.
jman5000 said:
I feel disappointed that crt is dead.
My comment about vinyl seems to have been accurate, but I'm more in love with the signal processing that squeezed R, G and B signals into a PAL-coded signal that would fit in an RF TV channel bandwidth, with a very fair monochrome version for those viewers who 'preferred black and white'.
 
  • #35
sophiecentaur said:
There's nothing to say that a TV picture has to paint the pixels in that way; they can be randomly accessed and areas with motion can have higher refresh rate than still backgrounds. It's only a matter of time before this is available and standardised.
Stuff that is filmed at 24 fps, like nearly all Hollywood film and YouTube videos, doesn't gain anything by running at a higher framerate than it was captured at. It won't matter if it is streamed at higher framerates; it just ends up creating duplicate images on your display (which your eye literally sees as distinct duplicate images; I'm not sure if I made that clear before). Believe me, I've tried converting film to higher framerates without interpolation to try to get rid of the duplicate images the eye perceives; it doesn't work. I don't doubt LCDs and OLEDs could match or surpass the motion handling of CRTs one day, but who knows when that will be?

Technically we already have similar motion handling on certain LCDs that use a strobing backlight, but the implementation is usually bad, creating artifacts that look even worse, and the colors end up looking so washed out and dim that it's not worth using. I have one of those LCDs, and the implementation is bad even though it was a very expensive monitor. If you are wondering what I mean by strobing, I am talking about this:
https://blurbusters.com/faq/motion-blur-reduction/

I think a large portion of these artifacts are tied to the human brain doing some work on the image it perceives, and therefore they aren't a directly measurable attribute without using a human eye. Of course, you can still establish correlations to other display attributes using a human eye. Although if you think all the blur and duplicate images are purely tied to measured quantities, I'd love to hear a more technical explanation of it beyond just human perception.

Your explanation of boxcar blur makes me wonder: is what you describe the sole factor in motion blur on sample-and-hold displays, or is there a human-perception element outside of it?
 
Last edited:
  • #36
jman5000 said:
Believe me I've tried converting film to higher framerates without interpolation to try and get rid of the duplicate images the eye perceives, it doesn't work.
Of course, without some form of interpolation you can't up convert in any useful way. Way back, my department were developing the use of motion vectors for improving motion portrayal in 525/60 to 625/50 TV standards conversion. This link is an abstract of a paper I found which is not free but it mentions that motion portrayal can be enhanced beyond simple interpolation. If you are interested in frame rate upconversion then you might find that link interesting. A search using the term 'motion vector' should take you to some useful stuff.
You will have seen slomo sequences in sports TV coverage and they do much better than simple interpolation. In most respects, TV has far more potential for quality improvement than film can ever have.
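The difference between plain blending and motion-vector interpolation can be shown with a toy 1-D example (nothing like real broadcast processing, just the principle): averaging two frames of a moving dot leaves two half-brightness ghosts, while shifting along an estimated motion vector puts a single dot at the midpoint.

```python
import numpy as np

# A bright 1-pixel object at position 10 in frame A and position 14
# in frame B, i.e. moving 4 px per frame.
frame_a = np.zeros(32)
frame_a[10] = 1.0
frame_b = np.zeros(32)
frame_b[14] = 1.0

# Naive interpolation: average the frames. The object shows up as
# two half-brightness ghosts instead of one object at the midpoint.
blend = 0.5 * (frame_a + frame_b)

# Motion-compensated interpolation: estimate the motion vector by
# cross-correlation, then shift frame A halfway along it.
scores = [np.roll(frame_a, s) @ frame_b for s in range(32)]
motion = int(np.argmax(scores))      # estimated shift: 4 px
mc = np.roll(frame_a, motion // 2)   # one object, at pixel 12

print(np.flatnonzero(blend), np.flatnonzero(mc))
```

Real converters estimate a vector per pixel or block rather than one global shift, but the ghost-versus-midpoint contrast is the same.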
 
  • #37
sophiecentaur said:
Of course, without some form of interpolation you can't up convert in any useful way. Way back, my department were developing the use of motion vectors for improving motion portrayal in 525/60 to 625/50 TV standards conversion. This link is an abstract of a paper I found which is not free but it mentions that motion portrayal can be enhanced beyond simple interpolation. If you are interested in frame rate upconversion then you might find that link interesting. A search using the term 'motion vector' should take you to some useful stuff.
You will have seen slomo sequences in sports TV coverage and they do much better than simple interpolation. In most respects, TV has far more potential for quality improvement than film can ever have.
Can you think of a hypothetical way to have a display that doesn't hold samples and works similarly to the CRT method of immediately fading the image? Games cannot really be run at super high framerates, so overcoming blur with more frames isn't really feasible. I wonder if FED and SED displays would have been sample-and-hold?

Obviously, if you had a high enough refresh rate display you might be able to just simulate the CRT method of drawing images, but that level of display seems like sci-fi.
 
Last edited:
  • #38
sophiecentaur said:
Actually, the whole thing is full of flaws. @Baluncore has pointed out some of them.
Also, It's a bit late to be trying to improve the Shadow Mask design. That ship has sailed.
Would you mind pointing out some of those flaws? I'm not asking you to spend a whole afternoon thinking through all the problems, but if you immediately see something wrong with it I'd appreciate it, as now I'm curious what's wrong with it.
I already noticed it's not actually open-circuit, since it's in contact with the phosphors, but disregarding that, what else was wrong with it?
 
  • #39
I found an article linking to all the other articles on the Blur Busters website about the sources of motion blur, if anyone wants to see it. It explains this much better than I do, especially if you read many of the links. The blur I was describing how to overcome with 1000 Hz displays is the eye-tracking blur.
https://blurbusters.com/faq/lcd-motion-artifacts/

More specifically this article covers eye tracking blur: https://blurbusters.com/faq/oled-motion-blur/
 
Last edited:
  • #40
This thread has split into two now - motion is a second issue.
jman5000 said:
Can you think of a hypothetical way to have a display that doesn't hold samples
You can't avoid a sampled image these days, so you are stuck with samples. Before long, and if necessary, they will probably invent a solid-state display with high enough power to flash the pixels for significantly less than the frame repeat period. That could mean impulse samples rather than long ones.

jman5000 said:
Would you mind pointing out some of those flaws?
There is a list of them in the earlier post I mentioned from @Baluncore.

I've been thinking more about CRTs, and there are issues with the choice of a horizontal raster display: it favours horizontal motion portrayal. Your interest seems to be mostly about horizontal motion, and perhaps most games are designed with more of that (?).
jman5000 said:
I think a large portion of these artifacts are tied to human brain doing some work on the image it perceives and therefore isn't a directly measurable attribute without using a human eye.
The human eye is there all the time, of course, but it is not hard to make a mechanical display (a large rotating cylinder) as a reference moving image. Much easier than trying to produce fast-moving unimpaired images electronically. Do you fancy making one of those and having it in the corner of your games room? :wink:

I must say, I found the Blur Busters display examples very hard work to look at, and there is an enormous snag: they have no reference CRT or 'real' images.
 
  • #41
jman5000 said:
Technically we already have similar motion handling on certain lcds that use strobing backlight, but the implementation is usually bad, creating artifacts that look even worse, and the colors end up looking so washed out and dim it's not worth using.
Still, modifying those and getting that approach right looks more promising than the resurrection of a tech now long dead.

jman5000 said:
Stuff that is filmed at 24fps, like nearly all Hollywood film and youtube videos don't gain anything by running at higher framerate than it was captured at.
Now AI is working on that, on both resolution and fps. The results are quite surprising for a tech barely out of its cradle. It also looks like an area worthy of interest (more useful and interesting than those ch(e)atbots).
 
  • #42
sophiecentaur said:
I must say, I found the blur buster display examples were very hard work to look at and there is an enormous snag that they have no reference CRT or 'real' images.
What exactly do they need a CRT for? Are you saying you don't believe the assessment that reducing a pixel's visibility duration reduces the perceived motion blur? I have a 144 Hz LCD, and setting it to different refresh rates of 60/100/144 definitely lets me run that UFO test faster without blur, and it is definitely less blurry when panning around in a game. Whatever the cause, it coincides with refresh rate, so even if the blur isn't directly caused by refresh rate, raising it keeps lowering perceived blur.
 
Last edited:
  • #43
sophiecentaur said:
This thread has split into two now - motion is a second issue.
Are you saying I need to make a second thread specifically for motion? I'm probably close to done bugging you now. There's not much more I can ask. Well, that's not true; I could probably talk your ear off asking questions about CRTs, but I won't.

Also, in relation to horizontal versus vertical motion on a CRT, does that distinction matter? I've never noticed a discrepancy between the two directions on my CRT, but then again maybe I never see stuff move vertically.
 
Last edited:
  • #44
Rive said:
Still, modifying them: doing it right based on that direction looks more promising than the resurrection of a tech now long dead.

Now AI is working on that: both on resolution and fps. The results are quite surprising for a tech barely out of its cradle. Also looks like an area worthy of interest (more useful and interesting than those ch(e)atbots).
I hope so, but it's been two decades of having a literally worse image in motion than what we had with the super-high-end CRT monitors, and while we have 4K now, we still haven't caught up in motion resolution, which is kind of the whole point of a display.

I know there are theoretical ways of doing it, such as super-bright LCDs that can use strobing, but those still lack deep blacks, so then you are looking at doing it with microLED instead, and microLED isn't even available to consumers at this point.

The only other option is an OLED that gets so bright that you could insert a thick rolling black bar for something like 75% of the frame duration, which dims the image but gains motion clarity by reducing the pixel visibility duration. With a bright enough OLED the black bar wouldn't matter, as long as I was okay with running at nits comparable to a CRT. The current black-bar insertion in some OLEDs only manages around 600p resolution in motion and ends up even dimmer than a CRT.
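The brightness cost of a rolling black bar is easy to put numbers on, since perceived brightness scales with the lit fraction of each frame. A back-of-envelope sketch (the 150-nit target and 75% bar coverage are just the figures above):

```python
# Strobing / black-bar insertion trades brightness for clarity
# linearly: perceived nits = panel nits * fraction of time lit.
def panel_nits_required(target_nits: float, black_fraction: float) -> float:
    duty = 1.0 - black_fraction  # fraction of the frame a pixel is visible
    return target_nits / duty

# Keeping a CRT-like 150 nits with the bar covering 75% of each
# frame needs a panel that sustains 4x that brightness:
print(panel_nits_required(150, 0.75))  # 600.0

# Clarity improves by the same 4x factor: visible persistence drops
# from a full ~16.7 ms frame at 60 Hz to about 4.2 ms.
print((1000 / 60) * (1 - 0.75))
```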

I hope technology can keep improving. I don't know, though; CRT didn't keep improving ad infinitum. Who is to say we won't hit similar limitations for OLED/LCD? Do we know the limitations, and is it just a matter of developing the tools to manufacture around them?
 
  • #45
jman5000 said:
Who is to say we won't hit similar limitations for oled/ lcd
Just noticed that a solution got skipped here. I wonder if you have checked plasma displays (both their actual parameters and their potential for development) against your requirements/expectations?
 
Last edited:
  • #46
Rive said:
Just noticed that a solution got skipped here. I wonder if you have checked plasma displays (both actual parameters and potential for development) for your requirements/expectations?
Plasmas supposedly scaled poorly in terms of power and weren't power-efficient enough at higher resolutions to pass regulations, at least according to what I read on the internet. You are right that plasma might have been the solution, if only they'd used faster-decaying phosphors. The plasmas that were sold typically used phosphors with about 4 ms persistence.
 
  • #47
jman5000 said:
weren't power efficient enough at higher resolutions to pass regulations is something I read on the internet.
... to be honest, I have some doubts that a CRT would pass those parts of the regulations at the resolutions/frequencies expected these days...

For me, I see far higher chance of (partial) resurrection for plasma than for CRT.
 
  • #48
jman5000 said:
I've never noticed a discrepancy between the two directions on my crt
If you wave your hand across the screen side to side and up and down, there's a noticeable difference. But I can't repeat that or other experiments because my last CRT went away 15 years ago. There's a wealth of old papers on motion portrayal on raster-scan CRTs; just dig around. They won't have been written on a computer, though!

I have to admit that I find video games rather tiresome (but I am in a minority), so I don't really care too much about the amazing specs they need for processor and connection speeds. But I am very impressed at the eye-watering performance that's available.
jman5000 said:
an oled that gets so bright
I suspect that OLEDs are just a passing phase, and that a bright enough device with a short duty cycle will involve high voltages and high mean power consumption. That could limit applicability to non-portable products. But I watch this space.

The future will be in the minds of the developers, and they will ration out the advances to maximise profits by regulating the rate at which new stuff becomes available. I remember the idea that Kodak always had a set of 'new' films and processes to bring onto the market every time (and only after) another company brought out a 'copy' of the latest Kodak product.
 
  • #49
sophiecentaur said:
I have to admit that I find video games rather tiresome (but I am in a minority) so I don't really care too much about the amazing spec they need for processor and connection speeds. But I am very impressed at the eye watering performance that's available.
I'll admit I'm also getting tired of video games; only some that stand out catch my attention nowadays. But that is just it: we need super high refresh rates to lower blur from sample-and-hold; it's a fundamental flaw of the method. Yes, more refreshes produce a more continuous image, but the clarity of those images is bad because of the long persistence. My CRT running at 94 Hz flat out looks better in terms of motion clarity and smoothness than my high-end 144 Hz LCD once objects exceed a speed threshold that isn't hard to exceed. The clarity is better even at 60 Hz; though I'll admit the smoothness feels less than the 144 Hz LCD at that point, it still ends up looking better.

This isn't just for games, though; it applies to movies as well. Super high framerates just produce the image duplications, and they still have the blur from sample-and-hold on top. Running 24 fps movies on a higher-Hz display doesn't actually decrease their persistence, as the display duplicates the frames to fill however many Hz it runs at; to do otherwise would result in a super-dim image.
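The duplicate-image geometry falls straight out of the numbers. A rough sketch (an impulse-type display is assumed, e.g. a CRT or strobed backlight, since on full sample-and-hold the repeats merge into one long smear; the tracking speed is illustrative):

```python
# 24 fps content on a higher-Hz display gets each film frame
# repeated. On an impulse-type display each repeat flashes at a
# different point along the eye's tracking path, so a tracked edge
# is seen as several displaced copies per film frame.
def duplicates_per_film_frame(film_fps: int, display_hz: int) -> int:
    return display_hz // film_fps

def duplicate_spacing_px(speed_px_s: float, display_hz: int) -> float:
    # Each repeat lands speed / display_hz pixels further along
    # the retina than the previous one.
    return speed_px_s / display_hz

print(duplicates_per_film_frame(24, 144))  # 6 copies per film frame
print(duplicate_spacing_px(960, 144))      # ~6.7 px between copies
```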

Like I said, these should be solvable problems if we can get even brighter pixels than we have now. I know there are some displays around two thousand nits nowadays, so maybe that would be bright enough, but nobody has tried using those displays in ways that improve motion clarity.
 
Last edited:
  • #50
jman5000 said:
Having super high framerates just produces the image duplications
Why would it involve something as basic as image duplication? I think your experience with increasing frame rates reflects processing that is not top-range. If you want to see how it can be done really well, then look at slow-mo produced at the TV picture source, before transmission. It's very processor-intensive, and it builds a picture for each frame using motion vectors for each pixel. The motion portrayal is very smooth with very little blur. I'd imagine that the cost of circuitry to do this in real time in a TV monitor may just be too much.

Rive said:
For me, I see far higher chance of (partial) resurrection for plasma than for CRT.
Haha. And heat your house at the same time. 'Electronic tubes' use as much power for their heaters as your whole solid-state display, I'd bet. But we still use one electron tube in our homes, and there is, as yet, no alternative: the heroic magnetron is a legend. I read a story that the very first one manufactured in a lab worked really well, the result of some brilliant lateral thinking about how electrons behave in magnetic fields.
 