How do CRTs change resolution?

  • #1
Labyrinth
This is probably a very simple question, but the answer is surprisingly hard to find.

I have a CRT monitor with a dot pitch of 0.22mm x 0.14mm x 0.26mm (Hor. x Vert. x Diagonal), and a max resolution of 1600x1200 (this is a 4:3 display).

The viewable area is 18" which translates to roughly these values in millimeters:

Diagonal: 457.2 mm
Horizontal: 345.44 mm
Vertical: 299.72 mm

Dividing the horizontal dimension by the horizontal pitch, I get 1570 pixels. Since the max resolution is 1600, I assume the remaining 30 pixels are missing due to some rounding noise in my calculations. Anyway, this isn't my issue.
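A quick sanity check of that division, using the numbers above:

```python
# The arithmetic from above: phosphor columns implied by the horizontal pitch.
viewable_h_mm = 345.44   # horizontal viewable width
pitch_h_mm = 0.22        # horizontal dot pitch

print(round(viewable_h_mm / pitch_h_mm))  # -> 1570, vs. the 1600-pixel max mode
```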

Assuming there are 1600 pixels, a resolution of 800x600 or 400x300 would simply group two or four horizontal pixels together, respectively. But what about a resolution such as 1024x768?

How does a CRT group its pixels in such a way as to allow these "in between" resolutions?
 
  • #2
The number of pixels doesn't necessarily map onto the actual pitch, because some of the tube width is outside the screen mask - so you lose some pixels around the edge.

For smaller non-integral multiples it simply rounds off to the nearest multiple, so to display 1024 pixels on 1600 it will, approximately, paint alternate source pixels across two screen pixels and then one, giving roughly 1024 * 1.5 = 1536 of the 1600.
How smart the monitor is about this depends on how good it is - a modern monitor could store a scan line, or an entire frame, in memory and resample it to its own resolution.
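A minimal sketch of that rounding, assuming a generic nearest-neighbour scaler (not any specific monitor's actual circuitry):

```python
from collections import Counter

# Map each of 1600 screen columns back to the nearest of 1024 source pixels,
# then count how many screen columns each source pixel ends up spanning.
SRC, DST = 1024, 1600

mapping = [x * SRC // DST for x in range(DST)]   # source pixel for each column
widths = Counter(mapping)                        # columns per source pixel

print(sorted(set(widths.values())))   # -> [1, 2]
print([widths[i] for i in range(8)])  # -> [2, 2, 1, 2, 1, 2, 1, 2]
```

On average each source pixel spans 1600/1024 = 1.5625 screen columns, which is where the rough 1.5 factor comes from.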
 
  • #3
It's not that complicated. The phosphors on a CRT are analog, and behave much like capacitors, depending on their persistence. They respond to the average current flow over time. The amount of time and the number of sweeps it takes each color phosphor to fully transition depend on the persistence and sensitivity of the phosphors. The phosphors end up doing the interpolation instead of some logic inside the monitor.

The electron beam guns in a CRT are basically painting all of the phosphor pixels on the screen at all times. The intensity of the beam for each color changes on resolution boundaries instead of phosphor mask boundaries, but since the phosphors respond to both the duration and the intensity of the beam in about the same manner, you get a nice anti-aliasing effect. For example, if a boundary from full intensity to zero intensity occurred mid-phosphor, the phosphor would end up about half as bright, and somewhat brighter on the side getting the higher intensity, increasing the effective resolution beyond the number of phosphors on the screen.
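To make the mid-phosphor example concrete, here is a toy numerical model; the linear phosphor response and the numbers are illustrative assumptions, not measured CRT behavior:

```python
# Toy model: a phosphor's brightness is taken to be the average beam intensity
# over its own width, so an intensity step landing mid-phosphor gives an
# intermediate brightness.

def phosphor_brightness(left_mm, right_mm, beam, step_mm=0.001):
    """Average beam intensity sampled across one phosphor's horizontal extent."""
    n = round((right_mm - left_mm) / step_mm)
    return sum(beam(left_mm + (i + 0.5) * step_mm) for i in range(n)) / n

# Beam drops from full to zero intensity at x = 1.10 mm (a logical-pixel edge).
beam = lambda x: 1.0 if x < 1.10 else 0.0

# A 0.22 mm wide phosphor straddling that edge, from 1.00 mm to 1.22 mm:
print(round(phosphor_brightness(1.00, 1.22, beam), 2))  # -> 0.45, partially lit
```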

This built-in up-conversion - the partial phosphor response to mid-phosphor changes in beam intensity - makes CRTs much better at handling a wide range of resolutions. Even when the highest resolution exceeds the phosphor density, the image only gets a bit blurry, depending on the amount of "bleeding" within each phosphor.

So it's in the nature of the phosphors to handle the up- or down-conversion from various resolutions, unlike a digital monitor, which has to use an algorithm and can't paint a partial pixel.

The next step up from a single-CRT system is a 3-tube CRT projector, which eliminates the need for any mask but requires more precise convergence calibration. Since each tube has a solid coating of phosphor for its color, there are no "pixels", and the beam diameter, sweep rate, and intensity cycling can be varied to handle various resolutions.

I have a Hitachi CM722 and a ViewSonic G225FB. The G225FB can go up to 2048x1536 resolution, but the phosphor density is only about 2032, so it's a bit blurry there. At 1920x1200, which I sometimes use for watching hi-def video, it looks fine. Otherwise I mostly run it at 1280x960 or sometimes 1600x1200. What I notice most between a good CRT monitor and an LCD monitor is that text looks much better on the CRT.
 
  • #4
Thanks for both of your replies.

I have some questions.

So basically, if I use resolutions that can be made with fewer pixels than are in the mask, I at least have the ability to avoid pixel "bleeding"/blurriness?

I stretched/moved the screen at 1024x768 until one more "line" on the top and bottom, left and right, started to make a waffle-type pattern, then backed it off by one. I figured that this was the edge of the mask and that I could then calculate the pixel geometry precisely, since I could get fairly exact figures for the horizontal/vertical dimensions of the viewable area. However, since filling the entire viewable area gives a non-integral number of phosphors per pixel - and the beam cannot paint precisely 1.5 phosphors - I know that there must be some bleeding occurring.

For example, at 1024x768, what dimensions should I make the screen to avoid any pixel bleeding, and also know precisely how many pixels (or pixel groups) I'm looking at?
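Here's my own back-of-envelope on that question, assuming perfectly sharp beam edges and taking the 0.22 mm pitch at face value:

```python
# At which physical image widths would each of 1024 logical pixels cover a
# whole number of 0.22 mm phosphor columns? Purely geometric - this ignores
# beam spot size and convergence error.
PITCH_MM = 0.22
VIEWABLE_MM = 345.44
H_PIXELS = 1024

k = 1
while H_PIXELS * k * PITCH_MM <= VIEWABLE_MM:
    print(f"{k} column(s) per pixel -> image width {H_PIXELS * k * PITCH_MM:.2f} mm")
    k += 1
# Only k = 1 fits: ~225.28 mm, well short of the 345.44 mm viewable width.
```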
 
  • #5
Labyrinth said:
At what dimensions should I make the screen to avoid any pixel bleeding, and also know precisely how many pixels (or pixel groups) I'm looking at?
It may not matter. Typically the "misconvergence" value is greater than the dot size, so I don't think you'll avoid partial pixel painting no matter what settings you use. At lower resolutions, the fine dot pitch does a sort of anti-aliasing, most noticeable in how the curves of text fonts look smoother than they do on a digital monitor. This could be due to having round pixels instead of rectangular ones. Text doesn't look as good on Trinitron (vertical aperture grille, rectangular pixels) CRTs as it does on shadow-mask (round pixels) CRTs. The point is that CRT monitor phosphors are designed to work well with partial pixel painting.
 

What is a CRT?

A CRT (Cathode Ray Tube) is a type of display technology that was commonly used in older televisions and computer monitors. It works by using an electron gun to shoot a beam of electrons onto a phosphorescent screen, creating the images that we see.

How do CRTs change resolution?

CRTs change resolution by adjusting the number of horizontal and vertical lines that make up the display. This is done by changing the electron beam's scanning frequencies while modulating the beam's intensity, which determines the brightness of each point. By changing these parameters, the display can show more or less detail, resulting in a change in resolution.
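As a rough worked example of that relationship (the 5% vertical-blanking overhead is an assumed ballpark; real video timings such as the VESA standards differ slightly):

```python
# Horizontal scan rate a mode needs: roughly visible lines x blanking overhead
# x refresh rate. BLANKING = 1.05 is an assumption, not an exact timing spec.
BLANKING = 1.05

def h_scan_khz(visible_lines, refresh_hz):
    return visible_lines * BLANKING * refresh_hz / 1000.0

for lines, hz in [(480, 60), (768, 85), (1200, 85)]:
    print(f"{lines} lines @ {hz} Hz -> ~{h_scan_khz(lines, hz):.1f} kHz")
# 768 lines @ 85 Hz needs roughly a 68.5 kHz horizontal scan rate.
```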

Why do CRTs have different resolutions?

CRTs have different resolutions because they were designed to display different types of content. For example, a CRT used for television will have a lower resolution than a CRT used as a computer monitor, as television content does not require as much detail as computer graphics.

What is the maximum resolution of a CRT?

The maximum resolution of a CRT depends on the specific model and manufacturer. In general, consumer CRT monitors topped out around 1600x1200, while high-end models could reach 2048x1536 or beyond.

How do CRTs compare to modern display technologies in terms of resolution?

In terms of resolution, typical CRTs fall short of modern display technologies such as LCD and LED. CRTs are limited by their electron-beam design and cannot display resolutions beyond a certain limit, while modern flat panels reach much higher resolutions, with some screens able to display 8K (7680x4320 pixels).
