How Does Flickering Light Influence Our Brain's Perception of Image Brightness?

AI Thread Summary
The discussion centers on the perception of color vibrancy and brightness differences between CRT and LCD displays, particularly at varying refresh rates. Observations indicate that a CRT appears more vibrant in low ambient light, despite having a lower contrast ratio than an LCD, potentially due to the human eye's sensitivity to light and the effects of persistence of vision. Lowering the refresh rate of the CRT to 48Hz resulted in a perceived increase in brightness, attributed to the longer duration of black frames enhancing contrast perception. Participants also explore the implications of flickering light and neighboring pixel colors on perceived contrast, suggesting that the eye adapts quickly to these changes. The conversation concludes with inquiries about research supporting these observations and the effects of display technology on color perception.
jman5000
TL;DR Summary
Ambient light affects perceived contrast because the eye adapts to the raised light level. A display's own light output also raises the ambient light. Therefore, a dimmer display can yield a perceived contrast that exceeds its static contrast by a greater margin than a brighter display can.
Bear with me, this will be long, but I feel everything here is necessary to understand the context.

I recently used a CRT at varying refresh rates and noticed a few things that have made me question how the brain perceives colors on a display. First, my CRT appears to have more color vibrancy than my LCD in a near-black room, even though the LCD has a higher contrast ratio by a factor of nearly 5x. I hypothesized that maybe this is because the human eye is less sensitive to light when there is a lot of ambient light in the room, and conversely more sensitive when there is less.
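To make the optical side of this concrete, here is a rough Python sketch with made-up luminance numbers (none of these are measurements from my displays) showing how any stray light added on top of the picture, whether from the room or bounced back from the display's own output, raises the black level and compresses the effective contrast ratio:

```python
# Toy model of veiling light: every luminance value (cd/m^2) here is hypothetical.
def effective_contrast_ratio(white, black, ambient):
    """Contrast ratio once a uniform stray luminance is added to both levels."""
    return (white + ambient) / (black + ambient)

lcd = {"white": 250.0, "black": 0.25}   # ~1000:1 native contrast (made up)
crt = {"white": 100.0, "black": 0.50}   # ~200:1 native contrast (made up)

for ambient in (0.0, 0.5, 2.0):          # stray light reaching the screen/eye
    lcd_cr = effective_contrast_ratio(lcd["white"], lcd["black"], ambient)
    crt_cr = effective_contrast_ratio(crt["white"], crt["black"], ambient)
    print(f"ambient {ambient:4.1f}: LCD {lcd_cr:7.1f}:1, CRT {crt_cr:6.1f}:1")
```

This only captures the stray-light side; the eye's own adaptation, which is the part I'm speculating about, sits on top of that.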

Logically, the light from the display itself should force this eye adaptation as well. The CRT is roughly half as bright as the LCD, if not dimmer, so my eye should be more sensitive to the light it outputs. I decided to test my idea by lowering the refresh rate of my CRT to see if it would appear even brighter. (A CRT draws a frame and the frame immediately starts to fade, meaning there is actually blackness between each frame; LCDs don't do this, instead swapping one image for the next with no black between them.) The result was that it looked even brighter at 48Hz than at higher refresh rates!

Yet there was a slightly visible black between the images, and still the image appeared even brighter than at a higher refresh rate. I know the brightness did not actually increase, because I measured the same luminance at either refresh rate. This suggests that the increased amount of black in my persistence of vision, i.e. a noticeably flickered light, can actually produce higher perceived brightness than a steady light (and therefore higher perceived contrast? Contrast is hard to gauge since I cannot compare the refresh rates side by side; a change in brightness is easier to notice while swapping between settings.)
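As an aside on the measurement: a light meter that integrates over many frames reports the time-averaged luminance, not the peak. A crude sketch, treating each refresh as a short rectangular pulse of light with hypothetical numbers (real phosphor decay is more like an exponential tail):

```python
# Crude pulse-train model of a CRT spot: peak level and lit time per refresh
# are hypothetical; real phosphor persistence is an exponential-ish decay.
def meter_reading(peak, lit_ms, refresh_hz):
    period_ms = 1000.0 / refresh_hz
    duty = min(lit_ms / period_ms, 1.0)   # fraction of each period the spot is lit
    return peak * duty, duty              # what a slow, integrating meter sees

for hz in (48, 60):
    avg, duty = meter_reading(peak=500.0, lit_ms=2.0, refresh_hz=hz)
    print(f"{hz} Hz: lit {duty:.0%} of the time, meter reads ~{avg:.0f} of peak 500")
```

With a fixed pulse length this toy model would actually read slightly lower at 48Hz, so if the meter reports the same value at both rates the peak or decay presumably shifts with the scan rate; the point is only that the reading is an average over a waveform that is dark most of the time.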

I know human perception of contrast can exceed a display's objective contrast because of this: https://en.wikipedia.org/wiki/Checker_shadow_illusion

This means a display that could only output the colors of tiles A and B would produce a third color, perceived by the brain, that is impossible for the display to actually show. This establishes that a display can have a perceived contrast greater than its objective contrast, and that the perceived contrast depends on the light from neighboring pixels. I believe this might also be happening with my CRT.

Now here is where I reach a crossroads. My assumption that the ambient light from the display itself affects perceived contrast seems reasonable, but what the checker shadow illusion shows means I could also attribute the increase in perceived contrast to the neighboring pixels being a different color in my persistence of vision.

That is, the neighboring pixel is black in my persistence of vision when the refresh rate is lower. Do I attribute the increase in color vibrancy at 48Hz to the increased duration of black in my persistence of vision, producing a darker-environment eye adaptation, or to the fact that the neighboring pixels are a different, darker color in my persistence of vision?

Now, my saying the neighboring pixels are darker in my persistence of vision might be confusing, because I said the image was brighter at a lower refresh rate. But remember that a CRT actually draws each pixel sequentially within every frame, unlike an LCD, which swaps them all at once. This means the persistent image in my vision is changing much faster and looks more like a gradient. I see the dark parts in my persistence of vision, yet the newest part of the image appears brighter at the lower refresh rate than at the higher one. Since the persistence of vision updates in step with the pixels being drawn, the entire image appears brighter even though I can see the dark parts.

It seems crazy that the human eye could make dark adaptations in such a short time as to be noticeable at 48Hz vs 60Hz, but consider that the display is actually dark more of the time than it is lit; it just appears otherwise due to persistence of vision. The two points above are my reasoning for why it looks more vibrant than my LCD, and while I don't have objective evidence to back it up, this is the most logical conclusion I can come up with.

Let me know if you disagree with any assumptions or reasoning! I am working from a lot of assumptions with little proof beyond my own experience, but I'm not sure how else to explain the increased perceived brightness and color vibrancy the CRT has compared to my LCD.

My question is whether there is any research to support this conclusion. Alternatively, can you suggest some additional reading that explores how flickered light or neighboring light can affect perceived color? I know this technique could never match a display that can natively output high contrast while maintaining a low average light level, like an OLED, but I am curious about what the limits of this could be. It seems logical that the higher your average display brightness, the less effective this flicker would be at increasing perceived contrast, since your eye would be adapted to a brighter image. Not to mention that flickering on a dim CRT is fine, but it would probably hurt the eyes at a higher brightness, say 600 nits.

On a side note, unrelated to the science of this, the 48Hz flickering actually adds some cool effects to parts of video. For example, ceiling lights have a subtle flicker that gives them more presence in a scene, fire gets an actual subtle flicker, a magic portal will have a subtle pulsing, and lightsabers and lasers will also pulse. Really neat, except that it also does this to things that shouldn't flicker, like lab coats or anything that is bright compared to the rest of the scene.
 
Short flashes look brighter than long ones, as you can see by looking through a film camera when you operate the shutter. This effect was known to the television pioneers, when very low frame rates were used, down to 12½ frames per second. The eye also produces spurious colour from flashing sequences.
 
It would seem useful to incorporate LED displays into any research protocol. An LED can respond to MHz inputs, so such a display can provide an essentially arbitrary time response compared with the much slower phosphors of a CRT or the viscous response of an LCD.
At present your analysis is a thin net of observations trying to envelop a pile of supposition.
I feel certain that the visual effect of PWM LED lighting vs DC LED lighting has been extensively researched because of the obvious commercial application. One could start there.
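If one were to set up such a comparison, the key variable to isolate is temporal structure at equal average output. A minimal sketch of the drive waveforms (arbitrary frequency and duty cycle, not taken from any particular study):

```python
# Minimal sketch: a PWM drive and a DC drive with the same time-averaged output.
# Frequencies, duty cycle, and sample rate are arbitrary illustration values.
import numpy as np

fs = 1_000_000                       # sample the waveform at 1 MHz
t = np.arange(0, 0.01, 1 / fs)       # 10 ms of drive signal
pwm_hz, duty = 1000, 0.25            # 1 kHz PWM at 25% duty cycle

pwm = ((t * pwm_hz) % 1.0 < duty).astype(float)   # 0/1 pulse train
dc = np.full_like(t, duty)                        # steady drive with the same mean

print("PWM mean:", pwm.mean(), " DC mean:", dc.mean())
# Same average light output, very different temporal structure -- exactly the
# variable a perceptual comparison of PWM vs DC dimming would need to isolate.
```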
 
With CRTs there is a frame rate and a field rate. For progressive scanning they are the same; for interlaced they are not. In your comparison between 48Hz and 60Hz, did both formats have the same number of fields per frame?
Also, what was your method for measuring luminance? I would use a solid white test pattern that illuminates a white wall, then measure the brightness of that wall. This reduces the geometric effects of a rapidly scanning light spot: the wall is illuminated by the bright spot no matter where the spot is at any instant.
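The averaging itself is trivial; the only care needed is that the integration window spans many whole frames. Something like the sketch below, where read_sensor() is a placeholder for whatever photodiode/ADC readout is actually available, not a real API:

```python
import time

def average_luminance(read_sensor, refresh_hz, frames=200):
    """Average raw sensor readings over a window covering many full refreshes."""
    window = frames / refresh_hz                 # seconds spanning 'frames' refreshes
    total, n, t0 = 0.0, 0, time.perf_counter()
    while time.perf_counter() - t0 < window:
        total += read_sensor()                   # one raw reading, arbitrary units
        n += 1
    return total / n

# e.g. compare average_luminance(read_sensor, 48) against average_luminance(read_sensor, 60)
```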
 
.Scott said:
With CRTs there is a frame rate and a field rate. For progressive scanning they are the same; for interlaced they are not. In your comparison between 48Hz and 60Hz, did both formats have the same number of fields per frame?
Also, what was your method for measuring luminance? I would use a solid white test pattern that illuminates a white wall, then measure the brightness of that wall. This reduces the geometric effects of a rapidly scanning light spot: the wall is illuminated by the bright spot no matter where the spot is at any instant.
The USA adopted 60 fields per second, 30 frames, and UK and Europe adopted 50 fields per second, 25 frames. These were initially locked to the AC supply mains in order to avoid hum bars on the picture, but later would be crystal locked. Although interlaced scanning reduces brightness flicker, it introduces jerkiness with motion, so it is not a fully effective technique.
 
tech99 said:
The USA adopted 60 fields per second, 30 frames, and UK and Europe adopted 50 fields per second, 25 frames. These were initially locked to the AC supply mains in order to avoid hum bars on the picture, but later would be crystal locked. Although interlaced scanning reduces brightness flicker, it introduces jerkiness with motion, so it is not a fully effective technique.
As I am sure you are aware, the NTSC objective was not perfection. It was to jam a color picture plus sound into a 6 MHz band. It was a heroic effort in adapting to human vision using only analogue hardware.

But my comments on interlacing were only to ask more about the OP.
 
.Scott said:
With CRTs there is a frame rate and a field rate. For progressive scanning they are the same; for interlaced they are not. In your comparison between 48Hz and 60Hz, did both formats have the same number of fields per frame?
Also, what was your method for measuring luminance? I would use a solid white test pattern that illuminates a white wall, then measure the brightness of that wall. This reduces the geometric effects of a rapidly scanning light spot: the wall is illuminated by the bright spot no matter where the spot is at any instant.
Oh wow, I wasn't expecting this thread to get resurrected. They were progressive. I did find out that the CRT gets reduced sharpness (and hence reduced contrast?) at higher bandwidth. Not sure if the two are connected, as I am skeptical about whether I can notice such differences at the low bandwidths I was testing.
I'm not too interested in these CRTs anymore, tbh, as I got into them for video games, which I have all but stopped playing. I do like the fundamental tech behind them though; using electron beams to do precise, tiny things is really fascinating to me.

Speaking of color perception, did you know that very recently there was an experiment to see a color no human has seen before? Apparently certain cones are always activated in tandem, since light normally hits the eye over a relatively large area, but they used a really tiny spot of light to activate only a specific part of the eye.
 
jman5000 said:
Speaking of color perception, did you know that very recently there was an experiment to see a color no human has seen before? Apparently certain cones are always activated in tandem, since light normally hits the eye over a relatively large area, but they used a really tiny spot of light to activate only a specific part of the eye.
It is described in this article in the Science Advances journal and in this UC Berkeley News article.
They imaged a small portion of someone's retina, then used those images to identify the location of every S (blue), M (green), and L (red) cone. Then they processed a color photograph to determine exactly which cones they wanted to activate. Finally, they did a raster scan of that small retinal area, modulating the beam so that it would trigger only the cones they had selected.

Although the laser was green, the image seen by the subject was made up of the intended colors.

That new color "olo" is a greenish teal. Normally that color would trigger both M and L cones. But the laser dot is small enough to trigger only a single cone at a time. So by avoiding the red cones (which also react to green - but not as well), they get a color that is very much greener than green.
 
.Scott said:
So by avoiding the red cones (which also react to green - but not as well)...
This reminds me of the photographic work of the Lumière brothers, called autochrome I think. They made a custom shadow mask for each color. Difficult but impressive work.
 