jman5000
sophiecentaur said:
"It's a Sampling Phenomenon. In order to get the brightness as high as possible, the digital screen pixels are on for as long as possible. This introduces a sinc distortion in the spatial frequency response when an object moves. I remember it was given a name, Boxcar Distortion(?). This is avoided with zero-width samples. I found this link which explains what I remember of sampling theory."

Wait, are you saying the phosphor decay time doesn't matter for visible blur, and that the blur on modern displays comes purely from the finite-width samples they take? So a CRT with much longer decay times wouldn't look any blurrier? (I know it would have other problems, like longer phosphor trails.) I thought the blur had something to do with the eye interacting with light on a flat, stationary surface and trying to predict the motion, but you're saying it's purely because the samples have nonzero width?
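Here's how I understand the boxcar/sinc point, in case it helps: holding each frame lit for the whole refresh period, while the eye tracks the motion, smears the frame over a rectangular window on the retina, and a rectangular window's magnitude response is a sinc. A minimal numpy sketch of that idea (the function name and all the numbers are my own assumptions for illustration, not from the thread):

```python
import numpy as np

def boxcar_mtf(spatial_freq_cpp, speed_pps, hold_time_s):
    """Attenuation of eye-tracked detail caused by holding each frame
    for hold_time_s. The hold smears the image over a boxcar of width
    speed * hold_time pixels on the retina; the magnitude response of
    that boxcar is |sinc(f * width)| (np.sinc is the normalized sinc)."""
    blur_width_px = speed_pps * hold_time_s
    return np.abs(np.sinc(spatial_freq_cpp * blur_width_px))

refresh_s = 1 / 60   # 60 Hz refresh (assumed)
speed = 960          # eye-tracked motion, pixels per second (assumed)
freq = 0.1           # spatial detail, cycles per pixel (assumed)

print(boxcar_mtf(freq, speed, refresh_s))        # full sample-and-hold: heavy attenuation
print(boxcar_mtf(freq, speed, 0.1 * refresh_s))  # short CRT-like flash: detail mostly survives
```

Shrinking the hold time toward zero pushes the sinc's first null out past the frequencies you care about, which is exactly the "zero-width samples" limit.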
Additionally, this article written by the author of this website goes into much better detail than I can on persistence blur vs. motion blur in sample-and-hold displays:
https://forums.blurbusters.com/viewtopic.php?t=177
This quote I pulled from it is why I thought the blur was tied to pixel visibility:
"As you track eyes on moving objects, your eyes are in different positions at the beginning of a refresh than at the end of a refresh. Flickerfree displays means that the static frames are blurred across your retinas. So high persistence (long frame visibility time) means there is more opportunity for the frame to be blurred across your vision. That's why you still see motion blur on 1ms and 2ms LCDs, and this is why strobing hugely helps them (make them behave more like CRT persistence)."