The 29.97 frames per second NTSC standard

In summary, the frame rate was changed from 30 fps to 29.97 fps to avoid some kind of problem with the circuitry. The exact technical problem is unknown to the poster, whose guess is that it had something to do with not wanting to start a new screen refresh cycle at exactly the same time the power supply was charging, to prevent glitches getting into the chroma demodulator/amplifier system.
  • #1
bitrex
I have read before that when color television was developed the frame rate was changed from 30 frames per second to 29.97 frames per second to avoid some kind of problem with the circuitry, but I have never seen an explanation of just what this technical problem was. My somewhat educated guess is that maybe it had something to do with not wanting to start a new screen refresh cycle exactly at the same time the power supply was charging, to prevent glitches getting into the chroma demodulator/amplifier system? Or something?
 
  • #2
Prior to color TV, the frame rate was, indeed, 30 fps. When color came along, they needed a small amount of additional information in order to render it, but the bandwidth was already fixed by channel allocations--adding more information to a signal necessarily increases its bandwidth. Their solution was to dial back the frame rate ever so slightly to lower the bandwidth the existing signal required by an amount that would leave room for the chroma signal. That amount turned out to be about .03 fps.
 
  • #3
Here is why: Originally, the vertical scanning rate was 30 hertz (interlaced, so it was 60 fields per second, 30 frames per second) and the horizontal scanning rate was 15750 hertz. Notice that 15750 is divisible by 30. When the black and white video signal modulates a carrier wave, we get sidebands every 15750 hertz out away from the carrier. Most of the information is close to the carrier, but there is still some information we cannot tolerate losing that is several MHz away from the carrier.

The scheme with NTSC color was to put a subcarrier for color several MHz out away from the main carrier. They picked a subcarrier frequency of 3.579545 MHz. Then they moved the horizontal scanning frequency to 15734 hertz and the vertical scanning frequency to 59.94 hertz (frame rate to 29.97 hertz). Why? All these frequencies are phase related. Black and white information still falls in sidebands out away from the carrier as it always did. Color information is found in sidebands on either side of the subcarrier, every 15734 hertz.

Here is the genius of the whole thing: The color sidebands fall in between the black and white sidebands. It's like having a bucket of large rocks and saying it is full, then you pour sand in and let it settle in between the rocks. More stuff in the same bucket.

A comb filter is used to separate the color information from the black and white. A fairly simple scheme was used to do this. Each line of video information is, on average, very similar to the previous one. Also, the phase of the color information changes by 180 degrees each line. Knowing this, they split the video information into 2 paths, delay one by 1/15734 second (one scan line), and add the two. The color information cancels itself out because of the 180 degree phase shift and the black and white signal is left. To get the color signal without the black and white, the same thing is done except one of the signal paths has an amplifier with a gain of -1 in it. Most if not all black and white TV sets didn't know the difference. The scanning frequencies are close enough to the originals that they worked fine.
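To make the line-delay trick concrete, here is a minimal numerical sketch in Python (my own illustration, not period hardware; the subcarrier frequency is the real NTSC value but the ramp and tone are made-up test signals). It builds two adjacent scan lines whose chroma is 180 degrees out of phase and separates luma from chroma by sum and difference:
[code]
import numpy as np

FSC = 3.579545e6        # NTSC color subcarrier, Hz
FS = 4 * FSC            # sample at 4x the subcarrier
SAMPLES_PER_LINE = 910  # 227.5 subcarrier cycles per line x 4 samples

def scan_line(chroma_phase):
    """One synthetic scan line: a slow luminance ramp plus a chroma
    tone. Because fsc = 227.5 x fH, the chroma phase flips 180
    degrees from one line to the next."""
    t = np.arange(SAMPLES_PER_LINE) / FS
    luma = np.linspace(0.2, 0.8, SAMPLES_PER_LINE)
    chroma = 0.2 * np.sin(2 * np.pi * FSC * t + chroma_phase)
    return luma + chroma

line_n = scan_line(0.0)
line_n1 = scan_line(np.pi)   # the next line, chroma inverted

# 1H comb filter: adding adjacent lines cancels the chroma,
# subtracting them cancels the (line-to-line identical) luma
recovered_luma = (line_n + line_n1) / 2
recovered_chroma = (line_n - line_n1) / 2
[/code]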
-
Want to know more? I used to work on NTSC video equipment: signal generators, insertion generators, video measuring equipment (waveform monitors, vectorscopes, etc.), cable TV RF signal level meters, and so on. I know what it's like to be obsoleted. :(
 
  • #4
Thank god we switched to digital.
 
  • #5
waht said:
Thank god we switched to digital.
I used to be able to get 4 channels when the antenna was pointing East. Now I get 2.

Used to get 6 channels when pointing south. Now I get 3 if I'm lucky.

Tell me again why digital is better. :confused:
 
  • #6
turbo-1 said:
Tell me again why digital is better. :confused:

Because digital video at 30p (progressive) does not go through all the hoops of analog video modulation described by Averagesupernova. It's more elegant in design.

You may not be picking up all the stations for many reasons (bad antenna, wiring, a need to rescan the receiver, or simply that your area hasn't upgraded).
 
  • #7
The brother of the previous owner of this place did custom TV antenna installations, and I have a really high-end VHF-UHF antenna on top of a ~35' mast. It's not the antenna, the converter (I have rescanned 'til I'm sick of it), or the coax. Digital signals are "all or nothing" and they are highly directional and sensitive to terrain. When you're in hilly country and the nearest TV transmitter is over 40 miles away, digital sucks. This is all so the big Telecom outfits can get some publicly-owned bandwidth cheap and sell us services using it.
 
  • #8
turbo-1 said:
Tell me again why digital is better. :confused:
The ones you don't get include "So You Think You Can Dance Canada"?
 
  • #9
mgb_phys said:
The ones you don't get include "So You Think You Can Dance Canada"?
I miss the hot political shows, though, including "New Brunswick vs Nova Scotia: Provincial-Government Ultimate Fighting" and the ever-popular "What's up wit' dem hosers, eh?"
 
  • #10
Averagesupernova said:
Here is why: Originally, the vertical scanning rate was 30 hertz (interlaced, so it was 60 fields per second, 30 frames per second) and the horizontal scanning rate was 15750 hertz. Notice that 15750 is divisible by 30. When the black and white video signal modulates a carrier wave, we get sidebands every 15750 hertz out away from the carrier. Most of the information is close to the carrier, but there is still some information we cannot tolerate losing that is several MHz away from the carrier. The scheme with NTSC color was to put a subcarrier for color several MHz out away from the main carrier. They picked a subcarrier frequency of 3.579545 MHz. Then they moved the horizontal scanning frequency to 15734 hertz and the vertical scanning frequency to 59.94 hertz (frame rate to 29.97 hertz).

Let's see if I understand this correctly, first for plain black and white: if the carrier were simply modulated by the 15750 Hz frequency, we'd just get two sidebands + and - 15750 Hz off the carrier frequency, but if I remember correctly analog TV uses some kind of vestigial sideband thing where the upper sideband is partially suppressed. However, the carrier is also being modulated by the luminance information, so is this the reason that there isn't just a single sideband frequency but multiple sidebands stretching away from the carrier spaced every 15750 Hz? Then, if I understand the subcarrier concept correctly, the chroma information is first AM modulated onto a 3.579545 MHz signal, and then that already-modulated signal is used to again modulate the main carrier frequency. I'm having a tough time visualizing what the spectrum would look like in the frequency domain; do you have a link to a pretty diagram that might help by any chance? :biggrin:
 
  • #11
bitrex, you pretty much have it right except for a few things mixed up with the sidebands. Don't have time at the moment to elaborate but I can post more later. Incidentally, with a good spectrum analyzer you can see the sidebands I referred to, where they fall in between each other.
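In lieu of a spectrum analyzer, a short Python sketch can show the same interleaving (my own illustration: identical repeated lines stand in for real video, which is exactly what makes the luma spectrum a clean line spectrum at multiples of fH):
[code]
import numpy as np

FSC = 3.579545e6
FS = 4 * FSC                  # sample rate, 4x the subcarrier
SAMPLES_PER_LINE = 910        # so fH = FS / 910 = 15734.26... Hz
fH = FS / SAMPLES_PER_LINE
N_LINES = 128

rng = np.random.default_rng(0)
one_line = rng.random(SAMPLES_PER_LINE)   # an arbitrary luma waveform
luma = np.tile(one_line, N_LINES)         # identical lines -> spikes at k*fH

t = np.arange(luma.size) / FS
chroma = 0.3 * np.sin(2 * np.pi * 227.5 * fH * t)

spec = np.abs(np.fft.rfft(luma + chroma))
freqs = np.fft.rfftfreq(luma.size, 1.0 / FS)

# the strongest component near the subcarrier sits at 227.5 x fH,
# exactly halfway between the 227th and 228th luma harmonics
band = (freqs > 3.5e6) & (freqs < 3.65e6)
print(freqs[band][np.argmax(spec[band])] / fH)   # -> 227.5
[/code]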
 
  • #12
turbo-1 said:
I used to be able to get 4 channels when the antenna was pointing East. Now I get 2.

Used to get 6 channels when pointing south. Now I get 3 if I'm lucky.

Tell me again why digital is better. :confused:

Hi turbo. Your comments are interesting because while I'm enjoying so many advantages of digital TV, I have to agree that the "all or nothing" aspect of it can be a problem in some areas. In my area I know of quite a few people who needed to upgrade their antenna to get decent digital reception. In most cases the initial reception seemed to be good (on initial installation of the set-top box or new TV) but over a longer time period drop-outs would often be noticed, particularly in adverse weather conditions.

The main problem is that when you get a drop-out with digital TV you get a complete freeze of the picture for several seconds and sometimes (depending on the equipment) a large and very annoying audio glitch. Personally I get pretty good DTV reception with only a fairly basic antenna, so I'm lucky, but just lately I've started to notice a few drop-outs in windy conditions. At the moment these are few and far between enough to not be a real problem, though I might need to upgrade my antenna soon.

Lately I've been wondering if they shouldn't have "wasted" a bit more bandwidth on error correction to make DTV a little more robust in marginal reception areas. Say, for example, that you put all the bandwidth of a high definition channel into a standard definition channel with a massive amount of error correction redundancy; I think the result would be unbelievably robust. (OK, maybe that example is going a bit too far in "waste", but you get the point.) Does anyone else have an opinion on this - I mean, do you think perhaps the standard for DTV broadcast could have wasted a bit more bandwidth on making it more robust?
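As a toy illustration of that trade-off (my own Python sketch, assuming a simple binary symmetric channel and a crude repetition code with majority voting, far simpler than the convolutional and Reed-Solomon coding real DTV standards actually use), the residual error rate falls off very quickly as redundancy is added:
[code]
from math import comb

def majority_vote_ber(p, n):
    """Residual bit error rate after sending each bit n times and
    majority-decoding, given a raw channel bit error rate p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

raw_ber = 1e-2             # a marginal, drop-out prone channel
for n in (1, 3, 5, 9):     # n = 1 means no redundancy at all
    print(n, majority_vote_ber(raw_ber, n))
[/code]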
 
  • #13
Averagesupernova said:
Here is why: Originally, the vertical scanning rate was 30 hertz (interlaced, so it was 60 fields per second, 30 frames per second) and the horizontal scanning rate was 15750 hertz. Notice that 15750 is divisible by 30.

that quotient is 525, which means, including the flyback, there are 525 horizontal lines per frame, and there used to be 30 frames per second ("now", before DTV, it was 29.97).

When the black and white video signal modulates a carrier wave, we get sidebands every 15750 hertz out away from the carrier. Most of the information is close to the carrier, but there is still some information we cannot tolerate losing that is several MHz away from the carrier. The scheme with NTSC color was to put a subcarrier for color several MHz out away from the main carrier. They picked a subcarrier frequency of 3.579545 MHz. Then they moved the horizontal scanning frequency to 15734 hertz and the vertical scanning frequency to 59.94 hertz (frame rate to 29.97 hertz). Why?

Averagesupernova, there is a reason for why they bumped the fH down from 15750 to 15734, and i don't think you got completely to it. it has to do with the sound carrier at 4.5 MHz from the video base band.

originally fH = 15750, and since adjacent horizontal lines are often largely identical, the raster scan signal is sort of a periodic function with fundamental frequency at fH; you will see in the video spectrum spikes (a concentration of energy) at integer multiples of fH. now they wanted to bump up and locate this color information exactly halfway between two of these harmonics (the 227th and 228th), largely for the reasons you described: for B&W TVs the interference of the color subcarrier for one line will be exactly out-of-phase with the interference for the adjacent line and will tend to visually cancel.

the question is how does the intensity (B&W) signal interfere with the chroma signal, and it's exactly the same. from the POV of the color base band (sitting at 227.5 x fH), all of those spikes of the intensity signal are halfway between the harmonics of the chroma signal so they tend to cancel visually on every adjacent line.

but the sound carrier (to the right of where they put the chroma) didn't quite get to be exactly halfway between chroma harmonics, and that caused visible distortion to the chroma. to get the sound carrier exactly halfway between chroma harmonics, they needed to make it precisely an integer times fH. so let's see:

[tex] \frac{4.5 \mathrm{MHz}}{f_H} = 285.714 [/tex]

so they want to make it exactly an integer. which integer do you think they'll pick? the closest one, which is 286. they want the sound carrier to be at exactly 286 x fH.

there are two ways of doing that: they could bump up the sound carrier from 4.5 MHz to 286 x 15750 Hz = 4.5045 MHz, or they could bump fH down to (4.5 MHz)/286 = 15734.2657 Hz, which is what they chose to do. i think they decided it would be harder for existing TVs to track or latch onto a slightly detuned sound carrier than onto a slightly lowered fH. from that decision to bump fH down, the frame rate got bumped down to 29.97 Hz, which continued to be fH/525,... because of that decision in the 1950s, video engineers now have to deal with that nasty drop-frame and audio engineers had to deal with an occasional 44.056 kHz sampling rate instead of 44.1 kHz. what a pain in the arse.
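A quick Python check of those numbers (just the arithmetic from the post above, nothing more):
[code]
F_SOUND = 4.5e6              # sound carrier offset from picture carrier, Hz
fH = F_SOUND / 286           # 15734.2657... Hz, down from 15750
frame_rate = fH / 525        # 29.9700... Hz, the number in the thread title
field_rate = 2 * frame_rate  # 59.9400... Hz
fsc = 227.5 * fH             # 3579545.4545... Hz, the color subcarrier
print(fH, frame_rate, field_rate, fsc)
[/code]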
 
Last edited:
  • #14
Why is it that the sound carrier is interfering with the chroma signal? In this diagram: http://people.seas.harvard.edu/~jones/cscie129/nu_lectures/lecture8/analog_tv/spec_ntsc_col.gif it looks like the audio sidebands fall entirely outside the luminance and chrominance bands. And if that diagram isn't accurate and the audio sidebands do fall inside the main video signal, won't making the audio carrier an integer multiple of the horizontal frequency cause interference in the luminance signal again? And it looks like from the diagram that the chrominance Q signal is being transmitted with both sidebands, but the I signal is being transmitted only with a vestigial sideband! Gaaah <head explodes>
 
Last edited by a moderator:
  • #15
http://upload.wikimedia.org/wikipedia/commons/f/fd/Ntsc_channel.svg
The audio lies outside the video frequencies.
http://en.wikipedia.org/wiki/Composite_video
 
Last edited by a moderator:
  • #16
rbj said:
but the sound carrier (to the right of where they put the chroma) didn't quite get to be exactly halfway between chroma harmonics and that caused visible distortion to the chroma.

I don't think that the audio sidebands are guaranteed to fall anywhere. The audio is FM, and typically you have sidebands falling at harmonics of the modulating signal all the way out to the edge of the bandwidth of the signal. Max deviation is 25 kHz, so suppose you are modulating with a 500 hertz signal: you would have sidebands all the way from the sound carrier out to 25 kHz away from it, at 500 hertz intervals. Any interference with chroma is a shortcoming of the TV. Intermodulation is unwanted heterodyning of the sound and chroma signals within a stage in the TV set.
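For anyone who wants to see where those FM sidebands land, the amplitude of the nth sideband pair is the Bessel function J_n of the modulation index. A short Python sketch (my own illustration, assuming SciPy is available, using the 25 kHz deviation and 500 hertz tone from the example above):
[code]
from scipy.special import jv   # Bessel function of the first kind

f_dev = 25e3    # peak deviation of the NTSC sound carrier, Hz
f_mod = 500.0   # the 500 hertz modulating tone from the example
beta = f_dev / f_mod           # modulation index = 50

# sideband pairs sit every f_mod away from the carrier with
# amplitude J_n(beta); Carson's rule puts the significant ones
# out to roughly (beta + 1) * f_mod = 25.5 kHz
for n in (0, 10, 50, 60):
    print(n, abs(jv(n, beta)))
[/code]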
 
  • #17
bitrex said:
Let's see if I understand this correctly, first for plain black and white: if the carrier were simply modulated by the 15750 Hz frequency, we'd just get two sidebands + and - 15750 Hz off the carrier frequency, but if I remember correctly analog TV uses some kind of vestigial sideband thing where the upper sideband is partially suppressed.
No, not quite. The only way you will ever get only one upper and one lower sideband is if you modulate with a pure sine wave. No matter what kind of luminance info you modulate with, you will always have sidebands at multiples of 15750 because it is non-sinusoidal. The vestigial sideband is done to conserve bandwidth. Only one sideband is needed to convey all the information. I'm told the reason the lower sideband (not upper) is partially suppressed instead of simply eliminated is that the technology didn't exist to completely remove the lower sideband.

bitrex said:
However, the carrier is also being modulated by the luminance information, so is this the reason that there isn't just a single sideband frequency but multiple sidebands stretching away from the carrier spaced every 15750 Hz?

See above.

bitrex said:
Then, if I understand the subcarrier concept correctly, the chroma information is first AM modulated onto a 3.579545 MHz signal, and then that already-modulated signal is used to again modulate the main carrier frequency. I'm having a tough time visualizing what the spectrum would look like in the frequency domain; do you have a link to a pretty diagram that might help by any chance? :biggrin:

The chroma info is AM modulated onto the 3.579545 MHz subcarrier using a balanced modulator. This way, the carrier is automatically suppressed. It simply isn't needed. The colorburst signal you may have heard of is in-phase with the color subcarrier. It is used to recover the chroma signal in the TV set. It phase locks an oscillator in the TV set to the original phase of the chroma subcarrier at the TV station.
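A minimal numeric sketch of what the balanced (suppressed-carrier) modulator and a burst-locked synchronous detector do, in Python (an illustration only: I and Q are held constant and the signal mean stands in for a proper low-pass filter):
[code]
import numpy as np

FSC = 3.579545e6   # color subcarrier, Hz
FS = 4 * FSC       # sample rate
t = np.arange(4096) / FS

I, Q = 0.3, -0.1   # example color-difference values, held constant

# balanced modulation: when I = Q = 0 the output is zero, so the
# 3.579545 MHz subcarrier itself never appears -- it is suppressed
chroma = I * np.cos(2 * np.pi * FSC * t) + Q * np.sin(2 * np.pi * FSC * t)

# synchronous detection with an oscillator phase-locked to burst;
# averaging removes the double-frequency terms for this constant case
i_rec = np.mean(2 * chroma * np.cos(2 * np.pi * FSC * t))
q_rec = np.mean(2 * chroma * np.sin(2 * np.pi * FSC * t))
print(i_rec, q_rec)   # approximately 0.3 and -0.1
[/code]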
 
  • #18
Upon seeing the link in post #14, some of the details started to come back to me about NTSC video and I decided it would be a good idea to do a little more detailed description of what happens in color transmission. I have to admit I needed to do a bit of digging to come up with some of the numbers below. Thanks to the book Basic Television and Video Systems by Bernard Grob.

The 3 color signals out of the camera are Red, Green, and Blue, the 3 primary colors. Technically there is a camera tube for each optically filtered color. We get 3 signals that feed into a matrix to form the Y (luminance) signal, the I signal, and the Q signal.
-
The Y signal is simply all three color signals added up in the following proportions: Y = .3R + .59G + .11B. If only the Y signal formed by this matrix were transmitted, all TV sets would receive a black and white signal and look the same as any other black and white TV set.
-
The I signal is the three color signals added up in the following proportions: I = .6R - .28G - .32B.
-
The Q signal is the three color signals added up in the following proportions: Q = .21R - .52G + .31B.
-
The I and Q signals and the Y signal collectively contain all the video information needed to transmit a color TV signal.
-
Now the Y signal modulates the carrier the same way it would have in the days before color. The I and Q signals modulate the 3.579545 subcarrier but they do it 90 degrees out of phase. This is why the letter Q was picked. It means quadrature. Each signal (I and Q) modulates a separate modulator and then those outputs are combined.
-
The I signal modulates the chroma subcarrier with more bandwidth than the Q signal does. There is a reason for this: the color information it carries requires more detail. The diagram in the link in post #14 shows this to be the case. However, as bitrex pointed out, it IS vestigial (this is one of those details I mentioned previously in this post). This is to keep it out of the sound channel. RBJ, you may have had some confusion over this.
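Those three equations are just a 3x3 matrix applied to each pixel; a small Python sketch using the coefficients quoted above (the example pixel value is arbitrary, my own illustration):
[code]
import numpy as np

# YIQ encoding matrix built from the proportions above
M = np.array([
    [0.30,  0.59,  0.11],   # Y (luminance)
    [0.60, -0.28, -0.32],   # I (in-phase chroma)
    [0.21, -0.52,  0.31],   # Q (quadrature chroma)
])

rgb = np.array([1.0, 0.5, 0.25])   # an arbitrary example pixel
y, i, q = M @ rgb
print(y, i, q)   # pure white (1, 1, 1) gives Y = 1 and I = Q = 0
[/code]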
 
  • #19
Say what you want about NTSC (Never Twice the Same Color) but for what they had to work with, and the requirement to stay compatible with black and white, I think that adding a color signal to the already existing black and white signal was quite an accomplishment.
 
  • #20
Averagesupernova said:
Now the Y signal modulates the carrier the same way it would have in the days before color. The I and Q signals modulate the 3.579545 subcarrier but they do it 90 degrees out of phase. This is why the letter Q was picked. It means quadrature. Each signal (I and Q) modulates a separate modulator and then those outputs are combined.

you're doing pretty good with this. now ask yourself how they picked 3.579545 MHz? ponder that a little, Av.

then punch into your calculator: 227.5 x 4500000 / 286, and see what you get.

::edit:: and take that number and divide by 227.5 (to get fH) and again by 525 and you'll get the number in the subject line of this thread. ::/edit::
Averagesupernova said:
However, as bitrex pointed out, it IS vestigial (this is one of those details I mentioned previously in this post). This is to keep it out of the sound channel. RBJ, you may have had some confusion over this.

it's in the textbooks. at least in my day (and, for an historical question, old textbooks are better, all other considerations equal). see if your library has a copy of A Bruce Carlson. those drawings on wikipedia are just drawings. that one does not depict it as the line spectrum it nearly is, which is the whole motivation behind interlacing the chroma signal harmonics (or sidebands) up there centered halfway between the 227th and 228th harmonics of fH. anyway, in a less-than-perfect reality, the spectrum does not end there. at least we know the chroma channel receiver does not have infinite-dB rejection at 4.5 MHz. especially back them olden days. selectivity cost money. it was more inductors and capacitors not just simply more taps on the FIR filter.

when the audio is silence or close to it, the energy of the FM audio signal is concentrated at the carrier frequency. of course, we would not expect the audio sidebands to interlace with either the intensity or chroma sidebands. now ask yourself, what would be the visual effect of whatever sound carrier that leaks through? ask in three cases:

1. if the sound carrier is exactly an integer times fH?
2. if the sound carrier is exactly an integer+1/2 times fH?
3. if the sound carrier is somewhere in between?

what do you see in each case and which, in your judgment, is better?

Av, I'm not confused about this a bit.

r b-j
 
Last edited:
  • #21
I never said the sound carrier isn't phase locked with the rest of the video signals.

Edit: I still don't see what you are getting at. It simply looks to me like you are quizzing me on something but don't want to admit it. I've looked at your last post several times today and don't 'get it'. I'll post some more later tonight.
 
Last edited:
  • #22
First, let's lay this next quote to rest.

rbj said:
you're doing pretty good with this. now ask yourself how they picked 3.579545 MHz? ponder that a little, Av.

then punch into your calculator: 227.5 x 4500000 / 286, and see what you get.

::edit:: and take that number and divide by 227.5 (to get fH) and again by 525 and you'll get the number in the subject line of this thread. ::/edit::

Dividing the horizontal scanning frequency, no matter what it is, by 525 will get the vertical scanning frequency. The original 15750/525 results in 30 hertz. That was the whole premise of the thread, and I pointed out in my first post the relationship between the horizontal and vertical scanning frequencies.
-
Although I didn't mention it, after color came along the sound carrier was indeed an integer harmonic of the horizontal scanning frequency. Yes, the 286th harmonic of Fh is the sound carrier at 4.5 MHz. 227.5 has significance why? Well, the color subcarrier is the 227.5th harmonic of Fh, and obviously the .5 part is to get the sidebands to fall in between each other. Nothing new here. As to why they picked 3.579545 MHz in general, as in that general part of the spectrum? It has to fall far enough away from the luminance sidebands (the majority of luminance information is in the lower frequencies) and far enough away from the sound carrier to prevent the beat frequency, due to intermodulation within a TV set's stages, from being too low to easily filter out. The beat frequency is about 920 kHz and is the difference between the sound carrier and the chroma subcarrier.
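For the record, the beat arithmetic in Python:
[code]
f_sound = 4.5e6           # sound carrier, Hz
fH = f_sound / 286        # post-color horizontal scanning frequency
f_chroma = 227.5 * fH     # 3.5795454... MHz color subcarrier
print(f_sound - f_chroma) # 920454.54... Hz -- the ~920 kHz beat
[/code]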

rbj said:
it's in the textbooks. at least in my day (and, for an historical question, old textbooks are better, all other considerations equal). see if your library has a copy of A Bruce Carlson. those drawings on wikipedia are just drawings. that one does not depict it as the line spectrum it nearly is, which is the whole motivation behind interlacing the chroma signal harmonics (or sidebands) up there centered halfway between the 227th and 228th harmonics of fH.

Yes, the spectrum does not look as it does in the link in post #14. That diagram is a simplification. I believe I mentioned that in my first post also. The information is concentrated in the sidebands spaced at Fh.

rbj said:
anyway, in a less-than-perfect reality, the spectrum does not end there. at least we know the chroma channel receiver does not have infinite-dB rejection at 4.5 MHz. especially back them olden days. selectivity cost money. it was more inductors and capacitors not just simply more taps on the FIR filter.

when the audio is silence or close to it, the energy of the FM audio signal is concentrated at the carrier frequency. of course, we would not expect the audio sidebands to interlace with either the intensity or chroma sidebands. now ask yourself, what would be the visual effect of whatever sound carrier that leaks through? ask in three cases:

1. if the sound carrier is exactly an integer times fH?
2. if the sound carrier is exactly an integer+1/2 times fH?
3. if the sound carrier is somewhere in between?

what do you see in either case and which, in your judgment is better?

Av, I'm not confused about this a bit.

r b-j

I just realized I misread your post #13 and assumed you were talking about AUDIO sidebands. No wonder I didn't see the significance of your last post.

Number 1: If the sound carrier were an integer harmonic of Fh (which it actually is) it would fall directly on a luminance sideband. There isn't a lot of information in the high end of the luminance spectrum so I would think a little loss in the high end of it due to filtering out the sound carrier (imperfect filter) wouldn't be a big deal. After all, they were already filtering it out with black and white sets.
-
Number 2: If the sound carrier were an integer + 1/2 times Fh (which it isn't) the sound carrier would fall directly on a chroma sideband. Is this what you are getting at? Although I've never looked into it, the comb filter may help filter this out, since its phase will change by 180 degrees the same way the luma does relative to chroma when the sound carrier falls in between the chroma sidebands (case #1).
-
I don't know if this was the motivation to put the color subcarrier where it is or not. I can tell you that TV sets back in the day had traps in the IF for the sound carrier, the adjacent channel sound carrier, and the adjacent channel picture carrier. They knew the issues of keeping unwanted signals out of the IF before color came along. My guess is that they knew it was easier to comb filter what was left of the sound carrier out of the color signal and use LC filtering to get the sound carrier out of the luminance signal the same way they had been doing with black and white. I think this is the point you were trying to get across and I'm sorry I misread it. Incidentally, the reason they couldn't bump the main picture carrier and sound carrier was compatibility issues. They needed to maintain a 4.5 MHz spacing as you pointed out.
-
Edit: I've found the Bernard Grob book (I have several older ones by that author too) to be sufficient. Curious about the Bruce Carlson book now. Not really curious enough to look for it though. LOL I assume you mean there are some good spectral diagrams and such with decent detail?
 
Last edited:
  • #23
Averagesupernova said:
Dividing the horizontal scanning frequency no matter what it is by 525 will get the vertical scanning frequency. The original 15750/525 results in 30 hertz.

now how do you get to 29.97 Hz?

Although I didn't mention it, after color came along the sound carrier was indeed an integer harmonic of the horizontal scanning frequency. Yes, the 286th harmonic of Fh is the sound carrier at 4.5 MHz.

after the introduction of color. there was no integer relationship before.

227.5 has significance why? Well the color subcarrier is the 227.5th harmonic of Fh and obviously the .5 part is to get the sidebands to fall in between each other. Nothing new here.

they could have plopped the chroma baseband between the 228th and 229th fH (putting it a little closer to the sound carrier), but they had to make a decision between which 2 harmonics they were putting the chroma baseband, and halfway between 227 and 228 is where they decided.

As to why they picked 3.579545 MHz in general, as in that general part of the spectrum? It has to fall far enough away from the luminance sidebands (the majority of luminance information is in the lower frequencies) and far enough away from the sound carrier to prevent the beat frequency, due to intermodulation within a TV set's stages, from being too low to easily filter out. The beat frequency is about 920 kHz and is the difference between the sound carrier and the chroma subcarrier.

very good. but whether it's off by 920 or 917 kHz doesn't make that much difference except for the interlacing of lines.

Yes, the spectrum does not look as it does in the link in post #14. That diagram is a simplification. I believe I mentioned that in my first post also. The information is concentrated in the sidebands spaced at Fh.

it needs to show the interlacing of luminance and chroma sidebands.

Number 1: If the sound carrier were an integer harmonic of Fh (which it actually is)

only because they chose to make it that way (by fudging fH down from 15750 Hz to 4500000/286 Hz).

... it would fall directly on a luminance sideband. There isn't a lot of information in the high end of the luminance spectrum so I would think a little loss in the high end of it due to filtering out the sound carrier (imperfect filter) wouldn't be a big deal.

correct. but the chroma signal is closer to the sound carrier. a lot closer.

After all, they were already filtering it out with black and white sets.

otherwise we would see 286 columns of lighter and darker image on the B&W TV, with no cancellation of alternate lines.

Number 2: If the sound carrier were an integer + 1/2 times Fh (which it isn't) the sound carrier would fall directly on a chroma sideband. Is this what you are getting at?

yes, and it's closer to the chroma signal than it is to the luminance. think from the POV of the chroma signal (so the 227.5 x fH is at "zero", the luminance carrier is at -227.5 x fH and the sound carrier is up at +58.5 x fH). that's good. both are offset by 1/2 fH and whatever effect of either that leaks through will have cancellation in alternate horizontal lines. you won't see twisted bars of messed up color intensity or hue.

Although I've never looked into it, the comb filter may help filter this out since its phase will change by 180 degrees the same way the luma does relative to chroma when the sound carrier falls in between the chroma sidebands (case #1).

now you get it (i think). BTW, comb filters are things we build with a delay line of some sort. not a problem now (with digital memory), but they didn't really have them (for cheap) back in 1950 or 1960. the issue was what would be least objectionable in the picture if any of this *did* leak through any filtering. it is true that interlacing the chroma sidebands exactly halfway between the luma sidebands is optimal for comb filtering them out. but i don't think comb filters were all that common back then. the issue is what would those frequency components that are offset by 1/2 fH look like in your picture?

I don't know if this was the motivation to put the color subcarrier where it is or not. I can tell you that TV sets back in the day had traps in the IF for the sound carrier, the adjacent channel sound carrier, and the adjacent channel picture carrier. They knew the issues of keeping unwanted signals out of the IF before color came along. My guess is that they knew it was easier to comb filter what was left of the sound carrier out of the color signal and use LC filtering to get the sound carrier out of the luminance signal the same way they had been doing with black and white.

i don't know what you think they were using for comb filters back then. what was their delay element? a piece of transmission line that is 1/15734 second long? i don't think so.

comb filters are a relatively modern signal processing tool. it normally requires digitization and computer memory for the needed delay element. (maybe SAW - surface acoustic wave.)

I think this is the point you were trying to get across and I'm sorry I misread it. Incidentally, the reason they couldn't bump the main picture carrier and sound carrier was compatibility issues. They needed to maintain a 4.5 MHz spacing as you pointed out.

but it rubs two ways. with fH reduced to 15734 Hz, the existing TVs had to lock on to a detuned fH . evidently that was less of a problem than the FM sound receiver locking on to 4.5045 MHz. or maybe it was the fixed trap at 4.5 MHz.

I've found the Bernard Grob book (I have several older ones by that author too) to be sufficient. Curious about the Bruce Carlson book now. Not really curious enough to look for it though. LOL I assume you mean there are some good spectral diagrams and such with decent detail?

i just wanted something that shows the line spectra at multiples of fH and the interlacing of chroma into the luminance way up there at 227.5 x fH .
 
Last edited:

1. What is the significance of 29.97 frames per second in the NTSC standard?

The 29.97 frames per second (fps) in the NTSC standard refers to the number of frames displayed in one second of video footage. This standard was established by the National Television System Committee (NTSC) in the United States in the 1950s and was used for analog television broadcasting for over half a century.

2. Why is the NTSC standard frame rate not a whole number like other standards?

The original 30 fps rate (60 interlaced fields per second) was chosen to match the 60 hertz frequency of the alternating current used in the United States. The reduction to 29.97 fps came with the introduction of color: the horizontal scanning frequency was lowered slightly so that the 4.5 MHz sound carrier would be exactly its 286th harmonic, which prevents a visible beat between the sound carrier and the new color subcarrier.

3. How does the 29.97 fps frame rate affect video quality?

The 0.03 fps difference from 30 fps is not noticeable to the average viewer. Its cost is mostly in bookkeeping: it is the reason for drop-frame timecode in video editing and for oddball audio sampling rates such as 44.056 kHz.

4. Is the NTSC standard still used today?

The NTSC standard is no longer used for over-the-air broadcasting in the United States, which completed the switch to the digital ATSC standard in 2009. Its analog counterparts elsewhere were the PAL standard in much of Europe and the SECAM standard in France; those, too, have largely been replaced by digital standards.

5. Can the frame rate of 29.97 fps be converted to other standards?

Yes, the frame rate of 29.97 fps can be converted to other standards through a process called standards conversion. This involves adjusting the frame rate and other parameters to match the requirements of the desired standard. However, this may result in a slight loss of quality in the final video.
