
Component Videotape Machines

One of the most severe problems with our TV standard, NTSC, is the inability to cleanly separate the luminance and chrominance signals once they have been combined. Attempting this separation always produces easily visible artifacts that permanently degrade the picture quality. If a video signal can be left in its component state (luminance and two color-difference signals) and recorded that way, it can tolerate a lot more processing without quality loss than a composite signal can. In the early eighties, engineers at RCA and Panasonic built a videotape machine that recorded the NTSC signal in component form. Although called RECAM at the time, the format has simply come to be known as the 'M' format. The video signal system of the machine actually consisted of two separate signal systems: one for luminance (Y) and another for chroma. These signals were recorded on the tape on separate tracks using separate heads. Although the machine also had normal composite inputs and outputs, the real beauty of the machine was its ability to input and output component video.

 

The most unusual aspect of the signal system was how the chroma signal was handled. Instead of recording the double-sideband, suppressed-carrier, quadrature-AM analog chroma signal, the recorder recorded the two signals that are used to make the chroma signal. In NTSC, these are called the I and Q color-difference signals. (Remember, it takes two signals to quadrature-modulate something.) To reduce the occupied bandwidth of the resulting chrominance signal, the bandwidths of I and Q are not equal: I has a bandwidth of about 1.3 MHz and Q has a bandwidth of about 0.5 MHz. In order to record the two signals on one track, some unique signal processing was required.
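To make the parenthetical remark concrete, here is a minimal NumPy sketch (not from the original article) of how two baseband color-difference signals share a single subcarrier through quadrature modulation. The 3.58 MHz subcarrier and the 33-degree axis offset are standard NTSC values; the sample rate and the test signals are purely illustrative assumptions.

```python
import numpy as np

FSC = 3.579545e6          # NTSC color subcarrier (Hz)
FS  = 4 * FSC             # assumed simulation sample rate

def quadrature_modulate(i_sig, q_sig, fs=FS, fsc=FSC):
    """Combine two baseband color-difference signals onto one subcarrier.

    chroma(t) = I(t)*cos(2*pi*fsc*t + 33 deg) + Q(t)*sin(2*pi*fsc*t + 33 deg)
    """
    t = np.arange(len(i_sig)) / fs
    phase = 2 * np.pi * fsc * t + np.deg2rad(33.0)
    return i_sig * np.cos(phase) + q_sig * np.sin(phase)

# Hypothetical test signals: slow ramps standing in for band-limited I and Q.
n = 1024
i_sig = np.linspace(-0.3, 0.3, n)   # I: up to ~1.3 MHz in real NTSC
q_sig = np.linspace(0.2, -0.2, n)   # Q: up to ~0.5 MHz in real NTSC
chroma = quadrature_modulate(i_sig, q_sig)
print(chroma[:4])
```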

 

In the M format, the I and Q color signals were modulated onto two separate FM carriers. The wider-band I signal was modulated onto a 5 MHz carrier with a peak-to-peak deviation of 1 MHz. The narrower Q signal was modulated onto a 1 MHz carrier with a peak-to-peak deviation of 0.5 MHz. This unequal-bandwidth arrangement also improved the chroma signal-to-noise ratio.
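As a rough illustration of that frequency multiplex, the sketch below FM-modulates two stand-in signals onto the carriers named above and simply sums them onto one "track". Only the carrier frequencies and peak-to-peak deviations come from the text; the sample rate and test waveforms are assumptions.

```python
import numpy as np

FS = 20e6  # assumed simulation sample rate (Hz), well above both carriers

def fm_modulate(msg, carrier_hz, pp_deviation_hz, fs=FS):
    """FM-modulate a message (scaled to +/-1) onto a carrier.

    The instantaneous frequency swings by +/- half the peak-to-peak deviation.
    """
    dev = pp_deviation_hz / 2.0
    # Integrate the instantaneous frequency to get the FM phase.
    phase = 2 * np.pi * np.cumsum(carrier_hz + dev * msg) / fs
    return np.cos(phase)

n = 2000
i_sig = np.sin(2 * np.pi * 0.5e6 * np.arange(n) / FS)   # stand-in I component
q_sig = np.sin(2 * np.pi * 0.2e6 * np.arange(n) / FS)   # stand-in Q component

# Figures from the text: I on a 5 MHz carrier (1 MHz p-p deviation),
# Q on a 1 MHz carrier (0.5 MHz p-p deviation), summed onto one track.
track = fm_modulate(i_sig, 5e6, 1e6) + fm_modulate(q_sig, 1e6, 0.5e6)
print(track[:4])
```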

 

On playback, the I and Q signals are demodulated separately. They are either output directly (along with Y) as component video signals, or encoded into a composite video signal.

The tape

The last interesting thing about this system was the tape. It was high-quality 1/2-inch tape, but it was housed in an ordinary VHS cassette shell. In fact, ordinary VHS cassettes could be used as well, apparently with little or no picture degradation, although the playing time was very short. (VHS and M were, of course, incompatible: a VHS machine could not play an M recording, or vice versa.) The use of the VHS-style M-load threading system is what caused the format to come to be known as the M format.

 

Not to be outdone, Sony introduced a similar system, based on a Betamax videocassette shell, and called it Betacam. (It, too, could use consumer Betamax cassettes for recording, and vice versa. Just like M, neither format could make a recording the other could play back; only the tapes themselves were interchangeable.)

 

There was one really major difference between M and Betacam: the way chroma encoding is done. If you recall, the M format used unequal-bandwidth I and Q recording for the color components. Although this helped improve the color signal-to-noise ratio, the color could never be any better than that of the best composite systems. Sony decided instead to record the color-difference signals on the R-Y and B-Y axes, and chose equal bandwidths of 1.5 MHz for both. This had the advantage of preserving far more color information than M, and therefore gave better pictures. The other advantage is that R-Y is up-and-down on the vectorscope (the vectorscope displays the color information on a round oscilloscope screen, showing color signals as magnitude and phase) and B-Y is left-to-right. The I and Q signals appear at crazy angles on the vectorscope screen.
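The "crazy angles" remark follows from the fact that I and Q are essentially the same pair of color-difference axes rotated by 33 degrees. The short sketch below shows that rotation; the scaling constants (0.877 and 0.493) are the conventional NTSC weighting factors and should be treated as approximate here.

```python
import numpy as np

def ry_by_to_iq(r_minus_y, b_minus_y):
    """Rotate (scaled) R-Y / B-Y color-difference values onto the I/Q axes."""
    v = 0.877 * r_minus_y          # scaled R-Y (vertical on a vectorscope)
    u = 0.493 * b_minus_y          # scaled B-Y (horizontal on a vectorscope)
    a = np.deg2rad(33.0)           # I/Q axes sit 33 degrees away
    i = v * np.cos(a) - u * np.sin(a)
    q = v * np.sin(a) + u * np.cos(a)
    return i, q

# A color lying purely on the R-Y axis ends up with both I and Q non-zero,
# i.e. at an angle on the vectorscope rather than straight up.
print(ry_by_to_iq(1.0, 0.0))   # -> roughly (0.74, 0.48)
```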

 

The I and Q signals used in the M format could be made to fit nicely together on one recorded track with simple frequency multiplexing. This is not the case with the equally wideband R-Y and B-Y signals. To solve this problem, Sony did something really unique. The R-Y and B-Y signals for a line of video are stored in an analog CCD delay line. They are then clocked out of the delay line at twice the rate they were clocked in, but one at a time. The result is that the R-Y signal for the entire line fits in the time of half a line, and so does the B-Y signal. The two signals are recorded one right after the other on the chroma track of the tape, using ordinary FM modulation. Although the time-compression technique doubles each signal's bandwidth to 3 MHz, this is well within the capabilities of the recording process. This system is called Compressed Time Division Multiplexing, or CTDM.
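Here is a minimal sketch of the 2:1 time-compression multiplex described above: each color-difference line is written into a buffer and read back at twice the clock rate, so both components fit end to end in one line period. The sample count per line is an arbitrary choice for illustration, not a Betacam specification.

```python
import numpy as np

SAMPLES_PER_LINE = 768   # arbitrary sample count for one video line

def ctdm_mux(r_minus_y, b_minus_y):
    """Compressed Time Division Multiplex one line of R-Y and B-Y.

    Writing each component into a delay line and clocking it out at twice
    the input rate squeezes it into half a line; the two halves are then
    recorded back to back.  At the doubled clock, the multiplexed line
    carries as many samples as both inputs combined.
    """
    assert len(r_minus_y) == len(b_minus_y) == SAMPLES_PER_LINE
    return np.concatenate([r_minus_y, b_minus_y])  # same samples, 2x clock

# One line of made-up color-difference data.
r_line = np.sin(np.linspace(0, 2 * np.pi, SAMPLES_PER_LINE))
b_line = np.cos(np.linspace(0, 2 * np.pi, SAMPLES_PER_LINE))
ctdm_line = ctdm_mux(r_line, b_line)
print(len(ctdm_line))   # 1536 samples, played out in one line period
```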

 

On playback, the required timebase corrector would digitize the luminance and CTDM chroma signals, un-compress the R-Y and B-Y signals, delay the luminance by a line to make up for the signal-processing delay of the CTDM encoder, and convert them back to analog. The signals could then be used as Y, R-Y, B-Y, or encoded into NTSC composite video.
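Continuing the hypothetical sketch above, the playback side simply splits the CTDM line back into full-length R-Y and B-Y lines (clocking them out at the original rate) and delays the luminance by one line so all three components stay aligned.

```python
import numpy as np
from collections import deque

SAMPLES_PER_LINE = 768   # must match the mux side of the sketch

def ctdm_demux(ctdm_line):
    """Split a CTDM chroma line back into full-length R-Y and B-Y lines."""
    r_minus_y = ctdm_line[:SAMPLES_PER_LINE]   # read back out at 1x clock
    b_minus_y = ctdm_line[SAMPLES_PER_LINE:]
    return r_minus_y, b_minus_y

class OneLineDelay:
    """Delays the luminance by exactly one line to match the chroma path."""
    def __init__(self):
        self.buf = deque([np.zeros(SAMPLES_PER_LINE)])
    def push(self, y_line):
        self.buf.append(y_line)
        return self.buf.popleft()

# Usage with the ctdm_line from the mux sketch:
#   r, b = ctdm_demux(ctdm_line)
#   y_out = OneLineDelay().push(current_y_line)
```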

Some more formats

In 1985, RCA Broadcast went out of business, and with it went much hope of the M format catching on. Ampex briefly marketed an M-format machine (made by Panasonic for Ampex), but quickly dropped it when RCA folded. Sony took the ball and ran with it a long way. They introduced Betacam-SP, a much-refined version of the original Betacam that used metal-particle tape, featured a larger cassette size for playing times of up to 90 minutes (the original Betacam could get only 20 minutes out of a consumer-size cassette), and offered a pair of AFM audio tracks. (AFM audio tracks are FM-encoded audio recorded along with the video information on the video tracks. They give near-CD-quality sound, and are popular on consumer decks under the names VHS Hi-Fi and Beta Hi-Fi.) There was a very noticeable improvement in picture quality over the old 'oxide' Betacam, and Sony started selling machines faster than they could make them.

 

Not to be outdone, Panasonic introduced MII, an improved version of the M format that, feature-wise, looked much like Betacam-SP. Most importantly, they abandoned the I and Q color components for R-Y and B-Y, and adopted a time-compression chroma recording scheme (called CTCM) as well. But lacking Sony's lead and experience in marketing VTRs, MII is not nearly as widely used as Betacam-SP. Betacam-SP went on to become the de facto standard of the broadcast and post-production industry for many years. Ironically, MII uses ring-load threading, while the industrial Betacam-SP decks use M-load threading!

 

In all other respects, the component VTRs differ little from their composite ancestors. Other than the extra heads, the mechanics and servo systems are identical. Some extra electronics are included to help compensate for small timing differences between the components.

 

The benefits of component recording did not go unnoticed by the serious home user or the budget-conscious pro user. JVC created a format called S-VHS, which recorded Y and the encoded color signal separately. Although recording the color in encoded form didn't improve it much, keeping it separate from the luminance did. The picture quality is noticeably better than VHS. Sony introduced a similar system called ED-Beta, but it never really went anywhere.

 

Today, in the consumer world, the vast majority of VCRs sold are of the color-under variety. In the professional world, most VTRs sold are of the component variety.
