For Direct color systems, where the chroma and luminance stay together, some form of timebase correction is necessary if you want color playback. It is optional for color-under formats, but required if you want to mix the output signal with other signals in a video production switcher. Timebase correction is simply the process of removing timebase error from a videotape machine's output video. Anytime a video signal is recorded on a mechanical medium such as tape, instabilities will arise in the signal, caused by the mechanical imperfections of the storage device. These may not cause a problem if you are simply viewing the picture on a monitor, but they are a major problem if the signal needs to be exactly timed with other video signals. There are basically three ways to do timebase correction. We will briefly look at all three.
The first, and simplest, system, the electronically variable delay line, could correct timebase errors up to about a microsecond peak-to-peak. This system is only found in quadruplex machines and some 1 inch type B machines. It was also tried unsuccessfully with some early industrial helical scan machines manufactured by IVC and Ampex. In these formats, the mechanical system and servos had to be extremely accurate to hold timebase errors to about one microsecond. This is one of the reasons quad was such a finicky format: there was no room for error! These systems always existed in two parts: a coarse delay line that corrected most of the timebase error, and a fine system that corrected the rest.
The system worked by comparing the horizontal line period of the off-tape video to a standard. The result was a DC voltage that controlled an electronically variable delay line. This delay line was a combination of several hundred small-value coils shunted to ground through several hundred varactor diodes. A DC voltage applied to the varactor diodes varied their capacitance, and thus the delay of the delay line formed of these components. The drive voltages for the diodes had to be supplied from a circuit that converted the linear error signal into a highly logarithmic control voltage. The voltage also had to be bipolar, as the varactor diodes were hooked up with alternating polarities to cancel out their nonlinearities. There was also a feedback signal to the servos to keep the signal in the timing center of the delay line. The circuits were fairly simple, but required large numbers of very expensive precision components.
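To see why the control voltage had to be so nonlinear, it helps to model the delay line. The per-section delay of an LC ladder is approximately the square root of L times C, and a varactor's capacitance falls off nonlinearly with reverse bias. Here is a minimal sketch of that relationship; all component values and the abrupt-junction varactor model are my own illustrative assumptions, not figures from any actual AMTEC circuit.

```python
import math

L_SECTION = 1.0e-6     # henries per coil (assumed value)
N_SECTIONS = 300       # "several hundred" sections, per the text

def varactor_c(v_reverse, c0=100e-12, phi=0.7, gamma=0.5):
    """Abrupt-junction varactor model: capacitance falls as reverse bias rises."""
    return c0 / (1 + v_reverse / phi) ** gamma

def total_delay(v_reverse):
    """Total ladder delay: N sections, each delaying ~sqrt(L*C) seconds."""
    c = varactor_c(v_reverse)
    return N_SECTIONS * math.sqrt(L_SECTION * c)
```

Because the delay goes as the square root of a capacitance that itself varies nonlinearly with voltage, a linear error signal has to be predistorted before it can drive the diodes, which is the job of the logarithmic control-voltage circuit described above.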
Some more tricks
The second stage of the delay line compared playback color burst to reference color burst, and generated an error voltage that controlled a shorter, faster electronically variable delay line. There was also a circuit to feed a correction signal back to the coarse delay line to keep the signal in the timing center of the fine delay line. Additional circuits were present in both the coarse and fine delays to correct errors induced in the video by the electronically variable delay lines.
When in a good mood, this system worked extremely well, and resulted in a very jitter-free signal. Ampex called the coarse and fine delays AMTEC and COLORTEC, respectively. RCA called them ATC and CATC.
Relaxing the timebase stability requirements of the mechanical portion of a videotape machine was a major goal of videotape systems designers. Indeed, videotape as a flexible (all puns intended) production tool would never really emerge until the tight mechanical stability problems could be overcome. And the bottleneck of the problem was the timebase corrector.
In the late sixties, Ampex announced a major breakthrough in the search for a timebase corrector that had a larger error window. Their solution was to correct timebase error in coarse steps, and leave just a small amount of error that had to be corrected with an electronically variable delay line. This was made possible by the availability of Glass Delay Lines.
A glass delay line, sometimes called an ultrasonic delay line, consisted of a piece of quartz glass that had precisely machined facets. A piezoelectric transducer injected a vibration into the glass block, which bounced around off the facets. After a certain delay, the vibration made its way to a second piezoelectric transducer, which output a delayed version of the input signal. Relatively long delays were available, including an entire horizontal line period (63.56 microseconds).
What Ampex did
Ampex connected a number of these delay lines in series, with the delay decreasing in binary sequence from 1/2 line on down. Below 1 microsecond, the delay came from a fixed one-microsecond lumped-constant (i.e., made of discrete components) delay line with eight taps. A small electronically variable delay line corrected the residual error, which was under 125 nanoseconds. A digital error measurement system measured the error and selected the appropriate delay lines to eliminate most of it. Although it had some bugs, the system worked very well.
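The selection logic amounts to expressing the measured error in a binary-like sequence of fixed delays. Here is a sketch of that idea under my own assumptions (line period 63.556 microseconds, delays halving from H/2 down to 1 microsecond, then 125 ns taps); the actual AVR-1 circuit details are not in the text.

```python
H_NS = 63_556  # one NTSC horizontal line period, in nanoseconds

# Glass delay lines halving from H/2 down to 1 us (values illustrative)
glass_lines_ns = []
d = H_NS / 2
while d >= 1_000:
    glass_lines_ns.append(round(d))
    d /= 2

TAP_STEP_NS = 125  # the 1 us lumped-constant line had eight 125 ns taps

def select_delays(error_ns):
    """Greedily switch in delays largest-first, leaving a residual under
    125 ns for the small electronically variable delay line to absorb."""
    remaining = error_ns
    selected = []
    for d in glass_lines_ns:
        if remaining >= d:
            selected.append(d)
            remaining -= d
    taps = int(remaining // TAP_STEP_NS)   # coarse taps on the lumped line
    remaining -= taps * TAP_STEP_NS
    return selected, taps, remaining       # residual goes to the fine delay
```

Because each delay in the chain is roughly half the previous one, this greedy largest-first selection always succeeds, just as each bit position does in a binary number.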
However, it still required large numbers of precision electronic parts to construct. It was also a major pain to align these systems, because of distortions induced by the delay lines being switched in and out. This system was present in just two machines I know of for sure, both quadruplex format: the AVR-1 and the ACR-25 spot player. (Later ACR-25's had digital timebase correctors, borrowed from the AVR-2.) Both this system and the earlier AMTEC system were developed by Charlie Coleman of Ampex.
An error window of one horizontal line was more than adequate for any quadruplex timebase error. But, it was just barely adequate for any helical scan machine. The bulkiness and complexity of the tapped-delay line system made it impractical for anything except a full-broadcast videotape machine, and it was never adopted in any helical scan machine that I know of.
At the close of the sixties, digital IC's were just starting to become popular logic building blocks for all sorts of devices. A number of engineers from around the world were working on an all-digital solution to the timebase correction problem. The concept was simple: convert the video signal from analog to digital, store it in a memory, and read it out again at a stable rate.
It sounded simple on paper, but was in fact extremely difficult (like everything else involving the videotape recorder) to actually build. It was determined that the minimum digital sampling frequency that could be used was three times the color subcarrier frequency, or 10.738635 MHz. Each sample required eight bits of data, although a few early machines tried to get by with just seven. If you wanted to store just one line of video under these conditions, you needed 683 bytes of memory. Not just any memory would do: it had to be very fast, with an access time of under 90 nanoseconds.
This may not seem like much memory today, but in 1976, this was a lot of very fast memory! Even tougher was building an analog-to-digital converter that would run at 10.7 MHz. But when the chips finally became available, digital timebase correctors were built very quickly. There is a story of a long line of broadcast engineers placing orders for the first digital timebase corrector, at the show where it was introduced, for a paltry $120,000 each!
Timebase correctors quickly shrank in price
But, like everything else digital, timebase correctors quickly shrank in price and grew in capabilities. The early models could store a single line of video. Soon, two lines was common. Then 16. Then 32. And eventually, an entire field! A timebase corrector with an entire field of memory could take a nonsynchronous video signal of any kind and make it synchronized to the video in your TV plant. This meant no more genlocking your entire station to a network just to take a network feed. (Genlocking is the process of locking the synchronizing signals of one device to exactly match those of another, unrelated device.) Modern timebase correctors/frame synchronizers can store 2 1/2 frames of video sampled at 14.31818 MHz and ten bits! Frame memories have been built that will store 30 or more seconds of full-bandwidth video in memory! Originally, these machines required thousands of 1 megabit RAM chips. Due to the huge RAM price decreases (and capacity increases) driven by the personal computer world, devices are now being built that can store 30 seconds of full-bandwidth video on just a handful of memory SIMM's!
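The "thousands of chips" claim checks out with a little arithmetic. This back-of-envelope calculation is my own, using the sampling figures given above:

```python
# Rough check of the storage figures quoted above.
F_S = 14.31818e6      # samples per second (4x the NTSC subcarrier)
BITS = 10             # bits per sample
SECONDS = 30
total_bits = F_S * SECONDS * BITS
chips = total_bits / 2**20    # capacity of one "1 megabit" RAM chip, in bits
# chips works out to roughly four thousand -- "thousands" indeed.
```

Thirty seconds of ten-bit video at this rate is about 4.3 gigabits, which really would have taken on the order of four thousand 1-megabit parts.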
Besides the A/D, D/A, and memory sections of a digital timebase corrector, there are other important parts. One of these is the tape clock. The tape clock measures and tracks the timebase error, speeding up and slowing down sampling to match the error. Every line must be sampled the same number of times, no matter how much it varies in period from a stable reference. This circuit is the most complex part of a timebase corrector. A reference clock is used to clock out the samples. It is usually genlocked to a reference signal so the stable output signal can be made synchronous with other video signals. A memory cycle arbiter circuit generates the addresses for reading and writing memory, and prevents the memory from being written to and read from at the exact same moment. All in all, the modern timebase corrector is a marvel of analog and digital electronics working together.
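The heart of the scheme is a memory with two independently clocked address counters: one advanced by the jittery tape clock, one by the stable reference clock. A real TBC does this in hardware, but the idea can be sketched in a few lines (the class and its names are mine, purely for illustration):

```python
# Conceptual sketch of a digital TBC's memory path: samples go in on a clock
# locked to the unstable off-tape video, and come out on a stable clock.
class LineBuffer:
    def __init__(self, depth):
        self.mem = [0] * depth
        self.depth = depth
        self.wr = 0   # write address, advanced by the tape clock
        self.rd = 0   # read address, advanced by the reference clock

    def write(self, sample):          # tape-clock domain (jittery)
        self.mem[self.wr] = sample
        self.wr = (self.wr + 1) % self.depth

    def read(self):                   # reference-clock domain (stable)
        sample = self.mem[self.rd]
        self.rd = (self.rd + 1) % self.depth
        return sample
```

As long as the arbiter keeps the read address from overtaking the write address (and vice versa), the output is the same video, minus the jitter.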
Timebase correction generally applies its error correction once per line. However, many timebase errors build up over a period of less than a line, or over several lines or more. These errors, called Velocity Errors, are a problem in all videotape formats, and cause serious problems in segmented formats like quad and 1" type B. (A Segmented videotape format records a field of video in several passes of the head. Quad is a good example of an analog segmented format. A Non-segmented Format records an entire field with one pass of the head. Most analog helical machines are non-segmented. All digital formats are segmented.) Most velocity errors are small, but can cause a noticeable change in color hue or saturation from the left side of the picture to the right, or from the top of a segment 'band' to the bottom. Most velocity error compensation systems require either a memory to keep track of the errors from the same part of the previous head scan (quad), or a 1-line video delay to store the line while the velocity error is being measured. Correcting the velocity error involves storing the timebase error from the previous head scan or line and comparing it to the error in the current line. The correction is applied to the fine electronically variable delay line, or to the signal reconstruction clocks of a digital timebase corrector.
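In other words, the compensator only knows the error at line boundaries and has to spread the correction across the line. A simple linear interpolation captures the idea; this sketch paraphrases the method described above, and the function name and sample count are illustrative, not taken from any particular machine.

```python
# Hedged sketch of velocity-error compensation: interpolate between the
# error measured on the previous line/scan and on the current one, producing
# a correction ramp applied across the line (to the fine delay line, or to
# a digital TBC's reconstruction clocks).
def velocity_correction(prev_error_ns, curr_error_ns, samples_per_line=683):
    """Return a per-sample correction ramp spanning one line."""
    step = (curr_error_ns - prev_error_ns) / samples_per_line
    return [prev_error_ns + step * n for n in range(samples_per_line)]
```

Without this ramp, the error accumulating mid-line shows up as the left-to-right hue shift described above; with it, each sample gets a correction proportional to how far into the line it sits.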
The final stage of processing for the video signal is the Processing Amplifier. This device allows adjustment of at least four important video parameters: video amplitude (brightness), setup level (setup is that portion of a video signal that is near black; the setup level control is sort of a black level control), chroma level (color), and chroma phase (tint). Early processing amplifiers also allowed sync level and burst level to be adjusted. The purpose of the processing amplifier is to make up for any imperfections in a video recording caused by slightly improper operation of the recorder. Another important function of a processing amplifier is to replace the sync pulses and burst with new, clean sync and burst. A marginal signal can sometimes be made acceptable by cleaning up the sync in this manner. Processing amplifiers may be part of the playback signal system, or may be part of the timebase corrector. Or, it may be a 'box' all by itself. Generally, processing amplifiers are only found in professional machines.