Video is technically the graphical image shown on a video display. More commonly, however, it refers to a motion image, with or without sound. It can also refer to the image format used in television.
Video, like the motion picture film before it, achieves the illusion of motion by changing the image frequently and incrementally. The illusion of smooth motion requires changes faster than about 16 frames per second. However, an image will still flicker unless it seems to change much faster than that, typically at least twice as fast. Flicker is also harder to suppress as the ambient light gets brighter, so the refresh rate must be higher. For movies the actual frame rate is 24 frames per second, with each image shown twice so that the flicker rate is 48 Hz. This will still seem to flicker in a brightly lit room, which is why movie theaters are kept dark.
The US TV rate is 30 frames per second, and to achieve a high flicker rate half of the image is changed 60 times per second. Each half is called a field, and two fields are interlaced to produce the full picture at 30 frames per second. Transmitting only half the picture at a time also helps lower the bandwidth requirement. This is called an interlaced scan format; a continuous scan is called progressive scan. To show a 24 frame-per-second movie on American TV, one film frame is shown for two fields and the next for three (the 2:3 pulldown), averaging 2.5 fields per frame to match the 60 Hz field rate. A European TV simply speeds the film up to 25 frames per second, so the movie finishes a little sooner.
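The 2:3 pulldown cadence described above can be sketched in a few lines of Python; the frame labels here are purely illustrative:

```python
def pulldown_fields(film_frames):
    """Expand 24 fps film frames into ~60 Hz fields: 2 fields for one
    frame, 3 for the next, alternating (the 2:3 pulldown cadence)."""
    fields = []
    for i, frame in enumerate(film_frames):
        fields.extend([frame] * (2 if i % 2 == 0 else 3))
    return fields

fields = pulldown_fields(["A", "B", "C", "D"])   # one 4-frame film cadence
print(fields)             # ['A', 'A', 'B', 'B', 'B', 'C', 'C', 'D', 'D', 'D']
print(len(fields) / 4)    # 2.5 fields per frame: 24 * 2.5 = 60 fields/s
```

Four film frames become ten fields, which is exactly how 24 fps film fills the 60 Hz NTSC field rate.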
The reason 60 Hz was originally chosen for the field rate in the US is that it matched the power line frequency. Early TVs had poorly filtered power supplies, and keeping the two rates the same meant that any stray 60 Hz hum from the power supply would not drift across the image. In Europe the power frequency is 50 Hz, so those TV systems used a 50 Hz field rate for the same reason. As power filtering improved in later models the field rate no longer needed to match the power frequency exactly, and by the time color TV was invented the field rate was adjusted slightly (to 59.94 Hz in NTSC) to avoid visual artifacts from the method used to encode the color data.
Early video systems depended on CRT technology to display the image. Later, LCD and plasma displays were used and have become the dominant technologies. Note that even if the image isn't changing, most displays still need to be refreshed continuously.
 Television standards
While the original analog TV broadcast standards are no longer in use by most TV stations in the USA, they still define composite video signals, both NTSC and PAL. NTSC was originally standardized in 1941, with color added in 1953. PAL was developed in the mid-1960s.
Composite video is the same as the broadcast standard except that the audio is not part of the signal. Removing the audio permits the video to have higher resolution: the upper frequency cutoff in particular sets the horizontal resolution of the image. For US broadcast that upper limit is 4.1 MHz, leaving room for the audio subcarrier at 4.5 MHz. On PAL the limit is higher (up to 6 MHz in some variants), which permits higher horizontal resolution. With no audio, however, the upper limit is not capped at all and can be whatever the source video can deliver. As a very approximate rule of thumb, there are about 100 lines of horizontal resolution for each megahertz of bandwidth, so removing the cap on the NTSC signal is a significant advantage.
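The rule of thumb above is easy to apply directly. This small sketch uses the article's own (very approximate) 100-lines-per-MHz figure to estimate horizontal resolution from bandwidth:

```python
def approx_h_resolution(bandwidth_mhz, lines_per_mhz=100):
    """Very rough estimate: ~100 lines of horizontal resolution per MHz."""
    return round(bandwidth_mhz * lines_per_mhz)

print(approx_h_resolution(4.1))   # broadcast NTSC luma limit -> 410 lines
print(approx_h_resolution(1.3))   # a 1.3 MHz color sideband  -> 130 lines
```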
When color was added to the original monochrome signal it was an add-on to the specification, backward compatible with existing monochrome TVs. This was done by adding an encoded color signal to the existing video signal. The color information is encoded near the top of the bandwidth, interleaved with the video, which is still broadcast in monochrome. A color broadcast carries two color-difference signals; the TV combines these with the monochrome (luminance) signal to recover the third color. For NTSC the two color signals are encoded on a subcarrier centered at 3.58 MHz, extending 0.5 MHz upward and 1.5 MHz downward. The effect is that the full-resolution color signal is available for only one of the two color components. Removing the audio-imposed cap, as is done for composite video, permits full resolution in both by allowing the signal to extend the full 1.5 MHz upward as well. PAL color information is centered at 4.43 MHz and already extends 1.3 MHz in both directions.
All of the analog video enhancements aim at improving the resolution of the color signals and reducing their interference with the monochrome signal. Since in both PAL and NTSC there is potential interference between the encoded color data and the high-frequency monochrome detail, special filters are needed to keep the two separated. The first improvement is called S-Video; it simply removes the encoded color information from the main signal and transmits it on a separate wire, so the video signal has monochrome on one wire and encoded color information on the other. Neither signal is otherwise modified, and neither is artificially limited in bandwidth. The color signal is still encoded on its subcarrier at 3.58 MHz for NTSC or 4.43 MHz for PAL.
Component video is still an analog signal but keeps the two color components separate, transmitting them on two additional wires for a total of three. This permits full color resolution separate from the monochrome signal and avoids the losses that would be caused by encoding and decoding the color data. The bandwidth is also greater, since each color component gets the full band. The green (luminance) wire also carries the video framing (sync) data. This format is capable of full HD performance.
 Digital standards
- H.261 was the first practical video encoding standard. It was released in 1988 and was used for video teleconferencing systems. Its picture format is CIF (Common Intermediate Format), with a resolution of 352 x 288 pixels (NTSC-rate implementations reduced the vertical resolution to 240 pixels). Note that the pixels are not square: each pixel is 12/11 (about 1.09) times wider than tall, so to achieve the correct 4:3 aspect ratio with square pixels you would need to increase the horizontal resolution to 384 pixels. The frame rate is approximately 30 Hz (29.97).
- MPEG-1 is the original Moving Picture Experts Group digital video standard. Its SIF (Source Input Format) is essentially the same picture format as CIF.
- H.262, also known as MPEG-2 Part 2 Video, is similar to MPEG-1 but adds interlace support (needed for analog TV). All conforming MPEG-2 video decoders can also play back MPEG-1 video streams. Standard broadcast resolutions are:
- Standard Definition (SD) and DVD: 720 x 480 NTSC and 720 x 576 PAL. (A 704-pixel horizontal resolution is also encountered, maintaining compatibility with CIF/SIF.)
- High Definition (HD): 1280 x 720 and 1920 x 1080 for 16:9 images.
- High Definition for 4:3 images: 1440 x 1080
- H.263 is a video compression standard designed as a low-bitrate compressed format for videoconferencing. Variants of it were used by early YouTube (via Flash video) and others.
- H.264, also known as the MPEG-4 AVC (Advanced Video Coding) codec, is a newer standard designed to permit Internet streaming with lower bandwidth requirements than MPEG-2 or MPEG-1. It can match full H.262 capabilities, making it a new universal standard. However, most implementations support only a subset of H.264's features. Blu-ray players are required to support this format.
- VC-1 is an advanced high-compression format for Blu-ray devices. It can fit over 3 hours of video on a standard disc. Note this is the same format originally released as WMV version 9 by Microsoft.
- H.265 (HEVC) is a UHD-capable video standard. 4K UHD doubles both the horizontal and vertical pixel counts of full HD, giving 3840 x 2160, and the standard itself can handle 8K UHD. Some newer Blu-ray devices support this format; they are identified as supporting 4K or UHD.
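The pixel geometry mentioned in the list above (CIF's non-square pixels, the aspect ratios of the HD modes, and the HD-to-UHD pixel jump) can be verified with a short sketch:

```python
from fractions import Fraction
from math import gcd

def square_pixel_width(stored_width, par):
    """Width needed to display a non-square-pixel image with square pixels."""
    return int(stored_width * par)

def display_aspect(width, height):
    """Reduce a square-pixel resolution to its display aspect ratio."""
    g = gcd(width, height)
    return (width // g, height // g)

# CIF: 352x288 with 12:11 pixels needs 384 square pixels across for 4:3
print(square_pixel_width(352, Fraction(12, 11)))   # 384
print(display_aspect(384, 288))                    # (4, 3)

# HD modes, assuming square pixels
print(display_aspect(1280, 720))    # (16, 9)
print(display_aspect(1920, 1080))   # (16, 9)
print(display_aspect(1440, 1080))   # (4, 3)

# Doubling both dimensions (full HD -> 4K UHD) quadruples the pixel count
print((3840 * 2160) // (1920 * 1080))   # 4
```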
There are many different video formats. Each requires a codec capable of decoding its information, and the video may also be wrapped inside a container format.
- MPEG - Moving Picture Experts Group formats, used in most digital movies. There have been several iterations.
- AVC (Advanced Video Coding) is a particular codec, identical to MPEG-4 Part 10 (H.264).
- Theora - An Ogg-based video codec that is patent-free.
- WMV - Windows Media Video format. It is the native Windows video format.
- M-JPEG - Motion JPEG, a method that delivers video as a sequence of JPEG images in a single file.
- GIF - Normally a graphics format, but it can support limited video using multiple images embedded in the same file.
- FLV - Flash Video, specifically targeted at delivering video over the Internet.
- SWF - Shockwave flash is a format for animated drawings.
- VC-1 - An advanced format that is also known as WMV version 9.
- VP8 - Google's video format, now superseded by VP9.
 Container formats
Some formats that seem to be video formats are actually container formats. A container format can hold multiple files, usually a specific video format and an accompanying audio format. A container may also carry metadata and support DRM encoding.
- AVI - Audio Video Interleave, a multimedia container format introduced for Windows video.
- MOV - A movie container targeted at Apple QuickTime multimedia.
- ASF - Microsoft's streaming format.
- OGG - This is an open source container.
- DVR-MS - Microsoft's container for recorded TV files.
- VOB - The DVD container, holding MPEG-2 video plus the disc's audio streams, packaged for computer use.
- MKV - Matroska video
There are also standards for the cables and connectors used to transfer video signals; articles in this wiki cover the most common ones.
Composite video is carried over a single RCA connector, usually in a three-connector bundle with two audio connectors (left and right stereo). The connectors are color-coded, with yellow for the video.
Component video uses three RCA connectors, one for each of the three color signals, with the green cable also carrying the framing sync data. An additional two connectors are often included to provide stereo audio.