What H.264 Means

H.264 is a digital video coding standard developed by the Joint Video Team (JVT) formed by the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG). It is published both as ITU-T Recommendation H.264 and as ISO/IEC MPEG-4 Part 10. The call for proposals was issued in January 1998 and the first draft was completed in September 1999. Its test model TML-8 was produced in May 2001, the FCD (Final Committee Draft) of H.264 was adopted at the 5th JVT meeting in June 2002, and the standard was officially released in March 2003.

Like earlier standards, H.264 uses a hybrid coding model of DPCM plus transform coding. However, it adopts a "back to basics" concise design that dispenses with the many options of its predecessors, yet achieves much better compression performance than H.263++. It strengthens adaptability to various channels by adopting a "network friendly" structure and syntax that ease the handling of bit errors and packet loss. Its target range of applications is wide, covering different bit rates, resolutions and transmission (or storage) scenarios, and its basic system is open and can be used without licence fees.

Technically, the H.264 standard contains several highlights, such as unified VLC symbol coding, high-precision multi-mode motion estimation, an integer transform based on 4 × 4 blocks, and a layered coding syntax. These measures give the H.264 algorithm very high coding efficiency: at the same reconstructed image quality, it saves about 50% of the bit rate compared with H.263. The H.264 bitstream structure is highly network adaptive, has improved error-resilience, and adapts well to IP and wireless network applications.

Video coding technology has developed largely around two families of international standards: the MPEG-x series from ISO/IEC and the H.26x recommendations from ITU-T. From the H.261 recommendation through H.262/H.263 and MPEG-1/2/4, they have pursued a common goal: to obtain the best possible image quality at the lowest possible bit rate (or storage capacity). Moreover, as market demand for image transmission grows, the problem of adapting to the transmission characteristics of different channels has become increasingly prominent. ISO/IEC and ITU-T therefore jointly developed the new video standard H.264 to address these problems.
H.261 is the earliest video coding recommendation; its purpose was to standardize video coding for conference television and videophone applications over ISDN networks. Its algorithm combines inter-frame prediction, which reduces temporal redundancy, with a DCT transform, which reduces spatial redundancy, in a hybrid coding scheme. To match the ISDN channel, its output bit rate is p × 64 kbit/s. When p is small, only low-resolution images can be transmitted, suitable for face-to-face videophone calls; when p is large (for example p > 6), conference television images with better resolution can be transmitted. H.263 is a low-bit-rate image compression recommendation that is technically an improvement and extension of H.261 and supports applications at bit rates below 64 kbit/s. In practice, however, H.263 and the later H.263+ and H.263++ have evolved to support the full range of bit rates, as can be seen from the many image formats they support, such as Sub-QCIF, QCIF, CIF, 4CIF and even 16CIF.

The MPEG-1 standard, formulated for video storage and playback on CD-ROM, operates at a bit rate of about 1.2 Mbit/s and can deliver 30 frames per second of CIF (352 × 288) quality. The basic algorithm of the MPEG-1 video coding part is similar to that of H.261/H.263, also adopting motion-compensated inter-frame prediction, a two-dimensional DCT, VLC run-length coding and other measures. In addition, it introduces the concepts of intra frames (I), predicted frames (P), bidirectionally predicted frames (B) and DC frames (D) to further improve coding efficiency. Building on MPEG-1, the MPEG-2 standard made improvements in image resolution and compatibility with digital television: its motion vector accuracy is half a pixel; encoding operations such as motion estimation and the DCT distinguish between "frame" and "field"; and it introduces scalability techniques such as spatial, temporal and signal-to-noise-ratio scalability. The more recent MPEG-4 standard introduces coding based on audio-visual objects (AVO: Audio-Visual Object), which greatly improves the interactivity and coding efficiency of video communication. MPEG-4 also adopts some new technologies, such as shape coding, adaptive DCT and arbitrary-shape video object coding, but its basic video encoder still belongs to the same class of hybrid encoder as H.263.

In short, the H.261 recommendation is a classic of video coding, and H.263 is its development that is gradually replacing it in practice, mainly in communications; however, the numerous options of H.263 often confuse users. The MPEG series of standards has evolved from storage-media applications toward transmission-media applications, and the basic framework of its core video coding is consistent with H.261, while the much-discussed "object-based coding" part of MPEG-4 still faces technical obstacles that make it difficult to apply universally at present. The new video coding recommendation H.264, developed on this basis, overcomes the weaknesses of both: it introduces new coding methods within the hybrid coding framework, improves coding efficiency, and is oriented toward practical applications. Since it was jointly developed by the two major international standards organizations, its application prospects are self-evident.

Technical highlights of H.264

1. Layered design

The H.264 algorithm can be conceptually divided into two layers: the Video Coding Layer (VCL), responsible for efficient representation of the video content, and the Network Abstraction Layer (NAL), responsible for packaging and transporting the data in the manner required by the network. A packet-based interface is defined between the VCL and the NAL; packetization and the corresponding signalling are part of the NAL. In this way, the tasks of high coding efficiency and network friendliness are handled by the VCL and the NAL, respectively.

The VCL includes block-based motion-compensated hybrid coding and some new features. As in previous video coding standards, H.264 does not include pre-processing and post-processing functions in the draft, which increases the standard's flexibility.

The NAL is responsible for encapsulating the data using the segmentation format of the underlying network, including framing, signalling of logical channels, use of timing information, and end-of-sequence signals. For example, the NAL supports the transmission format of video over circuit-switched channels as well as video transmission over the Internet using RTP/UDP/IP. A NAL unit consists of its own header information, segment structure information and the actual payload, i.e. the upper-layer VCL data (if data partitioning is used, the data may consist of several parts).
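To make the NAL unit structure concrete, here is a minimal Python sketch of how the one-byte NAL unit header can be split into its fields; the field layout follows the H.264 NAL syntax, while the example byte value is hypothetical.

```python
def parse_nal_header(first_byte: int) -> dict:
    """Split the one-byte H.264 NAL unit header into its three fields."""
    return {
        "forbidden_zero_bit": (first_byte >> 7) & 0x1,  # must be 0 in a valid stream
        "nal_ref_idc":        (first_byte >> 5) & 0x3,  # importance of the payload for prediction
        "nal_unit_type":      first_byte & 0x1F,        # e.g. 1 = non-IDR slice, 5 = IDR slice, 7 = SPS, 8 = PPS
    }

# Example: 0x67 = 0110 0111 -> nal_ref_idc 3, nal_unit_type 7 (sequence parameter set)
print(parse_nal_header(0x67))
```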

2. High-precision, multi-mode motion estimation

H.264 supports motion vectors with 1/4- or 1/8-pixel accuracy. At 1/4-pixel accuracy a 6-tap filter can be used to reduce high-frequency noise, while for 1/8-pixel-accuracy motion vectors a more complex 8-tap filter can be used. When performing motion estimation, the encoder can also select an "enhanced" interpolation filter to improve the prediction.
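For illustration, a minimal Python sketch of the 6-tap half-sample interpolation used for luma, assuming 8-bit samples and the (1, -5, 20, 20, -5, 1)/32 coefficients defined in the standard; the sample values below are hypothetical.

```python
def half_pel(e, f, g, h, i, j):
    """6-tap luma half-sample filter (1, -5, 20, 20, -5, 1)/32 with rounding and clipping."""
    val = (e - 5 * f + 20 * g + 20 * h - 5 * i + j + 16) >> 5
    return max(0, min(255, val))

# Six neighbouring full-pel luma samples along a row (hypothetical values)
row = [10, 12, 40, 200, 210, 60]
print(half_pel(*row))  # interpolated value at the half-pel position between samples 3 and 4
```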

In H.264 motion prediction, a macroblock (MB) can be divided into sub-blocks as shown in Figure 2, giving seven different block-size modes. This flexible, fine-grained multi-mode partitioning fits the shapes of real moving objects in the image better and greatly improves the accuracy of motion estimation. In this way, each macroblock can carry 1, 2, 4, 8 or 16 motion vectors.
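As a quick reference, the seven partition sizes and the number of motion vectors each mode produces can be listed as follows; this is an illustrative Python sketch, not encoder code.

```python
# Luma partition sizes of a 16x16 macroblock for motion compensation,
# and the number of motion vectors each mode produces.
MB_PARTITIONS = {
    "16x16": 1,
    "16x8":  2,
    "8x16":  2,
    "8x8":   4,   # each 8x8 sub-block may be split further:
    "8x4":   8,
    "4x8":   8,
    "4x4":  16,
}

for mode, n_mv in MB_PARTITIONS.items():
    print(f"{mode}: {n_mv} motion vector(s) per macroblock")
```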

H.264 also allows the encoder to use more than one previously coded frame for motion estimation, the so-called multi-frame reference technique. For example, with two or three recently coded reference frames available, the encoder selects, for each target macroblock, the frame that gives the better prediction and indicates in the bitstream which frame is used.
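The idea can be sketched as follows, assuming a simple sum-of-absolute-differences (SAD) criterion and checking only the co-located block in each candidate frame; a real encoder would search a motion window in each frame, and all names and values here are illustrative.

```python
import numpy as np

def sad(block, candidate):
    """Sum of absolute differences between the target block and a candidate prediction."""
    return int(np.abs(block.astype(int) - candidate.astype(int)).sum())

def pick_reference(target_mb, reference_frames, mb_pos):
    """Pick, per macroblock, the previously coded frame giving the lowest SAD."""
    y, x = mb_pos
    return min(
        range(len(reference_frames)),
        key=lambda i: sad(target_mb, reference_frames[i][y:y + 16, x:x + 16]),
    )

refs = [np.random.randint(0, 256, (64, 64), dtype=np.uint8) for _ in range(3)]
# Hypothetical target macroblock: a noisy copy of a block from the most recent reference
target = np.clip(refs[-1][0:16, 0:16].astype(int) + np.random.randint(-2, 3, (16, 16)), 0, 255)
print("best reference frame index:", pick_reference(target, refs, (0, 0)))
```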

3. Integer transformation of 4 × 4 blocks

Like previous standards, H.264 uses block-based transform coding for the residual, but the transform is an integer operation rather than a real-valued one, and its process is essentially similar to the DCT. The advantage of this approach is that the transform and inverse transform have exactly the same accuracy in the encoder and decoder and can be implemented with simple fixed-point arithmetic; in other words, there is no "inverse transform mismatch error". The transform unit is a 4 × 4 block rather than the 8 × 8 block commonly used in the past. Because the transform block is smaller, moving objects are divided more accurately, so not only is the computational load of the transform relatively small, but artefacts at the edges of moving objects are also greatly reduced. To prevent this small-block transform from producing grey-level differences between blocks in large smooth areas of the image, the DC coefficients of the 16 luma 4 × 4 blocks (one per small block, 16 in total) undergo a second 4 × 4 transform, and the DC coefficients of the four chroma 4 × 4 blocks (one per small block, 4 in total) undergo a 2 × 2 transform.
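The forward core transform can be written compactly as W = Cf · X · CfT, where Cf is the 4 × 4 integer matrix defined by the standard; a minimal Python/NumPy sketch follows, with a hypothetical residual block and the scaling that is normally folded into quantisation omitted.

```python
import numpy as np

# Core matrix of the H.264 4x4 forward integer transform; the scaling that makes it
# approximate a DCT is folded into quantisation, so only integer arithmetic is needed.
Cf = np.array([[1,  1,  1,  1],
               [2,  1, -1, -2],
               [1, -1, -1,  1],
               [1, -2,  2, -1]])

def forward_4x4(residual):
    """Apply the integer core transform to a 4x4 residual block."""
    return Cf @ residual @ Cf.T

X = np.array([[ 5, 11,  8, 10],
              [ 9,  8,  4, 12],
              [ 1, 10, 11,  4],
              [19,  6, 15,  7]])   # hypothetical residual values
print(forward_4x4(X))
```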

To improve the rate-control capability of H.264, the quantisation step size changes in increments of about 12.5% rather than by a constant amount. The normalization of the transform coefficient amplitudes is handled in the inverse quantisation process to reduce computational complexity, and a smaller quantisation step is used for the chroma coefficients to emphasize colour fidelity.
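As a rough numerical illustration, the quantisation step size doubles for every six increments of the quantisation parameter, which corresponds to roughly a 12% increase per single step; the sketch below assumes the commonly cited starting value of about 0.625 at QP = 0.

```python
def quant_step(qp: int) -> float:
    """Approximate quantisation step size: doubles every 6 QP increments,
    i.e. each single QP step raises it by roughly 12% (2**(1/6) ~ 1.12)."""
    return 0.625 * 2 ** (qp / 6)

for qp in (0, 6, 12, 26, 51):
    print(f"QP={qp:2d}  Qstep~{quant_step(qp):.2f}")
```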

4. Unified VLC

H.264 provides two methods of entropy coding: one uses a unified VLC (UVLC: Universal VLC) for all symbols to be coded; the other uses context-adaptive binary arithmetic coding (CABAC: Context-Adaptive Binary Arithmetic Coding). CABAC is optional; its coding performance is slightly better than UVLC, but its computational complexity is also higher. UVLC uses a single set of codewords of unbounded length with a very regular structure, so different objects can be coded with the same code table. This makes codewords easy to generate, and the decoder can easily recognize a codeword's prefix, so UVLC can quickly regain synchronization when a bit error occurs.
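In the final standard this universal code takes the form of Exp-Golomb codes; a minimal sketch of the unsigned Exp-Golomb encoder is shown below, purely for illustration.

```python
def exp_golomb_ue(k: int) -> str:
    """Unsigned Exp-Golomb codeword for code number k: M leading zeros followed by
    the (M+1)-bit binary representation of k+1, where M = floor(log2(k+1))."""
    binary = bin(k + 1)[2:]                 # binary of k+1 without the '0b' prefix
    return "0" * (len(binary) - 1) + binary

for k in range(6):
    print(k, exp_golomb_ue(k))              # 0->'1', 1->'010', 2->'011', 3->'00100', ...
```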

5. Intra prediction

The earlier H.26x and MPEG-x series standards use inter-frame prediction. In H.264, intra prediction is available when coding Intra pictures. For each 4 × 4 block (edge blocks are treated specially), each pixel is predicted by a different weighted sum of the 17 nearest previously coded pixels (some weights may be 0), that is, the 17 previously coded pixels above and to the left of the block containing the pixel. This intra prediction is therefore not temporal but a predictive coding algorithm in the spatial domain, which removes the spatial redundancy between adjacent blocks and achieves more effective compression.

As shown in Figure 4, a, b, ..., p in the 4 × 4 block are the 16 pixels to be predicted, and A, B, ..., P are the already coded neighbouring pixels. For example, the value of pixel m can be predicted as (J + 2K + L + 2) / 4, or as (A + B + C + D + I + J + K + L) / 8, and so on. Depending on the prediction reference points selected, there are nine different modes for luminance, but intra prediction for chrominance has only one mode.
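The two example predictors for pixel m quoted above can be computed directly; the neighbouring pixel values below are hypothetical, and the rounding follows the formulas as given in the text.

```python
# Two candidate predictors for pixel "m", using hypothetical values for the
# already-coded neighbouring pixels A..D (row above) and I..L (column to the left).
A, B, C, D = 100, 102, 104, 106
I, J, K, L = 98, 99, 101, 103

pred_directional = (J + 2 * K + L + 2) // 4                  # directional predictor
pred_mean        = (A + B + C + D + I + J + K + L) // 8      # mean (DC-like) predictor

print(pred_directional, pred_mean)
```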

6. Support for IP and wireless environments

The H.264 draft contains tools for error resilience that facilitate the transmission of compressed video over error-prone, packet-lossy environments, such as mobile channels or IP channels, making transmission more robust.

To resist transmission errors, temporal synchronization in an H.264 video stream can be achieved by intra-picture refresh, and spatial synchronization is supported by slice-structured coding. At the same time, to facilitate resynchronization after bit errors, a number of resynchronization points are provided within the video data of a picture. In addition, intra-macroblock refresh and multiple reference macroblocks allow the encoder to consider not only coding efficiency but also the characteristics of the transmission channel when choosing macroblock modes.

Besides adjusting the quantization step size to adapt to the channel bit rate, H.264 often uses data partitioning to cope with changes in channel bit rate. In general, data partitioning means generating video data of different priorities in the encoder in order to support quality of service (QoS) in the network. For example, syntax-based data partitioning divides each frame's data into several parts according to importance, which allows less important information to be discarded when the buffer overflows. A similar temporal data partitioning method can also be used, employing multiple reference frames in P and B frames.
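The priority idea behind data partitioning can be sketched as follows, assuming the three partition types commonly described for H.264 (A: headers and motion vectors, B: intra residuals, C: inter residuals); this is an illustration, not bitstream code.

```python
# Sketch of syntax-based data partitioning: a slice's syntax elements are grouped
# into partitions of decreasing importance, so the least important are dropped first
# when the channel or buffer cannot carry everything.
PARTITIONS = {
    "A": "slice header, macroblock types, quantisation parameters, motion vectors",
    "B": "residual coefficients of intra-coded macroblocks",
    "C": "residual coefficients of inter-coded macroblocks",
}

def drop_order():
    """Partitions are discarded from least to most important: C first, then B, then A."""
    return ["C", "B", "A"]

print(drop_order())
```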

In wireless applications, large bit-rate variations of the wireless channel can be supported by changing the quantization accuracy or the spatial/temporal resolution of each frame. In the multicast case, however, the encoder cannot be required to respond to varying bit rates. Therefore, unlike the less efficient Fine Granular Scalability (FGS) method used in MPEG-4, H.264 uses stream-switching SP frames instead of hierarchical coding.

H.264 performance comparison

TML-8 is the test model of H.264 and has been used to compare and test H.264's video coding efficiency. The PSNR obtained in these tests clearly shows that H.264 has obvious advantages over MPEG-4 (ASP: Advanced Simple Profile) and H.263++ (HLP: High Latency Profile).

The PSNR of H.264 is clearly better than that of MPEG-4 (ASP) and H.263++ (HLP): in comparison tests at six bit rates, H.264's PSNR is on average 2 dB higher than MPEG-4 (ASP) and on average 3 dB higher than H.263++ (HLP). The six test rates and their associated conditions are: 32 kbit/s, 10 f/s frame rate, QCIF format; 64 kbit/s, 15 f/s, QCIF; 128 kbit/s, 15 f/s, CIF; 256 kbit/s, 15 f/s, QCIF; 512 kbit/s, 30 f/s, CIF; and 1024 kbit/s, 30 f/s, CIF.
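For reference, PSNR is computed as 10·log10(MAX² / MSE); below is a minimal Python sketch for 8-bit frames, using randomly generated frame data purely for illustration.

```python
import numpy as np

def psnr(original, reconstructed, max_value=255):
    """Peak signal-to-noise ratio in dB for 8-bit images: 10*log10(MAX^2 / MSE)."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(max_value ** 2 / mse)

orig = np.random.randint(0, 256, (288, 352))                       # hypothetical CIF-size frame
recon = np.clip(orig + np.random.randint(-3, 4, orig.shape), 0, 255)
print(f"PSNR = {psnr(orig, recon):.2f} dB")
```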
