Audio Video Standard

Audio Video Coding Standard (AVS) is a series of digital audio and digital video compression standards formulated by the Audio and Video Coding Standard Workgroup of China. Work began in 2002, and three generations of standards have been published.[1]

The first generation of the AVS standard comprises "Information Technology, Advanced Audio Video Coding, Part 2: Video" (AVS1) and "Information Technology, Advanced Audio Video Coding, Part 16: Radio Television Video" (AVS+). The second generation, AVS2, primarily targets ultra-high-definition television, supporting efficient compression of ultra-high-resolution (4K and above) and high-dynamic-range video; it was published as the IEEE international standard IEEE 1857.4. An industry alliance was established to develop and promote AVS standards.[2] A patent pool charges a small royalty on terminal products (such as TVs), excluding content providers and operators.[3]

The AVS3 codec was added to DVB's media delivery toolbox.[4]

Organizations

Workgroup

The AVS Workgroup was founded in June 2002 to work with Chinese enterprises and scientific research institutions to formulate and revise common technical standards for the compression, decompression, processing and representation of digital audio and video, and thereby to provide efficient and economical coding and decoding technologies for digital audio and video devices and systems. Target applications include high-resolution digital broadcasting, high-density digital laser storage media, wireless broadband multimedia communication and Internet broadband streaming media.

The workgroup is headed by Gao Wen, an academician of the Chinese Academy of Engineering, professor and Ph.D. supervisor at Peking University, and deputy director of the National Natural Science Foundation of China. It consists of a requirements group, system group, video group, audio group, test group, intellectual property group and other departments. The workgroup's first setback came in 2003, when China did not adopt AVS for its own digital television broadcast system.[5]

Patent pool

A patent pool that manages and licenses the patents covering the standard was founded on September 20, 2004; its management committee was the first patent pool management institution in China. The committee serves as the expert committee and the main business decision-making body of the Beijing Haidian District Digital Audio and Video Standard Promotion Center, an independent corporate association registered with the Civil Affairs Bureau of Haidian District, Beijing.[6] The royalty for the first-generation AVS standard was only one yuan per terminal device. The plan was to charge a small royalty only on terminals, excluding content as well as software services on the Internet.[7]

Alliance

The AVS Industry Alliance (AVSA), formally the Zhongguancun Audio Visual Industry Technology Innovation Alliance, was formed in May 2005 in Beijing by a group of enterprises and institutions including TCL Group Co., Ltd., Skyworth Group Research Institute, Huawei Technology Co., Ltd., Hisense Group Co., Ltd., Haier Group Co., Ltd., Beijing Haier Guangke Co., Ltd., Inspur Group Co., Ltd., Joint Source Digital Audio Video Technology (Beijing) Co., Ltd., New Pudong District Mobile Communication Association, Sichuan Changhong Co., Ltd., Shanghai SVA (Group) Central Research Institute, ZTE Communication Co., Ltd., and Zhongguancun Hi-Tech Industry Association. It collaborates with the AVS Workgroup and the AVS Patent Pool Management Committee as part of the "Three Carriages."

First generation

The first generation of the AVS standard comprises the Chinese national standards "Information Technology, Advanced Audio Video Coding, Part 2: Video" (AVS1, GB/T 20090.2-2006) and "Information Technology, Advanced Audio Video Coding, Part 16: Radio Television Video" (AVS+, GB/T 20090.16-2016). A test hosted by the Radio and Television Planning Institute of the State Administration of Radio, Film, and Television (SARFT, later part of the National Radio and Television Administration) showed that at half the bitrate of MPEG-2, AVS1 reaches excellent coding quality for both standard-definition and high-definition material, and that at less than one third of the MPEG-2 bitrate it still reaches good-to-excellent quality. The video part of AVS1 was promulgated as a Chinese national standard in February 2006. Around this time, AVS was considered for use in the enhanced versatile disc format,[8] although products never reached the market.

During the May 7–11, 2007 meeting of the ITU Telecommunication Standardization Sector (ITU-T), AVS1 was one of the standards made available for Internet Protocol television (IPTV), along with MPEG-2, H.264 and VC-1. On June 4, 2013, the AVS1 video part was issued by the Institute of Electrical and Electronics Engineers (IEEE) as standard IEEE 1857-2013. AVS+ is both the radio, film and television industry standard GY/T 257.1-2012, "Advanced Audio Video Coding for Radio and Television, Part 1: Video", issued by SARFT on July 10, 2012, and an enhanced version of AVS1.[9]

Second generation

The second-generation AVS standard comprises the series of Chinese national standards "Information Technology, Efficient Multimedia Coding" (AVS2). AVS2 mainly targets the transmission of ultra-high-definition television programs. SARFT issued the AVS2 video part as an industry standard in May 2016, and it became a Chinese national standard on December 30, 2016. AVS2 was published by the Institute of Electrical and Electronics Engineers (IEEE) as standard 1857.4-2018 in August 2019.[10]

Testing showed that the coding efficiency of AVS2 is more than double that of AVS+, and that its compression ratio surpasses that of the international standard HEVC (H.265). Compared with the first-generation AVS standard, the second generation can halve the required transmission bandwidth.

Features

AVS2 adopts a hybrid coding framework, and the whole coding process includes modules such as intra-frame prediction, inter-frame prediction, transform, quantization, inverse quantization and inverse transform, loop filtering and entropy coding. Its main technical features are as follows:[11]

  • Flexible Coding Structure Partition
    • To meet the compression-efficiency requirements of HD and ultra-HD video, AVS2 adopts a quadtree-based block partition structure comprising the coding unit (CU), prediction unit (PU) and transform unit (TU). An image is partitioned into largest coding units (LCUs) of fixed size, each of which is recursively partitioned into a series of CUs in quadtree fashion (a minimal sketch of this recursion appears after this list). Each CU contains one luminance coding block and two corresponding chrominance coding blocks (block-unit sizes below refer to the luminance coding block). Compared with the traditional macroblock, the quadtree-based partition structure is more flexible, with the CU size extended from 8×8 up to 64×64.
    • The PU specifies the prediction modes of a CU and is the basic unit for prediction, covering both intra-frame and inter-frame prediction. The maximum size of a PU may not exceed that of the CU it belongs to. In addition to the square intra-frame prediction blocks of AVS1, non-square intra-frame prediction block partitions are added. Likewise, in addition to symmetric prediction block partitions, inter-frame prediction adds four asymmetric partition modes.
    • Besides the CU and PU, AVS2 also defines a transform unit (TU) for transforming and quantizing the prediction residual. The TU is the basic unit of transform and quantization and, like the PU, is defined within a CU. Its size selection depends on the shape of the corresponding PU: if the current CU is partitioned into non-square PUs, a non-square partition is applied to the corresponding TU; otherwise, a square partition is used. The size of a TU may be greater than that of the PU, but not greater than that of the CU it belongs to.
  • Intra Prediction Coding
    • Compared with AVS1 and H.264/AVC, AVS2 defines 33 modes for intra-frame prediction of luminance blocks, comprising a DC prediction mode, a plane prediction mode, a bilinear prediction mode and 30 angular prediction modes (a simplified sketch of DC prediction appears after this list). There are 5 modes for chrominance blocks: DC mode, horizontal prediction mode, vertical prediction mode, bilinear interpolation mode, and a newly added luminance-derived mode (DM).
  • Inter Prediction Coding
    • Compared with AVS1, AVS2 increases the maximum number of candidate reference frames to 4 in order to support multi-level reference frame management, which also makes full use of the available buffer space.
    • To support multiple-reference-frame management, AVS2 adopts a multi-level reference frame management mode in which the frames in each GOP (group of pictures) are partitioned into multiple levels according to the reference relationships between frames (a generic level assignment of this kind is sketched after this list).
  • Inter Prediction Mode
    • To the three image types of AVS1 (I, P and B), AVS2 adds a forward multi-hypothesis prediction image, F, to meet application requirements. For video surveillance, scene playback and other specific applications, AVS2 also defines scene frames (images G and GB) and the reference scene frame S.
    • For B frames, in addition to the traditional forward, backward, bi-directional and skip/direct modes, a new symmetric mode is added. In symmetric mode only the forward motion vector is encoded; the backward motion vector is derived from it (see the sketch after this list).
    • To exploit the skip/direct mode of B frames fully, AVS2 also adds multi-directional skip/direct modes while retaining the original skip/direct mode: bi-directional, symmetric, backward and forward skip/direct modes. For these four modes, adjacent blocks with the same prediction mode as the current block are searched, and the motion vectors of the first such adjacent block found are taken as those of the current block.
    • For F frames, a coding block can refer to two forward reference blocks, which is equivalent to double-hypothesis prediction for a P frame.
    • AVS2 divides multi-hypothesis prediction into two categories: temporal and spatial multi-hypothesis modes.
    • In the temporal double-hypothesis mode, the current encoding block uses the weighted average of two prediction blocks as its prediction value, but only one MVD (motion vector difference) and one reference image index are coded; the other MVD and reference index are derived by linear scaling according to temporal distance.
    • The spatial double-hypothesis mode, also called DMH (directional multi-hypothesis), fuses two prediction points around an initial prediction point, with the initial point lying on the line between the two. Besides the initial prediction point there are 8 prediction points in total, and only the two points lying on the same straight line through the initial point are fused. In addition to the four directions, an adjustment is made according to distance: the four modes are evaluated at both 1/2-pixel and 1/4-pixel distances, which together with the initial prediction point gives 9 modes in total to compare, from which the optimal prediction mode is selected.
    • The scene frame is introduced in AVS2 for surveillance video coding based on background modeling. When this tool is not enabled, an I frame can be referenced only by images before the next random access point. When the tool is enabled, AVS2 uses a chosen frame in the video as the scene image frame G, which can serve as a long-term reference for subsequent images.
    • AVS2 can also generate a scene image frame GB from several frames in the video; frame GB can likewise be used as a long-term reference.
    • To simplify motion compensation, AVS2 adopts an 8-tap DCT-based interpolation filter that requires only a single filtering pass and supports motion vector accuracy finer than 1/4 pixel.
  • Transformation
    • Transform coding in AVS2 mainly uses an integer DCT, applied directly to transform blocks of size 4×4, 8×8, 16×16 and 32×32.
    • For larger transform blocks (64×64), a logical transform (LOT) first applies a wavelet transform, followed by the integer DCT.
    • After the DCT, AVS2 applies a secondary 4×4 transform to the 4×4 low-frequency coefficient block, further reducing the correlation between coefficients and concentrating the energy (a simplified sketch of this two-stage transform appears after this list).
  • Entropy Coding
    • AVS2 entropy coding first divides the transform coefficients into coefficient groups (CGs) of size 4×4, and then scans and encodes the coefficients CG by CG in zigzag order (sketched after this list).
    • Coefficient coding first encodes the position of the CG containing the last non-zero coefficient, and then encodes each CG in turn until all CG coefficients have been coded, so that zero coefficients are more concentrated during encoding.
    • Context-based binary arithmetic coding and two-dimensional variable-length coding are still used in AVS2.
  • Loop Filter
    • The loop filter of AVS2 consists of three parts: a deblocking filter, adaptive sample offset compensation and an adaptive loop filter.
    • The deblocking filter operates on 8×8 blocks, filtering vertical edges first and then horizontal edges; different filtering methods are selected for each edge according to its filtering strength.
    • After the deblocking filter, adaptive sample offset compensation is applied to further reduce distortion.
    • After the deblocking filter and sample offset compensation, AVS2 adds an adaptive loop filter: a centrosymmetric Wiener filter with a 7×7 cross plus 3×3 square support. Its least-squares filter coefficients are computed from the original undistorted image and the encoded reconstruction, and the filter is applied to the decoded reconstruction, reducing compression distortion in the decoded image and improving the quality of the reference image (a least-squares sketch of this idea appears after this list).
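
The following minimal Python sketch illustrates the recursive quadtree partition of a 64×64 LCU into CUs between 8×8 and 64×64 described above. The function name and the split-decision rule are invented for illustration; a real encoder decides splits by rate-distortion optimization.

    # Illustrative sketch only (not the AVS2 reference software): recursive
    # quadtree partitioning of a largest coding unit (LCU) into coding units.
    MAX_CU_SIZE = 64   # LCU size used by AVS2
    MIN_CU_SIZE = 8    # smallest CU size

    def partition_cu(x, y, size, should_split):
        """Return the (x, y, size) leaf CUs covering the LCU rooted at (x, y)."""
        if size > MIN_CU_SIZE and should_split(x, y, size):
            half = size // 2
            leaves = []
            for dy in (0, half):
                for dx in (0, half):
                    leaves += partition_cu(x + dx, y + dy, half, should_split)
            return leaves
        return [(x, y, size)]

    # Hypothetical split rule: keep the top-left 32x32 quadrant whole,
    # split everything else down to 16x16.
    cus = partition_cu(0, 0, MAX_CU_SIZE,
                       lambda x, y, s: s > 16 and not (x == 0 and y == 0 and s == 32))
    print(cus)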
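
The sketch below illustrates the DC intra-prediction mode, the simplest of the luminance modes listed above: the block is predicted as the mean of the reconstructed reference samples above and to the left. Reference-sample preparation and boundary handling are simplified relative to the AVS2 specification.

    import numpy as np

    def dc_predict(top, left):
        """Predict a square block as the rounded mean of its reference samples."""
        refs = np.concatenate([top, left]).astype(np.float64)
        n = len(top)
        return np.full((n, n), int(round(refs.mean())), dtype=np.int64)

    top = np.array([100, 102, 104, 106])   # reconstructed row above the block
    left = np.array([98, 99, 101, 103])    # reconstructed column to the left
    print(dc_predict(top, left))           # 4x4 block filled with the mean value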
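
As a rough illustration of multi-level reference frame management, the sketch below assigns the frames of a GOP to temporal levels in a generic hierarchical layout; it does not reproduce the exact level structure or reference rules of AVS2.

    # Generic hierarchical GOP level assignment (an assumption for illustration).
    def gop_levels(gop_size):
        """Map display order within a GOP to a temporal level (0 = lowest)."""
        levels = {0: 0, gop_size: 0}        # the two anchor frames
        step, level = gop_size, 1
        while step > 1:
            half = step // 2
            for start in range(half, gop_size, step):
                levels[start] = level
            step, level = half, level + 1
        return dict(sorted(levels.items()))

    print(gop_levels(8))   # {0: 0, 1: 3, 2: 2, 3: 3, 4: 1, 5: 3, 6: 2, 7: 3, 8: 0}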
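
The following sketch shows the idea behind the B-frame symmetric mode: only the forward motion vector is coded, and the backward motion vector is derived by mirroring it according to the temporal distances to the two reference frames. The scaling and rounding here are simplified assumptions, not the exact derivation in the standard.

    def derive_backward_mv(mv_fwd, dist_fwd, dist_bwd):
        """Mirror a forward MV (quarter-pel units) by the ratio of temporal distances."""
        mvx, mvy = mv_fwd
        return (-round(mvx * dist_bwd / dist_fwd),
                -round(mvy * dist_bwd / dist_fwd))

    # Forward MV of (8, 4) quarter-pels; both references are two frames away.
    print(derive_backward_mv((8, 4), dist_fwd=2, dist_bwd=2))   # (-8, -4)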
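
The sketch below illustrates the two-stage transform described above: a 2-D DCT of the residual block followed by a secondary 4×4 transform of its low-frequency corner. A floating-point DCT-II is used as a stand-in for the integer transform matrices actually defined by AVS2.

    import numpy as np

    def dct_matrix(n):
        """Orthonormal DCT-II matrix of size n x n."""
        k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
        m[0, :] = np.sqrt(1.0 / n)
        return m

    def two_stage_transform(block):
        """Primary DCT of the block, then a secondary 4x4 transform of its low-frequency corner."""
        d = dct_matrix(block.shape[0])
        coeffs = d @ block @ d.T
        d4 = dct_matrix(4)
        coeffs[:4, :4] = d4 @ coeffs[:4, :4] @ d4.T
        return coeffs

    residual = np.arange(64, dtype=np.float64).reshape(8, 8)   # toy residual block
    print(np.round(two_stage_transform(residual), 1))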
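
The next sketch illustrates how transform coefficients can be organized into 4×4 coefficient groups (CGs) and scanned so that the last CG containing a non-zero coefficient is identified first. The context modeling and arithmetic coding of the real entropy coder are omitted.

    import numpy as np

    def zigzag(n):
        """(row, col) positions of an n x n block in zigzag scan order."""
        return sorted(((r, c) for r in range(n) for c in range(n)),
                      key=lambda rc: (rc[0] + rc[1],
                                      rc[1] if (rc[0] + rc[1]) % 2 else rc[0]))

    def coefficient_groups(block):
        """Split a transform block into 4x4 CGs, listed in zigzag CG order."""
        n = block.shape[0] // 4
        return [block[4 * r:4 * r + 4, 4 * c:4 * c + 4] for r, c in zigzag(n)]

    coeffs = np.zeros((16, 16), dtype=int)
    coeffs[0, 0], coeffs[5, 2] = 37, -3            # a few non-zero coefficients
    cgs = coefficient_groups(coeffs)
    last = max(i for i, cg in enumerate(cgs) if cg.any())
    print("last non-zero CG in scan order:", last)  # coded before the CG contents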
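
Finally, the sketch below shows the least-squares principle behind the adaptive (Wiener) loop filter: coefficients are chosen to minimize the squared error between the original image and the filtered reconstruction. A small 3×3 square support is assumed for simplicity instead of the 7×7-cross-plus-3×3-square shape specified by AVS2.

    import numpy as np

    def wiener_coefficients(original, reconstructed, radius=1):
        """Least-squares filter coefficients mapping the reconstruction to the original."""
        h, w = original.shape
        rows, targets = [], []
        for y in range(radius, h - radius):
            for x in range(radius, w - radius):
                patch = reconstructed[y - radius:y + radius + 1,
                                      x - radius:x + radius + 1].ravel()
                rows.append(patch)
                targets.append(original[y, x])
        a = np.asarray(rows, dtype=np.float64)
        b = np.asarray(targets, dtype=np.float64)
        coeffs, *_ = np.linalg.lstsq(a, b, rcond=None)   # least-squares solution
        return coeffs.reshape(2 * radius + 1, 2 * radius + 1)

    rng = np.random.default_rng(0)
    original = rng.random((32, 32))
    reconstructed = original + 0.05 * rng.standard_normal((32, 32))  # simulated coding distortion
    print(np.round(wiener_coefficients(original, reconstructed), 3))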

Implementations

uAVS3

uAVS3 is an open source and cross-platform AVS3 encoder and decoder. The decoder (uAVS3d) and encoder (uAVS3e) support the AVS3-Phase2 baseline profile. uAVS3d can be compiled for Windows, Linux, macOS, iOS and Android,[12] whilst uAVS3e can only be compiled for Windows and Linux.[13] uAVS3d and uAVS3e are released under the terms of the BSD 3-clause[12] and BSD 4-clause[13] licenses respectively.

FFmpeg v6 can make use of the uAVS3d library for AVS3-P2/IEEE1857.10 video decoding.[14]

uAVS2

An encoder called uAVS2 was developed by the digital media research center of Peking University Shenzhen Graduate School. Subsequently, an AVS2 ultra-HD real-time video encoder and a mobile HD encoder were announced.[15][16]

OpenAVS2

OpenAVS2 is a set of audio and video coding, transcoding and decoding software based on the AVS2 standard.[17]

xAVS2 & dAVS2

xAVS2 and dAVS2 are an open-source encoder and decoder published by the Peking University Video Coding Laboratory (PKU-VCL) for the AVS2-P2/IEEE 1857.4 video coding standard. They are offered under either version 2 of the GNU General Public License (GPL) or a commercial license.

FFmpeg version 6 can make use of the dAVS2 library for AVS2-P2/IEEE 1857.4 video decoding[18][19] and the xAVS2 library for AVS2-P2/IEEE 1857.4 video encoding.[20][21]

libdavs2 and libxavs2 are licensed under the GNU General Public License, version 2 or later.

References

  1. ^ "Youwei Vision launches AVS3 8K video real-time decoder (in Chinese)". Tencent. May 29, 2019.
  2. ^ "Introduction to AVSA". Official website of AVSA. Archived from the original on March 24, 2019. Retrieved September 29, 2017.
  3. ^ "Who will lead the new video coding standard: a performance comparison report of HEVC、AVS2 and AV1". Archived from the original on July 28, 2018. Retrieved September 29, 2017.
  4. ^ "AVS3 codec added to DVB's media delivery toolbox". July 7, 2022. Retrieved September 7, 2022.
  5. ^ Elspeth Thomson, Jon Sigurdson, ed. (2008). China's Science and Technology Sector and the Forces of Globalisation. World Scientific Publishing. pp. 93–95. ISBN 978-981-277-101-8. Retrieved June 15, 2022.
  6. ^ 跳转提示 [Redirect notice]. www.avs.org.cn.
  7. ^ National Research Council (October 7, 2013). Patent Challenges for Standard-Setting in the Global Economy: Lessons from Information and Communications Technology. National Academies Press. ISBN 978-0-309-29315-0. Retrieved June 15, 2022.
  8. ^ Liu Baijia (March 6, 2006). "Standard Issue". China Business Weekly. Retrieved June 14, 2022.
  9. ^ Xinhua (August 27, 2012). "China to promote its own audio-video coding standard". The Manila Times. Retrieved June 15, 2022.
  10. ^ IEEE Standard for Second-Generation IEEE 1857 Video Coding. Institute of Electrical and Electronics Engineers. August 30, 2019. pp. 1–199. doi:10.1109/IEEESTD.2019.8821610. ISBN 978-1-5044-5461-2. Retrieved June 13, 2022.
  11. ^ "AVS2 special column".
  12. ^ a b uavs3d, UAVS, April 11, 2023, retrieved April 29, 2023
  13. ^ a b uavs3e, UAVS, April 4, 2023, retrieved April 29, 2023
  14. ^ FFmpeg. "1.8 uavs3d". Retrieved April 6, 2023.
  15. ^ "High definition real-time encoder of AVS2 came out with better performance than x265 the encoder of HEVC/H.265".
  16. ^ "AVS2 Real-time codec——uAVS2". Archived from the original on April 27, 2018. Retrieved September 29, 2017.
  17. ^ "Official website of OpenAVS2". Archived from the original on December 31, 2019.
  18. ^ FFmpeg. "1.7 dAVS2". Retrieved April 6, 2023.
  19. ^ dAVS2. "dAVS2". GitHub. Retrieved April 6, 2023.
  20. ^ FFmpeg. "1.27 xAVS2". Retrieved April 6, 2023.
  21. ^ dAVS2. "dAVS2". GitHub. Retrieved April 6, 2023.