With the rapid growth of the internet and of video content delivery, efficient streaming has gained significant importance in recent years. The field of video delivery is continuously evolving, and technologies are competing for a place in this fast-growing area. An immediate challenge in video delivery is to combine metadata with multiple audio languages to enable a high-quality viewing experience. Generating content for adaptive streaming and supporting multiple streaming technologies are critical for acceptable Quality of Experience (QoE). Moreover, the demise of Flash and the advent of HTML5 have opened up new avenues for video, including advanced native support for multimedia playback.
Additionally, the growing adoption of portable devices has accelerated demand for a richer mobile viewing experience. Content delivery systems have long worked to bring a television-like viewing experience to multiple platforms, and multi-screen delivery with minimal latency is the latest challenge facing the broadcast industry. A recent report from a leading content delivery network found that video contributes more than half of today's internet traffic, a share predicted to grow in the coming years. As viewers migrate to streamed media, there is a need to guarantee a QoE comparable to that of standard television.
Television broadcasts generally carry auxiliary information in addition to the program content: closed captions in multiple languages, multiple-language audio streams, ratings information, and emergency information. Some popular streaming technologies, such as Flash/RTMP, are not well equipped to deliver all of this auxiliary data with acceptable QoE, and not all mobile devices support Flash. In this context, one of the best-suited technologies for next-generation content delivery is HTTP Live Streaming, known in short as HLS.
HLS is an HTTP-based media streaming communications protocol developed by Apple. One of the reasons HLS stands out is its support for metadata, including SCTE-128 closed captions and multiple audio streams. Specifically, SCTE-128 defines closed-caption carriage for MPEG-2 and H.264 compressed video, where the caption data is stored as user data in MPEG-2 and in supplemental enhancement information (SEI) messages in H.264. Similarly, multiple audio streams can be carried natively in an MPEG-TS multiplex.
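As an illustration of how HLS exposes multiple audio languages and closed captions alongside the video renditions, the sketch below shows a minimal master playlist using the EXT-X-MEDIA tag from the HLS specification. The segment playlist filenames, group IDs, and bitrates here are hypothetical placeholders, not values from any particular deployment:

```
#EXTM3U
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="audio",NAME="English",LANGUAGE="en",DEFAULT=YES,AUTOSELECT=YES,URI="audio_en.m3u8"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="audio",NAME="Español",LANGUAGE="es",DEFAULT=NO,AUTOSELECT=YES,URI="audio_es.m3u8"
#EXT-X-MEDIA:TYPE=CLOSED-CAPTIONS,GROUP-ID="cc",NAME="English CC",LANGUAGE="en",INSTREAM-ID="CC1"
#EXT-X-STREAM-INF:BANDWIDTH=2000000,CODECS="avc1.64001f,mp4a.40.2",RESOLUTION=1280x720,AUDIO="audio",CLOSED-CAPTIONS="cc"
video_720p.m3u8
```

A player selects an audio rendition from the "audio" group at playback time, while the CLOSED-CAPTIONS entry advertises captions carried in-band in the video stream itself (here, caption service CC1), rather than as a separate media playlist.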
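To make the H.264 caption-carriage path concrete, the following Python sketch locates SEI NAL units in an Annex B byte stream and extracts registered ITU-T T.35 user data carrying ATSC-style caption bytes (the "GA94" identifier). This is a simplified illustration, not a production parser: the function name is our own, and it assumes 4-byte start codes and ignores emulation-prevention bytes:

```python
def extract_caption_sei(stream: bytes) -> list[bytes]:
    """Return cc_data byte runs found in SEI messages of an Annex B H.264 stream.

    Simplifications: assumes 4-byte start codes (00 00 00 01) and does not
    strip emulation-prevention bytes (00 00 03) from the RBSP.
    """
    found = []
    # Split the elementary stream on Annex B start codes; the first chunk
    # (anything before the first start code) is discarded.
    for nal in stream.split(b"\x00\x00\x00\x01")[1:]:
        if not nal or (nal[0] & 0x1F) != 6:  # nal_unit_type 6 = SEI
            continue
        i = 1
        # Walk the SEI messages until the rbsp_trailing_bits byte (0x80).
        while i < len(nal) and nal[i] != 0x80:
            ptype = 0
            while nal[i] == 0xFF:  # payload_type uses 0xFF extension bytes
                ptype += 255
                i += 1
            ptype += nal[i]
            i += 1
            psize = 0
            while nal[i] == 0xFF:  # payload_size uses the same encoding
                psize += 255
                i += 1
            psize += nal[i]
            i += 1
            payload = nal[i:i + psize]
            i += psize
            # payload_type 4 = user_data_registered_itu_t_t35;
            # country 0xB5 (US), provider 0x0031 (ATSC), identifier "GA94".
            if (ptype == 4 and payload[:3] == b"\xb5\x00\x31"
                    and payload[3:7] == b"GA94"):
                # Bytes after user_data_type_code: the raw cc_data run.
                found.append(payload[8:])
    return found
```

Feeding this a stream containing one SEI NAL with a GA94 payload returns the embedded caption bytes; a real demuxer would go on to interpret the cc_data structure itself.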