One can never emphasize enough the benefits of closed captioning for both content creators and viewers. The FCC defines closed captioning as the “audio portion of a television program displayed as text on the TV screen, providing a critical link to news, entertainment, and information for individuals who are deaf or hard-of-hearing.”
The benefits of closed captions don’t stop at deaf or hard-of-hearing viewers; they go well beyond that. Closed captions make content accessible to viewers around the globe, and they improve a video’s search engine rankings. With the growing use of video as a medium of content consumption, adding closed captions makes videos more accessible to search engines. Hence, a video with closed captions will rank better than one without them.
Closed captions also help improve the user experience and average watch time. According to a Verizon report, 50% of people prefer captions because they consume videos with the sound off, irrespective of the device, and 80% said they are more likely to watch an entire video when captions are available. Closed captions let users consume video in sound-sensitive environments. This results in a direct gain for broadcasters: closed captions increase average watch time and keep users engaged with the content, because captions provide context to the viewer.
Standards and guidelines for closed captioning have been defined by various bodies and regulations, including the FCC, DCMP, the CVAA, and WCAG. The FCC and DCMP, in particular, lay out broad guidelines for maintaining the quality of captions.
While meeting all the guidelines and laws on closed captions, content creators and broadcasters should also adhere to closed captioning best practices. These best practices help content creators generate highly accurate captions in record turnaround time while saving money. The following best practices should be followed to generate quality closed captions:
Accuracy: The industry standard for caption generation is a 99% accuracy rate. However, this still leaves room for 1% error. Let us see what that looks like in an actual file: in a file of 2,000 words, up to 20 errors are allowed in total. Digital Nirvana’s accuracy rate for generating closed captions is above 99%, even in extreme scenarios like poor audio, multiple speakers, and multiple languages and accents.
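The arithmetic behind that accuracy budget can be sketched in a few lines. This is an illustrative helper (the function name and rounding choice are assumptions, not part of any standard), showing how an accuracy target translates into an allowed error count for a transcript of a given length:

```python
def max_allowed_errors(word_count: int, accuracy: float = 0.99) -> int:
    """Maximum number of word errors permitted for a transcript
    of `word_count` words at the given accuracy target."""
    return int(word_count * (1 - accuracy))

def measured_accuracy(word_count: int, errors: int) -> float:
    """Accuracy rate actually achieved, given a counted number of errors."""
    return 1 - errors / word_count

# A 2,000-word file at the 99% industry standard allows 20 errors.
print(max_allowed_errors(2000))        # 20
print(measured_accuracy(2000, 20))     # 0.99
```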
Grammar: To generate high-quality closed captions, one should strictly adhere to grammar and punctuation rules. A grammatically correct and well-punctuated caption increases both accuracy and readability.
Presentation Rate: As per the DCMP guidelines, caption frames should have at most two lines with 32 characters per line, with proportional spacing. Captions must be time-synchronized; no caption should remain on screen for less than 2 seconds, and the presentation rate should not exceed 225 wpm.
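The rules above (two lines, 32 characters per line, at least 2 seconds on screen, at most 225 wpm) are concrete enough to check automatically. Here is a minimal sketch of such a check; the function name and the exact defaults mirror the figures quoted in this article, not any official validator:

```python
def check_caption(text: str, duration_s: float,
                  max_lines: int = 2, max_chars: int = 32,
                  min_duration_s: float = 2.0, max_wpm: float = 225.0) -> list:
    """Return a list of rule violations for one caption frame.
    An empty list means the caption passes all checks."""
    issues = []
    lines = text.split("\n")
    if len(lines) > max_lines:
        issues.append("too many lines: %d > %d" % (len(lines), max_lines))
    for line in lines:
        if len(line) > max_chars:
            issues.append("line too long: %d chars > %d" % (len(line), max_chars))
    if duration_s < min_duration_s:
        issues.append("on screen too briefly: %.1fs < %.1fs"
                      % (duration_s, min_duration_s))
    # Presentation rate in words per minute.
    wpm = len(text.split()) / (duration_s / 60.0)
    if wpm > max_wpm:
        issues.append("presentation rate too high: %.0f wpm > %.0f" % (wpm, max_wpm))
    return issues

print(check_caption("Welcome back to the show.", 2.5))   # passes: []
print(check_caption("x" * 40, 1.0))                      # fails two checks
```

A real captioning pipeline would run a check like this over every cue in a file before delivery, flagging frames that need re-timing or re-breaking.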
Caption Placement: Captions should typically be placed in the lower center of the screen. However, if they block critical video content, they should be moved accordingly. Captions should also be cleared during long pauses or silence in the video so as not to confuse the viewer.
Speaker Labels: In the case of multiple speakers, each speaker should be clearly labeled so viewers can match the captions to the video content. When names are known, they should be used to identify the speakers; when they are unknown, generic labels such as Speaker 1 and Speaker 2 should be applied.
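In common caption formats such as SRT, a speaker label is simply prefixed to the cue text. The small helper below is an illustrative sketch (the function name and formatting choices are assumptions), showing how a labeled cue might be assembled:

```python
def srt_cue(index: int, start: str, end: str, speaker: str, text: str) -> str:
    """Format a single SRT cue; `speaker` may be a name,
    a generic label like 'Speaker 1', or empty for a sole speaker."""
    label = speaker + ": " if speaker else ""
    return "%d\n%s --> %s\n%s%s\n" % (index, start, end, label, text)

print(srt_cue(1, "00:00:01,000", "00:00:03,500", "Speaker 1", "Welcome back."))
```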