Video content creators are constantly looking for ways to make their videos more accessible and searchable. Creating closed captions, interactive transcripts, and audio descriptions is among the easiest ways to help your videos reach a wider audience. It is also important for content owners to deliver their videos in widely accepted media formats.
Closed captions or subtitles are a textual representation of audio and visual cues synchronized with the visuals in the video. They ensure that your videos are accessible to viewers who are deaf or hard of hearing. Captioning also helps students and non-native English speakers interpret videos more easily.
Transcripts are a complete textual representation of the media content, covering the spoken audio as well as descriptions of visuals that would not otherwise be accessible without seeing the video. Transcripts help make your videos widely accessible, including to viewers who cannot consume them because of technical limitations or other accessibility problems. Interactive transcripts let users click a particular line in the transcript and jump to the exact portion of the video where that text is spoken, which greatly enhances the searchability of your videos. People with limited time can quickly scan through a transcript to get the essence of a video without having to watch it in full.
Audio description is a narration of the story that runs on a separate audio track, describing the visual content of the video. This makes the video accessible to people with impaired vision: by listening to the audio description, they can comprehend the video. A good audio description includes a description of every action in the video. Audio description enhances the accessibility of your videos by helping you reach visually challenged audiences.
Ways to Get Your Videos Closed Captioned
Generally, there are two ways to get your videos closed captioned.
One way is to do it yourself; there are quite a few online tools available for captioning your videos. Another, more reliable way is to outsource the work to a professional captioning service provider.
Whichever route you choose, you end up with time-coded text files. Once the caption files are available, you must integrate them into your videos.
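Time-coded caption files are typically delivered in a standard format such as SubRip (SRT) or WebVTT. As a minimal illustration (the cue text and timings here are invented), an SRT file pairs a cue number, a start/end timecode, and the caption text:

```
1
00:00:01,000 --> 00:00:04,000
Welcome to our product demo.

2
00:00:04,500 --> 00:00:07,200
Today we'll cover three new features.
```

For web video, the same cues converted to WebVTT can be attached to an HTML5 player through a `<track kind="captions">` element, while broadcast workflows usually embed the captions into the video stream itself.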
Digital Nirvana recently announced that it has joined forces with Clearcast, the UK’s sole television ad clearing house. Our cloud-based subtitling solutions are now a featured tool within MediaCentral, Clearcast’s online portal for the video advertising industry. Through this strategic alliance, we are now Clearcast’s partner for subtitling TV advertisements.
Check out the video below of our CEO, Hiren Hindocha, and Clearcast's Managing Director, Chris Mundy, discussing the new partnership at the NAB Show in Las Vegas last month.
“With this portal, Clearcast has created a place where advertisers can go to not only make sure their ads follow the UK compliance regulations, but also for adjunct services. Adding these new services – all within one central site – greatly increases the value they’re offering their customers. We’re thrilled to be a part of such an integral, forward-thinking site for the advertising community.” – Hiren Hindocha
Closed captioning, or subtitling, is swiftly gaining popularity among organizations as a way to drive substantial traffic to their marketing videos. They understand that generating quality traffic to marketing content in turn helps them generate quality leads. Captioning or subtitling marketing videos is one quick way to drive traffic and reach a broader set of prospects.
Companies understand that creating closed captions is not a simple task and that inaccurate subtitles or captions can seriously hurt their marketing efforts. This prompts them to look for caption service providers with certain qualities. Let us look at some of the factors content owners consider while identifying the ideal service provider for their captioning needs.
Closed Caption Accuracy:
The first and most important factor that companies considering an investment in closed captioning look for is the accuracy of the captions. To create accurate closed captions, service providers must get the transcription right. Nowadays, many service providers depend on automated tools to deliver the final transcript for caption creation. This is one of the fastest ways to create transcripts, but depending entirely on such automated tools can hurt caption quality badly: the best transcription accuracy these tools achieve is no more than 95%, while the ideal accuracy is at least 98%. So, having qualified and highly experienced professionals is the only way to hit the target accuracy.
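Transcription accuracy figures like the 95% and 98% above are usually derived from the word error rate (WER): the number of word-level substitutions, deletions, and insertions divided by the length of the reference transcript. As a rough sketch (the reference and hypothesis strings here are invented for illustration), WER can be computed with a standard edit distance over words:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference word count."""
    ref = reference.lower().split()
    hyp = hypothesis.lower().split()
    # Classic dynamic-programming edit distance, computed over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution over five reference words -> WER of 0.2 (80% accuracy).
print(word_error_rate("the quick brown fox jumps",
                      "the quick brown fox jumped"))  # -> 0.2
```

In these terms, a 95%-accurate automated transcript corresponds to a WER of about 0.05, i.e. one wrong word in every twenty, which is why human review is still needed to reach the 98%-plus target.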
Closed Caption Synchronization & Placement:
Another factor is accurate synchronization of closed captions with the visuals. Captions should appear exactly in step with the corresponding video frames. They should also be placed properly without hindering the visuals, should be easily readable, and should not overlap.
Turnaround Time:
It is very important for service providers to stick to timelines. This is an essential factor people look at while hiring captioning service providers.
Security & Confidentiality:
Organizations naturally want their content to be secure, so they enter into a confidentiality agreement with the service provider.
These are some of the key factors to be considered while zeroing in on an ideal captioning service partner.
It makes sense to spend a little extra to create subtitles or closed captions for your advertisements, as it allows your adverts to connect with a larger audience. It is estimated that over 10 million people in the UK live with hearing loss. This means there is a significant opportunity for UK TV advertisers to extend their reach to this community by putting subtitles on their commercials.
It is also estimated that more than 3 million people in the UK speak English as a second language, and advertisements in English are not really getting the message across to them. Subtitling would significantly help this community comprehend advertisements with fast, accented English dialogue.
With subtitles running on them, your advertisements communicate with potential customers even with the volume turned off. Subtitles make adverts comprehensible in public and other noisy environments such as restaurants, gyms, airports, and bars.
Companies that operate globally need their commercials to communicate with a diverse audience spread across the globe. Rather than recreating commercials in different languages, it is always better for these companies to create subtitles in different languages for their adverts; creating subtitles or captions in multiple languages is far less expensive than shooting a commercial multiple times.
In addition to reaching a wider audience, subtitling your advertisements also helps when you repurpose your commercials for social media. Ads played on platforms like Facebook or Instagram are often muted, so when users encounter them there is no audio to deliver the context. Subtitles keep these users engaged, which in turn increases the impact of your advertisement. So, subtitles are not only important for broadcast TV; they are all the more important when you repurpose your ads for social media.
Digital Nirvana Partners with Clearcast to Provide Subtitles for Advertisements
Digital Nirvana, a leading global provider of media solutions, has entered into a partnership with Clearcast to provide subtitles for commercials in the UK market. The service is offered through an easy-to-use online portal called subtitle-now.com, where advertisers can upload their videos and get them subtitled quickly.
Clearcast is the UK’s sole television ad clearing house and reviews about 61,000 TV commercials a year. Being able to upload commercials online and have subtitling done in minutes will be a big benefit for UK advertising agencies, as it speeds up the process and reduces the barrier that has been holding back advertisement subtitling in the UK.
When we talk about the closed captioning or subtitling process, one question immediately comes to mind: is an automated transcription process good enough, or do we really need human involvement along the way?
As we all know, there are quite a few software tools available in the market that can effectively convert speech to text automatically. However, these tools have limitations when it comes to transcribing different accents and audio of varying quality. For instance, in the US, people from different regions have different styles of speaking. Converting speech to text automatically for a television show is often challenging because TV shows feature people from different regions and backgrounds who pronounce English words in completely different ways.
In a study conducted by Google and Harvard University, it was found that over the last century the English language has doubled in size; it continues to expand by around 8,500 new words a year and now stands at 1,022,000 words. This growing vocabulary makes automated transcription challenging, since new terms leave automated tools vulnerable to misinterpretation. Although the internet helps these tools stay updated, automatic transcription tools still have some difficulty interpreting an ever-growing vocabulary.
Human transcriptionists still have advantages over speech recognition tools, even though the manual process is more time-consuming and requires transcriptionists to keep themselves updated with new words and phrases. Voice recognition tools are trained on the patterns and styles of specific voices, whereas human transcriptionists have experience listening and talking to people across a variety of dialects, and new words and phrases spread to humans quickly across regions. New technologies are undoubtedly making the transcription process easier and quicker; however, most broadcasters and content owners still prefer human intervention when creating transcripts for closed captions. They do not want to take chances with caption quality and are not yet ready to rely completely on automated processes, because quality closed captions add real value to their content.
Digital Nirvana’s closed captioning process uses a hybrid transcription workflow in which a quick preliminary transcript is created by an automated process and then edited by qualified, highly experienced transcriptionists for error-free delivery. This combination of automation and human intervention makes Digital Nirvana’s captioning process quick and foolproof.
Audio fingerprinting is an audio-retrieval technique built on content-based identification. Using an audio fingerprinting system, it is possible to detect a particular audio segment within a huge audio library.
How does the audio fingerprinting technique work?
An audio fingerprinting system generates a database of compressed acoustic signatures from a large audio library. It creates a virtual plot of anchor points for each recording’s attributes using parameters such as frequency, time, signal interference, and intensity. Every audio fingerprint contains its own unique combination of metadata, making retrieval easy.
When an unknown audio fragment is ingested into the system, the system scans the database where the audio fingerprints are stored and tries to match the features of the ingested fragment against the metadata in the database. Once the fingerprint of the ingested fragment matches data in the database, the two can be confirmed as the same audio content, and the content can be retrieved.
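A toy sketch of this matching step, assuming the spectral anchor points (time/frequency peaks) have already been extracted: each fingerprint hashes a pair of neighboring peaks together with their time offset, and an inverted index maps each hash back to the recording it came from, so a query fragment can "vote" for the track it matches. All names and peak data below are invented for illustration; production systems like Shazam-style matchers use far denser peak constellations.

```python
from collections import defaultdict

def make_hashes(peaks):
    """peaks: list of (time, frequency) spectral peaks, sorted by time.
    Pair each peak with the next one and hash (f1, f2, time_delta);
    the time delta makes the hash invariant to where the clip starts."""
    return [((f1, f2, t2 - t1), t1)
            for (t1, f1), (t2, f2) in zip(peaks, peaks[1:])]

def build_index(library):
    """library: {track_name: peaks} -> inverted index {hash: [(track, t1)]}."""
    index = defaultdict(list)
    for track, peaks in library.items():
        for h, t1 in make_hashes(peaks):
            index[h].append((track, t1))
    return index

def identify(index, query_peaks):
    """Vote for the track whose fingerprints match the query most often."""
    votes = defaultdict(int)
    for h, _ in make_hashes(query_peaks):
        for track, _ in index.get(h, []):
            votes[track] += 1
    return max(votes, key=votes.get) if votes else None

# Invented peak data for two "tracks"; the query is a fragment of track_a
# whose peaks occur later in absolute time but keep the same spacing.
library = {
    "track_a": [(0, 100), (1, 220), (2, 150), (3, 330), (4, 410)],
    "track_b": [(0, 500), (1, 90), (2, 700), (3, 120)],
}
index = build_index(library)
fragment = [(10, 220), (11, 150), (12, 330)]
print(identify(index, fragment))  # -> track_a
```

The hash stores relative timing rather than absolute positions, which is why the shifted fragment still matches; a real system would additionally check that the matching hashes agree on a consistent time offset before declaring a hit.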
For an audio fingerprinting system to be robust, it has to meet certain requirements.
It should be resistant to audio distortions.
The database should scale as the library of digital audio grows.
The fingerprint database should be kept small by using compressed, compact fingerprints.
Fingerprints should be distinctive enough that even a short audio fragment matches only its corresponding data.
The system should use a highly efficient lookup strategy when searching for matching metadata.
The two main pieces of an audio fingerprinting framework are extracting a fingerprint from the audio query and matching it against the fingerprints in the database. Audio fingerprinting-based content tracking has seen exponential growth in its applications.
The audio fingerprinting technique also helps in a big way with looking up lost closed captions: broadcasters and content owners do not have to regenerate closed captions from scratch if captions were originally created. Digital Nirvana’s closed captioning processes make effective use of audio fingerprinting technology to retrieve closed captions.
At next week’s NAB Show, the latest innovative technologies in broadcast and media technology will be showcased. With automation and improved operational efficiency in mind, Digital Nirvana will introduce Metadator – a software application that makes the editing process more efficient for broadcasters and content creators using the AVID Interplay media asset management platform.
With its ability to generate locators for media assets outside of AVID’s Interplay MAM system, the application makes it easy for AVID users to export media to external sources for generating transcripts or metadata that can be automatically ingested back into the AVID Interplay MAM platform. Metadator automatically extracts video footage and audio, generates content metadata or transcripts using speech-to-text technology, then ingests the information back into AVID Interplay. It simplifies metadata and transcript generation by automating it with proven technology, improving turnaround time while reducing costs and boosting overall operational efficiency.
The application automates what’s been a manual process of combing through footage and creating scene summaries. Metadator communicates with AVID Interplay using web service APIs to access content. Users can export media to Digital Nirvana’s cloud service for generating transcripts or metadata with locators for the media assets, and automatically ingest it back into AVID Interplay, so content creators can easily locate metadata when editing footage down to a singular show.
Discover how Digital Nirvana’s new Metadator software can improve your operations. Visit us at NAB in Booth SU10121!
Closed captioning, as we all know, is the textual interpretation of speech and non-speech elements presented on visual display screens. Closed captions help content reach a wider audience, including viewers who are deaf or hard of hearing and those with different language capabilities. There are many closed caption service providers operating in the market. Let us look at some best practices that service providers can adopt while creating closed captions.
Closed captions should be displayed on the screen in synchronization with the visuals.
Captions should fade away from the screen once the corresponding visuals disappear.
It is essential that the captions stay on screen long enough for the viewers to read them.
Minimum display time can be set at 1.5 seconds for dialog as short as a word or two; however, this cannot be applied to rapid dialog.
Closed captions should be placed in a fashion that the visuals are not obstructed. Viewers should be able to read through the captions and at the same time follow the visuals.
There should not be more than two lines of text at any given time on screen.
Try not to end one sentence and begin another on the same line, and retain all words as they are spoken.
Do not omit words like “so”, “because”, “but”, “too”, etc. These words are essential to convey the exact meaning of the spoken dialog.
Wherever there is an “inaudible” passage, place a label to explain the cause. For example: crowd noise drowning out speech, a noisy market, etc.
Display closed captions describing sound effects in lowercase italics inside brackets/parentheses. For example: (child crying), (car screeching).
Identify speakers and display their names against the captions. Example: (Joe) How are you? (Mary) I am doing great.
Inserting a music icon is a common method of indicating that a song is playing on screen. A hashtag can also be used to indicate songs.
Closed captions for movies and TV content generally do not use full stops/periods, though this should be left to the content owner’s discretion. However, question marks and exclamation marks should be used to give clarity to a phrase.
It is always good to start sentences with a capital letter. Capitalize an entire word only if it indicates screaming.
Spell out numbers from one to ten and use numerals for numbers higher than ten. For technical and sports terms, use numerals. Example: (scored 5 goals out of 6 penalties)
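A couple of these rules are mechanical enough to check automatically. The helpers below are a hypothetical sketch (not part of any captioning tool): one applies the number-spelling rule, the other flags cues shorter than the 1.5-second minimum display time mentioned earlier.

```python
SPELLED = {1: "one", 2: "two", 3: "three", 4: "four", 5: "five",
           6: "six", 7: "seven", 8: "eight", 9: "nine", 10: "ten"}

def style_number(n: int, technical: bool = False) -> str:
    """Spell out 1-10 in ordinary dialog; use numerals for larger
    numbers and for technical or sports contexts."""
    if technical or n > 10:
        return str(n)
    return SPELLED.get(n, str(n))

def too_short(start: float, end: float, min_seconds: float = 1.5) -> bool:
    """Flag cues displayed for less than the minimum readable duration."""
    return (end - start) < min_seconds

print(style_number(5))                  # -> five
print(style_number(5, technical=True))  # -> 5  (e.g. "scored 5 goals")
print(style_number(12))                 # -> 12
print(too_short(10.0, 11.0))            # -> True (only 1.0 s on screen)
```

In practice such checks would run as a lint pass over the caption file, with a human editor making the final call, since rules like these are routinely tweaked per customer.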
These are general closed captioning styles in practice; however, these rules can be tweaked or altered as per specific customer requirements.
Over recent years, video marketing has gained wide popularity as more and more people consume videos in preference to images and other readable formats. How can your marketing videos gain more traction and some serious attention? Your videos need to be more engaging and should connect with as wide an audience as possible. Keep in mind that your audience includes people who are deaf or hard of hearing as well as people with language difficulties. Attaching closed captions and transcripts to your videos is the quickest and easiest way to reach and engage these audiences.
There are significant SEO benefits to integrating closed captions and transcripts into your marketing videos. How do they help your video SEO? Google doesn’t watch your videos; instead, it relies on transcripts, captions, or metadata to understand the story of your video and determine whether it is relevant to a particular search term. The textual representation makes it easy for the search engine to correlate the video with the search term.
How do you get your videos captioned or transcribed? There are quite a few automatic speech recognition tools on the market that can transcribe and caption your videos, but can they give you the desired result? Not really. It is very important that your marketing videos communicate clearly to your target audience, and you wouldn’t want your transcripts or captions to contain errors that could spoil your brand image. So, it is sensible to engage professional transcription and captioning providers to do the job for you. Though many companies provide quality transcripts and captions, it is essential to identify the one that can respond to your specific requirements.
Closed captions are one of the most effective ways for people who are deaf or hard of hearing to fully experience and enjoy entertainment and broadcast events. Nowadays, almost all broadcast events have closed captioning, allowing deaf or hard of hearing viewers to fully enjoy their favorite programs. Sports stadiums, however, used to be a place where these individuals were often neglected. Deaf or hard of hearing fans often found it disappointing that they could not follow the commentary, announcements, or music played over the loudspeakers. Closed captioning on the giant screens is the only way to make sports more accessible for these individuals who enjoy watching sports from the stadium.
The Washington Redskins are a popular professional American football team. In 2006, three ardent sports enthusiasts with hearing impairments filed a lawsuit against the Washington Redskins because they were not able to follow the announcements, public service spots, and advertisements played on the midfield giant screens. They demanded closed captioning on the screen every time something was played, including music lyrics and announcements.
Under the Americans with Disabilities Act, District Judge Alexander Williams Jr. ruled that running closed captions on the stadium screens is no longer an option but an obligation.
The hearing-impaired sports fans recalled struggling to follow the action when they first started going to games many years ago. It was difficult for them to enjoy the game because they could not tell why penalties were awarded or who the players were. With the help of closed captions, they are able to experience games the same way as other fans.