Digital Nirvana To Show AI-Driven Solutions At NAB Show

FREMONT, Calif.—Digital Nirvana will show its artificial intelligence (AI) and machine learning (ML) solutions for post-production and content creation workflows at the 2023 NAB Show, April 15-19, in Las Vegas.

They include:

  • MediaServicesIQ Version 2, the latest version of the company’s gateway to its technology stack. This multipurpose cognitive platform integrates with all of the company’s other products, making it possible to generate metadata and add generative AI capabilities based on that metadata, which enables users to make faster content production decisions.
  • TranceIQ, a self-service SaaS application that automatically generates transcripts, closed captions/subtitles and translations for content localization. Automatic transcription, NLP-based caption splits and machine translation-based subtitling — coupled with new back-end architecture changes — make TranceIQ faster and easier to use. With more than 30 UX improvements, TranceIQ now gives users more control than with past versions of Trance.
  • MetadataIQ, the company’s solution for automating generation of transcripts and video intelligence metadata, increasing the visibility into content within Avid PAM/MAM. Users now are able to use generative AI capabilities on the generated metadata for refined intelligence, increasing efficiency of content production, archive management and content decision making for live news and sports. MetadataIQ also integrates directly with Digital Nirvana’s TranceIQ platform to generate transcripts, captions and translations in all industry-supported formats.

See Digital Nirvana at NAB Show booth W1657.

More information is available on the company’s website.

Hiren Hindocha in conversation with TM Broadcast International about AI and Big Data for M&E organizations

Answers supplied by Hiren Hindocha, CEO, Digital Nirvana

What does Big Data bring to this sector?

When we talk about big data in broadcast, we’re talking about the hundreds of terabytes or even petabytes of data that a system gathers during direct interaction with end users. Typically that happens when broadcasters make their content available through VOD or streaming options. Broadcasters can analyze this big data to understand customers’ preferences, which in turn helps them serve better content to viewers and serve the right demographic to advertisers.

Besides the massive amounts of data exchanged between broadcasters and end users, big data also refers to the many content feeds most broadcasters ingest continuously and simultaneously. For example, the volume of incoming video feeds for a news organization is huge — several hundred gigabytes to terabytes on a daily basis. If broadcasters can make sense of that big data, they can use it to help make content.

What are the possibilities of Artificial Intelligence in the broadcast industry?

Applying artificial intelligence across audio and video opens a world of possibilities for the broadcast industry. For example, speech-to-text technology has reached a point where it is better than humans at understanding specific domains. A well-trained speech-to-text engine can provide a very accurate transcript and captions of incoming content. At the same time, other well-trained engines can perform facial recognition, detect on-screen text, detect objects in the background, and more.

In the case of multiple and continuous incoming video feeds, artificial intelligence can help describe what is in the feed and make it very easy for editors to find what they’re looking for. AI capabilities can also generate metadata that makes the content easily searchable and retrievable, leading to easier content creation and better content publishing decisions.

How can all these technologies enrich the content consumer experience?

Once content providers become more familiar with user preferences, they can bubble up content within their archive that is better-suited to those preferences. They can also use AI to quickly access data that will inform their content feeds, such as in social media channels. Take World Cup soccer, for example. Rightsholder Fox Sports could use AI technology to identify moments in the game that are worthy of viewing, and within a few minutes of the game ending, they can put up those highlights on YouTube. Before AI, this process would have taken a human many hours.

And of course, the more consumers watch content that is in tune with their preferences (action, drama, certain news topics), the better the system gets at predicting and serving similar content. That’s an example of tailoring the content for a better consumer experience.

How should traditional broadcasting adapt to these technologies to get the most out of them?

Broadcasters need a website or platform where users can search, find, and consume content. The smaller the chunks broadcasters produce, the greater the consumption, which means they need to be able to capture all of that consumer information and make it useful. (See examples mentioned before.) To be able to do that at scale, broadcasters have to adopt technologies that can process the information faster and better than employing an army of people.

From what perspective does Digital Nirvana approach the use of technology associated with Big Data?

We don’t do anything with big data at this point.

From what perspective does Digital Nirvana approach the possibilities of Artificial Intelligence?

Digital Nirvana believes AI has great potential to accelerate media workflows and make life easier for our clients. To realize that potential, we’re always looking for new and better ways to help our clients use artificial intelligence tools like speech to text and facial or object recognition to describe what is in their audio and video.

Digital Nirvana is doing a lot of work on training speech-to-text engines to automatically recognize who is speaking and what they are saying — such as distinguishing one media personality from another and identifying different topics.

How does Digital Nirvana intend to take advantage of the confluence of both technologies?

Our focus right now is to leverage AI technologies in the audio, video, and natural language processing sectors. Natural language processing is the ability to understand what is being said in the content. Not only can we provide a verbatim transcript of what is being said, but then we use natural language processing to figure out who is doing the talking and what the topic and context are. For example, our Trance application uses multiple technologies, including automatic speech to text and an automated translation engine. Our goal is to make sure those engines keep getting better and better.

It has not yet become mainstream technology, but there are already many developments and pilot projects. Which ones would you highlight as the most challenging and interesting?

One pilot project we’ve been working on with a major U.S. broadcaster is automatic segmentation and summarization of incoming news feeds. Suppose the programming in the feed lasts 60-90 minutes and contains multiple segments on different topics. Today in production, we are generating real-time text of that content, but in the future, we’ll automatically be able to figure out which people and places are being discussed in that feed, then provide a headline and summary of each of the segments. We’ll also be able to detect changes in topics and categorize accordingly. This is not an easy thing to do.

A similar use case relates to podcasts. Today, a well-designed podcast will have what we call chapter markers within a 45-minute or hour-long podcast. The chapter markers delineate the different segments, and there are show notes related to each chapter marker. Right now this process is done manually. We foresee technology that will listen to a podcast and automatically generate chapter markers along with a summary of each chapter.
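To make the idea concrete, here is a toy sketch of automatic chapter-marker detection. It flags a new chapter wherever the vocabulary of one transcript segment barely overlaps the previous one. A production system would use a trained topic model; the transcript format, threshold, and logic here are purely illustrative assumptions, not Digital Nirvana’s actual pipeline.

```python
# Toy sketch: derive chapter markers from a timed transcript by flagging
# points where adjacent segments share almost no vocabulary.
# (Illustrative only; real systems use trained topic-segmentation models.)

def jaccard(a, b):
    """Vocabulary overlap between two word sets (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b) if a | b else 1.0

def chapter_markers(segments, threshold=0.1):
    """segments: list of (start_seconds, text). Returns start times where
    the topic appears to change, always including the first segment."""
    markers = [segments[0][0]]
    prev_vocab = set(segments[0][1].lower().split())
    for start, text in segments[1:]:
        vocab = set(text.lower().split())
        if jaccard(prev_vocab, vocab) < threshold:
            markers.append(start)  # low overlap -> likely new topic
        prev_vocab = vocab
    return markers

transcript = [
    (0.0,   "welcome to the show today we talk about soccer and the world cup"),
    (60.0,  "the world cup final soccer match was a thriller"),
    (120.0, "switching gears our sponsor makes great coffee beans"),
    (180.0, "roasting coffee beans at home is easier than you think"),
]
print(chapter_markers(transcript))  # marker at 0.0, and at the topic shift at 120.0
```

Show notes for each chapter could then be produced by summarizing the text between consecutive markers.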

Finally, Digital Nirvana is developing an advertising intelligence capability for a large ad intelligence provider that needs to analyze advertisements at scale. This provider must process close to 20 million advertisements per year, and there is no way to do it manually. They have to use technology.

The technology we’re developing will look at an advertisement — whether it be outdoor creative, a six-second social media advertisement, or a 30-second broadcast commercial — and determine the product, the brand, and the category (e.g., alcohol ad, political ad, automobile commercial). That kind of analysis is a challenge, and being able to do it automatically will significantly improve this company’s workflow.

What future developments is Digital Nirvana involved in regarding the capabilities of this technology?

Digital Nirvana already processes media in a multitude of languages. Our goal is to keep evolving so that we not only improve the accuracy of our existing languages but continually add new ones.

Also, we are looking at ways to apply generative AI — AI that helps generate content — to the media and entertainment space.

MetadataIQ – Winner for Media & Entertainment: Best in Market 2022 Awards, presented by TV Tech

We are pleased to announce that our product MetadataIQ has won in this year’s Media & Entertainment: Best in Market 2022 Awards, presented by TV Tech.

The awards aim to recognize the very best in innovative products from the last 12 months that have made an impact within the media and entertainment industries. Our product was judged on its feature set, innovation, perceived value, and ease of use, and was deemed a standout within the sector and selected as a winner.

The awards’ editorial team has said that “these awards have proven to be more popular than we anticipated, with a vast range of products and solutions to consider from all corners of the industry. Overall scoring was high, meaning that those who have been selected as a winner really stood out – a huge success for those companies and we extend our congratulations to all those who won.”

We’re thrilled to have won this award. If you would like to see our winning product, please click here.

https://digital-nirvana.com/products/automated-metadata-generation-metadataiq/

You can read more about our success on TV Tech’s website, in its regular newsletter, and in other promotional channels from the brand in the coming weeks.

Digital Nirvana Makes Metadata Generation Tool Available To More Avid Users

Avid users can now use MetadataIQ to generate metadata automatically without Interplay

FREMONT, Calif.—Digital Nirvana has added support for Avid CTMS APIs to its MetadataIQ software-as-a-service (SaaS) tool for automatically generating speech-to-text and video intelligence metadata, the company said today.

With the new Avid support, editors and producers using MetadataIQ can extract media directly from Avid Media Composer or Avid MediaCentral Cloud UX (MCCUX) rather than being required to connect first with Avid Interplay. As a result, broadcasters, post houses, sports organizations and other Avid users are not required to have Interplay in their environments to use MetadataIQ, the company said.

The support gives all Avid Media Composer and MCCUX users the ability to insert speech-to-text and video intelligence metadata as markers within an Avid timeline, it said.

It also enables the ability to:

  • Ingest different types of metadata, such as speech to text, facial recognition, OCR, logos and objects, each with customizable marker durations and color codes for easy identification of metadata type.
  • Submit files without having to create low-res proxies or manually import metadata files into Avid Media Composer and MCCUX.
  • Automatically submit media files to Digital Nirvana’s transcription and caption service to receive the highest quality, human-curated output.
  • Submit data from MCCUX into Digital Nirvana’s Trance product to generate transcripts, captions and translations in-house and publish files in all industry-supported formats.
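The first capability above pairs each metadata type with its own marker duration and color code. A minimal sketch of what such per-type presets might look like follows; the field names, durations, and colors are assumptions for illustration, not MetadataIQ’s actual schema.

```python
# Illustrative per-metadata-type marker presets, as described in the list
# above. All names and values are hypothetical, not MetadataIQ's real schema.
from dataclasses import dataclass

@dataclass
class MarkerPreset:
    metadata_type: str   # e.g. "speech-to-text", "facial recognition", "ocr"
    duration_sec: float  # how long each marker spans on the timeline
    color: str           # color code so editors can spot the type at a glance

PRESETS = {
    "speech-to-text": MarkerPreset("speech-to-text", 5.0, "green"),
    "ocr":            MarkerPreset("ocr", 2.0, "yellow"),
    "logo":           MarkerPreset("logo", 1.0, "red"),
}

def make_marker(metadata_type, start_sec, label):
    """Build a timeline marker using the preset for its metadata type."""
    p = PRESETS[metadata_type]
    return {"start": start_sec, "end": start_sec + p.duration_sec,
            "color": p.color, "label": label}

print(make_marker("ocr", 12.0, "LOWER THIRD: Election Night"))
```

Keeping duration and color in a per-type preset is what lets an editor tell, at a glance, whether a timeline marker came from speech to text, OCR, or logo detection.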

These capabilities will improve the workflow for media companies that use Avid Media Composer or MCCUX to produce content, it said.

“MetadataIQ used to only connect with Avid Interplay for media extraction, which meant customers had to use Interplay in their Avid environments to use MetadataIQ. With the success of MetadataIQ and its ability to enhance content production, Digital Nirvana… received a lot of requests from prospective customers to extend our services to non-Interplay users. And that’s exactly what we’ve done,” said Digital Nirvana CEO Hiren Hindocha.

“This integration makes it so editors can use MetadataIQ throughout the entire pipeline without having to create additional proxies, import metadata files or anything else — and no Interplay required,” he said.

More information about Digital Nirvana and its products and services is available at www.digital-nirvana.com

Digital Nirvana Introduces Trance 4.0, Adding Modular Applications to Its SaaS Transcription and Captioning Tool

FREMONT, Calif. — July 28, 2022 — Digital Nirvana, a provider of leading-edge media monitoring and metadata generation services, today announced an upgrade to its Trance self-service SaaS application. Trance 4.0 works within a media company’s own workflow to generate transcripts, closed captions/subtitles, and translations automatically for content localization. Both enterprises and individual users will benefit from new features and capabilities, including elaborate account management, modular rights for users, and an advanced pro-captioning window.

Major changes in terms of user experience are broadly classified into the following categories:

Modular Application: Now the application can be used as a combined or individual tool for transcription, captioning, text localization, or conformance of existing captions.

Transcription: The new and improved transcription window can now be used as a stand-alone app instead of being part of a three-step transcription, captioning, and subtitling workflow that only generates output after all steps are complete. Users can upload media assets and quickly gain access to highly accurate, time-coded, speaker-segmented, automatic transcripts in the transcription window in real time. In addition to media and entertainment content, users can now get interviews, meetings, podcasts, and more quickly transcribed, reviewed, and exported in different formats.

Captioning: The pro-captioning window has undergone a major change to make it both user-friendly and efficient. Machine-generated timecodes can now be adjusted using spectrogram and manual inputs. It is now possible to ingest predefined sound cues, import custom vocabularies, view frames, and import guidelines to view across the organization. The stand-alone pro-captioning window will allow users to upload media, generate automatic captions, and display them in the pro-captioning window using the desired presets defined by the user or the organization.

Text Localization: An extension of the pro-captioning window, text localization can now be used as a Trance module to focus on localizing captions. Users can now import existing captions and quickly use the machine translation feature to generate highly accurate localized text specifically designed for subtitling purposes. Then they can review it side by side and export in various formats. Users can also create different presets for various languages since localization varies from language to language. This capability helps enterprises easily localize both legacy and new content and extend the content to a wider audience.

Caption Conformance: This application enables users to import text with new presets to conform with various publishing platforms. In addition, the application will automatically review the text and highlight any nonconformities with the selected guidelines. For example, if a user wants to publish content on Netflix, they can import the caption file, and the system will generate a time-coded alert against each nonconformity, enabling the user to quickly navigate to each instance and correct it. This helps enterprises avoid rejections from the publishing platform.
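The conformance check described above amounts to scanning each caption cue against a guideline set and emitting a time-coded alert per violation. Here is a minimal sketch of the idea; the limits used (42 characters per line, 20 characters per second) are common subtitle guidelines chosen for illustration, not any platform’s official spec or Trance’s actual rule set.

```python
# Minimal sketch of caption conformance checking: scan cues and emit a
# time-coded alert for each nonconformity. Limits are illustrative only.

def check_captions(cues, max_line_len=42, max_cps=20.0):
    """cues: list of (start_sec, end_sec, text). Returns (time, message) alerts."""
    alerts = []
    for start, end, text in cues:
        for line in text.split("\n"):
            if len(line) > max_line_len:
                alerts.append((start, f"line exceeds {max_line_len} chars"))
        duration = end - start
        cps = len(text.replace("\n", "")) / duration if duration > 0 else float("inf")
        if cps > max_cps:
            alerts.append((start, f"reading speed {cps:.1f} cps exceeds {max_cps}"))
    return alerts

cues = [
    (1.0, 3.0, "Hello and welcome back."),
    (3.0, 3.5, "This caption flashes by far too quickly to be read comfortably."),
]
for t, msg in check_captions(cues):
    print(f"{t:07.2f}  {msg}")
```

Because each alert carries the cue’s start time, an operator can jump straight to the offending caption and fix it before submitting to the publishing platform.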

Management Functions: Digital Nirvana has upgraded the dashboard for better real-time visibility into which users are online and the progress of jobs. Administrators now have better management controls, including the ability to define user hierarchy, monitor user performance, and grade users. They can also see commercial usage in real time, add multiple projects with different presets, and upload custom vocabulary, among many other capabilities.

User Control: Users now have more control over processing jobs, including the ability to choose from available jobs, import a transcript, and create captions in existing or new presets if required. There’s also speaker differentiation, single-click access to predefined sound cues, options for a personal dictionary, the ability to precisely time captions by moving the spectrograph or overwriting manually in the caption window, easy-access icons on the top array, and advanced position controls.

Assigning Internal and External Tasks in the Same Application: Enterprises are now able to use Trance 4.0 to both assign work items to internal staff and to share work items with Digital Nirvana’s service team to receive fully qualified, human-curated output using the same application. This eliminates the need to go to multiple places for data sharing.

Also, in an industry first, Digital Nirvana has implemented machine learning- and NLP-based presets to segment transcripts accurately into captions.
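The segmentation described above can be loosely approximated with a greedy splitter that prefers breaking at clause boundaries (for example, before a conjunction) instead of mid-phrase. A real system would use a trained NLP model; this rule-based sketch, with an assumed 42-character line limit, only illustrates the idea and is not Digital Nirvana’s implementation.

```python
# Toy sketch of splitting a transcript into caption lines, preferring breaks
# just before conjunctions rather than mid-phrase. Illustrative only; the
# production approach described in the text is ML/NLP-based.

BREAK_WORDS = {"and", "but", "because", "so", "which", "that"}

def split_into_captions(words, max_len=42):
    """Greedily pack words into caption lines of at most max_len characters,
    breaking early before a conjunction once a line is half full."""
    lines, current = [], []
    for word in words:
        candidate = " ".join(current + [word])
        too_long = len(candidate) > max_len
        clause_break = (word.lower() in BREAK_WORDS
                        and len(" ".join(current)) > max_len // 2)
        if current and (too_long or clause_break):
            lines.append(" ".join(current))
            current = [word]
        else:
            current.append(word)
    if current:
        lines.append(" ".join(current))
    return lines

text = ("the world cup final was a thriller and fans around the globe "
        "stayed up late because the match went to penalties")
for line in split_into_captions(text.split()):
    print(line)
```

Even this crude heuristic shows why segmentation matters: breaking at clause boundaries yields captions a viewer can parse in one glance, rather than lines that cut a phrase in half.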

“Our customers asked, and we delivered. With this release, we’ve given Trance a major upgrade in capabilities and user experience,” said Russell Vijayan, director of product at Digital Nirvana. “With advanced management controls, real-time monitoring of online users, better visibility into commercial usage and status of jobs, access to advanced security, and so much more, users will see a significant boost in both efficiency and experience.”

Trance 4.0 is scheduled for release in September. Existing users will be notified when the upgrade is ready. New users will see the updated interface while onboarding.

More information about Digital Nirvana and its products and services is available at www.digital-nirvana.com

Digital Nirvana Announces MonitorIQ RM for Monitoring Remote Hub or Headend Sites

FREMONT, Calif. — April 12, 2022 — Digital Nirvana, a provider of leading-edge media monitoring and metadata generation services, today introduced MonitorIQ RM, a cost-effective video monitoring solution for remote hub and headend sites. MonitorIQ RM addresses the critical need among video distributors to replace the antiquated Slingbox and obsolete Volicon RPM solutions, which provided remote video monitoring at the edge.

A derivative of Digital Nirvana’s MonitorIQ, the most reliable and intuitive solution for broadcast compliance and media monitoring on the market, MonitorIQ RM is designed to monitor huge numbers of individual locations very quickly. Through MonitorIQ RM, cable operators and other video distributors can monitor hundreds of servers across a large geographic area to verify ad insertion, signal quality, service delivery, and proper programming placement at the local level — all from one or many central locations anywhere in the country. MonitorIQ RM allows technical and support teams to provide service and monitor high-profile events live from any central location without having to be on-site, which eliminates unnecessary truck rolls.

Through the MonitorIQ RM interface, technicians can see the entire channel lineup; change channels; and view, fast forward, rewind, and scroll through live or recorded content using remote control — all by region, ad zone, or even down to specific hubs or headends.

Technicians can also schedule recordings quickly through the electronic program guide in just a few simple steps, a feature that saves a significant amount of time when programming the same recording many times over. Rather than having to go to every channel individually to set up a recording, technicians can schedule it on different channels and in different zones instantly and simultaneously. For example, it’s possible to record, say, Bloomberg Television at 2 p.m. in a few locations on the West Coast and a few more in the Midwest and on the East Coast — all with one click.

In developing this streamlined derivative, Digital Nirvana was able to lower the cost of MonitorIQ RM units to a price that works for this application. Cable operators can afford to deploy hundreds of units to headends throughout their network with the same high scalability and highly secure, Linux-based operating system they’d get with the traditional MonitorIQ solution.

The benefits for video distributors and their technicians include:

  • Fewer physical trips to hubs or headends to correct problems.
  • More efficient operations, because technicians no longer have to log in to multiple systems to monitor or record individual channels in different areas.
  • Greater flexibility — any person, no matter where they are, can monitor any zone at any time, which is especially helpful when people are absent due to illness, vacations, etc.

“We designed MonitorIQ RM only for remote monitoring, taking away the things people don’t need and simplifying the user interface to make it extremely easy to do the job with as few clicks as possible. Then we found a price and functionality point that makes it a smart solution to both purchase and deploy,” said Russell Wise, senior vice president at Digital Nirvana. “It’s especially pertinent in Volicon RPM and Slingbox remote monitoring-type situations, where we can bring access to live and recorded video at the touch of a fingertip from any place in the world.”

MonitorIQ RM will be available in two options: a software-only solution and a turnkey appliance with server and hardware included.

More information about Digital Nirvana and its products and services is available at www.digital-nirvana.com