Digital Nirvana Rolls Out Advanced Features for Trance Closed-Captioning and Transcription Solution

Winner of NAB’s 2020 Best of Show Award; New Capabilities Make Professional Captioning Even Faster, Easier, and More Accurate

FREMONT, Calif. — Oct. 26, 2020 — Digital Nirvana, a provider of leading-edge media-monitoring and metadata generation services, today unveiled several powerful new capabilities for Trance, the company’s award-winning, enterprise-grade, cloud-based application for closed captioning and transcription. With Trance 3.1, Digital Nirvana has added new features that use natural language processing (NLP) to automatically convert transcripts into captions and to automatically detect shot changes.

Digital Nirvana’s Trance unites cutting-edge speech-to-text technology and other AI-driven processes with cloud-based architecture to drive efficient broadcast and media workflows. Implementing cloud-based metadata generation and closed captioning as part of their existing operations, media companies can radically reduce the time and cost of delivering accurate, compliant content for publishing worldwide. They also can enrich and classify content, enabling more effective repurposing of media libraries and facilitating more intelligent targeting of advertising spots.

With Version 3.1, Trance is the industry’s first closed-captioning solution to offer NLP-based auto-conversion, which automatically converts transcripts into highly accurate, time-synched closed captions. As part of this exclusive feature, Trance automatically identifies music, silence, and other nontext audio and provides intelligent line breaks for captions. Previously, operators had to manually correct awkwardly split words at the ends of lines and dangling punctuation. Now, Trance automatically moves such words to the next line and takes into account multiword proper names that should be kept together. For instance, if “New York” appears at the end of a caption line, Trance will move the full name to the next line. Robust presets and grammar rules ensure that lines do not start with misplaced punctuation, such as the apostrophe in a contraction (e.g., with the contraction “they’re,” Trance will move the entire word to the next line).
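The line-breaking behavior described above can be sketched in a few lines of Python. This is purely illustrative and not Digital Nirvana’s implementation; the name list is a hypothetical stand-in for NLP-based proper-noun detection.

```python
# Illustrative sketch of caption line breaking: keep known multiword proper
# names together, and move a whole token to the next line when it won't fit.

KNOWN_NAMES = {("New", "York"), ("Los", "Angeles")}  # hypothetical name list

def merge_names(words):
    """Fuse known multiword proper names into single unbreakable tokens."""
    out, i = [], 0
    while i < len(words):
        if i + 1 < len(words) and (words[i], words[i + 1]) in KNOWN_NAMES:
            out.append(words[i] + " " + words[i + 1])
            i += 2
        else:
            out.append(words[i])
            i += 1
    return out

def break_caption(text, max_len=32):
    """Greedy wrap over whole tokens; a token never splits across lines."""
    lines, line = [], ""
    for token in merge_names(text.split()):
        candidate = (line + " " + token).strip()
        if len(candidate) <= max_len:
            line = candidate
        else:
            if line:
                lines.append(line)
            line = token  # the token moves intact to the next line
    if line:
        lines.append(line)
    return lines
```

With a 22-character limit, for example, “The mayor arrived in New York this morning” breaks after “in,” keeping “New York” intact at the start of the second line. Because tokens are whole words, a contraction such as “they’re” likewise moves as one unit.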


The latest version of Trance sets another industry standard with automatic shot-change detection, including configurable thresholds for generating alerts when captions fall out of compliance. For instance, Trance can identify the end of a shot and ensure that captions belonging to the shot do not cross over it. In addition, Trance 3.1 now includes a frame-rate conversion feature that automatically converts captions for video that is to be repurposed for multiple platforms. Consider a broadcaster that would like to repurpose a previously aired video for presentation on the network’s website. Previously, the captions would go out of sync when the footage was converted to the target frame rate, requiring the broadcaster to create an entirely new caption file manually. Now, Trance automatically converts the original captions from 29.97 fps (for broadcast) and generates a new caption file with time codes at the 24.97 fps rate needed for web streaming.
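The time-code arithmetic behind such a conversion can be illustrated with a short sketch. This is a simplified model rather than the product’s algorithm: a caption cue marks a moment in wall-clock time, and the frame number representing that moment is recomputed at the target rate.

```python
# Hedged sketch of caption time-code rescaling between frame rates.
# The cue's wall-clock position is preserved; only the frame count changes.

def convert_frame_count(frame, src_fps, dst_fps):
    """Map a frame number at src_fps to the nearest frame at dst_fps."""
    seconds = frame / src_fps          # wall-clock time of the cue
    return round(seconds * dst_fps)    # nearest frame at the target rate
```

For example, frame 2997 at 29.97 fps (the 100-second mark) maps to frame 2500 at 25 fps, so a cue keeps its position in the conformed video.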

“Introduced earlier this year, Trance 3.0 was a major update that brought significant efficiency gains to media companies’ internal captioning processes — helping to open up powerful new business opportunities for distributing compliant content on popular streaming platforms,” said Russell Wise, senior vice president at Digital Nirvana. “But we haven’t rested on our laurels with Trance 3.0; instead, we’re continuing to develop the platform and add new capabilities designed to make the professional captioning process faster, easier, and more efficient than ever and also help broadcasters deliver the best possible experience to viewers who rely on captions.”

More information about Digital Nirvana and its products and services is available at www.digital-nirvana.com.

About Digital Nirvana

Founded in 1996, Digital Nirvana, with its repertoire of innovative solutions, specializes in empowering customers worldwide with knowledge management technologies. Digital Nirvana’s comprehensive service portfolio includes media monitoring and analysis, media solutions and services, investment research services, and learning management services. Customers rely on Digital Nirvana to improve operational efficiencies, ensure compliance, reduce costs, and protect revenue streams. Digital Nirvana’s compliance-driven solutions offer its customers unmatched quality, proven versatility, and best-in-class performance that help organizations to streamline operations and gain competitive advantage. Digital Nirvana is headquartered in Fremont, California with global delivery locations in Hyderabad and Coimbatore, India.

How AI can accelerate content creation

Media companies are in a race to create volumes of compelling, high-quality content for many different markets and distribution channels. But doing so has its challenges, explains Russell Wise.

Russell Wise is Senior Vice President of Sales and Marketing at Digital Nirvana.


For one thing, the Covid-19 pandemic has turned the media industry on its head. Most media companies are struggling valiantly to produce content remotely at a time when in-person contact is restricted. That means remote production and cloud-based tools have become more important than ever.

The same is true for repurposing content. Even in the best of times, media companies have been keen to monetise their existing content. After all, some of these companies have vast asset repositories, and what better way to get more bang for the production buck than by using an asset in multiple ways, such as localising it for a new audience in another country? In this time of limited production options, repurposing content can be a critical source of revenue.

Then there’s the direct-to-consumer model, in which media companies bypass distribution intermediaries and go over the top directly to the viewer. To do so, they must be able to go through their libraries to sort, find, create, and distribute a piece of content, usually in four or five different versions, depending on the customer, geography, and so on.

Even at the highest level, corporations like NBC are reassessing their strategies in light of the pandemic, deciding what media products they want to bring to the market and what technologies they need to make it happen. Those three scenarios – remote production, content repurposing, and direct-to-consumer delivery – are driving media companies to seek help creating compelling and compliant content more quickly, while adhering to government and internal standards and practices at the time of distribution. Most media companies are taking a serious look at AI and machine learning to help with this task.

There are clearly some discrete applications that are prime candidates for AI.

Content enrichment is one such example. At any time, you can add intelligence in the form of metadata; and at any point in the chain, you can produce better, more targeted content more quickly – and a lot more of it.

In pre-production and post-production workflows, a classic use case is to enhance the metadata of existing assets. AI can “watch” or “listen” and tag content in an existing library faster and more accurately than humans ever could. Or media companies can rely on AI to index any incoming feed so that metadata already exists before the content even enters the main system. In either case, having more – and more accurate – metadata speeds up the search process and accelerates content creation.

In one real-world example, a news website must deliver video within one hour of hitting post-production. The company uses speech-to-text engines to tag the video on its many incoming feeds, which makes it possible to meet deadlines and provide captions. Likewise, two major sports leagues are using AI for real-time captioning during live sports broadcasts. In all cases, AI helps broadcasters get content to air faster with better-quality captions – and without using human labour to do it.

Repurposing and localisation is another area where AI is critical. Once there is enriched content in the repository, it’s easy for media companies to repurpose it. For instance, they can use AI to translate content and localise it for other geographies. That’s exactly what one Spanish-language broadcaster is doing. AI takes Spanish content and captions it in different languages for different markets.

AI is also very useful where quality assurance is concerned. At the far end of the chain, for instance, there’s content distribution. This is where broadcasters check that content is compliant with local government standards, such as FCC rules, or with their own internal standards, such as a style guide. They can use AI to assess, for example, the quality and accuracy of captions, which is becoming an issue in countries with stringent captioning laws.

Monitoring is another element worth considering. AI comes in handy for weeding out objectionable content, such as unacceptable language or images that would violate strict rules if they went to air. AI engines such as image and speech recognition can monitor and automatically flag such issues for review instead of requiring a human to view content continually. Likewise, AI can automatically quantify logo insertions, product placements, ads, or even the number of times a given person appeared within a broadcast – all information that can correspond to billing.

AI technologies have improved substantially in recent years in terms of accuracy, but the big barrier to adoption in the media industry has been practical application: how to insert the technology into the content workflow. Take speech-to-text, for example. Broadcasters can get a lot of value from a speech-to-text engine, but it’s difficult for them simply to order one. It has to come with management tools. It has to have a good UI and basic user functionality to deliver the full benefit. Fortunately, there are companies that have built solid workflows that let broadcasters harness the power of AI.

For instance, some products allow users to upload media through a portal to a cloud-based system. Once the content is there, the system essentially does all the work – transcription, translation, captioning, and monitoring. Users can set up presets to publish captions in the format required by the distributor, which is a pretty big thing, especially with Netflix. There are some basic tools for, say, editing the transcription, sort of like a word processor. There’s also a set of management tools that let users do things like assign, handoff, and track the progress of jobs. And when it’s time for content delivery, a content monitor automatically checks for compliance, quality and more.

The big benefits of AI in the content creation workflow are increased speed and reduced effort. The whole idea is to take the rudimentary, repetitive work away from humans. This way, humans can be creative while repetitive tasks are delegated to AI.

Digital Nirvana’s AI captioning solution powers workflow for US tennis body

The AI-powered Trance workflow aims to eliminate most of the manual work by providing captioning in a fraction of the time.

Digital Nirvana has announced that its Trance automated postproduction captioning solution with advanced AI is powering the closed captioning workflow for the United States Tennis Association (USTA), the US governing body for tennis.

Integrated with Adobe Premiere Pro, Trance promises to improve the speed and efficiency of the organization’s caption generation process while freeing technical personnel to focus on the creative aspects of their jobs.

The Trance application unites STT technology and other AI-driven processes with cloud-based architecture to drive efficient broadcast and media workflows. By implementing cloud-based metadata generation and closed captioning as part of their existing operations, media companies can radically reduce the time and cost of delivering accurate, compliant content for publishing worldwide.

The latest version, Trance 3.0, has a text translation engine that simplifies and speeds captioning in additional languages and automated caption conformance to accelerate delivery of content to new platforms and geographic regions.

“The USTA aims to give tennis enthusiasts access to its content with minimal delay, especially when it pertains to live events or other timely content — something it couldn’t do with its old captioning process. Now with Trance seamlessly integrated into its Adobe postproduction workflow, the league gets a rapid turnaround time that is virtually unheard of in the captioning industry,” said Russell Wise, senior vice president of sales and marketing for Digital Nirvana. “Not only that, but the USTA has the industry’s highest caption-quality standards, with captions that are virtually 100% accurate and display with minimal delay, as the words are spoken.”

Since moving to Digital Nirvana, the USTA production team has been able to cut the turnaround time for captioning an average video title from hours to only 30 minutes, with the captioning task completely offloaded from the technical team. In a typical captioning workflow, a video technician simply drags the clip to a “hot folder” in the Digital Nirvana media service portal. The clip is then automatically uploaded to the processing centre, where the Digital Nirvana team uses AI-based speech-to-text algorithms to generate a transcript, validate its accuracy, and then burn captions into the video. Once complete, the captioned video is automatically uploaded to the Premiere Pro timeline.

Digital Nirvana Intros Transport Stream Outage Detection For MonitorIQ

New capability enhances outage detection by inserting black frames in recorded video.

FREMONT, Calif.—Digital Nirvana today unveiled a nearly frame-level-accurate transport stream outage detection capability for its MonitorIQ 7.0 broadcast monitoring and compliance logging platform.

This capability enhances outage detection by inserting black frames into recorded video to indicate the exact instances in which a loss of signal has occurred.

“With our highly accurate transport stream outage detection feature, MonitorIQ takes the guesswork out of understanding the impact of signal loss in the video chain,” said Keith DesRosiers, director of sales solutions at Digital Nirvana. “Customers are notified of the exact length of the outage, and end-users are able to create clips for proof of the exact outage duration.”

Digital Nirvana’s MonitorIQ allows operators to record, store, monitor, analyze, and repurpose content with a minimum of clicks. It natively records content from any point in the video delivery chain, enabling broadcasters to collect and use knowledge about their content to meet regulatory and compliance requirements. The platform also provides access to valuable next-generation content processing and analysis tools.

The new transport stream outage detection feature provides an accurate video record of any spot in the video delivery chain where a signal loss has occurred. Rather than relying on recorded video to detect a loss, MonitorIQ constantly monitors the physical input for any loss of signal.

When a loss of the transport stream input signal is detected, an encoding process immediately starts inserting black frames into the recorded video. When MonitorIQ detects the return of a good input signal, it stops inserting black frames into the video stream.

This process allows MonitorIQ to report highly accurate outage durations and allows end-users to view the actual outage in the browser-based user interface with a black slate inserted into the video.
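The loss-detection and black-frame behavior described above can be modeled as a simple state machine. The following is an illustrative sketch, not MonitorIQ’s implementation: while the input signal is absent, black frames are substituted into the recording, and each outage’s start and length are tallied for reporting.

```python
# Illustrative outage-detection state machine: substitute black frames while
# the input signal is lost, and record each outage as (start_index, length).

BLACK_FRAME = "BLACK"

def record(frames):
    """frames: iterable of decoded frames, with None marking signal loss.
    Returns (recorded_frames, outages)."""
    recorded, outages = [], []
    start = None
    for i, frame in enumerate(frames):
        if frame is None:            # signal loss on the input
            recorded.append(BLACK_FRAME)
            if start is None:
                start = i            # outage begins
        else:
            recorded.append(frame)
            if start is not None:    # good signal returned
                outages.append((start, i - start))
                start = None
    if start is not None:            # outage ran to the end of the recording
        outages.append((start, len(recorded) - start))
    return recorded, outages
```

Because the outage is measured per frame at the input, the reported duration is frame-accurate, which matches the proof-of-outage clips described above.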

Digital Nirvana Offers Continued Support and Migration Path for Volicon Observer Users

FREMONT, Calif. — Aug. 24, 2020 — With support for Volicon Observer now ended, Digital Nirvana, a provider of leading-edge media-monitoring and metadata generation services, is offering ongoing technical support to users of Volicon Observer as well as the option to migrate to the MonitorIQ broadcast monitoring and compliance logging platform.

“Now that Volicon Observer has reached end of life and is out of support, it’s critical for media operations that were using it to find options to keep it functioning,” said Russell Wise, senior vice president of sales and marketing for Digital Nirvana. “Additionally, when Volicon users are ready in the future, our MonitorIQ system offers the best migration option with the best match to the Observer feature set. What’s more, the core developers of the Volicon product line are now at Digital Nirvana. Who better to help former Observer users confidently transition to a new system than the experts who know both systems best?”

Refined by experts, including architects of the original Volicon Observer product, MonitorIQ is a secure, easy-to-use solution that allows broadcasters to record, store, monitor, analyze, and repurpose content. The platform enables broadcasters to collect and use knowledge about their broadcast content to meet a wide range of regulatory and compliance requirements. Built on a reliable and secure Linux platform, it is extensible into broadcast operations with open APIs. With MonitorIQ, broadcasters gain access to AI-based cloud microservices for closed caption generation; caption quality assessment; caption realignment; and video intelligence for objects, ads, logos, and facial recognition.

MonitorIQ gives broadcasters all the key features of Volicon’s beloved Observer product and improves upon each one to provide unparalleled broadcast monitoring and compliance capabilities. The latest version, MonitorIQ 7.0, has an updated and intuitive web interface, core feature improvements, and new cutting-edge capabilities, all of which combine to make everyday tasks effortless.


Digital Nirvana AI in the cloud with MediaServicesIQ and MonitorIQ for compliance

FREMONT, Calif. — July 24, 2020 — Digital Nirvana announced in July the launch of MediaServicesIQ, a comprehensive suite of cloud-based microservices that leverage advanced artificial intelligence and machine learning (ML) capabilities to streamline media production, postproduction, and distribution workflows. MediaServicesIQ offers a comprehensive portal from which users can access Digital Nirvana’s high-performance, AI-powered technology solutions: speech-to-text, content classification, subtitle generation and conformance, and a video intelligence engine that can detect advertisements, logos, objects, and faces.


MediaServicesIQ offers a core set of AI capabilities and makes them available in a layer that can be easily accessed from newsrooms, live sports and entertainment productions, post houses, and other multimedia operations to accelerate critical processes, reduce mundane tasks, and free up staff to do more creative work.

Building on custom or industry-standard in-house production and monitoring workflow platforms, MediaServicesIQ provides access to a collection of high-performance AI capabilities in the cloud, including speech-to-text, video intelligence, and metadata generation, that can be orchestrated to provide intelligence and usable insights.

These capabilities enable intelligent and immediate logging of, and feedback on, content quality and compliance, better positioning broadcasters to meet regulatory, compliance, and licensing requirements for subtitles, decency, and ad tracking.


MediaServicesIQ provides an easy-to-access portal to these services through custom-developed workflows or through continuous access to popular Digital Nirvana applications including MonitorIQ 7.0, MetadatorIQ and Trance.

An integration of AI microservices with Digital Nirvana’s MonitorIQ 7.0 compliance logging system enables powerful video intelligence applications and insights, including:

  • The ability to record, archive and retrieve content for compliance, quality of service and insights into broadcast content
  • Automatic transcription of content from live broadcasts and commercials
  • Automatic detection of logos, objects, faces and shots
  • Automatic extraction of on-screen text
  • The ability to identify commercial breaks in recorded content
  • The ability to identify restricted words or topics in logged content, as well as the classification of incoming advertising material for restricted content
  • Automatic reporting for loudness compliance, QoE, SCTE inserts, and ad detection and identification
  • Automatic content classification
  • The ability to assess the quality and compliance of captions


MediaServicesIQ integrates with Digital Nirvana’s MetadatorIQ to provide automated metadata creation in pre-production, production and live content workflows.

Optimized for Avid MediaCentral | Production Management assets, MetadatorIQ applies advanced AI- and ML-based content analysis to automatically generate better-structured, more detailed, and more accurate metadata. Media operations benefit in two main ways: first, through tremendous time savings in up-front metadata generation, and second, by giving producers the ability to zero in on the assets they need right away.

For automatic closed caption/subtitle generation and caption quality conformance, MediaServicesIQ provides seamless access to Digital Nirvana’s Trance 3.0 toolset.

Trance combines state-of-the-art STT technology and other AI-based processes with cloud-based architecture to bring metadata and subtitle generation into existing operations, enabling media companies to radically reduce delivery times and costs of accurate and compliant content for worldwide publication. They can also enrich and classify content, enabling more effective reuse of media libraries and facilitating smarter targeting of commercials.

MonitorIQ

The MonitorIQ broadcast monitoring and compliance platform offers a seamless migration path for broadcasters who used Volicon Observer up to its end of support. The Volicon system reached end of support in June 2020, and the development and refinement of Observer’s feature sets have been transferred to MonitorIQ, giving broadcasters deeper monitoring of the content they transmit and greater value from those assets.


MonitorIQ goes beyond regulatory and compliance requirements, giving broadcasters access to valuable next-generation tools for content processing and analysis.

Digital Nirvana offers MonitorIQ as a secure and easy-to-use solution that allows broadcasters to record, archive, monitor, analyze and reuse content.

The platform enables broadcasters to collect and use knowledge about their broadcast content to meet a wide range of regulatory and compliance requirements.

Built on a reliable and secure Linux platform, it is extensible to broadcast operations with open APIs.

With MonitorIQ, broadcasters gain access to AI-based cloud microservices for closed caption generation, caption quality assessment, caption realignment, and video intelligence for objects, ads, logos, and facial recognition.

AI Can Be Leveraged to Simplify, Enhance STT Services

Artificial intelligence (AI) can be used by media and entertainment companies to simplify and enhance all of their subtitling, translation and transcription (STT) services in the cloud, according to M&E technology firm Digital Nirvana.

Digital Nirvana’s Russell Wise, SVP of sales and marketing, and Ed Hauber, its business development manager, used the June 24 webinar “Leveraging AI for Speed & Efficiency in M&E STT” to detail how Trance — the company’s enterprise-level, cloud-based closed captioning and translation solution — can simplify the process, as a managed or self-service STT tool.

Bloomberg, Turner and other major media organizations are already using the plug-and-play, AI-powered offering to produce captions at record speed, improving productivity by 50% and more, according to Digital Nirvana. The workflow can be used across the industry, with media, post and caption service providers all able to take advantage.

Trance is a “cloud-based, enterprise-level” Software-as-a-Service (SaaS) platform that is “used to generate automated transcripts, to create closed captions, to translate those captions into alternate languages and also to export captioned files in all known industry-supported formats,” Hauber pointed out.

“Trance is also fully web-based,” he noted, adding: “It’s accessible via a LAN, WAN or even a basic Internet connection. As an enterprise tool, Trance is fully configurable for an unlimited number of users, groups and roles.”

Administrators, meanwhile, can “manage multiple projects, they can create [and] manage users, define roles and permissions, as well as establish system presets,” he said, while giving viewers a demonstration of Trance.

The Manage Presets section “gives users the ability to define caption attributes, such as the number of lines, the line length” and the total number of characters, he pointed out during the demo.
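Such a preset might be modeled as plain data. The field names below are invented for illustration and are not Trance’s actual schema; the sketch simply shows the kinds of limits the demo described and how a caption block could be checked against them.

```python
# Hypothetical caption preset and conformance check (illustrative only).
preset = {
    "max_lines": 2,          # lines per caption block
    "max_line_length": 32,   # characters per line
    "max_chars": 64,         # total characters per caption block
}

def validate(caption_lines, p):
    """Return True if a caption block fits within the preset limits."""
    return (len(caption_lines) <= p["max_lines"]
            and all(len(line) <= p["max_line_length"] for line in caption_lines)
            and sum(len(line) for line in caption_lines) <= p["max_chars"])
```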

“To get media into Trance, we have a tool that we use called Media Services Portal and, like Trance, Media Services Portal — also called MSP – [is] a cloud-based platform, which allows users to ingest any number of common audio and video file formats into Trance,” he said. MSP “can directly integrate with both FTP and Amazon S3,” he also noted.

Digital Nirvana also offers an open application programming interface (API) to “integrate Media Services Portal directly into large enterprise media systems,” he pointed out. “Using our API, those operators don’t need to create a secondary workflow process to move media into and out of Trance — and this is a really big time-saving and productivity advantage of Trance,” he said.

The Trance speech-to-text engine “has created a highly-accurate transcript of the media that we just imported,” he also showed during the demo, noting that it “eliminates the necessity of doing the manual transcribing of content and… delivers huge productivity gains over conventional transcription methods.” It is also “highly accurate — between roughly 90 to 95 percent accurate — based on good-quality content,” he noted.

The transcript interface includes text on the right side of the screen and a media player on the left with intuitive controls to play back audio and video, he demonstrated. Also featured are tools that help provide fast text editing, including an auto highlight of potentially misspelled words and spell check, he showed. Users can also create captions in more than one language, he noted.

During the Q&A, he said: “Unlike other providers, we’re not limited to one specific speech-to-text engine. In fact, we, by design, do not operate that way. We constantly evaluate and measure the performance of all the best speech-to-text engines that exist in the marketplace today. And so, we’re not limited to just one. And the reason that that’s important is this technology is progressing and developing and advancing very quickly and so being tied to one or the other is inherently limiting. We would rather take the approach of using them all and continually measuring and evaluating them.”

So, as an example, “if we detect that ‘Engine A’ is performing better in scenarios — say where there is sports content, and we can even be more specific: domestic American basketball — we see that speech-to-text ‘Engine A’ is performing better in this application, we automatically in the background route that content based on machine learning capability to say we’re going to route this client’s content through this speech-to-text engine because we see it now as performing better than the other options,” he explained.

There is a “great degree of accuracy that we can accomplish” by using that process, he noted.
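The routing idea he describes (measure each engine’s accuracy per content category, then pick the best) can be sketched as follows. The engine names and accuracy figures below are invented for illustration; in practice the scores would come from ongoing measurement, as he notes.

```python
# Illustrative engine routing: choose the speech-to-text engine with the best
# measured accuracy for a given content category. Values are hypothetical.

ACCURACY = {  # rolling word-accuracy measurements per (engine, category)
    ("engine_a", "basketball"): 0.95,
    ("engine_b", "basketball"): 0.91,
    ("engine_a", "news"): 0.89,
    ("engine_b", "news"): 0.93,
}

def pick_engine(category):
    """Return the engine with the highest measured accuracy for this category."""
    candidates = {e: s for (e, c), s in ACCURACY.items() if c == category}
    return max(candidates, key=candidates.get)
```

With these sample figures, basketball content would route to one engine and news content to another, which is the behavior the quote describes.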

Although Trance is “currently not a live captioning solution,” he was quick to say: “It is on our roadmap and it is something that we’re actively developing. So, live captioning with the ability to run our speech-to-text engine, to collapse the time of that speech-to-text process down to near real-time, or essentially real-time, giving an operator the ability to make very quick edits …within a few seconds of live and be able to do that on the fly. That’s something that we’re evaluating and we’re working towards as the technology matures and there’s a degree of reliability and consistency that we can bring to the market that is on the roadmap for sure. Not today — but coming soon.”

He went on to point out: “We’re constantly developing the product…. This company really adheres to a philosophy and a down-to-earth principle in being very, very agile. And, as much as this is an enterprise tool, the product operates on a very agile basis, meaning it’s able to take and respond to customer requests very, very quickly.”

There is a “long history” at Digital Nirvana of “continual development and being very much in tune with exactly what the market wants and being able to deliver these expanded feature sets and new capabilities that don’t currently exist in the product today,” he added.

Digital Nirvana selected to provide automated closed caption generation

FREMONT, Calif. — July 8, 2020 — Digital Nirvana today announced the launch of MediaServicesIQ™, a comprehensive suite of cloud-based microservices that leverage advanced AI and machine-learning (ML) capabilities to streamline media production, postproduction, and distribution workflows. MediaServicesIQ provides a comprehensive, one-stop portal by which users can access Digital Nirvana’s high-performance, AI-powered technology solutions: speech-to-text, content classification, closed caption generation and conformance, and a video intelligence engine that can detect ads, logos, objects, and faces.

“The promise of AI and ML is being touted throughout the media industry, and these technologies do offer powerful potential for new efficiencies, time savings, and expanded potential for monetizing valuable content. But AI and ML tools need to be accessible and easy to apply in order to see widespread adoption in the media industry,” said Russell Wise, senior vice president at Digital Nirvana. “That’s exactly what we’ve done with MediaServicesIQ. We’ve created a set of core AI capabilities and made them available in a layer that can be easily accessed by newsrooms, live sports and entertainment productions, post houses, and other media operations to expedite critical processes, reduce mundane tasks, and free creative personnel to do their jobs.”

Building upon internal custom or industry-standard production and monitoring workflow platforms, MediaServicesIQ provides access to a collection of high-performance AI capabilities in the cloud that encompass speech-to-text, video intelligence, and generation of metadata that can be orchestrated to provide intelligence and actionable insights. These capabilities enable intelligent and immediate logging and feedback of content quality and compliance, better positioning broadcasters to meet regulatory, compliance, and licensing requirements for closed captioning, decency, and advertising monitoring.

MediaServicesIQ provides an easy-to-access portal to these services, either through custom-developed workflows or through seamless access to Digital Nirvana’s award-winning applications, including MonitorIQ 7.0, MetadatorIQ, and Trance.

Integrating the AI microservices with Digital Nirvana’s MonitorIQ 7.0 compliance logging system enables powerful insights and video intelligence applications, including:

  • The ability to record, store, and retrieve content for compliance, quality of service, and insights into broadcast content
  • Automatic transcription of content from live broadcasts and commercials
  • Automated detection of logos, objects, faces, and shots
  • Automatic extraction of on-screen text
  • The ability to identify ad breaks in logged content
  • The ability to identify restricted words or topics in recorded/logged content, as well as the classification of incoming ad material for restricted content
  • Generation of automated reports for loudness compliance, QoE, SCTE inserts, and ad detection and identification
  • Automatic content classification
  • The ability to assess the quality and conformance of captions
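The caption quality and conformance assessment in the last bullet can be illustrated with a minimal sketch. The thresholds below (32 characters per line, a 20-characters-per-second reading speed) are common industry guidelines used here purely for illustration, not Digital Nirvana’s actual rules:

```python
# Minimal sketch of a caption conformance check.
# Thresholds are illustrative industry guidelines only.
MAX_LINE_CHARS = 32       # typical line-length limit for broadcast captions
MAX_CHARS_PER_SEC = 20.0  # common readability ceiling

def conformance_issues(cue_text, start_sec, end_sec):
    """Return a list of human-readable issues for one caption cue."""
    issues = []
    for line in cue_text.splitlines():
        if len(line) > MAX_LINE_CHARS:
            issues.append(f"line too long ({len(line)} > {MAX_LINE_CHARS}): {line!r}")
    duration = end_sec - start_sec
    if duration <= 0:
        issues.append("non-positive cue duration")
    else:
        cps = len(cue_text.replace("\n", "")) / duration
        if cps > MAX_CHARS_PER_SEC:
            issues.append(f"reading speed too high ({cps:.1f} chars/sec)")
    return issues

print(conformance_issues("Hello there.", 0.0, 1.0))  # → []
```

A production conformance check would also validate timing overlaps, positioning, and format-specific constraints, but the line-length and reading-speed rules above are the core of most caption quality guidelines.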

MediaServicesIQ integrates with Digital Nirvana’s MetadatorIQ to provide automated metadata creation in preproduction, production, and live content workflows. Optimized for Avid MediaCentral | Production Management assets, MetadatorIQ applies advanced AI- and ML-based content analysis to automatically generate better-structured, more detailed, and more accurate metadata. Media operations benefit in two key ways: first, through tremendous time savings in up-front metadata generation, and second, by enabling producers to zero in on the assets they need right away.

For automatic closed caption/subtitle generation and closed-caption quality conformance, MediaServicesIQ provides seamless access to Digital Nirvana’s Trance 3.0 toolset. Trance unites cutting-edge speech-to-text (STT) technology and other AI-driven processes with a cloud-based architecture to bring metadata generation and closed captioning into existing operations, enabling media companies to radically reduce the time and cost of delivering accurate, compliant content for publishing worldwide. Media companies can also enrich and classify content, enabling more effective repurposing of media libraries and more intelligent targeting of advertising spots.
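The transcript-to-caption conversion described above can be sketched, under simplifying assumptions, as grouping time-stamped transcript words into cues that respect a maximum character count. This is an illustrative simplification only; a production system such as Trance also handles line breaks, speaker changes, and nontext audio:

```python
# Sketch of converting time-stamped transcript words into caption cues.
# Simplified illustration; real caption segmentation is far more involved.
MAX_CUE_CHARS = 32  # illustrative cue-length limit

def words_to_cues(words):
    """words: list of (text, start_sec, end_sec) tuples, in time order.
    Returns cues as (text, start_sec, end_sec) tuples."""
    cues, current, start = [], [], None
    for text, w_start, w_end in words:
        candidate = " ".join(t for t, _, _ in current + [(text, w_start, w_end)])
        if current and len(candidate) > MAX_CUE_CHARS:
            # Flush the current cue and start a new one with this word.
            cues.append((" ".join(t for t, _, _ in current), start, current[-1][2]))
            current, start = [], None
        if start is None:
            start = w_start
        current.append((text, w_start, w_end))
    if current:
        cues.append((" ".join(t for t, _, _ in current), start, current[-1][2]))
    return cues
```

Feeding in per-word timings from an STT engine, each emitted cue carries the start time of its first word and the end time of its last, which is what keeps the resulting captions time-synced to the audio.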

Digital Nirvana Appoints Keith DesRosiers to Director of Sales Solutions

Digital Nirvana Introduces MediaServicesIQ Custom AI Workflows Portal
