SVG SportsTech On Demand: Digital Nirvana’s Russell Wise on Delivering Video Content Through AI-Powered Tools

While the 2021 NAB Show has been moved to October, the spring season will still feature a cavalcade of new product releases and groundbreaking news coming out of the broadcast technology sector. In an effort to keep the video-production community informed, SVG is hosting a series of SportsTech On Demand video interviews throughout April, May, and June with executives from the industry’s top technology vendors.

Russell Wise, SVP of Sales and Marketing at Digital Nirvana, addresses how sports operations are challenged like never before to deliver engaging video content to their viewers in the tightest-possible timeframes, and how Digital Nirvana’s portfolio of AI- and ML-based solutions is helping sports organizations around the world streamline critical processes – from generating transcriptions to delivering compliant closed captions and beyond.



Listen to the Podcast



Interview Source

Digital Nirvana: AI, Machine Learning are Revolutionising Media Creation, Distribution


Cloud-based artificial intelligence (AI) and machine learning (ML) are bringing new levels of accuracy, efficiency, compliance and cost savings to broadcast operations, according to Digital Nirvana.

In the process, the technologies are continuing to transform and accelerate virtually every aspect of content creation and distribution.

During the AI & Automation breakout session “Revolutionising Media Creation and Distribution Using the Science of AI & Machine Learning” at the March 16 Smart Content Summit, Digital Nirvana representatives highlighted real-world examples of media enterprises that are transforming every aspect of their workflows, from subtitle creation to content distribution, by implementing AI and ML technologies.

In the area of content acquisition, contribution and production, the session first explored a speech-to-text and captioning workflow that Digital Nirvana co-developed with what Ed Hauber, director of business development at Digital Nirvana, identified as a “well-known entertainment, news and information outlet.”


“The single biggest driver was time or the extreme lack of time,” he noted, adding: “Like many operating in the news arena, deadlines are tight and time is in short supply. Our client had a two-hour window in which to ingest, edit, caption and deliver its finished product to” an over-the-top (OTT) provider in the expected format.

Digital Nirvana used automatic speech recognition (ASR) technology to help solve that challenge, according to Russell Vijayan, head of AI products and services at Digital Nirvana.

Digital Nirvana provided the client with the ability to go to specific places in a piece of video and, by clicking on search terms, find all the content it needed “much faster,” allowing it to produce content within the timeline it was given, explained Vijayan.
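A workflow like this – jumping straight to the places in a video where a search term is spoken – can be approximated with an inverted index built over timestamped ASR output. The sketch below is purely illustrative; the segment format and function names are assumptions, not Digital Nirvana’s API:

```python
# Illustrative sketch: index timestamped ASR transcript segments so an editor
# can jump straight to every point where a search term is spoken.
from collections import defaultdict

def build_index(segments):
    """segments: list of (start_seconds, text) pairs from an ASR engine."""
    index = defaultdict(list)
    for start, text in segments:
        for word in text.lower().split():
            index[word].append(start)
    return index

def find_term(index, term):
    """Return every timestamp at which the term occurs, in order."""
    return sorted(index.get(term.lower(), []))
```

An editor searching for “touchdown” would get back a list of timestamps to cue the player to, rather than scrubbing through the footage by hand.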

“The second challenge for that client was the need to deliver accurately captioned content within the time frame allotted” to it, Hauber said. Adding to the challenge was the need to deliver content in full compliance with the OTT service provider’s strict style guidelines for captioning and subtitles, he noted.

The client did not have time to use a turnkey, third-party resource to send out the content, have it captioned and turned around, according to Hauber.

Digital Nirvana used a combination of technologies to overcome that challenge, Vijayan explained. Digital Nirvana created transcripts that could be converted into subtitles or captions quickly.

It then applied a layer of natural language processing (NLP) that would help split the captions so that the user didn’t need to spend much time making any changes, he said. The captions created using this AI-based system have 98-99% accuracy, so very few corrections need to be made, he added.

The third challenge was something that Digital Nirvana is increasingly seeing: operators need to localise their content – specifically the captions – into other languages, in this case, English and Latin American Spanish, Hauber said.

In this case, Digital Nirvana needed to create Spanish captions after the English captions were created, Vijayan said, adding it created a set of algorithms that would dictate how much content needed to be translated to get a much more accurate result. The second use case was to import existing English captions and then quickly translate them to other languages, he added.


Moving on to distribution and video metadata, Hauber briefly discussed applications of AI in the content delivery space, “primarily the mining of insights from within content, which allow our clients to analyse and report on production product placements, perform ad classification and uncover potential compliance issues such as identification of explicit content.”

Hauber pointed to use cases with two multichannel video programming distributors (MVPDs). In both cases, they had original content, but most of their distribution was other people’s content — 300-400 channels’ worth of it, he said, noting both companies used compliance logging and monitoring technology to record distributed content that had been aired. The goal was extracting data from the distributed content, he noted.

The technology can be used for product placement and to replace one product with another in a piece of video content. As an example, Vijayan noted that when a refrigerator is opened on the screen, there may be a Coke in there and a company might want to change it to Pepsi.

According to the company, Digital Nirvana solutions provide customised video-based metadata for better searchability, the ability to accurately identify spots based on content, identification and replacement of brands within content based on consumer data, and a reduction of effort in identifying explicit content.

Click here to access video of the presentation.

Click here to download the presentation.

The Smart Content Summit was produced by MESA and the Smart Content Council, and was sponsored by Microsoft Azure, Whip Media Group, Richey May Technology Solutions, BeBanjo, Digital Nirvana, Softtek, 24Notion, EIDR, The Quorum and Signiant.

Article Source

Digital Nirvana Powers Sports-Centric Captioning Through AI and the Cloud

The Sports Video Group is excited to introduce Digital Nirvana as a new corporate sponsor. The company specializes in delivering elevated efficiency and accelerated caption generation through AI-driven, cloud-based captioning solutions.

“We are committed to technical excellence in the sports industry,” says Russell Wise, SVP, sales and marketing, Digital Nirvana. “We look forward to being part of the Sports Video Group community and contributing to advances in production workflows.”

Digital Nirvana’s knowledge management technologies empower organizations to create, share, and mine insights from electronic media. The company’s offerings include an advanced speech-to-text (STT) engine for generating accurate, searchable transcripts of programs, identifying key segments of content, and quickly cutting them into finished programs. Digital Nirvana also focuses on automatically generating accurate closed captions from those transcripts, which can be instantly translated into multiple languages. The company drives custom production workflows through a comprehensive portfolio of solutions including media monitoring and analysis; generation and management of closed captions, subtitles, transcripts, and metadata; and advanced AI-based technologies.

Users are able to capture rich metadata for identifying logos, faces, and objects in a video stream and instantly identify every instance in which a sponsor’s logo is shown to confirm contractual obligations – and down the road to leverage object identification for myriad new sponsorship opportunities.

As cloud-based AI and ML solutions continue to transform and accelerate virtually every aspect of content creation and distribution, Digital Nirvana’s AI-based solutions drive custom production workflows. Digital Nirvana’s compliance-driven solutions deliver unmatched quality, proven versatility, and best-in-class performance to help organizations surmount difficult business challenges and drive rapid and profitable growth.

Customers in the sports production industry rely on Digital Nirvana solutions to improve operational efficiencies, ensure compliance, reduce costs, and expand revenue streams.

“Sports operations are challenged like never before to deliver engaging video content to their viewers in the tightest-possible timeframes,” adds Wise. “Our portfolio of AI- and ML-based solutions is helping sports organizations around the world streamline critical processes – from generating transcriptions to delivering compliant closed captions. And we’re right on the cusp of a new age of AI- and ML-based capabilities that are opening up huge new potential for content repurposing and monetization.”

Recently, the U.S. Tennis Association (USTA) leveraged Digital Nirvana’s AI-driven post-production captioning service Trance to accelerate the efficiency and speed of its caption generation process. As a non-profit organization with more than 700,000 members, USTA is responsible for making high-value video content available for free streaming on its own platform and on social media. These videos range from instructional videos and training courses to up-to-the-minute highlight reels of the most-watched events in the tennis world. USTA’s previous captioning process was time-consuming and inefficient, relying on manual tasks and transcripts generated by a third-party speech-to-text service that lacked an effective captioning integration tool for Adobe Premiere Pro. Depending on the length of the video program, captioning was taking several hours – an unsatisfactory turnaround, especially for timely content such as the latest news from the U.S. Open.

With Trance, USTA was able to reduce the turnaround time for captioning an average video from hours to only 30 minutes. Digital Nirvana’s AI-based speech-to-text algorithms generate a transcript, validate its accuracy, and then burn captions into the video. Trance has met and exceeded USTA’s captioning requirements in several ways: the organization gained a rapid turnaround time that is virtually unheard of in the captioning industry, with captions that are 100-percent accurate and display with minimal delay as the text is spoken.

For more information about Digital Nirvana, click here.

Digital Nirvana Rolls Out Advanced Features for Trance Closed-Captioning and Transcription Solution

Winner of NAB’s 2020 Best of Show Award; New Capabilities Make Professional Captioning Even Faster, Easier, and More Accurate

FREMONT, Calif. —Oct. 26, 2020 — Digital Nirvana, a provider of leading-edge media-monitoring and metadata generation services, today unveiled several powerful new capabilities for Trance, the company’s award-winning, enterprise-grade, cloud-based application for closed captioning and transcription. With Trance 3.1, Digital Nirvana has added new features that use natural language processing (NLP) to automatically convert transcripts into captions and to automatically detect shot changes.

Digital Nirvana’s Trance unites cutting-edge speech-to-text technology and other AI-driven processes with cloud-based architecture to drive efficient broadcast and media workflows. By implementing cloud-based metadata generation and closed captioning as part of their existing operations, media companies can radically reduce the time and cost of delivering accurate, compliant content for publishing worldwide. They also can enrich and classify content, enabling more effective repurposing of media libraries and facilitating more intelligent targeting of advertising spots.

With Version 3.1, Trance is the industry’s first closed-captioning solution to offer NLP-based auto-conversion, which automatically converts transcripts into highly accurate, time-synched closed captions. As part of this exclusive feature, Trance automatically identifies music, silence, and other nontext audio and provides intelligent line breaks for captions. Previously, operators had to manually correct the awkward splitting of words that appeared at the end of lines or dangling punctuation. Now, Trance automatically moves such words to the next line and takes into account multiword proper names that should be kept together. For instance, if “New York” appears at the end of a caption line, Trance will move the full name to the next line. Robust presets and grammar rules ensure that lines do not start with misplaced punctuation, such as the apostrophe in a contraction (e.g., with the contraction “they’re,” Trance will move the entire word to the next line).
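A simplified version of this kind of line-break logic can be pictured as a greedy packer that refuses to split known multiword proper names across lines. The sketch below is a toy illustration under assumed rules – the phrase lexicon and length limit are invented, and it is not Trance’s actual algorithm:

```python
# Toy caption line breaker: pack words greedily, but never split a known
# multiword proper name (e.g. "New York") across two lines. Real systems
# also enforce punctuation rules such as never starting a line mid-contraction.
PROPER_PHRASES = {("New", "York"), ("Los", "Angeles")}  # hypothetical lexicon

def break_caption(words, max_len=32):
    lines, cur = [], []
    for w in words:
        if not cur or len(" ".join(cur + [w])) <= max_len:
            cur.append(w)
            continue
        # Line is full; would the break separate a proper-name pair?
        if len(cur) > 1 and (cur[-1], w) in PROPER_PHRASES:
            carried = cur.pop()          # carry "New" down to join "York"
            lines.append(" ".join(cur))
            cur = [carried, w]
        else:
            lines.append(" ".join(cur))
            cur = [w]
    if cur:
        lines.append(" ".join(cur))
    return lines
```

With a 14-character limit, “Flights to New York are delayed” breaks as “Flights to” / “New York are” / “delayed” – the pair stays together instead of stranding “New” at the end of the first line.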


The latest version of Trance sets another industry standard with automatic shot-change detection, including configurable thresholds for generating alerts when captions fall out of compliance. For instance, Trance can identify the end of a shot and ensure that captions belonging to the shot do not cross over. In addition, Trance 3.1 now includes a frame-rate conversion feature that automatically retimes captions for video that is to be repurposed for multiple platforms. An example is a broadcaster that would like to repurpose a previously aired video for presentation on the network’s website. The captions would go out of sync when the footage was converted to the target frame rate, requiring the broadcaster to create an entirely new caption file manually. Trance automatically converts the original captions from 29.97 fps (for broadcast) and generates a new caption file with time codes in the target frame rate needed for web streaming.
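The retiming step can be pictured as converting frame-based timecodes to wall-clock seconds and back at the target rate. The sketch below is a rough illustration of that idea only; it deliberately ignores drop-frame timecode and the other broadcast subtleties a real conversion must handle:

```python
# Rough sketch of caption timecode conversion between frame rates.
# Real broadcast retiming must also handle drop-frame timecode (29.97 fps);
# this toy version treats fps as exact.

def tc_to_seconds(tc, fps):
    """'HH:MM:SS:FF' -> seconds at the given frame rate."""
    h, m, s, f = map(int, tc.split(":"))
    return h * 3600 + m * 60 + s + f / fps

def seconds_to_tc(t, fps):
    """Seconds -> 'HH:MM:SS:FF' at the given frame rate."""
    whole = int(t)
    frames = round((t - whole) * fps)
    if frames >= round(fps):          # rounding spilled into the next second
        whole, frames = whole + 1, 0
    h, rem = divmod(whole, 3600)
    m, s = divmod(rem, 60)
    return f"{h:02d}:{m:02d}:{s:02d}:{frames:02d}"

def convert_tc(tc, src_fps, dst_fps):
    """Retime a caption cue so it fires at the same wall-clock moment."""
    return seconds_to_tc(tc_to_seconds(tc, src_fps), dst_fps)
```

For example, a cue at 00:00:10:15 in 30 fps material lands at 00:00:10:12 when the video is conformed to 24 fps – the same half-second into the programme.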

“Introduced earlier this year, Trance 3.0 was a major update that brought significant efficiency gains to media companies’ internal captioning processes — helping to open up powerful new business opportunities for distributing compliant content on popular streaming platforms,” said Russell Wise, senior vice president at Digital Nirvana. “But we haven’t rested on our laurels with Trance 3.0; instead, we’re continuing to develop the platform and add new capabilities designed to make the professional captioning process faster, easier, and more efficient than ever and also help broadcasters deliver the best possible experience to viewers who rely on captions.”

More information about Digital Nirvana and its products and services is available at www.digital-nirvana.com.

About Digital Nirvana

Founded in 1996, Digital Nirvana, with its repertoire of innovative solutions, specializes in empowering customers worldwide with knowledge management technologies. Digital Nirvana’s comprehensive service portfolio includes media monitoring and analysis, media solutions and services, investment research services, and learning management services. Customers rely on Digital Nirvana to improve operational efficiencies, ensure compliance, reduce costs, and protect revenue streams. Digital Nirvana’s compliance-driven solutions offer customers unmatched quality, proven versatility, and best-in-class performance that help organizations streamline operations and gain a competitive advantage. Digital Nirvana is headquartered in Fremont, California, with global delivery locations in Hyderabad and Coimbatore, India.

How AI can accelerate content creation

Media companies are in a race to create volumes of compelling, high-quality content for many different markets and distribution channels. But doing so has its challenges, explains Russell Wise.

Russell Wise is Senior Vice President of Sales and Marketing at Digital Nirvana.


For one thing, the Covid-19 pandemic has turned the media industry on its head. Most media companies are struggling valiantly to produce content remotely at a time when in-person contact is restricted. That means remote production and cloud-based tools have become more important than ever.

The same is true for repurposing content. Even in the best of times, media companies have been keen to monetise their existing content. After all, some of these companies have vast asset repositories, and what better way to get more bang for the production buck than by using an asset in multiple ways, such as localising it for a new audience in another country? In this time of limited production options, repurposing content can be a critical source of revenue.

Then there’s the direct-to-consumer model, in which media companies avoid distribution points and go over the top directly to the viewer. To do so, they must be able to go through their libraries to sort, find, create, and distribute a piece of content, usually in four or five different versions, depending on the customer, geography, and so on.

Even at the highest level, corporations like NBC are reassessing their strategies in light of the pandemic, deciding what media products they want to bring to the market and what technologies they need to make it happen. Those three scenarios – remote production, content repurposing, and direct-to-consumer delivery – are driving media companies to seek help creating compelling and compliant content more quickly, while adhering to government and internal standards and practices at the time of distribution. Most media companies are taking a serious look at AI and machine learning to help with this task.

There are clearly some discrete applications that are prime candidates for AI.

Content enrichment is one such example. At any time, you can add intelligence in the form of metadata; and at any point in the chain, you can produce better, more targeted content more quickly – and a lot more of it.

In pre-production and post-production workflows, a classic use case is to enhance the metadata of existing assets. AI can “watch” or “listen” and tag content in an existing library faster and more accurately than humans ever could. Or media companies can rely on AI to index any incoming feed so that metadata already exists before the content even enters the main system. In either case, having more – and more accurate – metadata speeds up the search process and accelerates content creation.

In one real-world example, a news website must deliver video within one hour of hitting post-production. The company uses speech-to-text engines to tag the video on its many incoming feeds, which makes it possible to meet deadlines and provide captions. Likewise, two major sports leagues are using AI for real-time captioning during live sports broadcasts. In all cases, AI helps broadcasters get content to air faster with better-quality captions – and without using human labour to do it.

Repurposing and localisation is another area where AI is critical. Once there is enriched content in the repository, it’s easy for media companies to repurpose it. For instance, they can use AI to translate content and localise it for other geographies. That’s exactly what one Spanish-language broadcaster is doing. AI takes Spanish content and captions it in different languages for different markets.

AI is also very useful where quality assurance is concerned. So, on the far end of the chain, for instance, there’s content distribution. This is where broadcasters check to see whether the content is compliant with local government standards, like the FCC’s, or their own internal standards, like a style guide. They can use AI to assess, for example, the quality and accuracy of captions, which is becoming an issue in countries with stringent captioning laws.

Monitoring is another element worth considering. AI comes in handy when weeding out objectionable content, such as unacceptable language or images that would violate strict rules if they were to go on air. AI engines such as image and speech recognition can monitor and automatically flag such issues for review, rather than requiring a human to view the content continuously. Likewise, AI can automatically quantify logo insertions, product placements, ads or even the number of times a given person appeared within a broadcast – all information that can correspond to billing.

AI technologies have improved substantially in recent years in terms of accuracy, but the big barrier to adoption in the media industry has been practical application – that is, how to insert AI into the content workflow. Take speech-to-text, for example. Broadcasters can get a lot of value from a speech-to-text engine, but it’s difficult for them simply to order one. It has to come with management tools. It has to have a good UI and basic user functionality to deliver its full benefit. Fortunately, there are companies that have built a solid workflow that lets broadcasters harness the power of AI.

For instance, some products allow users to upload media through a portal to a cloud-based system. Once the content is there, the system essentially does all the work – transcription, translation, captioning, and monitoring. Users can set up presets to publish captions in the format required by the distributor, which is a pretty big thing, especially with Netflix. There are some basic tools for, say, editing the transcription, sort of like a word processor. There’s also a set of management tools that let users do things like assign, handoff, and track the progress of jobs. And when it’s time for content delivery, a content monitor automatically checks for compliance, quality and more.

The big benefits of AI in the content creation workflow are increased speed and reduced effort. The whole idea is to take the rudimentary, repetitive work away from humans. This way, humans can be creative while repetitive tasks are delegated to AI.

Digital Nirvana’s AI captioning solution powers workflow for US tennis body

The AI-powered workflow Trance looks to eliminate most of the manual work by providing captioning in a fraction of the time.

Digital Nirvana has announced that its Trance automated postproduction captioning solution with advanced AI is powering the closed captioning workflow for the major broadcaster of tennis sports content.

Integrated with Adobe Premiere Pro, Trance promises to improve the speed and efficiency of the league’s caption generation process while freeing technical personnel to focus on the creative aspects of their jobs.

The Trance application aims to unite STT technology and other AI-driven processes with cloud-based architecture to drive efficient broadcast and media workflows. By implementing cloud-based metadata generation and closed captioning as part of their existing operations, media companies can radically reduce the time and cost of delivering accurate, compliant content for publishing worldwide.

The latest version, Trance 3.0, has a text translation engine that simplifies and speeds captioning in additional languages and automated caption conformance to accelerate delivery of content to new platforms and geographic regions.

“The major broadcaster of tennis sports content aims to give tennis enthusiasts access to its content with minimal delay, especially when it pertains to live events or other timely content — something it couldn’t do with its old captioning process. Now with Trance seamlessly integrated into its Adobe postproduction workflow, the league gets a rapid turnaround time that is virtually unheard of in the captioning industry,” said Russell Wise, senior vice president of sales and marketing for Digital Nirvana. “Not only that, but the major broadcaster of tennis sports content has the industry’s highest caption-quality standards, with captions that are virtually 100% accurate and display with minimal delay, as the words are spoken.”

Since moving to Digital Nirvana, the production team at the major broadcaster of tennis sports content has been able to cut the turnaround time for captioning an average video title from hours to only 30 minutes, with the captioning task completely offloaded from the technical team. In a typical captioning workflow, a video technician simply drags the clip to a “hot folder” in the Digital Nirvana media service portal. The clip is then automatically uploaded to the processing centre, where the Digital Nirvana team uses AI-based speech-to-text algorithms to generate a transcript, validate its accuracy, and then burn captions into the video. Once complete, the captioned video is automatically uploaded to the Premiere Pro timeline.

Digital Nirvana Intros Transport Stream Outage Detection For MonitorIQ

New capability enhances outage detection by inserting black frames in recorded video.

FREMONT, Calif.—Digital Nirvana today unveiled a nearly frame-level-accurate transport stream outage detection capability for its MonitorIQ 7.0 broadcast monitoring and compliance logging platform.

This capability enhances outage detection by inserting black frames into recorded video to indicate the exact instances in which a loss of signal has occurred.

“With our highly accurate transport stream outage detection feature, MonitorIQ takes the guesswork out of understanding the impact of signal loss in the video chain,” said Keith DesRosiers, director of sales solutions at Digital Nirvana. “Customers are notified of the exact length of the outage, and end-users are able to create clips for proof of the exact outage duration.”

Digital Nirvana’s MonitorIQ allows operators to record, store, monitor, analyze, and repurpose content with a minimum of clicks. It natively records content from any point in the video delivery chain, enabling broadcasters to collect and use knowledge about their content to meet regulatory and compliance requirements. The platform also provides access to valuable next-generation content processing and analysis tools.

The new transport stream outage detection feature provides an accurate video record of any spot in the video delivery chain where a signal loss has occurred. Rather than relying on recorded video to detect a loss, MonitorIQ constantly monitors the physical input for any loss.

When a loss of the transport stream input signal is detected, an encoding process immediately starts inserting black frames into the recorded video. When MonitorIQ detects the return of a good input signal, it stops inserting black frames into the video stream.

This process allows MonitorIQ to report highly accurate outage durations and allows end-users to view the actual outage in the browser-based user interface with a black slate inserted into the video.
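Conceptually, the recorder keeps a per-frame record of whether the input was good and slates over the gaps, which makes outage durations trivially reportable. This minimal sketch models that idea with an assumed per-frame signal flag; it is an illustration of the technique, not MonitorIQ’s implementation:

```python
# Minimal model of outage handling: lost frames are replaced with a black
# slate, and contiguous losses are reported as (start_frame, end_frame) spans.

BLACK = "BLACK_FRAME"  # stand-in for an encoded black slate

def patch_and_report(frames):
    """frames: list where None marks a lost input frame.
    Returns (patched_frames, outage_spans)."""
    patched, spans, start = [], [], None
    for i, frame in enumerate(frames):
        if frame is None:
            patched.append(BLACK)
            if start is None:
                start = i                 # outage begins
        else:
            patched.append(frame)
            if start is not None:
                spans.append((start, i))  # outage ended at frame i
                start = None
    if start is not None:                 # outage ran to the end of capture
        spans.append((start, len(frames)))
    return patched, spans
```

Dividing a span’s frame count by the recording frame rate gives the outage duration to report; at 29.97 fps, a 4-frame span corresponds to roughly 133 ms of signal loss.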

Digital Nirvana Offers Continued Support and Migration Path for Volicon Observer Users

FREMONT, Calif.—Aug. 24, 2020—With support for Volicon Observer now ended, Digital Nirvana, a provider of leading-edge media-monitoring and metadata generation services, is offering ongoing technical support to users of Volicon Observer as well as the option to migrate to the MonitorIQ broadcast monitoring and compliance logging platform.

“Now that Volicon Observer has reached end of life and is out of support, it’s critical for media operations that were using it to find options to keep it functioning,” said Russell Wise, senior vice president of sales and marketing for Digital Nirvana. “Additionally, when Volicon users are ready in the future, our MonitorIQ system offers the best migration option with the best match to the Observer feature set. What’s more, the core developers of the Volicon product line are now at Digital Nirvana. Who better to help former Observer users confidently transition to a new system than the experts who know both systems best?”

Refined by experts, including architects of the original Volicon Observer product, MonitorIQ is a secure, easy-to-use solution that allows broadcasters to record, store, monitor, analyze, and repurpose content. The platform enables broadcasters to collect and use knowledge about their broadcast content to meet a wide range of regulatory and compliance requirements. Built on a reliable and secure Linux platform, it is extensible into broadcast operations with open APIs. With MonitorIQ, broadcasters gain access to AI-based cloud microservices for closed caption generation; caption quality assessment; caption realignment; and video intelligence for objects, ads, logos, and facial recognition.

MonitorIQ gives broadcasters all the key features of Volicon’s beloved Observer product and improves upon each one to provide unparalleled broadcast monitoring and compliance capabilities. The latest version, MonitorIQ 7.0, has an updated and intuitive web interface, core feature improvements, and new cutting-edge capabilities, all of which combine to make everyday tasks effortless.


Digital Nirvana AI in the cloud with MediaServicesIQ and MonitorIQ for compliance

FREMONT, Calif. — July 24, 2020— Digital Nirvana announced in July the launch of MediaServicesIQ, a comprehensive suite of cloud-based microservices that leverage advanced artificial intelligence and machine learning (ML) capabilities to streamline media production, postproduction, and distribution workflows. MediaServicesIQ offers a comprehensive portal from which users can access Digital Nirvana’s high-performance, AI-powered technology solutions: speech-to-text, content classification, subtitle generation and compliance, and a video intelligence engine for advertisements, logos, objects, and faces.


MediaServicesIQ offers a core set of AI capabilities and makes them available in a layer that can be easily accessed from newsrooms, live sports and entertainment productions, post houses, and other multimedia operations to accelerate critical processes, reduce mundane tasks, and free up staff to do more creative work.

Building on custom or standard in-house production and monitoring workflow platforms, MediaServicesIQ provides access to a collection of high-performance AI capabilities in the cloud – including speech-to-text, video intelligence, and metadata generation – that can be orchestrated to provide intelligence and usable insights.

These capabilities enable intelligent, immediate reporting and feedback on content quality and compliance, positioning broadcasters to better meet regulatory, compliance, and licensing requirements for subtitles, decency, and ad tracking.


MediaServicesIQ provides an easy-to-access portal to these services through custom-developed workflows or through continuous access to popular Digital Nirvana applications including MonitorIQ 7.0, MetadatorIQ and Trance.

An integration of AI microservices with Digital Nirvana’s MonitorIQ 7.0 compliance logging system enables powerful video intelligence applications and insights, including:

  • The ability to record, archive and retrieve content for compliance, quality of service and insights into broadcast content
  • Automatic transcription of content from live broadcasts and commercials
  • Automatic detection of logos, objects, faces and shots
  • Automatic extraction of on-screen text
  • The ability to identify commercial breaks in recorded content
  • The ability to identify confidential words or topics in logged content, as well as the classification of incoming advertising material for confidential content
  • Automatic reporting for volume compliance, QoE, SCTE inserts, ad detection and identification
  • Automatic content classification
  • The ability to assess the quality and compliance of captions


MediaServicesIQ integrates with Digital Nirvana’s MetadatorIQ to provide automated metadata creation in pre-production, production and live content workflows.

Optimized for Avid MediaCentral | Production Management environments, MetadatorIQ applies advanced AI- and ML-based content analytics to automatically generate better-structured, more detailed, and more accurate metadata. Media operations benefit in two main ways: first, by saving significant time through metadata generated ahead of need, and second, by giving producers the ability to focus immediately on the assets they need.

For automatic caption and subtitle generation and subtitle quality compliance, MediaServicesIQ provides seamless access to Digital Nirvana’s Trance 3.0 toolset.

Trance combines state-of-the-art STT technology and other AI-based processes with cloud-based architecture to bring metadata and subtitle generation into existing operations, enabling media companies to radically reduce delivery times and costs of accurate and compliant content for worldwide publication. They can also enrich and classify content, enabling more effective reuse of media libraries and facilitating smarter targeting of commercials.

MonitorIQ

The MonitorIQ broadcast monitoring and compliance platform provides a seamless migration path for broadcasters that used Volicon Observer, which reached end of support in June 2020. Development and refinement of Observer’s feature sets have been transferred to MonitorIQ, giving broadcasters more in-depth monitoring of transmitted content and greater value from those assets.


MonitorIQ goes beyond regulatory and compliance requirements, giving broadcasters access to valuable next-generation tools for content processing and analysis.

Digital Nirvana offers MonitorIQ as a secure and easy-to-use solution that allows broadcasters to record, archive, monitor, analyze and reuse content.

The platform enables broadcasters to collect and use knowledge about their broadcast content to meet a wide range of regulatory and compliance requirements.

Built on a reliable and secure Linux platform, it is extensible to broadcast operations with open APIs.

With MonitorIQ, broadcasters gain access to AI-based cloud microservices for subtitle generation, subtitle quality assessment, caption realignment, and video intelligence for objects, advertising, logos, and facial recognition.