AI offers enhanced Media Asset Management (MAM) for Broadcast

The sad history of video archives, whether at a TV station, network, production center, post-production house or even a non-broadcast corporate or government institution, is that the desire to find the right content has always outstripped the ability to do so.
[Image: The sheer amount of archive footage available to broadcasters makes it difficult to find specific content.]

Whether it was a TV station with ¾-inch U-matic or ½-inch Betacam cassettes on shelf after shelf in some out-of-the-way tape library room, a film studio with a similar but larger archive of movies and other film content, or NAS or SAN storage, RAIDs, JBODs and special-purpose Media Asset Management systems, the concept is clear: it makes sense to hold onto valuable footage that can be reused, re-monetized and, in the case of some movies, refurbished for re-release.

Anyone who ever worked in a newsroom in the days of videocassettes saw this scenario or some variation play out more than once: a reporter, editor or news producer squirreling away a certain cassette with key footage that would be in demand for the foreseeable future just to avoid having to search the shelves or badger colleagues to find it.

That scenario illustrates a fundamental truth. Archived footage is only valuable if you can find it.


The introduction of Media Asset Management systems

The genesis of Media Asset Management (MAM) systems was the desire to unlock this value by making stored content, whether online, nearline or in deep archive, discoverable and accessible. The market has rewarded MAM vendors for their efforts and looks set to continue doing so.

In its “Global Media Asset Management (MAM) Solutions” report, market research and advisory firm Technavio forecasts that the worldwide MAM market will grow by $6.74 billion between 2020 and 2024.

Many factors contribute to this expected growth: the dramatic increase in the number of people around the world who are connected to the internet and want to stream OTT content; media professionals’ desire for greater workflow efficiency; the growth of digital commerce and the digital advertising that supports it; and even media companies’ desire to protect staff from COVID-19 by letting them work from home with access to content stored in a cloud-based MAM.

Clearly, the desire to store and access media assets is strong. But what about the ability to find the right content stored in a MAM system? Has that kept pace? Even more important, are MAMs offering more than storage and discovery? Can they be more helpful in generating new content?


The problem with Media Asset Management (MAM) today

The weakest link in the broadcast media asset management chain is metadata generation. While today’s digital cameras can automatically generate some metadata, such as GPS location, camera settings, lens used, scene information, date, time and other useful details, that only goes so far.
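To make the gap concrete, here is a minimal sketch of what that automatic, in-camera metadata typically amounts to. It uses ffprobe (part of FFmpeg) to read the tags embedded in a media file; the clip name is hypothetical.

```python
# Minimal sketch: read the metadata a camera or file already carries,
# using ffprobe (part of FFmpeg). Everything returned is technical
# (codec, timecode, creation time); nothing describes what is on screen.
import json
import subprocess

def embedded_metadata(path: str) -> dict:
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True)
    return json.loads(out.stdout)

meta = embedded_metadata("card01/A003C007.mxf")  # hypothetical clip name
print(meta["format"].get("tags", {}))            # e.g. creation_time, timecode
```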

More detailed information, such as who or what is in the shot, the context of the shot (a press conference at the governor’s office, for example) and key words spoken that provide further context, must be entered manually, if it is entered at all. That defeats, at least partially, one of the chief reasons for integrating a MAM into a media workflow: enhancing workflow efficiency.


How AI can help discover lost content

Recently, however, artificial intelligence algorithms have become available that not only remove the drudgery of manually reviewing footage and generating this level of metadata, but also do so far faster than any human could, enriching the metadata of footage already in storage and thereby more fully unlocking its hidden value.

AI algorithms such as speech-to-text, object recognition and facial recognition, available via TVU Networks’ MediaMind AI engine, can generate metadata on a frame-by-frame basis for content stored on premises or in the cloud, in a MAM or on any other existing digital storage medium, making it far easier to find and use the precise video clip desired.
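MediaMind’s internal APIs are not documented in this article, so the following is only a rough sketch of the frame-sampling pattern such an engine uses, with OpenCV’s bundled Haar cascade standing in for a production face-recognition model; the file paths are hypothetical.

```python
# Rough sketch of frame-by-frame metadata generation. OpenCV's Haar
# cascade stands in for a production recognition model; a real engine
# (speech-to-text, object and face recognition) would emit much richer
# records. Paths are hypothetical.
import json
import cv2

def index_video(path: str, out_path: str) -> None:
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    records, frame_no = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_no % int(fps) == 0:  # sample roughly one frame per second
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = detector.detectMultiScale(gray, 1.1, 5)
            records.append({
                "timecode_s": round(frame_no / fps, 2),
                "faces": len(faces),  # a real engine would emit identities
            })
        frame_no += 1
    cap.release()
    with open(out_path, "w") as f:
        json.dump(records, f, indent=2)  # sidecar metadata for the MAM to index

index_video("archive/governor_presser.mxf", "archive/governor_presser.meta.json")
```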

The implications of having this level of metadata are greater still, in that it has the potential to transform MAMs forever. How? By making each digital storage device holding media simply one of countless storage nodes in a ubiquitous, virtual MAM, available to anyone with the right permissions at any time, from any place with an internet connection.
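As a thought experiment, the sketch below shows one way such a virtual MAM could federate storage nodes behind a single searchable index. The node names, fields and keyword search are illustrative assumptions, not a description of any shipping product.

```python
# Thought-experiment sketch of a "virtual MAM": every storage location is
# just a node, and one metadata index spans all of them. Node names,
# fields and the keyword search are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Clip:
    node: str      # which storage node holds the essence
    uri: str       # where to fetch it (local path, S3 URI, ...)
    keywords: set

@dataclass
class VirtualMAM:
    index: list = field(default_factory=list)

    def register(self, node, uri, keywords):
        """Any device, on premises or in the cloud, can publish its clips."""
        self.index.append(Clip(node, uri, {k.lower() for k in keywords}))

    def search(self, query):
        """Return clips whose AI-generated keywords overlap the query."""
        terms = set(query.lower().split())
        return [c for c in self.index if terms & c.keywords]

mam = VirtualMAM()
mam.register("newsroom-nas", "/mnt/news/0142.mxf", ["Raffensperger", "recount"])
mam.register("cloud-archive", "s3://station/2020/speech.mp4", ["election", "speech"])
print([c.uri for c in mam.search("ballot recount")])  # -> ['/mnt/news/0142.mxf']
```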


The future of MAM for broadcast

AI-enhanced MAM storage in television will touch many workflows. In the newsroom, access to an AI-enhanced MAM will increase the productivity of reporters and editors who need to find archived footage for a story.

During live news and public affairs broadcasts, such a MAM will make it practical for a show producer to find stored clips on the fly that are pertinent to what a guest or host is discussing. And for management, having this level of metadata will make it far easier to complete the documentation regularly required to prove compliance with FCC regulations.

Similarly, in post-production, AI-enhanced MAM storage will make it easier to find the right takes while editing and to quickly identify archived clips for establishing shots, or even suitable substitute shots when plans change.

But this is just the beginning of how AI-enhanced MAM storage will ultimately affect content creation. One day in the not-too-distant future, it will be possible for reporters writing stories, editors populating NLE timelines and others to begin their creative process and have their scriptwriting or editing tool pull up stored clips that AI technology determines might be appropriate for the work in progress.

For instance, as a TV reporter working on a story about a ballot recount in Georgia begins typing the words “Brad Raffensperger” (the Georgia Secretary of State), a clip in which he says “It looks like Vice President Biden will be carrying Georgia” automatically pops up, and the reporter can choose to use or disregard it. Further along in the story, when the reporter types “President Trump counters,” a relevant clip from the 45-minute speech he made Dec. 2, in which he laid out election fraud allegations, pops up. Once again, the reporter can choose to use the clip or leave it out of the story.
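Under the hood, a suggestion feature like that could be as simple as matching the reporter’s typed text against the speech-to-text transcripts the AI has already indexed. The sketch below assumes exactly that; the transcript snippets and filenames are invented, and a real system would likely use embeddings or entity linking rather than naive word overlap.

```python
# Sketch of script-driven clip suggestion: the reporter's typed text is
# matched against speech-to-text transcripts already indexed in the MAM.
# Transcripts and filenames are invented; a production system would use
# embeddings or entity linking rather than naive word overlap.
TRANSCRIPT_INDEX = {
    "clip_0412.mxf": "it looks like vice president biden will be carrying georgia",
    "clip_0597.mxf": "remarks laying out election fraud allegations",
}

def suggest_clips(typed_text, index=TRANSCRIPT_INDEX, min_len=4):
    words = [w for w in typed_text.lower().split() if len(w) >= min_len]
    hits = {}
    for clip, transcript in index.items():
        score = sum(1 for w in words if w in transcript)
        if score:
            hits[clip] = score
    # Highest-scoring clips first; the reporter can use or dismiss each one.
    return sorted(hits, key=hits.get, reverse=True)

print(suggest_clips("Brad Raffensperger says Biden will carry Georgia"))
# -> ['clip_0412.mxf']
```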

When that day arrives, the desire to find the right video clip and the ability to actually do so will be evenly matched. 
