In the time it takes you to read the first two paragraphs of this blog, the modern media supply chain must be able to begin generating detailed metadata for newly ingested content.
The reason is simple. In the world of social media, there's no time to waste. The concept of "film at 11" is long gone, replaced by an endless stream of tweets, Facebook posts, Instagram updates and YouTube videos.
Key to competing and succeeding in this world, whether you are a TV broadcaster or a social media publisher, is reducing the time between when video is shot and when viewers see it.
The TV Newsroom
The problem for TV newsrooms is that there's been a disconnect between the news production workflow for stories aired during newscasts and the demand for immediacy in an era when people reach for their smartphones the moment something happens.
Seeking to remedy this disconnect, many TV station newsrooms have adopted what's been called a "digital first" philosophy. Simply stated, when news breaks the race is on to get stories out to social media, websites and, if a story is particularly urgent, onto the air, even if doing so interrupts regularly scheduled programming. Teasing a story on-air throughout the day, only to hold it for the 5 or 6 p.m. newscast, is now passé.
Bringing digital first to life has required a transition in newsroom workflows over the past few years. At its core is putting the story, not a newscast rundown, at the center of news production. In practice, this story-centric workflow gives reporters and news producers a fast and easy way to publish stories to the digital and social media platforms of their choosing.
AI and Metadata
Digital first, however, only addresses the distribution side of the equation. Upstream of publishing news is an entire workflow, involving scriptwriting, editing, and graphics and title creation, that begins with capturing raw news video and ingesting it into servers.
It's at the point of ingest where things start to fall apart for broadcasters racing against digital publishers to be first. While raw video being ingested may be available immediately throughout the newsroom, detailed metadata, as discussed in previous blogs, is not. That makes it harder to find the specific content relevant to a story, and it delays the moment when viewers actually see that story.
However, with the help of artificial intelligence (AI), it is possible to generate detailed metadata within seconds, enabling stations to go live, whether on air, on a website or via social media, in mere moments.
TVU Networks MediaMind
Our TVU Networks MediaMind is a media supply chain platform that puts this powerful capability in the hands of broadcasters and others, leveraging AI to enable real-time searches of content, including newly ingested content.
Leveraging powerful voice and object recognition technology, the MediaMind platform indexes content with frame-specific metadata in seconds, giving everyone in the newsroom the ability to locate within moments the footage that best tells their story, and to get content to viewers as quickly as possible, regardless of the device they watch on.
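To make the idea of frame-specific metadata concrete, here is a minimal sketch of how per-frame labels from recognition engines could be inverted into a searchable index. This is purely illustrative, not MediaMind's actual implementation; the sample labels, frame numbers and function names are all hypothetical, and a real pipeline would receive them from speech and object recognition rather than hard-coded data.

```python
from collections import defaultdict

# Hypothetical recognizer output: frame number -> labels detected at that
# frame (speech keywords plus recognized objects). In practice these would
# stream in from voice and object recognition engines during ingest.
frame_labels = {
    0:   ["mayor", "podium", "microphone"],
    30:  ["mayor", "crowd"],
    150: ["fire truck", "smoke"],
    180: ["firefighter", "smoke", "hose"],
}

def build_index(labels_by_frame):
    """Invert per-frame labels into a label -> sorted list of frames."""
    index = defaultdict(list)
    for frame, labels in labels_by_frame.items():
        for label in labels:
            index[label].append(frame)
    return {label: sorted(frames) for label, frames in index.items()}

def search(index, query):
    """Return every frame whose metadata contains the query label."""
    return index.get(query, [])

index = build_index(frame_labels)
print(search(index, "smoke"))  # frames where smoke was detected: [150, 180]
```

Because the index maps labels directly to frame numbers, a producer's keyword search can jump straight to the relevant moment in a clip instead of scrubbing through raw footage.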
For more information on this topic, or to reach the blog author, contact us.