AI and Deepfake Videos: Speech Synthesis and Lip Sync Model

Deepfake videos enable an Asian news broadcaster to bolster its expansion strategy with minimal investment

Contributors
Shilpa Ramaswamy

The Client & the Challenge

An Asian broadcaster with 1.3 billion viewers across 173 countries and channels spanning entertainment, news, and sports wanted to expand its reach. They needed an out-of-the-box solution to deliver current affairs programs to audiences viewing content in more than 12 regional languages.

Industry Overview

Disruption

In the post-pandemic ecosystem, with changing consumer habits, the industry is likely to focus on cost efficiency, revenue enhancement opportunities, and profit protection through greater technology integration. Over a four-to-five-year period, global revenue growth is projected at a 4.5% CAGR, with Asia expected to grow at a 17% CAGR and India at an 11% CAGR over the same period, reaching INR 4.5 trillion by 2023.

Business Challenge

Customer Experience

This prominent English news brand wanted to expand into the fast-growing regional market and establish themselves as a premier regional news source. They planned to broadcast news programs focused on local events, testing their regional expansion strategy by re-using newsroom footage with AI-powered synthetic speech and lip-sync.


Solution

We used a blend of vision AI and Deep Learning to solve the customer's challenge. Here is a breakdown of the steps we used:

Step 1: Video Selection

The broadcaster had more than 260,000 hours of video in its archives. To maximise the reusability of the client's media assets, a few anchors from the newsroom's panel were selected from the available footage.

Step 2: Image Synthesis and Lip Synchronization

Once the videos were selected, a pipeline blending Computer Vision, Deep Learning, and Generative Adversarial Network (GAN) technology was assembled for image synthesis and automated lip synchronization.
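
The case study does not disclose the model architecture. As a rough illustration of the adversarial setup such lip-sync pipelines typically rely on, here is a minimal, self-contained PyTorch sketch: a generator conditioned on a reference face frame plus an audio window, and a discriminator that scores audio-visual sync. The class names, tensor shapes, and 80-bin mel-spectrogram features are all illustrative assumptions, not the production system.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LipSyncGenerator(nn.Module):
    """Produces a lip-synced face frame from a reference frame and an audio window."""

    def __init__(self, audio_dim=80):  # 80 mel bins: an illustrative assumption
        super().__init__()
        # Encode the 3x96x96 reference face into a 64x24x24 feature map.
        self.face_enc = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        # Project the audio features to the same spatial shape for fusion.
        self.audio_enc = nn.Linear(audio_dim, 64 * 24 * 24)
        # Decode the fused features back to an RGB frame with synced lips.
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(128, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, face, audio):
        f = self.face_enc(face)
        a = self.audio_enc(audio).view(-1, 64, 24, 24)
        return self.dec(torch.cat([f, a], dim=1))


class SyncDiscriminator(nn.Module):
    """Scores how plausibly a frame's mouth region matches the audio window."""

    def __init__(self, audio_dim=80):
        super().__init__()
        self.face_enc = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.audio_enc = nn.Linear(audio_dim, 64)
        self.head = nn.Linear(128, 1)  # logit: synced vs. off-sync

    def forward(self, frame, audio):
        return self.head(torch.cat([self.face_enc(frame), self.audio_enc(audio)], dim=1))


# Smoke test on random tensors: the generator tries to fool the discriminator.
G, D = LipSyncGenerator(), SyncDiscriminator()
face = torch.randn(2, 3, 96, 96)   # reference anchor frames
audio = torch.randn(2, 80)         # per-frame mel-spectrogram windows
fake = G(face, audio)
adv_loss = F.binary_cross_entropy_with_logits(D(fake, audio), torch.ones(2, 1))
print(fake.shape, adv_loss.item())
```

The design choice that matters in this family of models is that the discriminator judges whether the mouth matches the audio, not merely whether the frame looks real; training against that signal is what drives accurate lip synchronization.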

Step 3: Custom Speech Solution Matched to the Speaking-Face Video

The custom solution converted written text to natural-sounding speech, using deep neural networks trained on human speech to produce human-like, expressive output. The target speech segment was then accurately adapted to a video of a speaking face using a GAN.
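
As an illustration of the text-to-speech half of this step, here is a minimal sketch using the open-source Coqui TTS library as a stand-in; the client's actual speech stack is not disclosed in the case study, and the model name and script text below are assumptions.

```python
# pip install TTS   (Coqui TTS, an open-source stand-in for the
# undisclosed production speech stack)
from TTS.api import TTS

# Load a pretrained neural TTS model (Tacotron 2 trained on recorded human
# speech), the class of deep network the text describes.
tts = TTS(model_name="tts_models/en/ljspeech/tacotron2-DDC")

# Convert a script segment to a natural-sounding waveform; this audio is what
# the GAN lip-sync stage (Step 2) then adapts onto the anchor's footage.
tts.tts_to_file(
    text="Tonight's top story: flood relief efforts continue across the region.",
    file_path="anchor_segment.wav",
)
```

In production, a voice model trained per anchor and per target language would replace the single pretrained model shown here.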


Impact Delivered

  • Solution deployed in 12 regional languages
  • More than 260,000 videos processed

Top Benefits

  • Reduced cost of video campaigns
  • Omnichannel content
  • Hyper-personalized content for the audience

The Akaike Edge

In-built libraries and deep learning models with transfer-learning capabilities
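
As a sketch of what transfer learning means in practice, the snippet below reuses a pretrained torchvision backbone and retrains only a small task head; the backbone choice, head size, and labels are illustrative assumptions, not the firm's actual models.

```python
import torch
import torch.nn as nn
from torchvision import models

# Reuse a pretrained backbone (ResNet-18 here, purely illustrative) and
# retrain only a small task head on in-house data.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False                      # freeze pretrained features
model.fc = nn.Linear(model.fc.in_features, 2)    # new head, e.g. a 2-class task

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
x, y = torch.randn(4, 3, 224, 224), torch.tensor([0, 1, 0, 1])
loss = nn.functional.cross_entropy(model(x), y)  # only the new head learns
loss.backward()
optimizer.step()
```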