Deepfake videos enable an Asian news broadcaster to bolster its expansion strategy with minimal investment

Speech Synthesis and Lip Sync model

The Client

Asian broadcaster

An Asian broadcaster with 1.3 billion viewers across 173 countries, with channels spanning entertainment, news, and sports, wanted to expand its reach. It needed an out-of-the-box solution to deliver current affairs programs to an audience viewing content in 12-plus regional languages.

Akaike’s edge-cloud agnostic solutions offer great flexibility.


Executive Summary

Industry Overview


In the post-pandemic ecosystem, with changing consumer habits, the industry is likely to focus on cost efficiency, revenue enhancement opportunities, and profit protection through greater technology integration. Over a 4-5 year period, global revenue growth is projected at 4.5% CAGR, with Asia expecting 17% CAGR and India 11% CAGR over the same period, amounting to INR 4.5 trillion by 2023.
Business Challenge

Customer Experience

This prominent English news brand wanted to expand into the fast-growing regional market and establish itself as a premier regional news source. It was looking to broadcast news programs focused on local events and to test its regional expansion strategy by reusing newsroom footage with AI-powered synthetic speech and lip sync.

The Akaike Edge

Inbuilt libraries, DL models with transfer learning capabilities

Experienced ML and DL Ops teams


Step 1.

TTS and Video Synthesis

The broadcaster had more than 260,000 hours of video in its archives. To maximize the reusability of the client's media assets, footage of a few anchors from the newsroom's panel was selected from this archive.

Step 2.

Image synthesis and lip synchronization

After video selection, an AI pipeline was assembled for image synthesis and automated lip synchronization, blending Computer Vision, Deep Learning, and Generative Adversarial Network (GAN) technology.

Step 3.

Custom speech solution matched to the speaking-face video

The custom solution converted written text to natural-sounding speech. This was achieved using deep neural networks trained on human speech to produce human-like, expressive output. The target speech segment was then accurately adapted to a video of a speaking face using a GAN.
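The three steps above form a two-stage pipeline: neural text-to-speech, followed by GAN-based lip synchronization of archived anchor footage. The sketch below illustrates only the orchestration of such a pipeline; the function names (`synthesize_speech`, `lip_sync`, `localize_bulletin`) and the file name are hypothetical stand-ins for the broadcaster's actual models, which are not public.

```python
from dataclasses import dataclass

# Hypothetical stand-ins for the production components. In the real system,
# synthesize_speech would call a neural TTS model trained on human speech,
# and lip_sync would call a GAN that adapts the anchor's mouth movements
# to the synthesized audio.

@dataclass
class Audio:
    text: str
    language: str

@dataclass
class Video:
    anchor_footage: str
    audio: Audio

def synthesize_speech(text: str, language: str) -> Audio:
    """Stub: convert script text to expressive speech in the target language."""
    return Audio(text=text, language=language)

def lip_sync(anchor_footage: str, audio: Audio) -> Video:
    """Stub: adapt the anchor's speaking-face video to the synthesized audio."""
    return Video(anchor_footage=anchor_footage, audio=audio)

def localize_bulletin(script: str, languages: list[str], anchor_footage: str) -> list[Video]:
    """Produce one localized clip per target language from a single script,
    reusing the same archived anchor footage each time."""
    return [lip_sync(anchor_footage, synthesize_speech(script, lang))
            for lang in languages]

# One English script, three regional-language clips from the same footage.
bulletins = localize_bulletin("Top stories this hour ...",
                              ["hi", "ta", "bn"],
                              "anchor_01.mp4")
print(len(bulletins))  # → 3
```

The design point the case study makes is visible in the sketch: the expensive asset (anchor footage) is fixed, and each new language costs only a TTS pass plus a lip-sync pass.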

Research shows that 87% of Vision AI projects do not yield the expected results, either because training-data insufficiencies stall the project or because deployment is too slow. Our AI experts can help you accelerate in data-sparse environments.


Get in Touch