Deepfakes: A Popular Indian Actor’s Viral Video Spotlights Big Tech’s Deepfake Problem


What’s in Today’s Article?

  • Why in News?
  • What is Deepfake?
  • How does Deepfake Work?
  • Issues with Deepfakes
  • Legal Framework Related to AI in India
  • Recent Global Efforts to Regulate AI
  • Way Ahead to Curb the Menace of Deepfake Technology

Why in News?

  • In a recent turn of events, popular actress Rashmika Mandanna has found herself at the center of a controversy involving a deepfake video.
  • The video, which has gone viral on social media, shows a woman (in revealing clothes) entering an elevator, but her face has been digitally altered to resemble Mandanna.

What is Deepfake?

  • Deepfake uses deep learning techniques in AI to generate videos, photos, or news that seem real but are actually fake.
  • These techniques can be used to synthesise faces, replace facial expressions, synthesise voices, and generate news.
  • The technique is also used to create special effects in movies. More recently, however, it has been widely used by criminals to create disinformation.
  • For example, in March 2022, Ukrainian President Volodymyr Zelensky revealed that a video posted on social media in which he appeared to be instructing Ukrainian soldiers to surrender to Russian forces was actually a deepfake.

How does Deepfake Work?

  • Deepfake techniques rely on a deep learning architecture called an autoencoder, a type of artificial neural network that contains an encoder and a decoder.
  • The input data is first compressed by the encoder into an encoded representation; the decoder then reconstructs this representation into new images that are close to the input images.
  • Face-swap deepfake software works by combining autoencoders for two faces, typically with a shared encoder and a separate decoder per identity, so that a face encoded from one video can be decoded as the other face.
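The shared-encoder, two-decoder idea above can be sketched in a few lines. The following is a minimal, illustrative NumPy example using plain linear autoencoders on random stand-in data; the dimensions, learning rate, and training loop are assumptions for demonstration, not the architecture of any real deepfake tool.

```python
# Minimal sketch of the shared-encoder / two-decoder idea behind
# face-swap deepfakes. Linear layers and random vectors stand in for
# real convolutional networks and face images (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

D, H = 16, 4           # "face" dimension and latent dimension (assumed)
lr, steps = 0.05, 500  # gradient-descent settings (assumed)

# One shared encoder, one decoder per identity.
W_enc = rng.normal(scale=0.1, size=(D, H))
W_dec_a = rng.normal(scale=0.1, size=(H, D))  # decoder for face A
W_dec_b = rng.normal(scale=0.1, size=(H, D))  # decoder for face B

faces_a = rng.normal(size=(32, D))  # stand-in data for identity A
faces_b = rng.normal(size=(32, D))  # stand-in data for identity B

def mse(x, y):
    return float(np.mean((x - y) ** 2))

err_before = mse(faces_a @ W_enc @ W_dec_a, faces_a)

for _ in range(steps):
    # Train both autoencoders; the encoder is updated on both identities,
    # which forces it to learn a representation shared by the two faces.
    for X, W_dec in ((faces_a, W_dec_a), (faces_b, W_dec_b)):
        Z = X @ W_enc           # encode into the shared latent space
        err = Z @ W_dec - X     # reconstruction error for this identity
        grad_dec = Z.T @ err / len(X)
        grad_enc = X.T @ (err @ W_dec.T) / len(X)
        W_dec -= lr * grad_dec
        W_enc -= lr * grad_enc

err_after = mse(faces_a @ W_enc @ W_dec_a, faces_a)

# The "swap": encode a face of identity A, decode with B's decoder.
swapped = (faces_a @ W_enc) @ W_dec_b
```

Because the encoder is trained on both identities while each decoder sees only one, decoding A's latent code with B's decoder renders A's expression and pose with B's appearance, which is exactly the face-swap trick.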

Issues with Deepfakes

  • Spread misinformation and propaganda: Deepfakes seriously compromise the public’s ability to distinguish between fact and fiction. For example, recent events that never happened include -
    • Football fans in a stadium in Madrid holding an enormous Palestinian flag.
    • A video of Ukrainian President Volodymyr Zelenskyy calling on his soldiers to lay down their weapons.
  • Can depict someone in a compromising and embarrassing situation: For instance, deepfake pornographic material of celebrities not only amounts to an invasion of privacy, but also to harassment (especially of women).
  • Deepfakes have been used for financial fraud: In 2019, scammers used AI-powered voice software to deceive the CEO of a U.K. energy company into thinking he was speaking with the CEO of the German parent company over the phone.
    • As a result, the CEO transferred a large sum of money (€220,000) to what he thought was a supplier.
  • Deepfakes could lead to the ‘Liar’s Dividend’: This refers to the idea that individuals can take advantage of the growing awareness and prevalence of deepfake technology by denying the authenticity of certain content.

Legal Framework Related to AI in India

  • In India, there is no law that specifically targets the use of deepfake technology. However, existing laws covering copyright violation, defamation, and cybercrime can be applied to its misuse.
  • For example, the Indian Penal Code (defamation) and the Information Technology Act 2000 (which punishes the publication of sexually explicit material) can potentially be invoked to deal with the malicious use of deepfakes.
  • The Representation of the People Act 1951 includes provisions prohibiting the creation or distribution of false or misleading information about candidates or political parties during an election period.
  • The Election Commission of India has set rules that require registered political parties and candidates to get pre-approval for all political advertisements on electronic media.
  • All of the aforementioned are insufficient to adequately address the various issues that have arisen due to AI algorithms, like the potential threats posed by deepfake content.

Recent Global Efforts to Regulate AI

  • The world’s first ever AI Safety Summit (at Bletchley Park, UK):
    • 28 major countries, including the US, China, Japan, the UK, France and India, along with the EU, signed a declaration stating that global action is needed to tackle the potential risks of AI.
    • The declaration acknowledges the substantial risks arising from the potential intentional misuse of, or unintended loss of control over, frontier AI - especially risks relating to cybersecurity, biotechnology and disinformation.
  • US President’s executive order: It aims to safeguard against threats posed by AI, and exert oversight over safety benchmarks used by companies to evaluate generative AI bots such as ChatGPT and Google Bard.
  • G20 Leaders’ Summit in New Delhi:
    • The Indian PM had called for a global framework on the expansion of “ethical” AI tools.
    • This shows a shift in New Delhi’s position from not considering any legal intervention on regulating AI in the country to a move in the direction of actively formulating regulations based on a “risk-based, user-harm” approach.

Way Ahead to Curb the Menace of Deepfake Technology

  • Companies should respond with tech solutions: While laws could take a long time to bear fruit, the menace of the technology has prompted some online platforms to come up with clear policies on how they will deal with deepfakes. For example,
    • Google announced tools that rely on watermarking and metadata to identify synthetically generated content.
    • The AI Foundation created a browser plugin called Reality Defender to help detect deep fake content online. Another plugin, SurfSafe, also performs similar checks.
  • Startups should work on innovative solutions: For example, OARO Media creates an immutable data trail that allows businesses, governing bodies, and individual users to authenticate any photo or video.
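An "immutable data trail" of the kind described above is commonly realised as a hash chain, where each entry's hash also covers the previous entry's hash, so altering any record invalidates every hash after it. The sketch below is an illustrative assumption of how such a trail could work in principle, not a description of OARO Media's actual system; all record fields and names are hypothetical.

```python
# Illustrative hash-chain sketch of an immutable data trail for media
# provenance (hypothetical design, not any vendor's real product).
import hashlib
import json

GENESIS = "0" * 64  # placeholder "previous hash" for the first entry

def record_hash(record: dict, prev_hash: str) -> str:
    # Canonical JSON of the record plus the previous hash -> entry hash.
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def append(trail: list, record: dict) -> None:
    # Each new entry commits to the hash of the entry before it.
    prev = trail[-1]["hash"] if trail else GENESIS
    trail.append({"record": record, "hash": record_hash(record, prev)})

def verify(trail: list) -> bool:
    # Recompute every hash in order; editing any earlier record
    # breaks the chain from that point onward.
    prev = GENESIS
    for entry in trail:
        if entry["hash"] != record_hash(entry["record"], prev):
            return False
        prev = entry["hash"]
    return True

trail: list = []
append(trail, {"media_id": "clip-001", "event": "captured", "ts": 1})
append(trail, {"media_id": "clip-001", "event": "published", "ts": 2})
```

Because each hash depends on all earlier entries, a verifier who trusts only the latest hash can detect any retroactive edit to a photo's or video's recorded history.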

Q1) What do you mean by infodemic?

An infodemic ("information" and "epidemic") is a rapid and far-reaching spread of both accurate and inaccurate information about certain issues. It is used as a metaphor to describe how misinformation and disinformation can spread like a virus from person to person and affect people like a disease.

Q2) What is ChatGPT?

Chat Generative Pre-trained Transformer (ChatGPT) is a large language model-based chatbot developed by OpenAI and launched in 2022. It enables users to refine and steer a conversation towards a desired length, format, style, level of detail, and language.


Source: Viral ‘Rashmika Mandanna video’ spotlights Big Tech’s deepfake problem, yet again | BT