Impact of Artificial Intelligence (AI) on Elections

16-03-2024

11:10 AM


What’s in Today’s Article?

  • Why in News?
  • Impact of Artificial Intelligence (AI) on Elections
  • Imminent Dangers Posed by AI to the Election Process
  • Recent Regulatory Steps by India to Curtail Misinformation by AI Models

Why in News?

The shadow of large language models (LLMs) looms over elections around the world. Stakeholders are aware that even one relatively successful deployment of an AI-generated disinformation tool could significantly impact both campaign narratives and election results.

Impact of Artificial Intelligence (AI) on Elections

  • In 2018, the Cambridge Analytica scandal brought into mainstream public discourse the impact of social media on electoral politics, and the possibility of manipulating the views of Facebook users using data mined from their private posts.
  • AI can accelerate the production and diffusion of disinformation in broadly three ways, contributing to organised attempts to persuade people to vote in a certain way.
    • First, AI can magnify the scale of disinformation by thousands of times.
    • Second, hyper-realistic deep fakes of pictures, audio, or video could influence voters powerfully before they can be possibly fact-checked.
    • Third, and perhaps most importantly, AI enables the microtargeting of individual voters.
  • AI can be used to inundate voters with highly personalised propaganda, as the persuasive ability of AI models would be far superior to that of the bots and automated social media accounts used so far.
  • The risks are compounded by social media companies such as Facebook and Twitter significantly cutting their fact-checking and election integrity teams.
    • While YouTube, TikTok and Facebook do require labelling of election-related advertisements generated with AI, that may not be a foolproof deterrent.

Imminent Dangers Posed by AI to the Election Process

  • A new study predicts that AI will help spread toxic content across social media platforms on an almost-daily basis in 2024 and could potentially affect election results in more than 50 countries.
    • This could destabilise societies by discrediting and questioning the legitimacy of governments.
  • The World Economic Forum’s Global Risks Perception Survey ranks misinformation and disinformation among the top 10 risks.
    • The easy-to-use interfaces of large-scale AI models enable a boom in false information and “synthetic” content, from sophisticated voice cloning to fake websites.

Recent Regulatory Steps by India to Curtail Misinformation by AI Models

  • The Indian government has asked digital platforms to provide technical and business process solutions to prevent and weed out misinformation that can harm society and democracy.
  • According to the Ministry of Electronics and Information Technology (MeitY), a legal framework against deepfakes and disinformation will be finalised after the elections.
  • Earlier this month, MeitY had issued an advisory to companies such as Google and OpenAI stating that their services should not generate responses that are illegal under Indian laws or that threaten the integrity of the electoral process.
    • The advisory faced a backlash from some startups in the generative AI space over fears of regulatory overreach that could throttle the emerging industry.
  • While the government stressed that the advisory was aimed only at "significant" platforms and not startups, the incident highlights the need for regulators to tread carefully on the narrow line between -
    • Combating AI-linked misinformation and
    • Being perceived as restricting AI-led innovation.

Q1) How might AI-generated disinformation impact elections?

A potential new danger of AI-generated disinformation stems from the fact that it can be used to convincingly impersonate politicians, and use their image or voice to spread falsehoods among their own supporters.

Q2) What is the EU AI Act?

The EU AI Act aims to provide a risk-based framework that imposes varying levels of obligations based on the potential impact and risks posed by different AI applications, with the objective of fostering innovation while safeguarding democracy and environmental sustainability.