
AI Good, Bad and Ugly: Why AI Must Be Regulated

26-08-2023

11:44 AM


Why in News?

  • As Artificial Intelligence (AI) impacts our everyday lives, there has been a lot of discussion about how this exciting yet powerful and potentially problematic technology should be regulated.
  • Recently, Sam Altman (CEO of OpenAI, which developed ChatGPT) emphasised the importance of international cooperation on issues such as AI licensing and auditing.          

 

What is AI?

  • AI is an emerging technology that enables machines to exhibit intelligence and replicate the human capabilities to sense, comprehend, and act.
    • For example, virtual assistants such as Siri display human-like reasoning through computer systems.
  • Applications of AI include natural language processing, speech recognition, machine vision and expert systems. Examples include manufacturing robots, self-driving cars, marketing chatbots, etc.

 

Concerns and the Need for AI Regulations

  • There are three major concerns that explain why there should be international cooperation on the regulation of AI.
    • First, AI could go wrong. ChatGPT, for instance, often gives inaccurate or wrong answers to queries.
    • Second, AI will replace some jobs, leading to layoffs in certain sectors.
    • Finally, AI could be used to spread targeted misinformation, which could influence elections in a country.
  • AI poses a risk to human linguistic, cultural, and geopolitical systems and has the potential to change the way war is waged.
    • For instance, the war in Ukraine has accelerated the deployment of AI-powered drones that “will be used to identify, select and attack targets without help from humans”.
    • Whether fully autonomous killer drones will be programmed to work according to the Geneva Conventions that prohibit the targeting of civilians and non-combatants is an important concern.
    • At present, drones still require a human to choose targets over a live video feed. AI may soon change this, enabling drones to pick their own targets.
  • Highlighting these concerns, a statement declaring that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war” was signed.
  • Subsequently, AI experts and tech giants signed a letter calling for a temporary halt to training systems more powerful than GPT-4, the technology released by a Microsoft-backed startup.

 

Dangerous Nature of Generative AI

  • Generative AI is a type of AI technology that can produce various types of content, including text, imagery, audio and synthetic data.
  • ChatGPT is an example of an LLM (Large Language Model), or Generative AI. Generative AI can be very dangerous.
  • For example, ChatGPT can amplify existing AI risks and increase potential harms such as discrimination, bias, toxicity, and misinformation, as well as security and privacy risks.
  • While disinformation is already a problem across social media, GPT-4 and other LLMs can enable more targeted and effective disinformation campaigns, making it more challenging to determine the truthfulness of information.
  • Privacy is already a real issue online, and GPT-4 could make it easier to infer personal identities, reducing privacy protection even further.

 

Challenges in Regulating AI

  • Profitability and Efficiency
    • The need for regulation of AI will be tempered by its profitability and efficiency.
    • The use of AI has increased manifold over the last few years. It is now used in guiding weapons, driving cars, conducting medical procedures, and even writing legal memos.
  • Fast development of AI-related technology: The processes for developing AI regulation stand in contrast to a scenario in which AI systems are becoming more powerful and having an impact much faster than governments can react.
  • The view that it is premature to talk about AI regulation: There is nothing specific that requires regulation yet, regulation could stifle innovation in an industry that is exploding, and the need is to understand AI’s full potential before it is regulated.

 

Sam Altman’s Suggestions to Regulate AI

  • He suggested various regulatory thresholds based on “how much compute goes into a model.”
  • A model that can “persuade, manipulate and influence a person’s beliefs will be one threshold.”
  • A model that can “create novel biological agents” is another threshold.
  • He suggests that each capability threshold ought to have a different level of regulation and that models of low capability should be kept open for use.

 

Steps Taken at the International Level

  • In February, the US launched an initiative to promote international cooperation on the responsible use of AI and autonomous weapons by militaries.
  • Several forums, such as the US-EU Trade and Technology Council (TTC), the Global Partnership on AI (GPAI), and the Organisation for Economic Co-operation and Development (OECD), are deliberating on the regulation of AI.
  • The recent G-7 Leaders Communiqué also underscored the need for cooperation on AI, including on the impact of LLMs such as ChatGPT.
  • Italy became the first Western country to ban ChatGPT over privacy concerns, the EU is bringing in the AI Act this year, and the US government has released a blueprint for an AI Bill of Rights.

 

Steps Taken by India

  • The Indian government has taken a proactive stance on technology, particularly AI, intending to position India as a global leader in the field.
  • The Indian government sees AI as a ‘kinetic enabler’ and wants to harness its potential for better governance. 
  • The government is harnessing the potential of AI to provide personalised and interactive citizen-centric services through Digital Public Platforms.
  • According to the Ministry of Electronics and Information Technology, India will regulate AI “through the prism of user harm.”

 

Implications of AI Regulation

  • Regulation has implications for constitutional rights like privacy, equality, liberty, and livelihood.
  • It gives rise to a significant constitutional debate between state intrusion and the privacy claims of the individual.
    • In India, while the state has unilateral rights to collect and use our data, it has also given itself the ability to regulate private parties.
    • Private parties and individual citizens could benefit from stronger protections and rights.

 

Way Forward

  • To make thoughtful and constitutionally tenable regulations, our leaders must educate themselves on the technology.


Conclusion

  • The balance between technological gains and the harmful effects of technology is a policy debate that will challenge governance all over the world.
  • India could take the lead in conversations around global regulation for AI, as it hosts the G20 summit later this year.

 


Q1) Why do we need Artificial Intelligence?

The goal of Artificial Intelligence is to create intelligent machines that can mimic human behaviour. We need AI in today's world to solve complex problems, make our lives smoother by automating routine work, save manpower, and perform many other tasks.

 

Q2) What is the limitation of AI in its current form? 

In its current stage, Artificial Intelligence can only be employed in individual solutions to carry out certain AI-powered marketing tasks. There are numerous intelligent marketing solutions available, ranging from AI that optimises and personalises your content to AI tools that help optimise paid advertising. However, as there is no universally applicable solution, using a variety of different tools to complete a variety of artificially intelligent activities can be costly, time-consuming, and messy.

 


Source: The Indian Express