What is the Hiroshima AI Process?
26-08-2023
01:16 PM
What’s in today’s article?
- Why in News?
- About Artificial Intelligence
- Difference Between AI and Regular Programming
- Experts’ Concern with Artificial Intelligence
- What is the Hiroshima AI Process?
- Likely Outcome of the HAP
- Should AI be Regulated Before It’s Too Late?
- Benefits of Regulating AI Outweigh Potential Losses
- Where Does Global AI Governance Currently Stand?
Why in News?
- The annual Group of Seven (G7) Summit, hosted by Japan, took place in Hiroshima on May 19-21, 2023.
- Among other matters, the G7 Hiroshima Leaders’ Communiqué initiated the Hiroshima AI Process (HAP) – an effort by this bloc to determine a way forward to regulate artificial intelligence (AI).
About Artificial Intelligence
- Artificial intelligence (AI) is the ability of a computer or a robot controlled by a computer to do tasks that are usually done by humans because they require human intelligence and discernment.
- The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience.
- AI algorithms are trained on large datasets so that they can identify patterns, make predictions and recommend actions, much like a human would, often faster and at far greater scale.
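The idea of learning patterns from a dataset and then making predictions can be illustrated with a minimal sketch (this example is not from the article; the function names and the toy "next-word" task are assumptions chosen purely for illustration):

```python
# Illustrative sketch: an algorithm that "learns" a pattern from
# training data and then uses it to make predictions.
from collections import Counter

def train_next_word(corpus):
    """Build a simple model by counting which word follows each word
    in the training text -- the 'pattern identification' step."""
    model = {}
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        model.setdefault(current, Counter())[nxt] += 1
    return model

def predict_next(model, word):
    """Recommend the most frequently observed follower -- the
    'prediction' step. Returns None for unseen words."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]
```

Given the training text `"the cat sat on the mat the cat ran"`, the model observes that "cat" follows "the" more often than "mat" does, so `predict_next(model, "the")` returns `"cat"`. Real AI systems use far larger datasets and models, but the train-then-predict loop is the same.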
Difference Between AI and Regular Programming
- Regular programs define all possible scenarios and only operate within those defined scenarios.
- AI ‘trains’ a program for a specific task and allows it to explore and improve on its own.
- A good AI program ‘figures out’ what to do when it meets unfamiliar situations.
- For example, Microsoft Word cannot improve on its own, but facial recognition software can get better at recognizing faces the longer it runs.
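The contrast above can be made concrete with a toy sketch (not from the article; the function and class names are hypothetical, and the "threshold learner" is a deliberately simplified stand-in for real machine learning):

```python
# A regular program: every scenario is spelled out by hand.
# It can never do anything its author did not explicitly write.
def rule_based_label(value):
    if value < 0:
        return "negative"
    elif value == 0:
        return "zero"
    else:
        return "positive"

# A toy "AI": instead of hard-coded rules, it derives its decision
# boundary from training examples, so more data can improve it.
class ThresholdLearner:
    def __init__(self):
        self.threshold = 0.0

    def train(self, examples):
        """examples: list of (value, label) pairs, label 'low'/'high'.
        Places the threshold midway between the two observed groups."""
        lows = [v for v, lab in examples if lab == "low"]
        highs = [v for v, lab in examples if lab == "high"]
        self.threshold = (max(lows) + min(highs)) / 2

    def predict(self, value):
        return "high" if value > self.threshold else "low"
```

`rule_based_label` will behave identically forever, like Microsoft Word; `ThresholdLearner` shifts its threshold as it is trained on more examples, loosely analogous to facial-recognition software improving with use.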
Experts’ Concern with Artificial Intelligence
- Recently, more than 1,000 AI experts and industry figures, including Elon Musk, wrote an open letter calling for a six-month pause in developing systems more powerful than OpenAI’s newly launched GPT-4.
- This AI moratorium has been requested because “powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” according to the letter.
- An example of why there is a need for an AI moratorium –
- As many as 300 million full-time jobs around the world could be automated in some way by the latest AI, according to Goldman Sachs economists.
What is the Hiroshima AI Process?
- At the annual Group of Seven (G7) Summit hosted by Japan, world leaders set in motion an effort to set common rules for governing artificial intelligence with the launch of the “Hiroshima AI Process”.
- At the meeting, the participants agreed that they need to work quickly to identify both the benefits and the risks of generative AI, such as ChatGPT.
- They also plan to continue discussions on how to protect copyright and tackle false information.
- They made a plan for ministers from their countries to meet by the end of the year to compile some basic opinions, with the aim of establishing common rules on promoting trustworthy AI.
- The HAP will work in cooperation with the OECD and the Global Partnership on Artificial Intelligence (GPAI), and will take up discussions on generative AI by the end of this year.
Likely Outcome of the HAP
- For now, there are three ways in which the HAP can play out –
- It enables the G7 countries to move towards a convergent regulation based on shared norms, principles and guiding values;
- It becomes overwhelmed by divergent views among the G7 countries and fails to deliver any meaningful solution; or
- It delivers a mixed outcome with some convergence on finding solutions to some issues but is unable to find common ground on many others.
Should AI be Regulated Before It’s Too Late?
- Artificial Intelligence already raises three key concerns – privacy, bias and discrimination.
- Currently, governments do not have any policy tools to halt work in AI development.
- If left unchecked, it can start infringing on – and ultimately take control of – people’s lives.
- Businesses across industries are increasingly deploying AI to analyse preferences and personalize user experiences, boost productivity, and fight fraud.
- For example, Snapchat, Unreal Engine and Shopify have already integrated ChatGPT into their applications.
- This growing use of AI has already transformed the way the global economy works and how businesses interact with their consumers.
- However, in some cases it is also beginning to infringe on people’s privacy.
- Hence, AI should be regulated so that the entities using the technology act responsibly and are held accountable.
- Laws and policies should be developed that broadly govern the algorithms which will help promote responsible use of AI and make businesses accountable.
- Mandatory regulations on AI can go a long way in preventing technology from infringing human rights.
- They can also help ensure that technology is used for the benefit of end users instead of negatively affecting their lives.
Benefits of Regulating AI Outweigh Potential Losses
- It is true that regulating AI may adversely impact business interests. It may slow down technological growth and suppress competition.
- However, taking a cue from the General Data Protection Regulation (GDPR), governments can create more AI-focused regulations that have a positive long-term impact.
- The GDPR is the European Union’s law that governs the protection of individuals with regard to the processing of personal data and the free movement of such data.
- Governments must engage in meaningful dialogues with other countries on a common international regulation of AI.
Where Does Global AI Governance Currently Stand?
- The rapidly evolving pace of AI development has led to diverging global views on how to regulate these technologies.
- In May 2023, members of the European Parliament reached a preliminary deal on a new draft of the European Union’s ambitious Artificial Intelligence Act.
- The Act envisages establishing an EU-wide database of high-risk AI systems and setting parameters so that future technologies can be included if they meet the high-risk criteria.
- The U.S. does not currently have comprehensive AI regulation and has taken a fairly hands-off approach.
- On the other end of the spectrum, China over the last year came out with some of the world’s first nationally binding regulations targeting specific types of algorithms and AI.
- It enacted a law to regulate recommendation algorithms with a focus on how they disseminate information.
- In the case of India, the Union Minister for Electronics and Information Technology, Shri Ashwini Vaishnaw, said that the government is not considering any law to regulate the growth of AI in India.
Q1) What is the Group of Seven (G7)?
The Group of Seven (G7) is an intergovernmental organization made up of the world's largest developed economies: France, Germany, Italy, Japan, the United States, the United Kingdom, and Canada.
Q2) Is information technology a subject of the Union List or the State List?
Information Technology falls under the Residuary Subjects. Residuary Subjects are subjects that are not present in any of the lists stated in the Constitution. The Union government has the power to make laws on Residuary Subjects.
Source: Explained | The Hiroshima process that takes AI governance global | Indian Express