Digital Jurisprudence in India, in an AI Era

09-07-2024


Why in News?

  • Generative AI (GAI) represents a transformative force with the potential to revolutionise various facets of society.
  • However, this rapidly evolving technology poses significant challenges to existing legal frameworks and judicial precedents designed for a pre-AI world.
  • It is therefore imperative to examine the complexities GAI raises around liability, copyright, and privacy, and to comprehensively re-evaluate digital jurisprudence so that this technology can be governed and harnessed effectively.

Most Contentious Issues Surrounding Internet Governance

  • Safe Harbour and Liability Fixation
    • One of the most contentious issues in Internet governance has been determining the liability of intermediaries for the content they host.
    • The landmark Shreya Singhal v. Union of India judgment (2015) upheld Section 79 of the IT Act, granting intermediaries 'safe harbour' protection contingent on their meeting due diligence requirements.
    • However, the application of this protection to Generative AI tools remains problematic.
  • The Copyright Conundrum
    • The Indian Copyright Act of 1957, like many global copyright frameworks, does not adequately address the complexities introduced by AI-generated works.
    • Section 16 of the Act stipulates that no person is entitled to copyright protection except as provided by the Act.
    • This raises critical questions: Should existing copyright provisions be revised to accommodate AI?
    • If AI-generated works gain protection, would co-authorship with a human be mandatory? And should recognition extend to the user, the programmer, or the program itself, or to some combination of them?
    • The 161st Report of the Parliamentary Standing Committee on Commerce (2021) acknowledged that the Copyright Act is not well equipped to facilitate authorship and ownership by AI.
    • Under current Indian law, a copyright owner can take legal action against infringers, with remedies such as injunctions and damages.
    • However, the question of who is responsible for copyright infringement by AI tools remains unclear.
    • ChatGPT's 'Terms of Use' attempt to shift liability to the user for illegal output, but the enforceability of such terms in India is uncertain.
  • Privacy and Data Protection
    • The landmark K.S. Puttaswamy judgment (2017) by the Supreme Court of India established a strong foundation for privacy jurisprudence, leading to the enactment of the Digital Personal Data Protection Act, 2023 (DPDP Act).
    • While traditional data aggregators raise privacy concerns, Generative AI introduces a new layer of complexity.
    • The DPDP Act introduces the "right to erasure" and the "right to be forgotten."
    • However, once a GAI model is trained on a dataset, it cannot truly "unlearn" the information it has absorbed, raising critical questions about how individuals can exercise control over personal information embedded in AI models; the sketch below illustrates the difficulty.
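A minimal sketch (not from the source) of why erasure is hard to honour once training has happened: deleting an individual's record from the stored dataset leaves the already-fitted model's weights untouched, so the only real remedies are retraining without the record or an approximate machine-unlearning technique. The synthetic data, the scikit-learn model, and all variable names below are assumptions chosen purely for illustration.

```python
# Illustrative sketch (assumed setup): the "right to erasure" vs. a trained model.
# Deleting a row from the dataset does not remove its influence from the weights
# the model has already learned. Synthetic data; requires numpy and scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))            # stand-in for records about individuals
y = (X[:, 0] + X[:, 1] > 0).astype(int)

model = LogisticRegression().fit(X, y)   # the model absorbs all 100 records

X_erased, y_erased = X[1:], y[1:]        # "erase" one individual's record

print(model.coef_)                       # unchanged: the fitted weights still
                                         # encode information from the erased row

# Honouring erasure would mean retraining without the record (or applying an
# approximate machine-unlearning method), not merely deleting the stored row.
retrained = LogisticRegression().fit(X_erased, y_erased)
print(retrained.coef_)                   # generally differs from model.coef_
```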

Arguments Surrounding the Classification of GAI Tools as Intermediary or Active Participant

  • Arguments for GAI Tools as Intermediaries
    • Some argue that GAI tools function much like search engines: they merely respond to user queries, albeit without hosting external links or third-party websites.
    • On this view, they should be considered intermediaries eligible for safe harbour protection.
    • GAI tools generate content based on user prompts, implying that the responsibility for the content lies primarily with the user, not the tool.
  • Arguments Against GAI Tools as Intermediaries
    • Critics argue that GAI tools actively create content, making them more than mere conduits.
    • This active role in content creation should subject them to higher liability standards.
    • The distinction between user-generated and platform-generated content is increasingly difficult to draw.
    • Unlike traditional intermediaries, GAI tools significantly transform user inputs into new outputs, complicating the liability landscape.

Judicial Precedents, Challenges, Real-World Implications, and Legal Conflicts Surrounding the Classification of GAI

  • Judicial Precedent
    • The Delhi High Court's ruling in Christian Louboutin Sas vs. Nakul Bajaj and Ors (2018) introduced the concept of "passive" intermediaries, a description reserved for entities that merely transmit information without altering it.
    • This ruling complicates the classification of GAI tools, as they do not fit neatly into the passive intermediary category due to their active role in content generation.
  • Key Judicial Challenges
    • Courts must grapple with the challenge of distinguishing between user-generated prompts and platform-generated outputs.
    • This distinction is crucial for determining the extent of liability.
    • Liability issues become more complex when AI-generated content is reposted on other platforms by users.
    • Courts must decide whether the initial generation or subsequent dissemination attracts liability.
  • Real-World Implications and Legal Conflicts
    • Generative AI has already led to legal conflicts in various jurisdictions.
    • For instance, in June 2023, a radio host in the United States filed a lawsuit against OpenAI, alleging that ChatGPT had defamed him.
    • Such cases highlight the ambiguity in classifying GAI tools and the resulting complications in assigning liability.
    • AI-generated content can lead to defamation or the spread of misinformation, raising questions about accountability.
    • The debate continues over whether users should bear the primary responsibility for AI-generated content or whether the creators of GAI tools should be held liable.

Potential Solutions and Future Directions to Address the Challenges Posed by GAI

  • Learning by Doing: Temporary Immunity and Sandbox Approach
    • Granting GAI platforms temporary immunity from liability under a sandbox approach would nurture innovation.
    • Developers can experiment with new applications and improve existing models without the immediate fear of legal repercussions.
    • A sandbox environment allows regulators to observe the interactions between GAI tools and users, collecting valuable data on potential legal issues, ethical concerns, and practical challenges.
    • The government should establish regulatory sandboxes where GAI developers can operate under relaxed regulations for a limited period.
    • These sandboxes can be overseen by government bodies or independent regulatory authorities.
    • There should be a feedback loop between developers, regulators, and users to address emerging issues and refine the legal framework based on real-world experiences.
  • Data Rights and Responsibilities: Overhauling Data Acquisition Processes
    • Ensuring that data used for training GAI models is acquired legally is crucial for protecting intellectual property rights and maintaining public trust.
    • Developers must recognise and compensate the owners of intellectual property used in training data, fostering a fair and transparent ecosystem.
    • The government should develop clear and enforceable licensing agreements for data used in training GAI models.
    • These agreements should outline the terms of use, compensation, and rights of the data owners.
  • Licensing Challenges: Creating Centralised Platforms
    • The current decentralised nature of web data makes licensing complex and inefficient.
    • Centralised platforms can streamline the licensing process, making it easier for developers to access the necessary data.
    • Centralised platforms can maintain the quality and integrity of data, minimising biases and ensuring that GAI models are trained on accurate and diverse datasets.
    • The government should establish centralised repositories or platforms for licensing web data, modelled on stock-photo websites such as Getty Images.
    • These platforms can offer standardised licensing terms and facilitate easy access to high-quality data.

Conclusion

  • The jurisprudence around Generative AI is still evolving and demands a comprehensive re-evaluation of existing digital laws.
  • A holistic, government-wide approach and judicious interpretations by constitutional courts are essential to maximise the benefits of this powerful technology while safeguarding individual rights and guarding against harm.

As GAI continues to advance, legal frameworks must adapt to ensure responsible and ethical use, balancing innovation with the protection of societal values. 


Q) What is Generative AI? 

Generative AI refers to a class of artificial intelligence systems designed to create new content, such as text, images, music, or even video. These systems use machine learning models, particularly neural networks, to generate data that is similar to the data they were trained on. Popular examples include OpenAI's GPT-4, which can generate human-like text, and DALL-E, which can create images from textual descriptions.
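As a rough illustration (not from the source), the sketch below generates text from a prompt with a small, publicly available model. The `transformers` library and the GPT-2 checkpoint are assumptions chosen for brevity; systems such as GPT-4 operate at far larger scale but follow the same prompt-in, content-out pattern.

```python
# Minimal text-generation sketch using Hugging Face's `transformers` pipeline
# and the small, publicly available GPT-2 checkpoint (both assumed installed).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The model continues the prompt with text that resembles its training data.
result = generator("Generative AI refers to", max_new_tokens=40)
print(result[0]["generated_text"])
```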

Q) How is Generative AI used in business? 

Generative AI is used in business for a variety of applications, including content creation, product design, marketing, and customer service. For example, it can generate personalised marketing content, automate customer support with chatbots, create new product designs, and analyse large datasets to generate insights. This technology helps businesses improve efficiency, personalise customer experiences, and innovate more rapidly.


Source: The Hindu