This article has been written by Khushi Mishra, a third-year law student at Deen Dayal Upadhyay Gorakhpur University.
Introduction
The Artificial Intelligence (AI) space has seen several developments crucial to its regulation in recent years: the United Nations' resolution on artificial intelligence, the AI Act passed by the European Parliament, AI laws introduced in the U.K. and China, and the launch of the AI Mission in India. The endeavour to establish global AI regulations will be crucial for governance across various sectors in all nations. AI is a transformative technology that aims to simulate human intelligence in machines so they can perform tasks that typically require human cognitive functions. AI is being integrated into countless applications across diverse industries, yielding significant benefits and efficiencies. Its applications span sectors such as healthcare, finance, transportation, customer service, manufacturing, and entertainment. It has emerged not only as a potent catalyst for economic growth but also as a crucial enabler of efficient public services.
The recent passage of the United Nations resolution on artificial intelligence marks a significant turning point in the discourse surrounding AI regulation, ushering in a new phase of dialogue regarding the governance of this transformative technology. The unethical and improper use of AI systems is acknowledged as a significant barrier to achieving the 2030 Agenda for Sustainable Development, undermining ongoing efforts across all three of its dimensions: social, environmental, and economic. A contentious issue highlighted in the UN resolution is the potential adverse effect of AI on the workforce. It is essential, particularly for developing and least developed countries, to formulate a strategic response, as their labor markets are increasingly susceptible to the impacts of such systems. The effects on small and medium enterprises also require thorough assessment. As a pioneering initiative, the resolution has illuminated the future implications of AI systems and underscored the urgent need for collaborative action.
India currently lacks a comprehensive regulatory framework specifically for artificial intelligence (AI). However, several advisories, guidelines, and IT rules are in place that provide some legal oversight for the development of AI, Generative AI, and large language models within the country.
On March 1, 2024, the Indian government issued an advisory mandating that platforms obtain explicit approval from the Ministry of Electronics and Information Technology (MeitY) before deploying any “unreliable AI models, LLMs, or Generative AI software” to users on the Indian internet. Intermediaries are required to ensure their systems do not facilitate bias or discrimination or undermine the electoral process. Moreover, they must label all AI-generated content with unique identifiers or metadata to enable easy identification.
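The advisory does not prescribe any particular labelling format. As a purely illustrative sketch of how a platform might attach a unique identifier and metadata to AI-generated output (the schema, field names, and model name below are assumptions, not anything mandated by MeitY):

```python
import uuid
from datetime import datetime, timezone

def label_ai_content(text: str, model_name: str) -> dict:
    """Wrap AI-generated text with provenance metadata.

    Illustrative only: the advisory requires identification of
    AI-generated content but does not specify this structure.
    """
    return {
        "content": text,
        "metadata": {
            "ai_generated": True,             # explicit flag for easy identification
            "content_id": str(uuid.uuid4()),  # unique identifier per item
            "model": model_name,
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }

labeled = label_ai_content("Sample model output.", model_name="example-llm")
```

In practice, such metadata could be embedded in the file itself (for images or video) or carried alongside the content via an API response, so that downstream consumers can readily identify machine-generated material.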
SCOPE OF AI REGULATION
TERRITORIAL SCOPE
Currently, India lacks specific legal frameworks or regulations that explicitly govern artificial intelligence (AI). As a result, there is no defined territorial scope for the application of AI regulations at this time. This absence of dedicated legislation leaves the regulatory landscape largely uncharted, presenting both opportunities and challenges for stakeholders in the AI domain.
SECTORAL SCOPE
While comprehensive AI regulation is absent, certain sector-specific frameworks have emerged to address the nuances of AI applications in various fields.
FINANCE SECTOR: The Securities and Exchange Board of India (SEBI) issued a circular in January 2019 outlining reporting requirements for AI and machine learning applications within financial systems. This initiative seeks to enhance transparency and accountability in AI utilization, ensuring that financial entities adhere to best practices when deploying these technologies.
HEALTH SECTOR: The National Digital Health Mission emphasizes the need for developing guidance and standards to bolster the reliability and safety of AI systems in healthcare. This strategic approach aims to ensure that AI technologies are implemented in a manner that prioritizes patient safety and data integrity.
COMPLIANCE ROLE
In the current regulatory landscape, there are no specific legal obligations imposed on developers, users, operators, or deployers of AI systems in India. The lack of dedicated regulations means that entities involved in AI must navigate a complex array of existing laws and ethical considerations without clear, tailored compliance frameworks. This situation underscores the necessity for proactive engagement with stakeholders and regulatory bodies to shape future policies that address the unique challenges posed by AI.
REGULATORY LANDSCAPE FOR AI IN INDIA
Although India does not yet have a dedicated regulatory framework for AI, several initiatives and guidelines have been introduced to promote responsible AI development and deployment.
NATIONAL ARTIFICIAL INTELLIGENCE STRATEGY
Launched in 2018 by NITI Aayog, the National AI Strategy, known as #AIFORALL, aims to provide an inclusive approach to AI development.
PRINCIPLES FOR RESPONSIBLE AI
In February 2021, NITI Aayog published the Principles for Responsible AI, furthering the National AI Strategy. This document addresses the ethical implications of AI implementation in India, categorized into system and societal considerations. Societal considerations examine the impact of automation on job creation and employment.
The framework outlines seven key principles for the responsible governance of AI systems: safety and reliability; inclusivity and non-discrimination; equality; privacy and security; transparency; accountability; and the protection and reinforcement of positive human values.
This approach advocates for stringent oversight and responsible practices for high-risk AI systems, ensuring alignment among stakeholders in establishing technology-agnostic governance frameworks. Collectively, these policy documents represent foundational steps toward developing a robust governance structure that fosters responsible AI systems in India.
CHALLENGES
India faces significant hurdles in establishing a coherent legislative framework for the responsible development and governance of artificial intelligence technologies. The absence of such a framework impedes efforts to effectively evaluate and manage the implementation of various AI tools. Existing Indian laws fall short in addressing critical issues such as the dissemination of fake news, the creation of deepfakes, and the biases inherent in AI systems, as highlighted by concerns raised around Google’s GenAI model, Gemini.
One of the pressing challenges is the attribution of liability when AI tools autonomously generate content that is plagiarized or defamatory, or when they are programmed to create malicious software, as demonstrated by the experimental Morris II GenAI worm, named after the historic 1988 Morris worm. Indian legal principles traditionally associate criminal liability with human intent, leaving a significant gap in accountability for AI-driven actions.
THE COPYRIGHT ACT, 1957
This Act stipulates that only original works produced through human authorship are eligible for copyright protection. As a result, AI-generated content is generally deemed non-copyrightable. Recent legal battles in both the US and India, such as Ankit Sahni's unsuccessful attempt to register AI-generated artwork, have underscored the complexities surrounding the copyright status of such creations. Granting authorship rights to AI raises significant practical issues, particularly the potential for perpetual copyright protection given the non-human nature of AI entities. Thus, comprehensive legal reforms are essential before copyright ownership for works produced by AI can be contemplated.
DATA PROTECTION
The Digital Personal Data Protection Act, 2023 allows personal data to be processed only with the individual's consent and for lawful purposes. Additionally, the challenge of enabling AI tools to delete or modify personal data upon withdrawal of consent is compounded by the difficulty of isolating such data from the vast pre-trained parameters that inform an AI model's operation, as well as the inherent complexity of machine unlearning.
In conclusion, the current regulatory framework in India is ill-equipped to manage the nuanced challenges posed by AI technologies, necessitating a comprehensive overhaul to foster responsible AI development.
MEITY
The Ministry of Electronics and Information Technology (MeitY) has spearheaded several initiatives aimed at fostering responsible innovation and advancing AI development in India. Among these are the establishment of specialized committees tasked with formulating comprehensive policy frameworks for AI, the creation of Centres of Excellence focused on the Internet of Things (IoT) across various urban centers, and the development of specialized centers dedicated to virtual and augmented reality, gaming, visual effects, computer vision, AI, and blockchain technology.
Moreover, the National Programme on AI aims to harness transformative technologies for social good, concentrating on critical areas such as skilling, ethics, governance, and research and development. These initiatives illustrate India’s commitment to both regulating the AI landscape and promoting inclusive, innovative growth.
NITI AAYOG
NITI Aayog, the government's premier policy think tank, released the discussion paper “National Strategy for Artificial Intelligence” in 2018, outlining AI's potential impact across vital sectors including healthcare, agriculture, education, smart cities, and transportation. The document proposed the establishment of research centers, a shared cloud platform, and suitable intellectual property frameworks to govern AI innovation effectively.
THE RESERVE BANK OF INDIA
The RBI is developing a regulatory framework to oversee the integration of artificial intelligence in the banking and financial services sectors, prompted by the rapid adoption of AI technologies at major Indian banks such as HDFC Bank, ICICI Bank, State Bank of India, and Kotak Mahindra Bank.
The RBI is also set to implement immediate measures aimed at mitigating challenges such as data security risks and the opacity of AI algorithms.
BUREAU OF INDIAN STANDARDS (BIS)
The Bureau of Indian Standards released a draft Indian Standard in January 2024. This standard delineates guidelines and requirements for the establishment, implementation, maintenance, and continual improvement of AI management systems within organizations. An AI management system encompasses any software platform designed to effectively manage, monitor, and optimize AI operations across various organizational contexts.
INDIA’S RESPONSE TO AI
India's response to the global movement for AI regulation is poised to be pivotal, especially given its status as one of the largest consumer bases and labor markets for technology companies. By 2030, the nation is expected to host over 10,000 deep tech startups, reflecting its burgeoning innovation landscape. The recent approval of a Rs 10,300 crore allocation for the IndiaAI Mission underscores this trajectory, aiming to bolster the AI ecosystem through enhanced public-private partnerships and a robust startup environment.
As India's economy continues to expand, it is essential that its AI strategy aligns with its commitments to the Sustainable Development Goals while ensuring sustained economic growth. This necessitates a balanced approach that leverages AI for innovation while addressing inherent risks. A phased, gradual implementation strategy would be more effective in establishing a fair and inclusive AI framework, ensuring that technological advancement translates into broad-based socio-economic benefits. This approach could position India not just as a participant in the global AI arena but as a leader in responsible and equitable AI development.
AI GROWTH PREDICTIONS THROUGH 2027
This growth is attributed to a surge in enterprise technology expenditure, a burgeoning AI talent pool, and increased investment in artificial intelligence, according to a statement released ahead of the report's publication. The report highlights that India has over 420,000 professionals actively engaged in AI-related roles, underscoring the nation's preeminence in AI skills penetration. This positions India as a key player in the global AI landscape, reflecting both its workforce capabilities and its strategic investments in advanced technologies.
Investment in artificial intelligence is surging, driving a corresponding demand for AI talent in India, projected to grow at an annualized rate of 15% through 2027. This trend underscores the critical need for skilled professionals capable of navigating complex AI landscapes, as organizations increasingly rely on advanced analytics and machine learning to enhance operational efficiency and innovate solutions. As the industry evolves, educational institutions and training programs must adapt to equip the workforce with the necessary competencies to meet this burgeoning demand.
OVERVIEW AND COMPLIANCE CONSEQUENCES
On March 15, 2024, the Ministry of Electronics and Information Technology issued a new advisory reinforcing the due diligence obligations of AI intermediaries.
KEY ELEMENTS OF THE ADVISORY
CLARIFICATION OF RESPONSIBILITIES
The advisory reiterates the critical obligations of AI intermediaries, emphasizing the need for robust processes to ensure compliance with applicable laws and ethical standards. This includes measures for content moderation, user data protection, and transparency in algorithmic operations.
DUE DILIGENCE OBLIGATIONS
Intermediaries are mandated to implement comprehensive due diligence measures, including but not limited to:
- Establishing systems to prevent the hosting and dissemination of unlawful content.
- Implementing user verification processes to enhance accountability.
- Maintaining a robust complaint resolution mechanism.
- Keeping detailed records of content moderation activities.
TRANSPARENCY REQUIREMENTS
The advisory mandates transparency in content moderation policies and algorithms, ensuring users are aware of how content is filtered or removed.
REPORTING OBLIGATIONS
Intermediaries must report incidents of non-compliance and cooperate with government agencies in investigations concerning unlawful activities facilitated through their platforms.
CONSEQUENCES OF NON-COMPLIANCE
LEGAL LIABILITIES
Non-compliance can lead to legal action from regulatory bodies, including hefty fines and penalties. This could extend to individual liability for company executives in severe cases.
OPERATIONAL RESTRICTIONS
The advisory stipulates that persistent non-compliance may lead to the suspension or revocation of operating licenses, effectively curtailing the intermediary’s ability to function within the jurisdiction.
REPUTATIONAL DAMAGE
Non-adherence to due diligence obligations can severely damage an intermediary's reputation, eroding the user trust and confidence that are critical to platform sustainability.
CONCLUSION
As AI continues to advance, it is imperative for stakeholders (developers, users, and policymakers) to engage collaboratively in shaping a regulatory environment that promotes innovation while safeguarding ethical standards and the public interest. The future of AI in India hinges on the establishment of a comprehensive regulatory framework that balances technological advancement with accountability, transparency, and societal well-being. This will be crucial in harnessing the full potential of AI while mitigating the associated risks.
SOURCES
- The Hindu
- Jdsupra.com
- Morganlewis.com