January 24, 2024

Shilpa Ramaswamy

Should Governments Regulate AI or Not? Let’s Debate!



Introduction

India is positioning itself as a key player in the artificial intelligence (AI) sector, with the Ministry of Electronics and IT implementing policies and infrastructure to mitigate ethical risks and biases. With the growth of AI, there is potential for an increase in entrepreneurship and business development in India. The country’s startup market has experienced a remarkable surge of 15,400%, from 471 startups in 2016 to 72,933 in 2022.

This optimism is shared by Indians, as Stanford University’s annual Artificial Intelligence Index report reveals that 71% of Indians hold a positive outlook on AI products. Indian developers contribute significantly to AI projects on GitHub, and AI services such as OpenAI’s ChatGPT are popular among Indian users. However, with governments increasingly adopting AI for governance and citizen services, it is critical to address risks and challenges associated with the technology, especially given AI’s potential impact on employment.

Meanwhile, temporary bans on AI systems like OpenAI’s ChatGPT in Italy and calls for a halt in developing AI systems more powerful than GPT-4, led by tech leaders such as Elon Musk and Steve Wozniak, have sparked discussions on AI regulation. Generative AI systems, capable of creating convincing but fictional content, have raised concerns about fraud and misinformation, underscoring the need for swift regulatory action.

The 3 Major Challenges of Artificial Intelligence

AI presents three major challenges: safeguarding privacy, preventing bias, and avoiding discrimination. Addressing these challenges is vital to protect individuals’ autonomy and quality of life. Across sectors, companies depend on AI to analyze customer preferences, enhance user experiences, improve efficiency, and combat fraud. AI is used in finance to minimize risks, in healthcare for diagnostics and treatments, and across industries like e-commerce, retail, travel, gaming, entertainment, and social media to optimize operations.

As AI integration into daily life continues to grow, it is crucial to establish regulations that promote responsible use and accountability. Although AI-driven innovations offer promising opportunities, they also raise concerns about job losses and discrimination. The rapid growth of AI-centric startups, increased computing power, and expanding consumer data necessitate policies that ensure ethical AI practices.

To address AI capabilities prone to misuse, governments should adopt a clear regulatory framework focused on privacy, integrity, and security of data access, in compliance with standards like GDPR and PSD2. Requiring explainable AI can promote transparency in decision-making and create an effective balance between regulation and innovation.
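To make the explainability requirement concrete, below is a minimal Python sketch (using scikit-learn) of a model whose decisions can be inspected through feature importances. The synthetic data and loan-style feature names are purely illustrative assumptions, not drawn from any actual regulation or production system.

```python
# A minimal sketch of "explainable AI" in practice: a simple model whose
# predictions can be traced back to input features. The data is synthetic
# and the loan-style feature names are hypothetical placeholders.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# Hypothetical tabular data standing in for something like a loan-approval model.
X, y = make_classification(n_samples=500, n_features=4, random_state=42)
feature_names = ["income", "credit_history", "age", "existing_debt"]

model = DecisionTreeClassifier(max_depth=3, random_state=42)
model.fit(X, y)

# Feature importances give a first-order view of what drives the decisions,
# the kind of artefact a regulator could ask an AI provider to disclose.
for name, importance in zip(feature_names, model.feature_importances_):
    print(f"{name}: {importance:.2f}")
```

Shallow, inspectable models like this trade some accuracy for transparency; the broader point is that a regulatory framework can require decision-making to be auditable, whatever technique is used.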

What Do the Top Tech Leaders Have to Say About AI Regulation?

Prominent tech leaders at the forefront of AI development, such as Elon Musk and Sundar Pichai, have shared their perspectives on the need to regulate AI.

A flawed AI system could be more damaging than a flawed motor vehicle, Musk has warned, highlighting the importance of AI safety. Just as the FDA and USDA oversee food and drug safety and the FAA oversees aviation, there should be a comparable regulatory body with clear parameters for artificial intelligence.

Drawing on his experience working with Google co-founder Larry Page, Musk stressed that AI may soon outpace human intelligence, making its behaviour difficult to predict.

Douglas Rushkoff, a professor of Media Theory and Digital Economics at CUNY, compared this development to raising children: AI will be trained on human behaviour, and AI systems will learn from the environment around them just as children do. With this in mind, the most effective way to regulate and influence AI behaviour is by modifying our own actions and values.

If we prioritize rapid value extraction and disregard people’s rights, our AI creations will ultimately adopt and perpetuate these values. To ensure that AI serves the best interests of society, we must first examine our own principles and behaviours, particularly as these technologies gain the ability to observe and accelerate our actions.

As we continue to develop AI tools, we need to articulate our objectives and aspirations for these advanced systems and mould our behaviour accordingly. By doing so, we can shape a more responsible and ethical future for AI integration into society.

Regulating AI effectively, then, starts with examining the values and behaviours that will inform AI learning models, and with being clear about the outcomes we want these technologies to adopt and accelerate.

Google CEO Sundar Pichai emphasized the need for better algorithms to detect issues like spam and deepfake audio and video. Over time, AI systems that cause harm to society should face regulation and consequences. Bill Gates echoed this sentiment, acknowledging the diverse opinions on this matter and pointing out that AI has the potential to address some of the world’s “worst inequities.”
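As a rough illustration of what such detection algorithms involve, here is a hedged sketch of a toy spam classifier in Python using scikit-learn. The example messages and labels are invented for demonstration and say nothing about how Google’s production systems actually work.

```python
# A toy sketch of spam detection as a text-classification problem.
# The handful of messages below is an invented, illustrative dataset;
# real systems are trained on very large labelled corpora.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "Win a free prize now, click this link",
    "Lowest price guaranteed, act today",
    "Meeting moved to 3pm, see agenda attached",
    "Can you review the draft report by Friday?",
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = legitimate

# TF-IDF features plus a linear classifier: simple, fast, and easy to audit.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(messages, labels)

# A new message is scored against the learned patterns; with this tiny
# training set the prediction should lean toward spam (label 1).
print(classifier.predict(["Claim your free prize today"]))
```

Detecting deepfake audio and video follows the same basic pattern at far greater scale, with learned features extracted from media rather than text.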

Why Is It Not Practical to Pause AI Development for Long?

Let’s examine why a proposed pause in AI development may not be an effective solution, and why better regulation and human involvement in AI training matter instead.

The concept of a pause in AI development, akin to a yellow flag in car racing, appears ideal on paper, but the lack of global coordination and oversight makes it unfeasible. As in a race, countries and companies occupy varying positions in AI development, some leading the pack while others lag behind. Unlike a race, however, where track referees ensure rule adherence, there is currently no global mechanism capable of implementing and enforcing a pause on AI advancement.

Organizations like the UN cannot effectively monitor AI development labs, and those leading in AI are unlikely to slow down, fearing competitors will capitalize on the halt.

Thankfully, AGI, the most concerning form of AI, is likely still at least a decade away, which leaves time to develop regulatory mechanisms. Instead of pausing AI research, we must create an international regulatory body to ensure AI safety.

Establishing global consensus and enforcement is challenging but necessary. International organizations must support initiatives like the Lifeboat Foundation’s work on defending against AI threats. Notably, some letter signatories, such as Elon Musk, may be sceptical of such efforts, but laying down guardrails ahead of time is vital, as Bill Gates highlighted in his blog post.

Involving humans in AI training is essential to ensure machines act appropriately in moral situations, according to Oded Netzer, a professor at Columbia Business School. Despite this, recent tech layoffs at Microsoft, Google, and Meta saw AI ethicists let go, potentially putting AI’s responsible development at risk. Ethicists provide diverse perspectives and contextual understanding, and they help diminish bias, significantly improving AI outcomes.

The present need for AI regulation should neither hinder innovation nor delay technology adoption. Rather, the focus should be on investing in AI systems that address social priorities and foster responsible AI development with human involvement.

To stay ahead in the B2B decision-making space, it’s crucial to recognize the ongoing AI regulation conversation and the importance of incorporating human insight into artificial intelligence.

Shilpa Ramaswamy