Google Removes Pledge Not to Use AI for Weapons, Sparking Concerns Over Responsible AI Development

Google has removed a pledge from its artificial intelligence (AI) principles that previously committed the company not to use AI to develop weapons.¹ The change has raised concerns among experts, who believe it could pave the way for AI-powered weapons.

The removed pledge also included a commitment not to develop technology that could cause harm or be used for surveillance in ways that violate international norms. Google’s updated principles instead focus on responsible AI development and deployment, emphasizing human oversight, due diligence, and adherence to international law and human rights.

This decision has sparked debate about the need for stricter regulations on AI development. James Fisher, chief strategy officer at AI firm Qlik, expressed concerns about Google’s decision, highlighting the need for international governance to ensure AI is developed responsibly.

Key Implications:

  • Increased concern about AI-powered weapons: With the pledge removed, experts worry that Google’s AI could be applied to weapons development.
  • Need for stricter regulations: Experts argue that stricter regulations are necessary to ensure AI is developed responsibly, with consideration for human rights and international law.
  • International governance: There is a growing need for international governance to address the complexities of AI development and to ensure that companies prioritize responsible AI practices.
