Google has dropped its pledge not to develop AI for weapons or surveillance from its website this week, a change first spotted by Bloomberg. The company updated its public AI principles page, deleting the "applications we will not pursue" section, which was still live as recently as last week.
When asked for comment, Google directed TechCrunch to a new blog post on “responsible AI.” The post states, “We believe companies, governments, and organizations sharing these values should collaborate to create AI that protects people, drives global growth, and strengthens national security.”
Google’s updated AI principles emphasize reducing harmful outcomes, preventing unfair bias, and aligning with international law and human rights standards.
The company's cloud contracts with the U.S. and Israeli militaries have sparked employee protests in recent years. Google maintains that its AI does not harm humans. However, the Pentagon's AI chief recently told TechCrunch that AI models from some companies are accelerating the U.S. military's kill chain.
Key Takeaways
Shift in AI Policy: Google removed its pledge against developing AI for weapons, sparking ethical concerns over military involvement.
Focus on Responsible AI: Despite the change, Google emphasizes reducing harm, preventing bias, and aligning AI with international laws and human rights.