Should artificial intelligence be used for overreaching surveillance and military weapons? Google used to proclaim its opposition to such uses, but in a sudden and ominous update, the ever-insidious tech giant appears to have delusions of Orwellian grandeur and an increasing indifference to its technology’s harmful impacts on users.
Google has become infamous for suppressing conservatives and censoring facts on its search engine and YouTube platform, among other products, and that is not all. While Google’s AI principles promise to “mitigate unintended or harmful outcomes,” they no longer specifically state that the powerful technology will not be used for military weapons or overreaching surveillance. Those specifics, present in the principles as recently as Jan. 30, have been removed, and Rumble CEO Chris Pavlovski slammed Google for the change.
Responding on X to a report highlighting the change, Pavlovski posted, “Google is a company which pretends to be one thing and is the complete opposite. This is the definition of evil.”
So what specifically did Google change in its new AI standards? In the Jan. 30 version of Google’s AI principles, preserved by the Internet Archive, Google promised not to “design or deploy” AI for “[w]eapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people,” or for technologies “that gather or use information for surveillance violating internationally accepted norms.”
But these passages are conspicuously absent from the newly updated Google AI principles. It seems Google CEO Sundar Pichai’s appearance at Trump’s inauguration (unsurprisingly) betokened no desire for reform. […]
— Read More: pjmedia.com