Luke Perlot


Time for accountability from Big Tech

Artificial intelligence (A.I.) is one of the most transformative innovations in modern economic history, with the potential to rival the economic and social impact of the internet.  The world’s largest tech companies — Microsoft, Apple, Meta, Alphabet, and Amazon — are locked in an A.I. arms race that could be worth trillions of dollars to the companies that can establish a competitive advantage.

A.I. systems require massive amounts of training data, and Big Tech pulls from sources it shouldn’t — scraping personal information, copyrighted works, and proprietary business data, often without consent. 

Microsoft, for example, has deeply integrated OpenAI into its operations.  OpenAI has been accused of illicitly collecting private conversations, medical data, and copyrighted material to train its models.  The New York Times has sued both Microsoft and OpenAI for models allegedly built on stolen content.  Meanwhile, Microsoft’s controversial “Recall” feature, which records everything a user sees or does on his device, is an alarming breach of privacy.

Apple touts itself as a leader in privacy protections, but its A.I. efforts may tell a different story.  The company also partners with OpenAI — despite OpenAI’s well-documented data privacy violations — and is exploring a partnership with Meta, another notorious violator of user trust.  Apple’s strategy of outsourcing unethical data collection while maintaining plausible deniability should concern all stakeholders.

Meta has long been a bad actor in data privacy, and its A.I. ambitions only magnify those concerns.  The company has quietly updated its policies to permit A.I. training on massive amounts of users’ data without their explicit consent.  European regulators have already fined Meta a record $1.3 billion for violating privacy laws.  Consumer rights groups continue to expose its invasive data practices.

Alphabet, the parent company of Google and YouTube, has a track record of exploiting user data.  Google settled a $5-billion lawsuit for secretly tracking users in “private” mode and is under investigation by the European Commission for A.I. models deemed high-risk for privacy violations.  The company’s vast search empire gives it access to an unrivaled data trove, threatening to further erode user privacy around the world.

Amazon, though slower to deploy A.I. consumer products than its tech counterparts, is no less reckless with data.  The company has been caught improperly recording and sharing Alexa users’ interactions, and recently suffered an embarrassing leak when its A.I. chatbot, Q, inadvertently exposed confidential business information.

My organization, the National Legal and Policy Center, has filed shareholder proposals at each of the aforementioned companies to call for greater oversight of A.I. data ethics and to highlight areas where these companies fall short.  Our proposals call on these companies to disclose how they acquire A.I. training data, what measures they take to mitigate privacy risks, and how they ensure compliance with legal and ethical standards.

These companies are playing a dangerous game.  Regulators worldwide are cracking down on A.I.-generated content, consumer awareness of data privacy is growing, and shareholders are recognizing the financial and legal liabilities resulting from unchecked A.I. expansion.  Further, with the new administration’s embrace of Elon Musk and other libertarian-influenced “techno-optimists” — who fervently support open-source A.I. development and oppose Big Brother–style data collection — we believe that privacy will be a core focus of A.I. policy moving forward.

Consumers have made clear that they are frustrated with the lack of options that the major technology companies have given them to protect their data.  McKinsey & Company argued in 2020 that companies that prioritize data privacy will build a competitive advantage over competitors that do not.  Five years later, none of the major players has staked out a competitive advantage in data privacy.  In the rapidly expanding and potentially transformative A.I. industry, small changes in market share could be worth tens of billions to the companies involved.

Our Microsoft proposal received strong support in December, when more than one third of shareholders backed our call for oversight — the highest level of support for any shareholder proposal on the company’s proxy.  That sent a clear message.  Glass Lewis’s backing of our proposal — a rare endorsement from one of the two major proxy advisors that heavily influence voting results — provides further evidence that even at companies where executives hold significant voting power, investors demand A.I. accountability.

Our proposals offer these companies a way forward.  They can embrace ethical A.I. practices now, or they can wait until government intervention and public backlash force their hand.  Either way, transparency, accountability, and responsible A.I. development are critical.

Big Tech must deliver the vast benefits of A.I. innovation in a way that does not come at the expense of privacy and ethics — and shareholders should demand it.

Luke Perlot is associate director of the National Legal and Policy Center’s Corporate Integrity Project.


Image via Pxhere.