


For the past decade or so, the social internet has been largely controlled by secretive algorithms. Designed by tech companies to capture attention and drive engagement, they determine which posts end up in your feeds and which sink like a rock, never to be seen again. These algorithms play a role in polarization, in rocketing ordinary people to overnight fame, and in the spread of extreme, violence-provoking content. They generally operate as black boxes, hidden from academic researchers and the public, despite a push from notable figures in tech and politics to make them more transparent.
But last week, the world got handed a tiny flashlight and the chance to peek inside. For the first time, a major U.S. social-media company, Twitter, posted part of its algorithm for anyone to see. It made public the source code for its “For You” page, and published a blog post from its engineering team explaining how the recommendation system broadly works. The company hailed the move as the first step toward a “new era of transparency.” In a Twitter Spaces conversation, the platform’s CEO, Elon Musk, said the goal was to build trust with users: How else, he asked, would you know if the algorithm was “subject to manipulation in ways that you don’t understand,” whether that be from code errors or state actors?
The move was unprecedented, but this probably won’t go down as a great day in the history of algorithmic transparency. Researchers told me that the code is notable simply by virtue of its existence—they haven’t seen such a release from a major social platform before—but said it has significant limitations. The code and accompanying blog post are missing context that would fully explain why you do or don’t see any given tweet, and Musk has also made a number of decisions that reduce transparency and overall accountability in other respects. When I emailed Twitter’s press address asking for comment on its supposed push for transparency, I received an auto-reply containing a single poop emoji—part of the CEO’s new approach to media inquiries.
What does the code actually reveal? Zachary Steinert-Threlkeld, an assistant professor of public policy at UCLA, said via email that its technical approaches are “pretty standard these days.” He told me, “It is not surprising, for example, that a social graph, community detection, and embeddings are used.” And Twitter still hasn’t provided a look into the larger AI models that work beneath the surface, nor the data they are trained on. Instead, the company has offered limited insight into part of its selection process, which involves pulling 1,500 tweets “from a pool of hundreds of millions” to serve to a user in the “For You” section. (It’s not altogether clear from the company’s blog post why the number 1,500 was selected, or how often those tweets are refreshed.)
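To make that two-stage shape concrete, here is a minimal sketch of candidate retrieval in Python. It is a toy under stated assumptions, not Twitter’s implementation (the released code is largely Scala, and far more elaborate): the Tweet and User classes, the similarity scores standing in for embeddings and community detection, and the even split between followed and unfollowed authors are all invented for illustration. Only the 1,500 figure comes from the company’s blog post.

```python
from dataclasses import dataclass, field

# The one number taken from Twitter's blog post; everything else
# in this sketch is an illustrative assumption.
CANDIDATE_POOL_SIZE = 1_500


@dataclass(frozen=True)
class Tweet:
    author: str
    text: str


@dataclass
class User:
    handle: str
    follows: set = field(default_factory=set)


def retrieve_candidates(user: User, tweets: list, similarity: dict) -> list:
    """Stage 1: narrow a huge corpus to a small pool worth ranking.

    `similarity` stands in for the embedding and community-detection
    signals the blog post mentions: a score for how "close" an
    unfollowed author is to the user's corner of the social graph.
    """
    in_network = [t for t in tweets if t.author in user.follows]
    out_of_network = sorted(
        (t for t in tweets if t.author not in user.follows),
        key=lambda t: similarity.get(t.author, 0.0),
        reverse=True,
    )
    # Assumed here: an even split between followed and unfollowed authors.
    half = CANDIDATE_POOL_SIZE // 2
    return in_network[:half] + out_of_network[:half]


# Toy usage: three tweets instead of hundreds of millions.
me = User("me", follows={"alice"})
tweets = [Tweet("alice", "hi"), Tweet("bob", "news"), Tweet("carol", "meme")]
pool = retrieve_candidates(me, tweets, similarity={"bob": 0.9, "carol": 0.1})
```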
There’s some novel information here about what the system prioritizes. We have a better understanding now of which actions might signal to the system that a tweet deserves more attention, although the complete process is still unclear. One analysis noted that tweets with photos and videos get a bump, and that receiving likes might boost visibility more than replies—but there’s also been disagreement over those conclusions, illustrating the perils of dumping code without context. The tiny flashlight we’ve been given illuminates only one part of a much bigger system, and people are seeing different things within it. (Twitter said in its transparency blog post that it withheld portions of its model to protect user safety and privacy, as well as itself from bad actors, but eventually aims to make more of its product open source.)
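That dispute is easier to picture if you imagine ranking as a weighted sum of predicted engagements, with a multiplier for media. The sketch below assumes that framing; every weight and the boost value are invented numbers, chosen only to show the shape of the computation, and the real values are precisely what analysts could not agree on from the released code.

```python
# Hypothetical scoring pass over the candidate pool. Every weight here
# is an assumption for illustration, not a value from Twitter's code.
ENGAGEMENT_WEIGHTS = {
    "predicted_like": 1.0,      # assumed: a like counts fully
    "predicted_reply": 0.5,     # assumed: a reply counts for half
    "predicted_retweet": 1.0,
}
MEDIA_BOOST = 2.0               # assumed: photos/videos double the score


def score_tweet(predictions: dict, has_media: bool) -> float:
    """Collapse predicted engagement probabilities into one ranking score."""
    score = sum(ENGAGEMENT_WEIGHTS[k] * p for k, p in predictions.items())
    return score * MEDIA_BOOST if has_media else score


# Toy candidates: the text-only tweet has higher predicted engagement,
# but the assumed media boost lets the photo tweet outrank it.
candidates = [
    {"id": "photo", "has_media": True,
     "predictions": {"predicted_like": 0.30, "predicted_reply": 0.05,
                     "predicted_retweet": 0.02}},
    {"id": "text", "has_media": False,
     "predictions": {"predicted_like": 0.40, "predicted_reply": 0.10,
                     "predicted_retweet": 0.01}},
]
ranked = sorted(candidates,
                key=lambda t: score_tweet(t["predictions"], t["has_media"]),
                reverse=True)
print([t["id"] for t in ranked])  # -> ['photo', 'text']
```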
There’s also no reason to believe that the snapshot Twitter offered is still relevant. “We know that last week the Twitter icon was a bird, and today it is a dog,” Cameron Hickey, the director of the Algorithmic Transparency Institute, a project by the nonprofit National Conference on Citizenship to study and monitor the spread of problematic content online, told me in an email. “We can see that they are constantly changing what the platform does, so this moment in time for the recommendation algorithm is likely to quickly become out of date.” Musk has tweeted that the company plans to update its algorithm every 24 to 48 hours with suggestions from the public. But no one is requiring it to disclose every tweak, or holding it to any kind of regular schedule.
Algorithmic transparency is also only one piece of the puzzle. Under Musk’s leadership, Twitter has recklessly pulled down guardrails: It has dramatically downsized the teams dedicated to safety and internal accountability, and haphazardly opened up its blue-check verification system to anyone willing to pay a fee (while removing the actual identity-verification part in the process). Major decisions that affect the user experience are made without clear justification: Over the weekend, the company pulled the blue check off The New York Times’ Twitter account, and today it labeled NPR “state-affiliated media.” Donald Moynihan, a policy professor at Georgetown University who frequently writes on tech governance, noted on Twitter that policies once used to safeguard users “are now being rewritten in obviously nonsensical ways to fit with the whims of its owner.”
As Imran Ahmed, the chief executive of the Center for Countering Digital Hate, put it to me: “Overall, Twitter has become less transparent since Musk, not more, despite showy announcements such as this one.” He cited, for example, recent moves by Twitter to restrict researchers’ access to its data. Historically, although academics have not been able to peer into the actual algorithms that run Twitter, they have been able to access some of the platform’s data for free. Now Twitter is charging them $42,000 to $210,000 a month for the privilege. That makes it more difficult for independent parties to study, say, political polarization on Twitter. “At the same time that they’re making this gesture that some might say is in the right direction, they’re taking away most of the data that most researchers used,” Chris Bail, a professor of sociology, public policy, and data science at Duke University, told me.
Tech companies have good reasons to keep some information locked up. Fully public code would come with some risk, Bail pointed out: People would know exactly how to subvert rules and hack their way toward more visibility. Experts have instead proposed a small, independent group of researchers that would get full access to study and vet these systems, and then report its findings to the public. In a piece for The Atlantic, Rumman Chowdhury, who led a Twitter team dedicated to the responsible use of AI and machine learning before it was gutted by Musk, advocated for legislation that would force tech companies to hand over their code to third-party auditors.
“If Mr. Musk truly valued transparency or the equal expression of all voices in this so-called town square, he would invite outside auditors to conduct and publish independent reviews of the technology,” Liz O’Sullivan, the CEO of Vera, an AI trust-and-safety platform, told me.
That’s not what Musk is doing here. Steinert-Threlkeld wondered whether Twitter itself, rather than the public, would end up being the biggest beneficiary of the change. Any developer on GitHub can now suggest edits to the code. “If bugs are discovered or improvements to the algorithms are suggested and accepted, Twitter will have found a way to replace the thousands of staff who left or were fired,” he said. In other words, Twitter’s “open sourcing” of its algorithm may benefit only Twitter: It has no obligation to accept any changes, after all.
The aim of true, thoughtful transparency is to make social-media platforms a better place for their users—and for the wider range of people affected by whatever happens on them. Musk’s track record suggests that his true priorities lie elsewhere. Musk’s name appears in the code released last week, seemingly confirming reports that he had pressured engineers into building a special system to prioritize his tweets. “This is the first time I’m seeing this,” he said.