Jake Browning

Our Overly Managed Social Media Ecosystems

The current proliferation of Twitter-wannabes--and their failure--suggests people learned the wrong lessons from the rise of first-gen social media. When social media arose, the systems largely tailored themselves around engagement. It was a simple strategy: if some people gravitate towards things like breaking news or pictures of attractive people, then you should assume other people will, too. AI algorithms allowed the systems to automatically amplify trends without understanding them, pushing more people towards popular things without caring about what those things were or why they were popular. This, as you'd expect, created an ecosystem where certain people and companies thrived: if people loved memes or Vines of stupid dances, then you could get a lot of followers--and influence--just by being goofy.


But this also led to conflict: breaking news that attracts news junkies also encourages partisan fighting, which tends to be dominated by aggressive, condescending, name-calling dudes. It discourages quieter voices. Similarly, if your site focuses on pictures of beautiful people, you incentivize a shallow, ad-filled influencer culture and a lot of self-judgment from the rest of us. And, of course, there's always money to be made: news sites specializing in appealing to partisans; sex workers and bots capitalizing on people obsessed with beautiful people; meme accounts that leaned alt-right. These feed off casual users who are easily lulled into mindless consumption of information--something they are always prone to, but which the algorithms amplified.


These weren't necessarily healthy ecosystems, since they often permitted bad behavior and made things deeply uncomfortable for lots of people. But they certainly were thriving, which allowed lots of different species to carve out a little space to do their own thing. Some species were pernicious and needed to be carefully watched and, occasionally, culled. But the majority were mostly harmless and seemed to enjoy following their own silly pursuits (e.g., meme accounts for Warhammer 40k). As a result, lots of folks co-existed well enough, including the majority who mostly just wanted to scroll without posting.


But 2016 changed that. People became convinced that the technology shaped the users--that the algorithms pushing news made otherwise un-newsy people into news junkies who ended up consuming a steady diet of misinformation. (It didn't, but it provided a nice narrative if you think Trump was elected because of Russian bots or whatever.) There were many reasons to change direction--advertisers, politicians, and users were all complaining--but the companies also saw good reasons of their own to re-think the algorithms: not everything that gets lots of attention needs to be amplified.


Since 2016, these platforms have moved to a heavily curated ecosystem: content moderation became far more pervasive, shutting down lots of conversations by bigots, trolls, and abusers; Facebook began to de-amplify news sources, hoping to make its sites less political; and, especially with COVID, misinformation suppression became intense. The focus was two-fold: shut down some of the highly active people making the sites toxic, and change the algorithms to make feeds less super-charged.


This worked well--or well enough--for Facebook and Instagram. They opted for less engagement with news and more engagement with people in your network and shallow stuff that shouldn't make the blood boil (Facebook is always showing me highlights of baseball games). They're duller platforms, to be sure, but they also tend to be less aggravating. And they could coast to some degree because they didn't start out newsy; the news focus came over the course of the 2008 and 2012 elections, when Obama leaned heavily into the platform. The company could simply lean back out after 2016, returning to simpler tasks: showing how much fitter, happier, and more successful all your high school classmates are.


But Twitter couldn't follow; its model really depended on active content producers generating conversations around breaking events, and these events tend to be polarizing. It's hard to have a polarized conversation that doesn't turn toxic--and pre-January 6 Twitter didn't really have an incentive to tone down the toxicity by much. Some people love to hate and troll and be nasty (and to amplify this), and other people love to smugly condemn hate and raise hell about it. Twitter would get rid of the worst of the worst, first in 2017 and then again after January 6. But, on the whole, the high-activity, minimal-regulation site still worked well enough: there were lots of toxic folks, but lots more people willing to fight them and advocate for a less toxic world. And the ability to block or mute could keep the worst off someone's feed. It would never be a money-spinner, but it worked as a functional--if often frustrating or discomforting--ecosystem. And there really wasn't an alternative.


When Musk bought Twitter, he figured out part of this: hate can generate engagement, and Twitter does well when lots of people are engaging. And while pre-Musk Twitter often tried to use the algorithm to tamp down the hate, the algorithm could also be tweaked in the other direction: amp it up and set people loose. Musk didn't care that this scared advertisers; he assumed (evidence be damned) that he could replace the revenue with subscriptions--and that, if Twitter engagement picked up, advertisers would eventually swallow their scruples (or, at least, accept the risks) and advertise on the platform regardless of whether Trump and Kanye were on there.


Musk didn't grasp, though, that this changes the ecosystem. It basically empowers the predators, leading to a more hostile environment for everyone else. That, predictably, leads to an environment where species "die off" by just deleting the app, become "stealthier" by never commenting, or migrate to other platforms. The result is that many types of engagement simply stopped happening: why have a nice conversation if, at any second, a troll might drop in and call you a nasty name? This especially affected diversity: for months now, female academics have been sharing their Bluesky handles, commenting wistfully on a lost Twitter from the before times, and then disappearing from the site. Male academics are increasingly, though a bit more slowly, following them. The holdouts have also started dual-posting on other sites, trying to manage a smooth transition rather than an abrupt exit.


The AI crowd is leaving more slowly, probably because it isn't as academic. But it is changing in composition: the largest and most vocal groups are from industry, and (not surprisingly) made up of a lot of accelerationists and doomers. These groups have drifted from the disreputable margins of AI discourse into the driver's seat as others leave the platform. (It is no surprise that the AI extinction letters and Andreessen's manifesto dropped after Musk took over Twitter.) These views are no more plausible now than they were two years ago, but they seem more sensible if you spend your time on Twitter, since those voices are so much more prominent than they used to be in a more diverse ecosystem. The critics, often academics, have already found other homes.


But Musk's actions led to imitators. Threads was designed to be a Twitter clone that could simply absorb all the disaffected users, and it immediately gained network effects by having millions of signups. But network effects aren't everything; what is really needed is the self-fueling engagement that network effects and amplifying algorithms allow. And Threads opted against this, preferring a heavily curated ecosystem: heavy content moderation, non-amplification of news or breaking stories, limits or censorship of topics like COVID that might produce misinformation, and so on. It's akin to a nature preserve for herbivores: no violence allowed, everyone play nice, don't be a jerk. This makes sense; Facebook and Instagram have similar vibes, so it is "on brand."


But the aggressive curation makes it difficult to care. The endless scrolling that keeps us on these apps depends on users producing new content, but Threads doesn't give users a reason to post anything. Without breaking news, people need to come to the site with something to say, rather than log on and find something worth griping about as they read the news. My feed has little beyond people laughing at Twitter--an interesting topic, but not worth sticking around for. The only real activity comes from well-established names who have enough followers that they don't need amplification--and they are mostly just re-posting stuff from other sites. And, if I scroll for more than a minute, I start getting people posting photos--effectively repurposing their Instagram posts. It has the gruesome feel of a zoo, where none of the animals are having much fun or really want to be there.


In the reverse direction, Bluesky decided to build a site with a "hands-off" algorithm, one that doesn't tailor too much to individual users. This has a seemingly ethical vibe to it; we all wanted companies to have less info on us, right? But, from what I can tell, the site ends up just connecting users to the people they follow, so pretty much everyone stays in their own circle and has little clue what else is happening. This is fine if everyone already has a sufficient circle: if you know who you care about, and you think they'll be entertaining enough, you're good. But science Twitter, for example, depended on the algorithm introducing you to people you should know: there are always new up-and-coming researchers with great papers who need an audience, and Twitter's personalized algorithm had a good knack for figuring out what people might like.


Without an algorithm actively connecting the different circles of contacts together, however, there are likely many papers that aren't getting the audience they deserve. Bluesky seems to be a number of small, isolated pools; each one might have an active, thriving ecosystem, but unless they are somehow connected to each other, they are all going to eventually stabilize and then grow stagnant. As ethical as this is, we're seeing its limits because there is already die-off; people aren't posting much and aren't even logging on very often anymore, since nothing ever seems to be happening.


It's hard to see any of these sites succeeding, any more than Parler or Truth Social did. Pre-Musk Twitter may have sucked in numerous ways, but it seemed to think of itself as mostly focused on moderating the worst impulses of the personalized algorithms--preventing them from just feeding constant clickbait and hate. The goal instead was to let people scroll endlessly, constantly finding new and exciting things, while accepting that a percentage of it would be awful. Twitter just worked to keep that percentage low, rather than amplifying it or trying to remove it entirely. That seemed to be good enough to keep people engaged--even if everyone had good reasons to complain about it, from those who regarded it as too regulated to those who demanded more regulation.


We're in an age now where all the platforms, for different reasons, have abandoned this model: they would prefer to de-personalize the algorithms and shape what we see. It's a very strange approach, a dialectical shift away from the original design. It seems based on the wrong beliefs about the technology--effectively, that making the ecosystem was the important thing, whereas populating it--and keeping the populations happy--was secondary. This is, of course, incoherent; ecosystems are created by the interactions of users and the technology, and the dynamic feedback is often unpredictable. It's really hard to engineer these dynamics, and people who try tend to get it wrong and produce perverse results (see the history of invasive species). Human conservationists typically aim at just preserving the status quo--a much saner, if still ultimately unstable, approach.


The companies seem doomed to learn this lesson: there is a fall-off in engagement pretty much everywhere, but no one wants to change course. They seem to believe that people need to change in order to use their platforms: on Twitter, people need to gain a sense of humor; on Threads, people need to stop caring so much about politics. This kind of central planning doesn't have a history of success; few leaders have been able to replace their grumbling people, no matter how hard they tried to re-educate them. Companies will probably need to decide whether they are willing to follow the people where they want to go, or whether to simply abandon the Twitter model as better off dead.
