The Dead Internet Prophecy
bot accelerationism, the spam problem, bearish on social media clones, can you socialize with a robot?, dark hyperreality goes brrrrrr, towards a more human future
This is a longer one but it’s worth it.
The Dead Internet Theory is a ‘conspiracy’ that the internet is mostly populated by bots. By certain measures, this theory is already true. With the emergence of new generative AI tools, it’s more accurately described as a prophecy than a theory. Let’s call it The Dead Internet Prophecy.
Generative AI tools are going to fundamentally change the dynamics of the platform internet. These tools will rapidly decrease the cost of creating high-fidelity spam and compelling visual content. They will also bring about new challenges and threats for users and platforms. The future of the platform internet is only going to get weirder. It’s up to us to curate our own human experience—to preserve it against those who view technology as our telos and not a tool, or, otherwise, learn to love the bots.
Do we even want to be scrolling these apps anymore?
What about a new, slightly different platform with a distinct ethos?
In 2003 (20 years ago!), Myspace launched and quickly became the most important website for most everyone who was young, cool, and online at the time. I mean, if you weren’t in anyone’s Top 8, did you even have friends?
Myspace’s peak came quickly—in 2008—when it hit 115m unique users, and it was just as quickly surpassed by Facebook. The viability of and demand for social platforms were clear—at least to some: this was the future of the internet and nothing could stop it. The biggest platforms (Facebook, Twitter, YouTube, Instagram, Twitch, Reddit, and so on) radically reshaped, or even kickstarted, most people’s experience of the internet. To this day, many people only access the internet through the portal of an iOS app. The platforms promise content creators immense reach at the cost of homogenous design and the risk of deplatforming, or loss of support from ‘the algorithm’ (arguing about the colloquial use of this term is pedantic and a sign that you’re not fun at parties).
Since the election of Donald Trump, criticism of social media platforms and their willingness to ‘platform’, or allow, all sorts of speech and content has been incessant. The CEOs of these platforms have been called before Congress, berated, and threatened. Many blue and red tribe users are not happy, for one reason or another, and yet these companies continue to make immense amounts of money and retain most of their daily active users, even as their stock prices have collapsed in recent months. The debate around platform moderation is not worth digging into here—you either already know too much because you care, or you don’t know because you don’t care, and to those readers I say: I love you, please never change.
Many people’s criticisms have been accompanied by calls for a change to the regulatory regime for these platforms in the United States and often in ways that are an affront to liberal norms and the spirit of free speech. Others have chosen to take action to compete against the big platforms and try to drain their market share: Parler, Truth Social, Mastodon, Gab, Rumble, Farcaster, Nostr, and an endless long tail of companies that are, more or less, a copy of an existing platform for a niche, polarized audience or with a slightly different ethos. And then there’s Elon Musk with his decision to purchase Twitter and work to align it with his vision for the platform: more open dialogue, less dependence on advertisers, and—perhaps most notably—many fewer employees.
Niche clones were born to die
It is good that people have decided to compete against these behemoths. The existence of new platforms, especially if they find any success, will force the reigning oligopolies—otherwise feeling overly secure in their network effects—to not succumb to their most illiberal impulses and the demands of their power mad critics. Most proposed regulation would not destroy the big platforms but rather lock them in, making it much harder for new, genuinely innovative companies to enter the fray.
Truth Social—Twitter for Trump loyalists—is not going to dethrone Twitter. Mastodon—Twitter for people who care about federation and, allegedly, those who dislike Elon Musk—is not going to dethrone Twitter. None of these platforms are going to dethrone Twitter. If you are investing in a social media company built for a niche audience, you should value it as if it will, at most, capture a fraction of that niche. It is a good strategy to first dominate a small market, and I’m sure that’s the pitch to investors, but being more or less a clone of an existing platform and competing on moderation, ‘community’, and ethos is not a winning strategy.
Any platform that is going to be able to compete needs to be a complete paradigm shift and one that most of your users genuinely care about. Most internet users do not care about owning their data, lightly monetizing their experience, or abstract principles around decentralization. They are using these applications to fulfill a social need, entertain themselves, or to benefit from the distribution afforded by the platform.
If you wanted to look at a company that has succeeded in taking market share—functionally screen time—away from existing platforms, there’s a clear exemplar: TikTok. TikTok was not a clone of any existing behemoth. It incorporated elements from Vine and other platforms and repackaged them into a novel, hyper-addictive UX. The platform is also not exclusive to any particular niche group, although young women make up the greatest proportion of users. I am not a TikTok user. I do not recommend installing it on any of your devices. If you use it for business, you should pay a zoomer to manage your account. But there are clear reasons why it captured an immense amount of market share and forced YouTube, Instagram, Snapchat, and others to copy its features. There are many reasons for its success, but one key point that new platforms need to accept: appealing to people of various ‘tribes’, interests, etc. is table stakes.
No social media platform can compete and capture a significant market share unless both friends and enemies alike are on the platform. Twitter and TikTok are more addictive and successful because they are a battlefield, a place to identify and coordinate with allies and identify and savage your enemies. They are increasingly not platforms for distributing news of events that occur in the real world but platforms where ‘news’ is generated. Someone said something on Twitter and now we need to discuss it, discuss people discussing it, they lost their job, they’re now the president, etc.
TikTok has similarly reached this level and its influence has begun to echo throughout our society. Power users likely spend more time on TikTok than they do socializing in the real, physical world. They are developing new norms and amplifying ideas which are essentially platform-native, digital-first. In Q2 2022, the average TikTok user spent 95 minutes per day on the platform. 29% of users who had the application installed on their phone used the app every day. None of this is good, but it has made TikTok shareholders incredibly happy and transformed the lives of innumerable content creators.
…we see that the ability to sacrifice the out-group is a key part of Twitter’s appeal through the relative failure of its ‘Free Speech’ competitors. Even if Parler had been well-designed (and it wasn’t), it was doomed to failure. They had a selection-bias towards disaffected right-wingers and open-minded voyeurs. Without any out-group members to sacrifice, this form of social media is not even fun. Platforms like Twitter know that this conflict and demand for sacrifices drives engagement and promotes legibility.
From On Legibility and Scapegoats-as-a-Service. Any interesting reference to Rene Girard in my work is the product of my participation in the Girard Course by Geoff Shullenberger. The next cohort begins March 20th—not an affiliate link.
I wrote the above before Elon acquired Twitter with a purported vision to return its content moderation policies back to the before times: the internet culture of 2013. Unlike new platforms, allegedly created specifically to facilitate a greater range of political discourse, Twitter can effectively differentiate itself by pursuing a different content moderation policy because it already has an established network effect. It is a platform where people have already found allies and enemies. It is functionally one of the longest-running mostly-text-based MMORPGs to ever exist. Certain power users have left since Elon took control, but I think that was due more to the sloppy execution than to the principle of re-opening the Overton Window on the platform. I would bet that many of those users will return; most of the ones who threatened to leave have stayed.
While I am not optimistic about these types of poorly differentiated platforms competing with the social media behemoths, I am not bullish on the long-term prospects of the big platforms either—with the exception of those that are primarily one-to-many in their content distribution, i.e. Twitch, YouTube, etc.
Facebook.com may be a good, profitable business for years and years to come but it’s not a place you should spend your time. It will most likely stagnate and eventually die. The default future of platforms like Twitter, Instagram, etc looks more like Facebook.com than their current states. They will be filled with generally low-value, consumptive content and increasing cultural irrelevance—unless they can rapidly adapt to the present and future of generative AI.
The spam problem accelerates
Despite this being a major subject in both of my latest Substacks, I do not consider myself to be an aggressive generative AI booster or user. For coding and learning to code, it is certainly immensely valuable. In general, I believe it will become an invaluable tool for education once there are models that share facts with citations and which are provably true—at least as much as anything in a history textbook—with seamless UXs. Education has already changed forever and traditional institutions are going to face increased scrutiny and competition.
I am friendly with many people who are doomers about the implications of the future generations of AI models. Personally, I am more focused on what we already know is possible today and it seems to be more of a high-variance, mixed bag.
Why would I want to talk to a robot?
OpenAI’s GPT-3+ models will forever change the nature of text-based communication on the internet. Bots and their spam have always been a problem on the internet, and particularly on social media platforms. The sad truth is that while they have historically generated low-quality noise, this has been sufficient for them to amass significant numbers of real-life human followers who read, interact with, and even share their content. These new large language models (LLMs)—GPT models and similar products from rivals—effectively reduce the cost of high-fidelity spam to nothing.
OpenAI just reduced the price of using their embeddings and language models by 90%. I thought they were able to do this, after a massive injection of capital from Microsoft, because it was simply worthwhile to subsidize users in exchange for their data and help refining the model. Allegedly at this price point it is already profitable, as they’ve seen huge efficiency gains since their initial releases.
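To put that price collapse in perspective, here’s a back-of-the-envelope sketch. The per-token prices and tweet size below are illustrative assumptions (roughly the reported ~90% cut), not quoted figures:

```python
# Back-of-the-envelope cost of LLM-generated spam.
# Prices are illustrative assumptions (an assumed ~90% drop from
# $0.020 to $0.002 per 1,000 tokens); tweet size is a rough guess.
OLD_PRICE_PER_1K_TOKENS = 0.020   # dollars per 1K tokens (assumed)
NEW_PRICE_PER_1K_TOKENS = 0.002   # dollars per 1K tokens (assumed)
TOKENS_PER_TWEET = 60             # short tweet plus prompt overhead (assumed)

def tweets_per_dollar(price_per_1k: float,
                      tokens_per_tweet: int = TOKENS_PER_TWEET) -> int:
    """How many tweet-sized generations one dollar buys."""
    return int(1.0 / (price_per_1k * tokens_per_tweet / 1000))

print(tweets_per_dollar(OLD_PRICE_PER_1K_TOKENS))  # 833 tweets per dollar
print(tweets_per_dollar(NEW_PRICE_PER_1K_TOKENS))  # 8333 tweets per dollar
```

Under these assumptions, a single dollar goes from buying hundreds of tweet-sized completions to buying thousands—which is the whole spam problem in one number.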
One of my Twitter mutuals has already built a product, GPTweets, which will generate tweets in your voice using your prior tweets. From my understanding, there’s no reason this product shouldn’t improve to the point where it generates high-fidelity content that is as good as much of your original content.
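GPTweets’ internals aren’t public, but the general technique—few-shot prompting an LLM with someone’s prior tweets so the completion mimics their voice—can be sketched in a few lines. The function name and prompt format here are hypothetical, not GPTweets’ actual implementation:

```python
# Hypothetical sketch of voice cloning via few-shot prompting.
# build_voice_prompt and the prompt wording are illustrative; a real
# product would tune the examples, instructions, and sampling settings.
def build_voice_prompt(prior_tweets: list[str], topic: str, k: int = 5) -> str:
    """Assemble a few-shot prompt from a user's most recent tweets."""
    examples = "\n".join(f"- {t}" for t in prior_tweets[-k:])
    return (
        "Here are recent tweets by the author:\n"
        f"{examples}\n\n"
        f"Write one new tweet in the same voice about: {topic}\n"
        "Tweet:"
    )

prompt = build_voice_prompt(
    ["the dead internet is a prophecy, not a theory",
     "log off, but keep the group chat"],
    topic="LLM-generated spam",
)
# The prompt would then be sent to a completion endpoint; the model
# returns text that continues after "Tweet:" in the author's style.
print(prompt)
```

The point is that nothing here is hard: the entire "voice" lives in a handful of examples pasted into a prompt, which is why the cost of a plausible persona rounds to zero.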
Many haters would agree that a relatively simple model could already tweet better than me!
This could be a boon for people building various segmented audiences on social media but it will also mean that people could spend hours a day interacting with content that was generated by bots, or even interacting with high-fidelity bots in replies and direct messages.
Human social interaction occurs between humans. Not between humans and dogs. Not between humans and cats. Not between humans and robots. Interacting with robots and animals may have its value but it is not social.
Will it be worthwhile to stay on the platform internet when you know how plausible and cheap it is to generate new human personas?
When you know that the new, fun account that’s always tweeting, engaging, and dming you back may be a bot?
This feels like a low-value way to spend your limited time, attention, and energy. It arguably already is when you’re interacting with real, genuine humans!
Elon Musk and Mark Zuckerberg almost certainly realize this is the case, and that’s one of the reasons why they’re moving to authenticate users by requiring them to pay a monthly subscription to maximize reach on the platform and access certain features and services. At least on Elon’s Twitter, a bot would have to pay $8-11 per month to occupy people’s limited attention with its high-fidelity spam. Unfortunately, this solution to the spam problem would require a human user to discriminate against all users who are unwilling to give Twitter their money or credit card information—which is likely a large proportion of all of the best new accounts. This also, allegedly, has not been going well for Twitter: a low percentage of users have subscribed and churn is high.
When people think about it, they might decide that this is not a game they want to play anymore. But if they don’t think about it, they may want to play it even more. The content could become better, their posts may receive greater engagement as the bots swarm, calibrate, and capture their attention for whatever ends their owners desire. The cyborg and bot internet may be even more addictive and capture a greater proportion of humanity’s energy and attention.
One early speculation about this future of persuasive bot activity was the potential for ‘heaven banning’. This idea was intended as a more elegant tactic for reducing the distribution of content from certain creators, relative to the more hamfisted ‘shadow banning,’ or regular banning. Shadow banning is when a platform systematically down ranks and limits the reach of a user to prevent their messages from spreading to a broader audience, without formally sanctioning them. This solution is inelegant because users can see that their reach is being throttled, their followers can confirm that they’re no longer being served their posts, and this tends to reinforce their beliefs as they feel justifiably persecuted.
It was confirmed in the Twitter Files that Twitter was indeed shadow banning users. They even shadow banned Jay Bhattacharya, a physician, health policy researcher, and professor at Stanford University. His crime? Using his expertise and ability to reason to try to weigh the costs and benefits of pursuing various public health responses to the pandemic. He was particularly concerned about the long-run public health externalities of functionally ceasing children’s education indefinitely (NYT: The Pandemic Erased Two Decades of Progress in Math and Reading).
Heaven banning is a more nefarious tactic for mitigating the reach of a user in a way that is less easily noticeable. The platform would create fake profiles which would interact with the user as normal. In practice, this may also be somewhat difficult to pull off without users noticing but if they’re receiving the positive feedback that they expect, they might not think about it too much. It is at least harder to reinforce a victim narrative if the account appears to have significant engagement, growth, etc according to all readily accessible quantitative measures.
If the platform has an authentication system, like Twitter, etc. it could also give these bot users a verification badge. Right now Twitter is full of accounts with limited account history who have purchased the blue check—formerly reserved for those arbitrarily given a stamp of notoriety by employees at Twitter. These accounts are likely just lurkers or new users who want to support Elon Musk’s venture for ideological reasons, or who want to post full-length videos to the platform. But in the future, they could be platform generated and authenticated bots optimized to drive certain engagement metrics.
Engaging early adopters, generating conflict
Text content generation tools, like GPTweets, may be beneficial to new platforms seeking to bootstrap engagement on their app. It is incredibly difficult to build the foundation of a network effect. Your first few users will log onto a platform with minimal content and low engagement. This can be a cozy experience; they may feel like a VIP for getting a private invite. Being an early adopter has an allure for a certain kind of person. But most new platforms die from too much churn and anemic user growth.
Platforms with a less polarized niche may be able to leverage these tools to generate early engagement and much-needed conflict. But you do not want users to come to believe that everyone else is a bot or that you intentionally deceived them. The safest way to do this would be to use minimal automation, outside of content generation, and simply leverage LLM tools to enable a small team to operate the personas of multiple high-value users, and then scale other bot activity as the platform grows—with plausible deniability about the origins of those bots and a plan for removing them from the platform, or otherwise authenticating users and making spam costly.
This strategy is less plausible for the platforms with more niche audiences. I don’t think adding a few high profile liberal accounts to Truth Social is going to be believable or sufficient to get genuine liberals to join the platform or decrease churn. Likewise, I don’t expect that Ethereum maximalists or people in favor of Central Bank Digital Currencies are going to start using Nostr.
However, if platforms make it cheap to spam their new services with bots, then maybe certain interest groups or individuals will deploy them organically. You could have bot farms of liberal bots that go and raid Parler and Truth Social. You could have swarms flood onto Nostr and seek to provoke Bitcoiners and antagonize them.
Ultimately, I would guess that the greater potential for spam is more of a liability for managing a platform than a valuable tool. It depends on how users respond to the proliferation of high-quality bots. It may be difficult to determine whether a user is a bot, leading to false positives where a human user is wrongfully terminated for ‘acting like a bot’. This has been a persistent issue for platforms. Even savvy users may begin to emulate the behavior of bots. In the case of gaming platforms, integrating an LLM may enable a bot to more effectively evade detection, as it now more plausibly appears to be human.
Dark hyperreality goes brrrrrr
AI tools that give users the capability to generate and augment high-fidelity images and videos may lead to even more significant changes in user behavior and problems for platforms to troubleshoot. There will certainly be a lot of great content that is directed by humans, as generating and editing photos, videos, websites, fantasy worlds, etc becomes almost as simple as typing a few commands into a box and pressing a button. But everything is going to get extremely weird. People are going to be confused, feelings will be hurt, and vulnerable people are going to be exploited.
New filters on TikTok have become nearly seamless; you can mostly tell that users are using them because of their lack of imperfections. There were several semi-viral videos over the past few weeks which showed Gen Xers publicly and emotionally coming to terms with their loss of youth and mortality after using one of these filters.
Filters on Instagram, Snapchat, and TikTok have become an arms race. People seeking to misrepresent themselves in photos and videos previously had to manually Photoshop out their ‘imperfections’; now they can use a free, point-and-click filter, or build a compelling AI-generated, photorealistic avatar (well, except for the hands).
It seems undeniable that these filters are detrimental to people’s self-esteem, particularly young women who are active users of these platforms. These genuinely unrealistic beauty standards are converting into sales for surgery and other risky or directly harmful pharmaceutical interventions. These competitive intra-sexual dynamics have led to the proliferation of Instagram Face and exacerbated the perverse idea that there could be a clear, ideal look. It’s all fashion and none of it will age well. Do not get your buccal fat removed because insecure famous people are doing it.
These AI tools can also be trained on existing photos, videos, and audio which have been uploaded to platforms like Instagram, TikTok, Twitch, and YouTube. There are even darker potential applications for the use of these models. Deep fakes. Use of people’s likenesses to generate sexual content without their consent, or against their will. Mimicking people’s voices to social engineer attacks against their friends, families, etc.
If I ever make an abnormal request of you or try to contact you from not-my-phone-number, make me go through the wringer to authenticate that it’s actually me—e.g. ask me non-public information, or simply tell me to call you from my number and hang up. My phone number may not even exactly be secure—but we can explore this further if there’s interest. I’m fairly confident that there’s enough video and audio of me on the internet for this to be a potential issue.
Will users want to continue to post photos and videos of themselves on the internet and compete for reach on these platforms?
Will we see a proliferation of V-tubers—streamers using digital avatars and voice modification technologies to preserve their pseudonymity and protect their families?
Will personas generated by enterprising writers leveraging AI tools begin to outcompete real, human models and other influencers on Instagram, YouTube, etc? This influencer with 2.8m followers on Instagram does not exist.
Will users begin to demand perfectly imperfect filters that augment their look, but in a way that has an imperceptible inauthenticity—perhaps a slight flaw that they can dissociate from?
No one knows where this is going. The only certainty is that everything is going to get weird. Perhaps too weird.
Towards a more human future
There are clear social costs to the proliferation of many of these digital technologies. They are not optimized around human well-being and they never will be. With perhaps the odd bootstrapped, private exception, a for-profit social product will only be as optimized towards human well-being as it needs to be to minimize churn or mitigate the risk of harmful regulation. I don’t believe there’s a clear policy solution to this either.
Do not despair. As always, we get to decide how we want to respond to the emergence of these technologies, even if they increasingly shape the contours of our world. We need to remember that we have agency. There are far more possibilities than we will ever consider. The future is not pre-determined.
The pandemic and our fiscal and monetary response accelerated us into a weird future. Now, as is clear from our bank and retirement accounts, inflation is biting back and pushing the Fed to raise interest rates. Big Tech is engaged in mass layoffs. Startups are running out of capital and struggling to raise (and, generally, no longer paying me to write for them, unfortunately).
Building and scaling these new technologies is going to be more difficult than it was the past few years. There are good arguments that we will see persistent inflation and these higher interest rates are our real new normal. No one really knows. At the margin, things are slowing down and we should seize on this moment to decide if we like the future that we’ve glimpsed. OpenAI and other champions will be able to barrel full-steam ahead but we can reassess our relationship with existing and emerging technologies.
If you don’t believe that the telos of humanity is the creation of technology and uncontrollable digital intelligences—an emerging cyborg ideology—then you have to ask yourself:
What’s a tool and what’s a trap?
I don’t anticipate a mass exodus from these platforms in the short-run. I don’t think we’re going to return to an equilibrium that looks more like the 90s than the long 2010s. I don’t think that the problems I’ve outlined here are intractable. But many people are going to log off and, with greater certainty, prioritize their screen time on smaller, niche, authenticated groups.
Get your friends’ email addresses. Add their phone numbers to your contacts. Exchange patps. Onboard your family members onto Signal and share your family photos there, a private place where they won’t be used to train the generative AI that will create the Metaverse.
You’ll always be able to connect with me, if you subscribe ;).
Personally, I want to focus more of my attention on in-person, or other verifiably human, interactions. Hosting guests in my barely furnished guest room. Meeting up with friends and Seeking Tribe subscribers when they visit Austin. Collaborating with coworkers at a full-time job (it’s not too late to hire me full-time—send me an email). Instigating parties and event series at my [rental] home.
And when I’m looking at a screen, I’d like to focus more of my screen time reading long-form content and writing here and elsewhere. I’m tired of scrolling feeds. My favorite place on the internet right now is a small Discord server full of homies from Rochester, NY. We drop into the voice channel just to chat and say hello. We play a couple games of Age of Empires II every week. I’m long and optimistic on a future of the social internet that feels more like that and my email inbox.
The rest of it?
The best way to support my work is to share it:
If I was getting paid to write this newsletter, I would feel compelled to publish more regularly. I’m not sure if I like that idea or not but feel free to tempt me: