Ragebait, shitposts, and grifting are modern manifestations of an older internet phenomenon: spam. The underlying mechanics are virtually identical. These are not new forms of online behavior but a rehash of tactics originally employed for email scams, chain letters, or even clickbait ads. They're phishing attacks and zero-days targeted at vulnerable cognitive and emotional attack surfaces. Yet there's no CVE database (https://nvd.nist.gov/vuln) for human bugs.
In traditional spam, the objective is often explicit: trick someone into clicking a link or relinquishing personal information. With ragebait or shitposts, the aim becomes more abstract but equally exploitative. They prey on psychological vulnerabilities, hijacking attention and mental bandwidth. Just as spam targets software vulnerabilities or user ignorance, these newer forms leverage cognitive biases, emotional triggers, or divisive social issues. The methodology is identical: exploit a weakness to gain something, be it data, attention, or social capital.
The crucial difference between spam and social media lies in the facilitation and profiteering by large corporations. In the early days of spam, Internet Service Providers and email services would actively try to filter out unwanted messages, largely to improve user experience. Now, social media companies have little incentive to mitigate these patterns because they profit through engagement, regardless of the quality of that engagement. Every click, share, or enraged comment on a piece of ragebait translates to more ad impressions, higher user engagement metrics, and ultimately, more revenue. This creates a perverse incentive structure where the platform's economic interests are aligned with the spread of intellectually or emotionally toxic content.
Moreover, the algorithms these platforms employ exacerbate the issue. Designed to maximize user engagement, they are agnostic to the ethical implications of the content they promote. The high engagement garnered by ragebait, shitposts, or grifts makes them more likely to be pushed to the forefront, creating a feedback loop that encourages the creation of even more of the same.
What's important to realize is that these dynamics are not accidental byproducts but are often by design, baked into the business model of many social media platforms. These companies have the technical means to identify and curb spam-like behavior but choose not to do so comprehensively because it would interfere with profit generation.
The problem is systemic and requires systemic solutions. Consumer-level interventions like education or awareness campaigns are inadequate. It would be naïve to think the issue will resolve itself when the economic incentives for corporations are stacked against such an outcome. Legislative action targeting the business models of these platforms is a more promising avenue, but that opens a Pandora's Box of free speech and antitrust issues. Regardless, the first step to solving any problem is recognizing there is one. And make no mistake, we have a problem.
The antidote is twofold: revamping late-2000s anti-spam strategies and reintroducing the Internet's dormant broadcasting capabilities. This returns the Internet to pure stigmergy, without conspirators and secret societies.
First, consider an evolved form of CRM114 (https://media.blackhat.com/bh-us-10/whitepapers/Yerazunis/BlackHat-USA-2010-Yerazunis-Confidential-Mail-Filtering-wp.pdf); the actual presentation, rather than just the notes, can be found on YouTube. The project is unfortunately unmaintained, but the official website remains online (https://crm114.sourceforge.net).
CRM114 is a discriminator originally designed to sift out spam emails by statistically scrutinizing text patterns using a cocktail of Bayesian filtering, Markov chains, and other classifiers. Where the original filtered out malicious or irrelevant email (or prevented the accidental exfiltration of confidential data, in a language the developer didn't understand), this next iteration would focus on tooling to identify and subsequently weed out dark patterns in user-generated content using a similar cocktail of technologies.
If CRM114 could successfully classify email spam (and it did), it is entirely conceivable that an analogous system could classify ragebait, shitposts, or other forms of emotionally manipulative content based on textual or even visual cues. By targeting specific behavioral patterns that exploit cognitive biases or emotional vulnerabilities, the upgraded anti-spam mechanism could filter out the noise, leaving behind only valuable content, where value is entirely user-defined.
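To make the first prong concrete, here is a minimal sketch of the kind of discriminator such tooling could start from. It is not CRM114's actual algorithm, merely the classic bag-of-words-plus-Naive-Bayes combination from the same family of spam filters, retargeted at ragebait; the training posts, labels, and threshold are made up for illustration and would in practice come from the user's own judgments:

```python
# A minimal Bayesian text discriminator in the spirit of CRM114, retargeted
# from "spam vs. ham" to "ragebait vs. ham". The labeled corpus is the user's
# own, so "value" stays entirely user-defined.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical training data: posts the user has already judged.
posts = [
    "You won't BELIEVE what they just banned",    # ragebait
    "Release notes for version 2.4 are up",       # ham
    "Everyone who disagrees with this is evil",   # ragebait
    "Benchmarks comparing the two allocators",    # ham
]
labels = ["ragebait", "ham", "ragebait", "ham"]

# Bag-of-words features feeding a multinomial Naive Bayes model: the same
# statistical core that powered a generation of email spam filters.
classifier = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
classifier.fit(posts, labels)

def keep(post: str, threshold: float = 0.7) -> bool:
    """Return True when the post is probably not ragebait for this user."""
    probabilities = classifier.predict_proba([post])[0]
    ragebait_index = list(classifier.classes_).index("ragebait")
    return probabilities[ragebait_index] < threshold

print(keep("This outrage will make your blood boil"))
```

A production version would need a far larger corpus and richer features (CRM114 itself leaned on sparse n-gram and Markov-style models rather than plain Naive Bayes), but the filtering loop stays this simple: score, threshold, discard.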
The second prong involves reopening the now-closed mail relays and letting information flow unimpeded again, along with creating new open relays for other types of content, such as RSS.
While SMTP has its own problems, the underlying technology represents the best of the Internet: open, nondiscriminatory, and ubiquitous, using the sheer volume of nodes to guarantee delivery. It represents the kind of Internet where content moves freely, without proprietary bottlenecks or gatekeeping. By building newer protocols on the foundation of SMTP, the Internet can revert to its original design as a vast, decentralized network. Individuals would then regain the capability to broadcast pure digital signals, spam, ham, or otherwise, without reliance on intermediary platforms that are susceptible to the perverse incentives discussed earlier.
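As a sketch of what the broadcasting side could look like, the following uses Python's standard smtplib to push one message through an open relay; the relay address, sender, and subscriber list are hypothetical placeholders:

```python
# Broadcasting over plain SMTP: one store-and-forward push, no feed ranking.
import smtplib
from email.message import EmailMessage

SUBSCRIBERS = ["alice@example.net", "bob@example.com"]  # hypothetical list

message = EmailMessage()
message["From"] = "broadcast@example.org"
message["To"] = ", ".join(SUBSCRIBERS)
message["Subject"] = "Weekly digest"
message.set_content("Plain text, delivered as-is, with no middleman deciding its reach.")

# Port 25 without authentication is what made a relay "open" in the first
# place; any filtering happens at the receiving end, for instance with a
# discriminator like the one sketched above.
with smtplib.SMTP("relay.example.org", 25) as relay:
    relay.send_message(message, to_addrs=SUBSCRIBERS)
```

The same store-and-forward pattern generalizes to other payloads, RSS items included: the relay only needs to accept, queue, and pass along.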
When both strategies are applied in concert, they offer a pathway to reclaim a cleaner, more direct experience on the Internet. Such a dual approach would effectively circumvent the centralization and monetization of attention that fuels the current system. For those who opt in, it means a return to an environment where information is both tailored and substantial, devoid of middlemen who distort content for profit and currency.
In short, ending the Eternal September, the influx of new users that forever changed the landscape of the Internet, is within reach through these measures. It would mean a return to an Internet that values constant signal over temporary noise, where users actively contribute to and curate their own information streams, free from manipulative algorithms, corporate oversight, and petty tyrants. This isn't a utopian ideal but a technically feasible objective, requiring both the will to build and adopt new systems and the foresight to recognize the dire need for them.