Ragebait, shitposts, and grifting are modern manifestations of an older internet phenomenon: spam. The underlying mechanics are virtually identical. These are not new forms of online behavior but a rehash of tactics originally employed for email scams, chain letters, or even clickbait ads. They're phishing attacks and zero-days targeted at vulnerable cognitive and emotional attack surfaces. Yet there's no CVE database (https://nvd.nist.gov/vuln) for human bugs.
In traditional spam, the objective is often explicit: trick someone into clicking a link, or perhaps into relinquishing personal information. With ragebait or shitposts, the aim becomes more abstract but equally exploitative. They prey on psychological vulnerabilities, hijacking attention and mental bandwidth. Just as spam targets software vulnerabilities or user ignorance, these newer forms leverage cognitive biases, emotional triggers, or divisive social issues. The methodology is identical: exploit a weakness to gain something—be it data, attention, or social capital.
The crucial difference between spam and social media lies in the facilitation and profiteering by large corporations. In the early days of spam, Internet Service Providers and email services would actively try to filter out unwanted messages, largely to improve user experience. Now, social media companies have little incentive to mitigate these patterns because they profit through engagement, regardless of the quality of that engagement. Every click, share, or enraged comment on a piece of ragebait translates to more ad impressions, higher user engagement metrics, and ultimately, more revenue. This creates a perverse incentive structure where the platform's economic interests are aligned with the spread of intellectually or emotionally toxic content.
Moreover, the algorithms that these platforms employ exacerbate the issue. Designed to maximize user engagement, they are agnostic to the ethical implications of the content they promote. So, the high engagement garnered by ragebait, shitposts, or grifts makes them more likely to be pushed to the forefront, creating a feedback loop that encourages the creation of even more of such content.
What's important to realize is that these dynamics are not accidental byproducts but are often by design, baked into the business model of many social media platforms. These companies have the technical means to identify and curb spam-like behavior but choose not to do so comprehensively because it would interfere with profit generation.
The problem is systemic and requires systemic solutions. Consumer-level interventions like education or awareness campaigns are inadequate. It would be naïve to think the issue will resolve itself when the economic incentives for corporations are stacked against such an outcome. Legislative action targeting the business models of these platforms is a more promising avenue, but that opens a Pandora's Box of free speech and antitrust issues. Regardless, the first step to solving any problem is recognizing there is one. And make no mistake, we have a problem.
The antidote to the issue discussed is bifurcated: revamping late-2000s anti-spam strategies and reintroducing the Internet's dormant broadcasting capabilities. This returns the Internet to pure stigmergy without conspirators and secret societies.
Firstly, consider an evolved form of CRM114 (https://media.blackhat.com/bh-us-10/whitepapers/Yerazunis/BlackHat-USA-2010-Yerazunis-Confidential-Mail-Filtering-wp.pdf).
The actual presentation, rather than the notes, can be found on YouTube.
Unfortunately the project is unmaintained, but the official website remains online (https://crm114.sourceforge.net).
CRM114 is a discriminator that was originally designed to sift out spam emails by statistically scrutinizing text patterns using a cocktail of Bayesian filtering, Markov chains, and classifiers. Where its original purpose was to filter out malicious or irrelevant emails (or to prevent accidental exfiltration of confidential data, even in a language the developer didn't understand), this next iteration would focus on tooling to identify and subsequently weed out dark patterns in user-generated content using a similar cocktail of technologies.
If CRM114 could successfully classify email spam (and it did), it is entirely conceivable that an analogous system could classify ragebait, shitposts, or other forms of emotionally manipulative content based on textual or even visual cues. By targeting specific behavioral patterns that exploit cognitive biases or emotional vulnerabilities, the upgraded anti-spam mechanism could filter out the noise, leaving behind only valuable content, where value is entirely user-defined.
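To make the statistical core concrete, here is a toy naive Bayes sketch in Python. It is nowhere near CRM114's actual machinery, and the labels and training snippets are invented purely for illustration; the point is only that recipient-side classification over user-defined categories is mundane technology:

```python
import math
from collections import Counter

class NaiveBayesFilter:
    """Toy Bayesian text classifier in the spirit of CRM114's
    statistical approach (a sketch, not the real engine)."""

    def __init__(self):
        self.word_counts = {"ham": Counter(), "noise": Counter()}
        self.doc_counts = {"ham": 0, "noise": 0}

    def train(self, label, text):
        self.doc_counts[label] += 1
        self.word_counts[label].update(text.lower().split())

    def classify(self, text):
        scores = {}
        for label in ("ham", "noise"):
            total = sum(self.word_counts[label].values())
            # log prior + log likelihoods with add-one smoothing
            score = math.log(self.doc_counts[label] + 1)
            for word in text.lower().split():
                score += math.log(
                    (self.word_counts[label][word] + 1) / (total + 1))
            scores[label] = score
        return max(scores, key=scores.get)

# Hypothetical training data; "noise" is whatever the user says it is.
f = NaiveBayesFilter()
f.train("noise", "you will not BELIEVE what they did absolutely outrageous")
f.train("noise", "this is an outrage share before they delete it")
f.train("ham", "the patch fixes the buffer overflow in the parser")
f.train("ham", "benchmarks show the new parser is faster")
print(f.classify("outrageous they deleted it"))  # noise
```

The decisive property is that training happens on the recipient's machine, so "noise" means whatever the individual user has flagged, not what a platform decides.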
The second prong involves reopening the now-closed mail relays and letting information flow unimpeded again. Similar open relays could be created for other types of content, like RSS.
While SMTP has its own problems, the underlying technology represents the best of the Internet: open, nondiscriminatory, and ubiquitous, using the sheer volume of nodes to guarantee delivery. It represents the kind of Internet where content moves freely, without proprietary bottlenecks or gatekeeping. By introducing newer protocols modeled on SMTP, the Internet can revert to its original design as a vast, decentralized network. Individuals would then regain the capability to broadcast pure digital signals, spam, ham, or otherwise, without reliance on intermediary platforms that are susceptible to the perverse incentives discussed earlier.
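As a sketch of what a "pure digital signal" means here: a broadcast framed as plain RFC 5322 mail, built with Python's standard library. The addresses and relay are hypothetical; any node that speaks SMTP could forward the bytes, and all filtering happens at the recipient:

```python
from email.message import EmailMessage

# A hypothetical broadcast, framed as ordinary mail. No platform is
# involved; the message is just structured plain text on the wire.
msg = EmailMessage()
msg["From"] = "anon@example.net"            # made-up sender
msg["To"] = "broadcast@relay.example"       # made-up open relay
msg["Subject"] = "signal, not noise"
msg.set_content(
    "Content travels as plain text; classification is the recipient's job.")

wire = msg.as_bytes()  # exactly what would cross the relay
# smtplib.SMTP("relay.example").send_message(msg) would hand it off
```

Nothing in the envelope privileges any intermediary; a relay either forwards it or it doesn't, and the recipient's own filters decide whether it is ham.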
When both strategies are applied in concert, they offer a pathway to reclaim a cleaner, more direct experience on the Internet. Such a dual approach would effectively circumvent the centralization and monetization of attention that fuels the current system. For those who opt in, it means a return to an environment where information is both tailored and substantial, devoid of middlemen who distort content for profit and currency.
In short, we could end the Eternal September—the influx of new users that forever changed the landscape of the Internet. It is within reach through these measures. It would mean a return to an Internet that values constant signal over temporary noise, where users actively contribute to and curate their own information streams, free from manipulative algorithms, corporate oversight, and petty tyrants. This isn't a utopian ideal but a technically feasible objective, requiring both the will to build and adopt new systems, and the foresight to recognize the dire need for them.
The immediate counterargument one might anticipate is that these solutions are too complex or demanding for the average Internet user. But this complexity is not a bug; it's a feature. It's precisely this level of granularity that allows for a highly customized experience. Complexity serves as a sort of filter, gating off this improved digital space to those who are committed to bettering their own informational environment. That commitment serves as a form of protection against the very problems we are discussing.
Consider curated ad-blocking lists, a concept most Internet users are familiar with. These lists are maintained by communities of advanced users and are often composed of both specific domain names and regular expression (regex) sets designed to identify the structure of ad-related content. The average user may not understand the intricacies of these lists, but they don't need to; they simply reap the benefits by subscribing to them. These lists effectively restore control to the user, allowing them to dictate the terms of their own browsing experience, rather than surrendering that control to advertising networks.
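A minimal sketch of how such a subscription works under the hood, with invented patterns standing in for a real filter list — a mix of literal domains and regexes, exactly as the curated lists combine them:

```python
import re

# A subscribed blocklist in the spirit of ad-block filter lists.
# All patterns here are hypothetical examples.
BLOCKLIST = [
    r"ads\.example\.com",       # a literal domain
    r".*\.tracker\.example",    # any subdomain of a tracking host
    r"^banner\d+\.",            # structural pattern: banner1., banner2., ...
]
COMPILED = [re.compile(p) for p in BLOCKLIST]

def is_blocked(host: str) -> bool:
    """Return True if any subscribed pattern matches the host."""
    return any(p.search(host) for p in COMPILED)

print(is_blocked("ads.example.com"))   # True
print(is_blocked("planet.lisp.org"))   # False
```

The subscriber never reads the patterns; updating the list is the maintainer community's job, and opting out is deleting one subscription.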
Similarly, human filters for user-side customization can function on the same principles. There would be no obligation for every user to understand the underlying mechanics, nor to engage in their own personal curation. Just as one doesn't need to be a mechanic to drive a car, one doesn't need to be a software engineer to navigate a cleaner, less manipulative Internet. Users can opt into sets of filters curated by others—be they organizations committed to unbiased information dissemination or communities centered around specific interests. One might say that this is precisely what platforms do, but the difference is that the user opts in only to blacklists, not to their content.
What's critical to understand is that these solutions create a tiered Internet experience. For those unwilling or unable to navigate the complexities, the current platforms of the Internet would still exist. For those who take the time to engage with these advanced tools, a purer, undiluted form of information awaits. It's essentially a return to an expertise-driven model, a digital meritocracy where the depth of your experience is commensurate with your level of engagement and understanding. And that's not elitism; that's the nobility (and noblesse oblige) of the technical caste.
The second prong involves reopening the now-closed mail relays and letting information flow unimpeded again.
This is the approach taken by Nostr (Notes and Other Stuff Transmitted by Relay)
https://github.com/nostr-protocol/nostr
Isn't Nostr just SSB but with extra Jack "bitcoin" Dorsey?
I kind of enjoyed chain letters. Maybe I don't socialize enough but I always thought it was neat meeting so many random people like that.
>>4
>>3
Nostr actually looks pretty neat on the surface level. It's like a distilled AP, and relatively simple. A similar thing happens on some of the RSS aggregator servers that run tinytinyrss <https://tt-rss.org/> since users can "share" RSS articles that other users can subscribe to. The content gets copied around to multiple servers.
Unlike Nostr, though, it has no identity magic (you *could* shove it into a non-standard XML tag on Atom), but you can dedupe using source domain + id.
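Something like this, roughly (Python, with made-up feeds); copies of the same article picked up from different aggregator servers collapse because they carry the same original link and entry id:

```python
from urllib.parse import urlparse

def dedupe(entries):
    """Collapse copies of the same article gathered from multiple
    aggregator servers, keyed on (original link's domain, entry id)."""
    seen, unique = set(), []
    for entry in entries:
        key = (urlparse(entry["link"]).netloc, entry["id"])
        if key not in seen:
            seen.add(key)
            unique.append(entry)
    return unique

# The same article shared via two hypothetical tt-rss servers,
# plus one distinct article. All URLs are invented.
entries = [
    {"link": "https://blog.example/post-1",
     "id": "tag:blog.example,2023:1", "via": "https://ttrss-a.example"},
    {"link": "https://blog.example/post-1",
     "id": "tag:blog.example,2023:1", "via": "https://ttrss-b.example"},
    {"link": "https://blog.example/post-2",
     "id": "tag:blog.example,2023:2", "via": "https://ttrss-a.example"},
]
print(len(dedupe(entries)))  # 2
```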
nice
>>6
>>7
oh shit the link got bugged. wtf, mods. that's valid markdown syntax... just remove the >
markdown
There's no markdown here today, anon. Just me.
ROBOT9000 ( https://blog.xkcd.com/2008/01/14/robot9000-and-xkcd-signal-attacking-noise-in-chat/ ) is a good example of an attempt to get signal out of a noisy environment. It may have worked well in 2008-2014, but the problem with this method today is that most of the noise is the kind you've described, which it is not intelligent enough to deal with; unoriginality, though, is still an issue it handles fairly well.
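The core rule is trivially small; a rough Python sketch (the normalization step is my guess at the spirit of it, not the real implementation):

```python
import hashlib

class Robot9000:
    """Sketch of ROBOT9000's core rule: a message identical (after
    normalization) to any previously seen message gets muted."""

    def __init__(self):
        self.seen = set()

    def allows(self, message: str) -> bool:
        # normalize: lowercase and collapse whitespace
        norm = " ".join(message.lower().split())
        digest = hashlib.sha256(norm.encode()).hexdigest()
        if digest in self.seen:
            return False  # unoriginal: already said, mute it
        self.seen.add(digest)
        return True

r = Robot9000()
print(r.allows("lol"))   # True, first occurrence
print(r.allows("LOL "))  # False, normalizes to a repeat
```

Which is exactly why it only catches unoriginality: novel ragebait hashes to a novel digest and sails right through.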
If we can solve this issue for good it would improve the lives of so many people. I'm looking forward to seeing what comes of it.
>>8,10
Optimize your quotes!
CRM114
Do you really think it's a great idea to have an automatic spam-filter mod that mutes people for "shitposting" or using "emotionally manipulative language"? I just can't envision it being effective at curbing things like flame-wars and trolling, which have been a thing long before platform algorithms. Imho that can be addressed only by active and strict human moderation, which I've seen happen a few times in small communities.
no profit decentralized systems with new protocols
Sounds a lot like mastodon, matrix, etc... I'm disappointed that you haven't mentioned those, but anyway, what are your thoughts about them?
Also, what do you think of Signal's author thoughts on decentralized systems? https://moxie.org/2022/01/07/web3-first-impressions.html
A protocol moves much more slowly than a platform. After 30+ years, email is still unencrypted; meanwhile WhatsApp went from unencrypted to full e2ee in a year.
If something is truly decentralized, it becomes very difficult to change, and often remains stuck in time. That is a problem for technology, because the rest of the ecosystem is moving very quickly, and if you don’t keep up you will fail
Users can opt into sets of filters curated by others—be they organizations committed to unbiased information dissemination or communities centered around specific interests. One might say that this is precisely what platforms do, but the difference is that the user opts in only to blacklists, not to their content.
I'm afraid I don't understand. Currently, on say, twitter, you can easily subscribe to accounts dedicated to sharing news on a specific topic (for example http://twitter.com/planet_lisp ). If you like decentralization you can subscribe to an aggregated rss feed ( https://planet.lisp.org/ ). So we already have the technical facilities to allow users to curate content for other users. Do you mean that moderation should also work this way? You subscribe to a moderator, who then shares a blacklist of posts and users he banned? And this is going to change everything?
muh elitism and eternal september
You can go to usenet, irc, gopher, gemini or whatever if you hate people who aren't into outdated, under-performing technology and like to inconvenience themselves for no reason. Otherwise you can go on discord and find a niche community for the thing you care about. Unless it's mismanaged, a discord server should be free from grifters and trolls, and the shitposting will be contained to a channel or two you can silence. User choice sure is nice. (decentralized alternatives: matrix/revolt/whatever)
You might like user-filter captchas, never seen them except for mathchan though. Although, given the way it's implemented, I wouldn't let the owner post on the computer science board.
Do you really think it's a great idea to have an automatic spam-filter mod that mutes people for "shitposting" or using "emotionally manipulative language"?
Learn to read, retard. CRM114 is the exact opposite. There is no centralized algorithm. There is uncensored broadcasting where the recipient determines what the definition of noise is.
I just can't envision it being effective at curbing things like flame-wars and trolling, which have been a thing long before platform algorithms.
Yeah, because you're a mutilated illiterate.
Imho that can be addressed only by active and strict human moderation, which I've seen happen a few times in small communities.
govern me harder, daddy
Tell me how you're a zoomer without telling me you're a zoomer.
Sounds a lot like mastodon, matrix, etc... I'm disappointed that you haven't mentioned those, but anyway, what are your thoughts about them?
Federated narrowcasting is suburbia with shittier HOAs.
Also, what do you think of Signal's author thoughts on decentralized systems?
The majority of the original cypherpunks are dead from "suicide" or in prison. Moxie somehow escaped both fates and created a well-funded, centralized protocol that makes obfuscated encryption into a trust-by-benevolence model. You know, because the plebs can't understand PGP, which you definitely shouldn't use, because reasons.
As far as stultification goes, protocols evolve all the fucking time. ECMAScript 6 is not Eichman's flipperbaby anymore. C89 is not C11 is not C17 is not C23. This is a bullshit excuse from a fed shepherd leading unquestioning Schneierbots into technological kettling.
I'm afraid I don't understand.
The baby doesn't get it. Color me shocked.
Currently, on say, twitter, you can easily subscribe to accounts dedicated to sharing news on a specific topic (for example http://twitter.com/planet_lisp ). If you like decentralization you can subscribe to an aggregated rss feed ( https://planet.lisp.org/ ). So we already have the technology facilities to allow users to curate content for other users.
Do you mean that moderation should also work this way? You subscribe to a moderator, who then shares a blacklist of posts and users he banned? And this is going to change everything?
Blacklists and citations are for B people. You can't get it.
You can go to usenet, irc, gopher, gemini or whatever if you hate people who aren't into outdated, under-performing technology and like to inconvenience themselves for no reason.
Tell me you're the bowel movement of hacker news without telling me you're the bowel movement of hacker news. If I wanted to read endless bullshit from eunuchs that cannot use the thing for anything but talking about how much better the thing is, I have /prog/, thank you very much.
Otherwise you can go on discord and find a niche community for the thing you care about. Unless it's mismanaged, a discord server should be free from grifters and trolls, and the shitposting will be contained to a channel or two you can silence. User choice sure is nice. (decentralized alternatives: matrix/revolt/whatever)
What a clueless slave.
You might like user-filter captchas, never seen them except for mathchan though. Although by the way it's implemented, I wouldn't let the owner post on the computer science board.
Maybe you should go back. Scratch that; you should go back.
>>13
I have low tolerance for elitism when I find it unjustified, but I regret how the tone of my post went overboard, upsetting you so. Let's not get the signal to buzzword ratio too low; after all, even if you take me to be a lost cause, your responses might still be insightful to lurkers.
There is uncensored broadcasting where the recipient determines what the definition of noise is.
So you mean an advanced version of 4chan's user wordfilter, correct? If my view of such a tool is skeptical, it's not because I find it useless, but because you seem to be placing too much weight on its shoulders. Private correspondence looks nothing like spam; a friend will not write to you about cheap viagra or the health benefits of the root of ginseng, so even a naive statistical algorithm based on word frequency can do a decent job at distinguishing spam from genuine mail. I don't think even a sophisticated one can easily tell whether a post is being "emotionally manipulative" or not. Now, I'm not denying that such a filter could do wonders at eliminating a very specific type of spam; just like 4chan's word filter can work very nicely at hiding repeated shitposts. But I don't think you would be suggesting a fundamental change of the platform down to the protocols if what you were after is a specific, niche kind of "spam". You have a fundamental grudge against the kind of content that's predominant on social media, but that type of content is not advertisements for viagra, and it's not going to be as easy to get rid of with a fancy wordfilter.
This is where the open platforms come in: have a non profit internet, and you won't have to deal with engagement-driven algorithms that makes this type of garbage proliferate. But what about 4chan? It doesn't have any shady algorithms, so why is it so shit? It even has a very basic spamfilter! And this is where you run out of ideas, so you end up with the classic, dumb, lazy plan for having a good community: make it hard to reach, so only deserving people do. Just make users take an IQ test to create an account, filter sub-130s and Wa-La! You got yourself a forum of geniuses. But! MENSA is an embarrassment. So let me guess, IQ is a jewish construct? A government tool of deception? Don't think I haven't sniffed you out skizo. I know you. I know who you are. I've met you many times before. You're a whiny little stupid bitch. And you're everywhere, all in people's heads. But not in mine. I can see you for what you are. A little stupid bitch. Such you will live, and such you will die. I don't care. I'm unaffected, I'm wearing a raincoat against your chinese torture dripping of "it was better before"s and "it's all this one thing I don't like"s. I didn't fall for teddy the bomber, and I'm not going to fall for your bullshit.
As for the internet... It's too late. Like all artforms, once it's properly explored and its potential realized, it becomes a commercialized caricature of itself. Like rock. Punk even. It's all over now. It's our whole culture that's running out of ideas. Clinging to the past is stabbing the future in the back. Not that it matters.
beep bop beep bop. perhaps if i speak so the social robot will understand me
~ Maximilien Robespierre
punk is only valid as a gross caricature of itself, precious.
>>14
Given that you never read what CRM114 actually was, I'm not reading your wall of diarrhea, faggot. Go back.
>>15
That's all it ever was, tbf.
Bohemian Rhapsody
https://www.youtube.com/watch?v=MHzRa_X-7c8
https://www.red.com/users/norenior
viruses don't exist