On Neospam

1 2023-10-03 22:27

Ragebait, shitposts, and grifting are modern manifestations of an older internet phenomenon: spam. The underlying mechanics are virtually identical. These are not new forms of online behavior but a rehash of tactics originally employed in email scams, chain letters, and clickbait ads. They're phishing attacks and zero-days aimed at cognitive and emotional attack surfaces. Yet there's no CVE database (https://nvd.nist.gov/vuln) for human bugs.

In traditional spam, the objective is often explicit: trick someone into clicking a link or relinquishing personal information. With ragebait or shitposts, the aim becomes more abstract but equally exploitative. They prey on psychological vulnerabilities, hijacking attention and mental bandwidth. Just as spam targets software vulnerabilities or user ignorance, these newer forms leverage cognitive biases, emotional triggers, or divisive social issues. The methodology is identical: exploit a weakness to gain something, be it data, attention, or social capital.

The crucial difference between spam and social media lies in the facilitation and profiteering by large corporations. In the early days of spam, Internet Service Providers and email services would actively try to filter out unwanted messages, largely to improve user experience. Now, social media companies have little incentive to mitigate these patterns because they profit through engagement, regardless of the quality of that engagement. Every click, share, or enraged comment on a piece of ragebait translates to more ad impressions, higher user engagement metrics, and ultimately, more revenue. This creates a perverse incentive structure where the platform's economic interests are aligned with the spread of intellectually or emotionally toxic content.

Moreover, the algorithms these platforms employ exacerbate the issue. Designed to maximize engagement, they are agnostic to the ethical implications of the content they promote. The high engagement garnered by ragebait, shitposts, and grifts makes them more likely to be pushed to the forefront, creating a feedback loop that encourages the production of even more such content.

What's important to realize is that these dynamics are not accidental byproducts but are often by design, baked into the business model of many social media platforms. These companies have the technical means to identify and curb spam-like behavior but choose not to do so comprehensively because it would interfere with profit generation.

The problem is systemic and requires systemic solutions. Consumer-level interventions like education or awareness campaigns are inadequate. It would be naïve to think the issue will resolve itself when the economic incentives for corporations are stacked against such an outcome. Legislative action targeting the business models of these platforms is a more promising avenue, but that opens a Pandora's Box of free speech and antitrust issues. Regardless, the first step to solving any problem is recognizing there is one. And make no mistake, we have a problem.

The antidote to the issue discussed is bifurcated: revamping late-2000s anti-spam strategies and reintroducing the Internet's dormant broadcasting capabilities. This returns the Internet to pure stigmergy without conspirators and secret societies.

Firstly, consider an evolved form of CRM114 (https://media.blackhat.com/bh-us-10/whitepapers/Yerazunis/BlackHat-USA-2010-Yerazunis-Confidential-Mail-Filtering-wp.pdf). The actual presentation, rather than the notes, can be found on YouTube. Unfortunately the project is unmaintained, but the official website remains online (https://crm114.sourceforge.net).

CRM114 is a discriminator originally designed to sift out spam emails by statistically scrutinizing text patterns with a cocktail of Bayesian filtering, Markov chains, and other classifiers. Whereas its original purpose was to filter out malicious or irrelevant email (or to prevent accidental exfiltration of confidential data, in a language the developer didn't understand), this next iteration would focus on tooling that identifies and subsequently weeds out dark patterns in user-generated content, using a similar cocktail of technologies.

If CRM114 could successfully classify email spam (and it did), it is entirely conceivable that an analogous system could classify ragebait, shitposts, or other forms of emotionally manipulative content based on textual or even visual cues. By targeting the specific behavioral patterns that exploit cognitive biases or emotional vulnerabilities, the upgraded anti-spam mechanism could filter out the noise, leaving behind only valuable content, where value is entirely user-defined.
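
To make this concrete, here's a rough sketch in Python: not CRM114's actual sparse-hashing and Markov machinery, just a word-level naive Bayes filter trained on user-labeled posts. The labels and training snippets are invented; a real system would need far more data and, as noted above, visual cues as well.

    import math
    import re
    from collections import Counter, defaultdict

    def tokenize(text):
        # Lowercase word tokens; CRM114 itself hashes token combinations instead.
        return re.findall(r"[a-z']+", text.lower())

    class NaiveBayesFilter:
        def __init__(self):
            self.word_counts = defaultdict(Counter)  # label -> word -> count
            self.doc_counts = Counter()              # label -> number of training posts
            self.vocab = set()

        def train(self, text, label):
            self.doc_counts[label] += 1
            for word in tokenize(text):
                self.word_counts[label][word] += 1
                self.vocab.add(word)

        def _log_posterior(self, text, label):
            # Log prior plus log likelihood with add-one (Laplace) smoothing.
            logp = math.log(self.doc_counts[label] / sum(self.doc_counts.values()))
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for word in tokenize(text):
                logp += math.log((self.word_counts[label][word] + 1) / denom)
            return logp

        def classify(self, text):
            return max(self.doc_counts, key=lambda label: self._log_posterior(text, label))

    # Hypothetical training data: the user labels posts they consider bait vs. ham.
    filt = NaiveBayesFilter()
    filt.train("you won't believe what they did, absolutely disgusting", "bait")
    filt.train("they are destroying everything, share before it's deleted", "bait")
    filt.train("new release notes for the statistical filtering toolkit", "ham")
    filt.train("benchmarks of the markov chain classifier on the spam corpus", "ham")

    print(filt.classify("absolutely disgusting, share this now"))  # -> bait
    print(filt.classify("notes on the classifier benchmarks"))     # -> ham

The point is that "value" stays user-defined: the training labels come from the person running the filter, not from a platform.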

The second prong involves reopening the now-closed mail relays and letting information flow unimpeded again, and, in the same spirit, creating new open relays for other types of content, such as RSS.

While SMTP has its own problems, the underlying technology represents the best of the Internet: open, nondiscriminatory, and ubiquitous, using the sheer volume of nodes to guarantee delivery. It represents the kind of Internet where content moves freely, without proprietary bottlenecks or gatekeeping. By introducing newer protocols on the basis of SMTP, the Internet can revert to its original design as a vast, decentralized network. Individuals would then regain the capability to broadcast pure digital signals, spam, ham, or otherwise, without reliance on intermediary platforms that are susceptible to the perverse incentives discussed earlier.
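
On the broadcasting side, the lowest-effort revival is self-publishing over an open format. Purely as an illustration (the feed title, URLs, and items below are invented), a minimal RSS 2.0 feed can be built with nothing but the Python standard library; an actual relay network would still need distribution and retention on top of this.

    from datetime import datetime, timezone
    from email.utils import format_datetime   # RFC 2822 dates, as RSS expects
    from xml.sax.saxutils import escape

    def rss_feed(title, link, items):
        """Build an RSS 2.0 document. items: iterable of (title, url, body, datetime)."""
        entries = []
        for item_title, url, body, when in items:
            entries.append(
                "<item>"
                f"<title>{escape(item_title)}</title>"
                f"<link>{escape(url)}</link>"
                f"<description>{escape(body)}</description>"
                f"<pubDate>{format_datetime(when)}</pubDate>"
                "</item>"
            )
        return (
            '<?xml version="1.0" encoding="UTF-8"?>'
            '<rss version="2.0"><channel>'
            f"<title>{escape(title)}</title>"
            f"<link>{escape(link)}</link>"
            f"<description>{escape(title)}</description>"
            + "".join(entries)
            + "</channel></rss>"
        )

    # Hypothetical feed: one post, broadcast from the author's own host.
    print(rss_feed(
        "sol feed",
        "https://example.net/sol",
        [("On Neospam", "https://example.net/sol/1",
          "Ragebait, shitposts, and grifting are modern spam.",
          datetime(2023, 10, 3, 22, 27, tzinfo=timezone.utc))],
    ))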

When both strategies are applied in concert, they offer a pathway to reclaim a cleaner, more direct experience on the Internet. Such a dual approach would effectively circumvent the centralization and monetization of attention that fuels the current system. For those who opt in, it means a return to an environment where information is both tailored and substantial, devoid of middlemen who distort content for profit and currency.

In short, we could end the Eternal September—the influx of new users that forever changed the landscape of the Internet. It is within reach through these measures. It would mean a return to an Internet that values constant signal over temporary noise, where users actively contribute to and curate their own information streams, free from manipulative algorithms, corporate oversight, and petty tyrants. This isn't a utopian ideal but a technically feasible objective, requiring both the will to build and adopt new systems, and the foresight to recognize the dire need for them.

2 2023-10-03 22:28

The immediate counterargument one might anticipate is that these solutions are too complex or demanding for the average Internet user. But this complexity is not a bug; it's a feature. It's precisely this level of granularity that allows for a highly customized experience. Complexity serves as a sort of filter, reserving this improved digital space for those who are committed to bettering their own informational environment. That commitment is itself a form of protection against the very problems we are discussing.

Consider curated ad-blocking lists, a concept most Internet users are familiar with. These lists are maintained by communities of advanced users and are often composed of both specific domain names and regular expression (regex) sets designed to identify the structure of ad-related content. The average user may not understand the intricacies of these lists, but they don't need to; they simply reap the benefits by subscribing to them. These lists effectively restore control to the user, allowing them to dictate the terms of their own browsing experience, rather than surrendering that control to advertising networks.
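
To illustrate the mechanism (the list format and every entry below are made up; real lists such as EasyList use a richer syntax), a client-side check against a subscribed list of domains and regexes might look like this:

    import re
    from urllib.parse import urlparse

    # Hypothetical subscribed blocklist: bare lines are domains, /.../ lines are regexes.
    BLOCKLIST = """
    tracker.example.com
    ads.example.net
    /[?&]utm_[a-z]+=/
    /sponsored[-_]content/
    """

    def compile_blocklist(text):
        domains, patterns = set(), []
        for line in text.splitlines():
            line = line.strip()
            if not line:
                continue
            if line.startswith("/") and line.endswith("/"):
                patterns.append(re.compile(line[1:-1], re.IGNORECASE))
            else:
                domains.add(line.lower())
        return domains, patterns

    def is_blocked(url, domains, patterns):
        host = (urlparse(url).hostname or "").lower()
        if host in domains or any(host.endswith("." + d) for d in domains):
            return True
        return any(p.search(url) for p in patterns)

    domains, patterns = compile_blocklist(BLOCKLIST)
    print(is_blocked("https://ads.example.net/banner.js", domains, patterns))         # True
    print(is_blocked("https://example.org/story?utm_source=rage", domains, patterns)) # True
    print(is_blocked("https://example.org/story/42", domains, patterns))              # False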

Similarly, human filters for user-side customization can function on the same principles. There would be no obligation for every user to understand the underlying mechanics, nor to engage in their own personal curation. Just as one doesn't need to be a mechanic to drive a car, one doesn't need to be a software engineer to navigate a cleaner, less manipulative Internet. Users can opt into sets of filters curated by others, be they organizations committed to unbiased information dissemination or communities centered around specific interests. One might object that this is precisely what platforms already do, but the difference is that the user opts in only to blacklists, not to their content.
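
Here's a sketch of how that opt-in could compose, assuming for illustration that a curated set is just a published list of patterns the client fetches and merges with any other sets it subscribes to (the rule format, list names, and sample posts are invented):

    import re
    from typing import Callable, Iterable

    Filter = Callable[[str], bool]   # True means "drop this post"

    def regex_rule(pattern: str) -> Filter:
        compiled = re.compile(pattern, re.IGNORECASE)
        return lambda post: bool(compiled.search(post))

    def load_curated_set(lines: Iterable[str]) -> list[Filter]:
        # One pattern per line; a richer format could also carry classifier
        # weights or hashes of known-bad content.
        return [regex_rule(line.strip()) for line in lines if line.strip()]

    def visible(posts: Iterable[str], subscriptions: list[list[Filter]]) -> list[str]:
        # A post survives only if no rule in any subscribed set matches it.
        return [post for post in posts
                if not any(rule(post) for rules in subscriptions for rule in rules)]

    # Hypothetical curated sets the user has opted into.
    outrage_list = load_curated_set(["you won'?t believe", "destroyed in one"])
    grift_list = load_curated_set(["limited[- ]time offer", "not financial advice"])

    posts = [
        "CRM114 0.9 release notes and benchmarks",
        "You won't believe what this politician said",
        "Limited time offer: my course on passive income",
    ]
    print(visible(posts, [outrage_list, grift_list]))
    # -> ['CRM114 0.9 release notes and benchmarks']

The user picks which lists to merge; nothing upstream decides what they see.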

What's critical to understand is that these solutions create a tiered Internet experience. For those unwilling or unable to navigate the complexities, the current platforms of the Internet would still exist. For those who take the time to engage with these advanced tools, a purer, undiluted form of information awaits. It's essentially a return to an expertise-driven model, a digital meritocracy where the depth of your experience is commensurate with your level of engagement and understanding. And that's not elitism; that's the nobility (and noblesse oblige) of the technical caste.
