Feb 13, 2026

Fighting Back

Data Poisoning is my new favorite subversion.

In a somewhat superficial attempt to test the theory, I googled "How to fuck up the algorithm."

It came back with this:

"Fucking up the algorithm" on social media involves actions that confuse the recommendation system by providing it with inconsistent or contradictory data about your interests. This causes the platform to show you irrelevant or unwanted content.

Here are ways to confuse or disrupt a social media algorithm:
  • Interact with diverse and random content: Like, comment on, and watch content from a wide variety of unrelated topics and niches. This sends mixed signals about your preferences.
  • Avoid consistently liking content: Refrain from the habit of liking every post you see, as this makes it harder for the algorithm to discern your genuine interests.
  • Engage with undesirable content:
    • Intentionally interact with content you do not like (e.g., watching videos you hate, commenting on posts you disagree with). The algorithm may interpret any interaction as a sign of interest.
  • Utilize platform tools to misguide it:
    • Block accounts that post content you want to see.
    • Mark "see less like this" on content you might otherwise enjoy.
  • Clear your history frequently: On some platforms like YouTube, clearing your watch and search history can disrupt the algorithm's long-term tracking, causing it to hyper-focus on the next few videos you watch, which you can use to further confuse it.
  • Use private browsing or third-party tools:
    • Access content through methods that prevent the platform from tracking your behavior, which limits the data the algorithm can use to build your profile.
  • Post inconsistently and irregularly:
    • Posting content at random times and frequencies, rather than following optimized schedules, can hinder the algorithm's ability to predict engagement patterns for creators.
  • Create a new profile and immediately violate common practices:
    • For dating apps like Tinder, repeatedly deleting and remaking your profile can lead to a "shadow ban" or other negative algorithmic consequences.

Essentially, the goal is to behave in a way that defies typical human attention patterns and makes it impossible for the algorithm to build an accurate, predictable profile of your interests.
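All of those tactics boil down to one statistical trick: flooding your profile with uniform noise until no preference signal survives. Here's a toy sketch of why that works, using a made-up `InterestProfile` class as a stand-in for a recommender's per-user model (no real platform works exactly like this, but the principle is the same):

```python
import random
from collections import Counter

random.seed(0)  # reproducible toy run

CATEGORIES = ["cooking", "politics", "sports", "crafts", "finance", "gaming"]

class InterestProfile:
    """Toy stand-in for a recommender's per-user interest model."""
    def __init__(self):
        self.counts = Counter()

    def record(self, category):
        # Every interaction (like, watch, comment) bumps a category.
        self.counts[category] += 1

    def top_interest(self):
        # The "profile": whichever category got the most interactions.
        return self.counts.most_common(1)[0][0]

    def confidence(self):
        # Share of all interactions in the top category; 1.0 = fully predictable.
        total = sum(self.counts.values())
        return self.counts.most_common(1)[0][1] / total

# A genuine user: about 90% of engagement lands in one niche.
honest = InterestProfile()
for _ in range(1000):
    honest.record("cooking" if random.random() < 0.9 else random.choice(CATEGORIES))

# A "data poisoner": engages uniformly at random, per the list above.
poisoner = InterestProfile()
for _ in range(1000):
    poisoner.record(random.choice(CATEGORIES))

print(honest.confidence())    # high — easy to profile
print(poisoner.confidence())  # near 1/6 — no usable signal
```

The honest user's top category dominates, so the machine can predict them; the poisoner's counts are nearly flat, so any recommendation is a coin flip across six categories.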

I'm proceeding on the presumption that, in spite of what the tech geniuses are trying very hard to talk us into believing, machines are not now, nor will they ever become, sentient.

No matter how sophisticated the software gets, it's never going to be moved to tears by a sunrise over the plains seen from a perch in the mountains, or by the sound of Pavarotti singing Nessun Dorma. It's never going to feel content sitting in the sun on a winter day. It's never going to be joyful or happy, and it's never going to be suicidally depressed.

Because it's a machine. It can't see or touch or taste or hear or smell. In the end, it's nothing but a well-ordered pile of silicates and conductors and plastic.

Of course, time could prove me wrong about all that. But I doubt it.
