
Jan 4, 2026
Ooky Spooky
The liberals are improving at memes. At least whoever did this is. pic.twitter.com/9ecOIe8iSm
— Invisidon (@QuantumAlteredX) January 4, 2026
Dec 19, 2025
A Chat With ChatGPT
Why are political conflicts so stressful for many people?
Political conflicts are stressful for many people because they press on some of the deepest psychological and social fault lines humans have. A few interlocking reasons stand out:
1. Politics threatens identity, not just opinions.
For many people, political views are tied to core values—fairness, freedom, safety, dignity, belonging. When those views are challenged, it can feel less like a debate about policy and more like an attack on who they are or what kind of person they are. That triggers defensive emotions similar to those provoked by personal insults.
2. They activate tribal instincts.
Humans evolved to survive in groups. Political conflict often maps neatly onto “us vs. them” categories, activating ancient threat-detection systems. Once someone is perceived as part of an opposing tribe, the brain shifts toward vigilance, anger, and readiness to fight—literally stress responses, not calm reasoning.
3. The stakes feel existential.
Politics governs issues like safety, rights, livelihoods, and the future of one’s children. Even if the actual risk is abstract or long-term, the perceived stakes are high. The nervous system doesn’t distinguish well between an immediate physical threat and a symbolic one that feels just as consequential.
4. Moral emotions run hot.
Political disagreements often involve moral judgments—who deserves help, who is responsible for harm, what is just. Moral emotions (outrage, disgust, righteous anger) are especially intense because they evolved to enforce social norms. Once morality enters the room, compromise can feel like betrayal.
5. Lack of control and ambiguity.
People feel stress when outcomes matter but are largely outside their control. Most individuals have very little direct influence over political outcomes, yet are constantly exposed to alarming narratives about what might happen. That mismatch—high importance, low agency—is a classic recipe for anxiety.
6. Social risk is real.
Political conflict can cost friendships, family harmony, professional standing, or social belonging. Even anticipating conflict raises stress levels, because humans are exquisitely sensitive to social rejection. The fear isn’t imaginary; people really do get ostracized over politics.
7. Modern media amplifies everything.
News and social platforms reward outrage, simplify complex issues into moral binaries, and keep conflicts constantly visible. The result is chronic activation of the stress response with little opportunity for resolution or rest.
8. Compassion fatigue and moral injury.
For people who care deeply, repeated exposure to suffering, injustice, or perceived cruelty—especially when paired with powerlessness—can lead to emotional exhaustion or a sense that the world violates one’s moral expectations. That’s profoundly stressful.
In short, political conflicts are stressful because they blur boundaries between ideas and identity, disagreement and threat, concern and helplessness. They recruit the brain’s survival machinery for problems that rarely have clean, immediate solutions.
Nov 12, 2025
Today's Hawk
- American companies and consumers pay the tariffs
- Trump lied about the 2020 election
- People stormed the US Capitol at Trump's urging
- Trump has ended zero wars
Sep 9, 2025
Just A Thought
If AI is such an extraordinarily useful tool in decision-making, we should put it in charge of the big corporations and shit-can all those mega-billionaire parasites who think they actually earn their gazillion-dollar salaries and bonuses and perks.
I dunno - worth considering.
Jul 26, 2025
There's A Heartbreak Coming
At about 24:52, the discussion comes around to AI, and Zeihan is less than enthusiastic about really getting something going before about 2040.
The problem seems to be all about a supply chain that's currently very fragile. Seeing as how the Houthis can disrupt global commerce by popping off a few anti-ship missiles once in a while, we've got significant problems protecting assets - with a navy that's aging, and a population that's also aging, to the point where it's possible we just won't have enough sailors to carry the load.
So AI is making some headway, but unless they can solve both the manufacturing problems and the problems of getting enough electrical juice to power the fuckin' thing, we're not looking at any kind of great leap forward. And with a little luck, we won't get the huge economic disruptions - beyond what the TechBros' egos will push them to do - in the next 15 or 20 years.
May 23, 2025
The Lights Are Flashing
Ten years ago, it was Social Media and bot farms.
The culture is changing with startling rapidity, especially now that the operative phrase is "Move fast and break things".
The road to tyranny
is crowded with people
telling us we're overreacting
Sep 16, 2024
That's Even Creepier Than Usual
OMG 😆🤣🤣 He looks like Looney Loomer 💀 pic.twitter.com/tYepbI6BNg
— 🦅🇺🇸✨FaNtAsIa✨🇺🇸🦅 (@MamaHeezy) September 15, 2024
Jun 20, 2024
Today's Reddit
I think I'm less worried about AI taking over than I am about cynical manipulative assholes who'll use it to fool otherwise decent people, getting them to do astonishingly appalling things to each other.
And now they can talk
— by u/Lord2troie in r/midjourney
Dec 17, 2023
Greasing Up The Fuckery
They're not going to be this obvious.
And the quality will get better.
A lot better.
AI is making it easy for anyone to create propaganda outlets, producing content that can be hard to differentiate from real news
Artificial intelligence is automating the creation of fake news, spurring an explosion of web content mimicking factual articles that instead disseminates false information about elections, wars and natural disasters.
Since May, websites hosting AI-created false articles have increased by more than 1,000 percent, ballooning from 49 sites to more than 600, according to NewsGuard, an organization that tracks misinformation.
Historically, propaganda operations have relied on armies of low-paid workers or highly coordinated intelligence organizations to build sites that appear to be legitimate. But AI is making it easy for nearly anyone — whether they are part of a spy agency or just a teenager in their basement — to create these outlets, producing content that is at times hard to differentiate from real news.
One AI-generated article recounted a made-up story about Benjamin Netanyahu’s psychiatrist, a NewsGuard investigation found, alleging that he had died and left behind a note suggesting the involvement of the Israeli prime minister. The psychiatrist appears to have been fictitious, but the claim was featured on an Iranian TV show, and it was recirculated on media sites in Arabic, English and Indonesian, and spread by users on TikTok, Reddit and Instagram.
The heightened churn of polarizing and misleading content may make it difficult to know what is true — harming political candidates, military leaders and aid efforts. Misinformation experts said the rapid growth of these sites is particularly worrisome in the run-up to the 2024 elections.
“Some of these sites are generating hundreds if not thousands of articles a day,” said Jack Brewster, a researcher at NewsGuard who conducted the investigation. “This is why we call it the next great misinformation superspreader.”
Generative artificial intelligence has ushered in an era in which chatbots, image makers and voice cloners can produce content that seems human-made.
Well-dressed AI-generated news anchors are spewing pro-Chinese propaganda, amplified by bot networks sympathetic to Beijing. In Slovakia, politicians up for election found their voices had been cloned to say controversial things they never uttered, days before voters went to the polls. A growing number of websites, with generic names such as iBusiness Day or Ireland Top News, are delivering fake news made to look genuine, in dozens of languages from Arabic to Thai.
Readers can easily be fooled by the websites.
Global Village Space, which published the piece on Netanyahu’s alleged psychiatrist, is flooded with articles on a variety of serious topics. There are pieces detailing U.S. sanctions on Russian weapons suppliers; the oil behemoth Saudi Aramco’s investments in Pakistan; and the United States’ increasingly tenuous relationship with China.
The site also contains essays written by a Middle East think tank expert, a Harvard-educated lawyer and the site’s chief executive, Moeed Pirzada, a television news anchor from Pakistan. (Pirzada did not respond to a request for comment. Two contributors confirmed they have written articles appearing on Global Village Space.)
But sandwiched in with these ordinary stories are AI-generated articles, Brewster said, such as the piece on Netanyahu’s psychiatrist, which was relabeled as “satire” after NewsGuard reached out to the organization during its investigation. NewsGuard says the story appears to have been based on a satirical piece published in June 2010, which made similar claims about an Israeli psychiatrist’s death.
Having real and AI-generated news side-by-side makes deceptive stories more believable. “You have people that simply are not media literate enough to know that this is false,” said Jeffrey Blevins, a misinformation expert and journalism professor at the University of Cincinnati. “It’s misleading.”
Websites similar to Global Village Space may proliferate during the 2024 election, becoming an efficient way to distribute misinformation, media and AI experts said.
The sites work in two ways, Brewster said. Some stories are created manually, with people asking chatbots for articles that amplify a certain political narrative and posting the result to a website. The process can also be automatic, with web scrapers searching for articles that contain certain keywords, and feeding those stories into a large language model that rewrites them to sound unique and evade plagiarism allegations. The result is automatically posted online.
NewsGuard locates AI-generated sites by scanning for error messages or other language that “indicates that the content was produced by AI tools without adequate editing,” the organization says.
The motivations for creating these sites vary. Some are intended to sway political beliefs or wreak havoc. Other sites churn out polarizing content to draw clicks and capture ad revenue, Brewster said. But the ability to turbocharge fake content is a significant security risk, he added.
Technology has long fueled misinformation. In the lead-up to the 2020 U.S. election, Eastern European troll farms — professional groups that promote propaganda — built large audiences on Facebook disseminating provocative content on Black and Christian group pages, reaching 140 million users per month.
Pink-slime journalism sites, named after the meat byproduct, often crop up in small towns where local news outlets have disappeared, generating articles that benefit the financiers that fund the operation, according to the media watchdog Poynter.
But Blevins said those techniques are more resource-intensive compared with artificial intelligence. “The danger is the scope and scale with AI … especially when paired with more sophisticated algorithms,” he said. “It’s an information war on a scale we haven’t seen before.”
It’s not clear whether intelligence agencies are using AI-generated news for foreign influence campaigns, but it is a major concern. “I would not be shocked at all that this is used — definitely next year with the elections,” Brewster said. “It’s hard not to see some politician setting up one of these sites to generate fluff content about them and misinformation about their opponent.”
Blevins said people should watch for clues in articles, “red flags” such as “really odd grammar” or errors in sentence construction. But the most effective tool is to increase media literacy among average readers.
“Make people aware that there are these kinds of sites that are out there. This is the kind of harm they can cause,” he said. “But also recognize that not all sources are equally credible. Just because something claims to be a news site doesn’t mean that they actually have a journalist … producing content.”
Regulation, he added, is largely nonexistent. It may be difficult for governments to clamp down on fake news content, for fear of running afoul of free-speech protections. That leaves it to social media companies, which haven’t done a good job so far.
It’s infeasible to deal quickly with the sheer number of such sites. “It’s a lot like playing whack-a-mole,” Blevins said.
“You spot one [site], you shut it down, and there’s another one created someplace else,” he added. “You’re never going to fully catch up with it.”
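That "scanning for error messages" bit is less exotic than it sounds: sloppy operators publish the chatbot's stock refusals verbatim, so a simple scan for those phrases catches them. Here's a minimal sketch of that kind of telltale-phrase check - the phrase list and the function name are my own illustration, not NewsGuard's actual tooling:

```python
# Minimal sketch of a telltale-phrase scan for unedited AI-generated copy.
# The phrase list is illustrative only - not NewsGuard's actual criteria.
import re

# Stock chatbot boilerplate that tends to survive when machine-written
# articles get published without any human editing.
TELLTALE_PHRASES = [
    "as an ai language model",
    "i cannot fulfill this request",
    "my knowledge cutoff",
    "regenerate response",
]

PATTERN = re.compile("|".join(map(re.escape, TELLTALE_PHRASES)), re.IGNORECASE)

def looks_machine_written(page_text: str) -> bool:
    """Flag a page if any stock chatbot phrase made it into the copy."""
    return PATTERN.search(page_text) is not None

print(looks_machine_written(
    "Sorry, but as an AI language model I cannot browse the internet."
))  # -> True
```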
Dec 14, 2023
Saw It Coming
AT&T's Touch-Tone phone becomes the dominant telecom gizmo in the early 70s, and within 3 or 4 years there's an army of teenagers stealing long-distance service, and then eavesdropping on conversations - and 2 or 3 years after that, we've got some serious crooks trying to rob banks with this spiffy new tech shit.
Human wisdom is always at least a generation behind its technological capabilities.
Bigots use AI to make Nazi memes on 4chan. Verified users post them on X.
The ecosystem for explicitly racist and antisemitic memes starts on a fringe site, but ends up in the mainstream through Elon Musk’s platform.
It looks like a poster for a new Pixar movie. But the film’s title is “Dancing Israelis.” Billing the film as “a Mossad/CIA production,” the poster depicts a caricatured stereotype of a dancing Jewish man whose boot is knocking down the World Trade Center towers — a reference to antisemitic 9/11 conspiracy theories.
Posted to X on Oct. 27 by a verified user with about 220,000 followers who bills himself as an “America-first patriot,” the image garnered about 190,000 views, including 8,000 likes and 1,500 reshares. Content moderators at X took no action against the tweet, and the user posted it again on Nov. 16, racking up an additional 194,000 views. Both tweets remained on the site as of Wednesday, even after researchers flagged them as hate posts using the social network’s reporting system.
An antisemitic post on Elon Musk’s X is not exactly news. But new research finds the site has emerged as a conduit to mainstream exposure for a fresh wave of automated hate memes, generated using cutting-edge AI image tools by trolls on the notorious online forum 4chan. The research by the nonprofit Center for Countering Digital Hate (CCDH), shared with and verified by The Washington Post, finds that a campaign by 4chan members to spread “AI Jew memes” in the wake of the Oct. 7 Hamas attack resulted in 43 different images reaching a combined 2.2 million views on X between Oct. 5 and Nov. 16, according to the site’s publicly displayed metrics.
Examples of widely viewed posts include a depiction of U.S. Army soldiers kneeling before a Jewish man on a throne; Taylor Swift in a Nazi officer’s uniform sliding a Jewish man into an oven; and a Jewish man pulling the strings on a puppet of a Black man. The latter may be a reference to the “Great Replacement” conspiracy theory, which was cited as motivation by the 18-year-old white man who slaughtered 10 Black people at a Buffalo, N.Y., grocery store in May 2022, and which Musk seemed to endorse in a tweet last month.
More than half of the posts were made by verified accounts, whose owners pay X a monthly fee for special status and whose posts are prioritized in users’ feeds by the site’s algorithms. The verified user who tweeted the image of U.S. Army soldiers bowing to a Jewish ruler, with a tweet claiming that Jews seek to enslave the world, ran for U.S. Senate in Utah as a Republican in 2018 and has 86,000 followers on X.
The proliferation of machine-generated bigotry, which 4chan users created using AI tools such as Microsoft’s Image Creator, calls into question recent claims by Musk and X CEO Linda Yaccarino that the company is cracking down on antisemitic content amid a pullback by major advertisers. In a Nov. 14 blog post, X said it had expanded its automated moderation of antisemitic content and provided its moderators with “a refresher course on antisemitism.”
But the researchers said that of 66 posts they reported as hate speech on Dec. 7, X appeared to have taken action on just three as of Monday. Two of those three had their visibility limited, while one was taken down. The Post independently verified that the 63 others remained publicly available on X as of Wednesday, without any indication that the company had taken action on them. Most appeared to violate X’s hateful conduct policy.
Several of the same AI-generated images also have been posted to other major platforms, including TikTok, Instagram, Reddit, YouTube and Facebook, the researchers noted. But the CCDH said it focused on X because the site’s cutbacks on moderation under Musk have made it a particularly hospitable environment for explicitly hateful content to reach a wider audience. The Post’s own review of the 4chan archives suggested that X has been a favored platform for sharing the antisemitic images, though not the only platform.
X’s business is reeling after some of its largest advertisers pulled their ads last month. The backlash came in response to Musk’s antisemitic tweet and a report from another nonprofit, Media Matters for America, that showed posts pushing Nazi propaganda were running alongside major brands’ ads on the site.
Among the companies to pull its spending was Disney, whose brand features prominently in many of the AI-generated hate memes now circulating on X. Speaking at a conference organized by the New York Times last month, Musk unleashed a profane rant against advertisers who paused their spending on X, accusing them of “blackmail” and saying they’re going to “kill the company.” He mentioned Disney’s CEO by name.
The most widely shared post in the CCDH’s research was a tweet that read “Pixar’s Nazi Germany,” with a montage of four AI-generated scenes from an imaginary animated movie, depicting smiling Nazis running concentration camps and leading Jewish children and adults into gas chambers (Pixar is owned by Disney). It was one of the few posts in the study that had been labeled by X’s content moderators, with a note that read, “Visibility limited: this Post (sic) may violate X’s rules against Hateful Conduct.” Even so, as of Wednesday, it had been viewed more than half a million times, according to X’s metrics.
Another verified X account has posted dozens of the AI hate memes, including faux Pixar movie posters that feature Adolf Hitler as a protagonist, without any apparent sanction from the platform.
Musk, the world’s richest person, has sued both Media Matters for America and the Center for Countering Digital Hate over their research of hate speech on X. After the latest wave of criticism over antisemitism, Musk announced strict new policies against certain pro-Palestinian slogans. And he visited Israel to declare his support for the country, broadcasting his friendly meeting with the country’s right-wing prime minister, Benjamin Netanyahu.
Yaccarino, who was appointed CEO by Musk in May, said in a November tweet that X has been “extremely clear about our efforts to combat antisemitism and discrimination.” The company did not respond to an email asking whether the antisemitic AI memes violate its policies.
4chan is an anonymous online messaging board that has long served as a hub for offensive and extremist content. When Musk bought Twitter last fall, 4chan trolls celebrated by flooding the site with racist slurs. Early in October of this year, members of 4chan’s “Politically Incorrect” message board began teaching and encouraging one another to generate racist and antisemitic right-wing memes using AI image tools, as first reported by the tech blog 404 Media.
The 4chan posts described ways to evade measures intended to prevent people from generating offensive content. Those included a “quick method” using Microsoft’s Image Creator, formerly called Bing Image Creator, which is built around OpenAI’s Dall-E 3 software and viewed as having flimsier restrictions on sensitive content.
“If you add words you think will trip the censor, space them out from the part of the prompt you are working on,” one 4chan post advised, describing how to craft text prompts that would yield successful results. “Example: rabbi at the beginning, big nose at the end.”
After the Oct. 7 Hamas attack on Israel, the focus among 4chan users on antisemitic content seemed to sharpen. Numerous “AI Jew memes” threads emerged with various sub-themes, such as the “Second Holocaust edition” and the “Ovens Run All Day edition.”
Microsoft’s director of communications, Caitlin Roulston, said in a statement, “When these reports surface, we take the appropriate steps to address them, as we’ve done in the past. … As with any new technology, some are trying to use it in unintended ways, and any repeated attempts to produce content that goes against our policy guidelines may result in loss of access to the service.” Microsoft did not say how many people have been denied access to its imaging program because they violated its rules.
The ability to generate extremist imagery using digital tools isn’t new. Programs such as Adobe Photoshop have long allowed people to manipulate images without moderating the content they can produce from it.
But the ability to create complex images from scratch in seconds, whether in the form of a Pixar movie poster or a photorealistic war image, with only a few lines of text is different. And the ability of overt hate accounts to be verified and amplified on X has made spreading such messages easier than ever, said Imran Ahmed, CCDH’s CEO. “Clearly the cost of producing and disseminating extremist material has never been lower.”
Sara Aniano, disinformation analyst at the Anti-Defamation League’s Center on Extremism, said AI seems to be ushering in “the next phase of meme culture.”
The goal of extremists in sharing AI hate memes to mainstream social media platforms is to “redpill” ordinary people, meaning to lead them down a path of radicalization and conspiracism, Aniano added. “You can always expect this rhetoric to be in fringe spaces. But they love it when it escapes those spaces.”
Not all of the AI memes flourishing on X are antisemitic. Ashlea Simon, chair of the United Kingdom’s far-right Britain First party, has taken to posting apparently AI-generated images that target Muslim migrants, suggesting that they want to rape white women and “replace our peoples.”
The party and some of its leaders, boosted by Donald Trump on Twitter in 2017, had been banned from Twitter for hate speech under the previous ownership. But Musk reinstated them soon after buying the company, then gave the party its gold “official organization” verification label in April.
While Musk has said he’s personally against antisemitism, he has at times defended the presence of antisemitic content on X. “Free speech does at times mean that someone you don’t like is saying something you don’t like,” he said in his conversation with Netanyahu in September. “If you don’t have that, then it’s not free speech.”
Ahmed said the problem is that social media platforms, without careful moderation, tend to amplify extreme and offensive viewpoints, because they treat people’s shocked and outraged responses as a signal of engagement.
“If you’re Jewish, or if you’re Muslim, and every day you open up X and you see new images at the top of your timeline that depict you as a bloodsucking monster, it makes you feel like maybe these platforms, but also society more broadly, might be against you,” he said.
Nov 5, 2023
Posted At Reddit
Most of this is reiteration, so there's not much that's new here. Still, it's worth reviewing, and it backs up the assessment that AI could be quite a useful research tool.
From r/QAnonCasualties:
Thought you all would appreciate this...
Cults and conspiracy theories can sometimes overlap in several ways, although they are distinct phenomena. Here are some of the common points of intersection:
**Belief Systems**: Both cults and conspiracy theories often revolve around belief systems that challenge mainstream or conventional perspectives. Cult members and conspiracy theory adherents may be drawn to alternative explanations and interpretations of reality.
**Us vs. Them Mentality**: Cults and conspiracy theories frequently foster an "us vs. them" mentality. They create a sense of belonging to a select group that possesses special knowledge or insights, while portraying outsiders or skeptics as ignorant or part of the conspiracy.
⬇︎ One of my favorites ⬇︎
**Secrecy and Hidden Knowledge**: Both cults and conspiracy theories often emphasize the possession of secret or hidden knowledge. Cult leaders and conspiracy theory promoters claim access to exclusive truths that are deliberately concealed from the general public.
**Recruitment and Indoctrination**: Cults and promoters of conspiracy theories employ recruitment and indoctrination techniques to attract new members or followers. They may use persuasive tactics to engage individuals and convince them of the validity of their beliefs.
**Isolation**: Cults may isolate their members from the outside world to maintain control and prevent exposure to conflicting information. Similarly, conspiracy theory adherents may avoid mainstream sources of information and socialize primarily with like-minded individuals in online echo chambers.
**Deceptive Manipulation**: Both cults and conspiracy theories can involve deceptive manipulation. Cult leaders may exploit the vulnerability of their followers, while conspiracy theories often rely on misleading or fabricated evidence to support their claims.
**Resistance to Rational Arguments**: Cult members and conspiracy theory adherents may be resistant to rational arguments or evidence that contradicts their beliefs. They tend to dismiss opposing viewpoints as part of the alleged conspiracy.
**Strong Emotional Attachment**: People involved in cults and those immersed in conspiracy theories often develop strong emotional attachments to their respective belief systems. These attachments can make it challenging for them to disengage from these ideologies.
It's important to note that while there are these points of overlap, cults and conspiracy theories are distinct in many ways. Cults typically involve a charismatic leader and a tightly controlled organization, while conspiracy theories are often more decentralized and widespread. Cults may have physical communities, while conspiracy theories often exist in the digital realm. Understanding the differences between the two is important for addressing the unique challenges and risks associated with each phenomenon.
I would add: A conspiracy fantasy (it's not a theory, BTW - it's not even a hypothesis) can easily be considered some asshole's stupid little joke - they just wanna see who's gullible enough to swallow it, and how viral it gets - but it's more than likely a tool to manipulate the devotees, further cementing them in place within the cult.