Slouching Towards Oblivion

Showing posts with label AI.

Monday, December 18, 2023

Greasing Up The Fuckery

They're not going to be this obvious.
And the quality will get better.
A lot better.


The rise of AI fake news is creating a ‘misinformation superspreader’

AI is making it easy for anyone to create propaganda outlets, producing content that can be hard to differentiate from real news


Artificial intelligence is automating the creation of fake news, spurring an explosion of web content mimicking factual articles that instead disseminates false information about elections, wars and natural disasters.

Since May, websites hosting AI-created false articles have increased by more than 1,000 percent, ballooning from 49 sites to more than 600, according to NewsGuard, an organization that tracks misinformation.

Historically, propaganda operations have relied on armies of low-paid workers or highly coordinated intelligence organizations to build sites that appear to be legitimate. But AI is making it easy for nearly anyone — whether they are part of a spy agency or just a teenager in their basement — to create these outlets, producing content that is at times hard to differentiate from real news.

One AI-generated article recounted a made-up story about Benjamin Netanyahu’s psychiatrist, a NewsGuard investigation found, alleging that he had died and left behind a note suggesting the involvement of the Israeli prime minister. The psychiatrist appears to have been fictitious, but the claim was featured on an Iranian TV show, and it was recirculated on media sites in Arabic, English and Indonesian, and spread by users on TikTok, Reddit and Instagram.

The heightened churn of polarizing and misleading content may make it difficult to know what is true — harming political candidates, military leaders and aid efforts. Misinformation experts said the rapid growth of these sites is particularly worrisome in the run-up to the 2024 elections.

“Some of these sites are generating hundreds if not thousands of articles a day,” said Jack Brewster, a researcher at NewsGuard who conducted the investigation. “This is why we call it the next great misinformation superspreader.”

Generative artificial intelligence has ushered in an era in which chatbots, image makers and voice cloners can produce content that seems human-made.

Well-dressed AI-generated news anchors are spewing pro-Chinese propaganda, amplified by bot networks sympathetic to Beijing. In Slovakia, politicians up for election found their voices had been cloned to say controversial things they never uttered, days before voters went to the polls. A growing number of websites, with generic names such as iBusiness Day or Ireland Top News, are delivering fake news made to look genuine, in dozens of languages from Arabic to Thai.

Readers can easily be fooled by the websites.

Global Village Space, which published the piece on Netanyahu’s alleged psychiatrist, is flooded with articles on a variety of serious topics. There are pieces detailing U.S. sanctions on Russian weapons suppliers; the oil behemoth Saudi Aramco’s investments in Pakistan; and the United States’ increasingly tenuous relationship with China.

The site also contains essays written by a Middle East think tank expert, a Harvard-educated lawyer and the site’s chief executive, Moeed Pirzada, a television news anchor from Pakistan. (Pirzada did not respond to a request for comment. Two contributors confirmed they have written articles appearing on Global Village Space.)

But sandwiched in with these ordinary stories are AI-generated articles, Brewster said, such as the piece on Netanyahu’s psychiatrist, which was relabeled as “satire” after NewsGuard reached out to the organization during its investigation. NewsGuard says the story appears to have been based on a satirical piece published in June 2010, which made similar claims about an Israeli psychiatrist’s death.

Having real and AI-generated news side-by-side makes deceptive stories more believable. “You have people that simply are not media literate enough to know that this is false,” said Jeffrey Blevins, a misinformation expert and journalism professor at the University of Cincinnati. “It’s misleading.”

Websites similar to Global Village Space may proliferate during the 2024 election, becoming an efficient way to distribute misinformation, media and AI experts said.

The sites work in two ways, Brewster said. Some stories are created manually, with people asking chatbots for articles that amplify a certain political narrative and posting the result to a website. The process can also be automatic, with web scrapers searching for articles that contain certain keywords, and feeding those stories into a large language model that rewrites them to sound unique and evade plagiarism allegations. The result is automatically posted online.

NewsGuard locates AI-generated sites by scanning for error messages or other language that “indicates that the content was produced by AI tools without adequate editing,” the organization says.
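
To make that concrete, here's a minimal Python sketch of what that kind of telltale-phrase scan might look like. The phrase list and the matching rule are my own illustrative guesses, not NewsGuard's actual methodology, which it hasn't published in detail.

```python
# Minimal sketch: flag pages that contain leftover chatbot boilerplate.
# The telltale phrases below are illustrative assumptions, not
# NewsGuard's actual (unpublished) detection criteria.

TELLTALE_PHRASES = [
    "as an ai language model",
    "i cannot fulfill this request",
    "my knowledge cutoff",
    "i'm sorry, but as an ai",
]

def looks_ai_generated(page_text: str) -> bool:
    """Return True if the page contains any telltale chatbot phrase."""
    text = page_text.lower()
    return any(phrase in text for phrase in TELLTALE_PHRASES)

# Example: an article where the operator pasted the bot's refusal verbatim.
article = "Sorry, as an AI language model, I cannot complete this prompt..."
print(looks_ai_generated(article))  # True
```

A scan like this only catches the sloppiest operations, of course; a site that edits out the error messages sails right through, which is why NewsGuard pairs it with human review.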

The motivations for creating these sites vary. Some are intended to sway political beliefs or wreak havoc. Other sites churn out polarizing content to draw clicks and capture ad revenue, Brewster said. But the ability to turbocharge fake content is a significant security risk, he added.

Technology has long fueled misinformation. In the lead-up to the 2020 U.S. election, Eastern European troll farms — professional groups that promote propaganda — built large audiences on Facebook disseminating provocative content on Black and Christian group pages, reaching 140 million users per month.

Pink-slime journalism sites, named after the meat byproduct, often crop up in small towns where local news outlets have disappeared, generating articles that benefit the financiers that fund the operation, according to the media watchdog Poynter.

But Blevins said those techniques are more resource-intensive compared with artificial intelligence. “The danger is the scope and scale with AI … especially when paired with more sophisticated algorithms,” he said. “It’s an information war on a scale we haven’t seen before.”

It’s not clear whether intelligence agencies are using AI-generated news for foreign influence campaigns, but it is a major concern. “I would not be shocked at all that this is used — definitely next year with the elections,” Brewster said. “It’s hard not to see some politician setting up one of these sites to generate fluff content about them and misinformation about their opponent.”

Blevins said people should watch for clues in articles, “red flags” such as “really odd grammar” or errors in sentence construction. But the most effective tool is to increase media literacy among average readers.

“Make people aware that there are these kinds of sites that are out there. This is the kind of harm they can cause,” he said. “But also recognize that not all sources are equally credible. Just because something claims to be a news site doesn’t mean that they actually have a journalist … producing content.”

Regulation, he added, is largely nonexistent. It may be difficult for governments to clamp down on fake news content, for fear of running afoul of free-speech protections. That leaves it to social media companies, which haven’t done a good job so far.

It’s infeasible to deal quickly with the sheer number of such sites. “It’s a lot like playing whack-a-mole,” Blevins said.

“You spot one [site], you shut it down, and there’s another one created someplace else,” he added. “You’re never going to fully catch up with it.”

Thursday, December 14, 2023

Saw It Coming


The bad guys are out there. The bad guys are always fucking out there.

AT&T's TouchTone phone becomes the dominant telecomm gizmo in the early 70s, and within 3 or 4 years, there's an army of teenagers stealing long distance service, and then eavesdropping on conversations, and 2 or 3 years after that, we've got some serious crooks trying to rob banks with this spiffy new tech shit.

Human wisdom is always at least a generation behind its technological capabilities.


Bigots use AI to make Nazi memes on 4chan. Verified users post them on X.

The ecosystem for explicitly racist and antisemitic memes starts on a fringe site, but ends up in the mainstream through Elon Musk’s platform.


It looks like a poster for a new Pixar movie. But the film’s title is “Dancing Israelis.” Billing the film as “a Mossad/CIA production,” the poster depicts a caricatured stereotype of a dancing Jewish man whose boot is knocking down the World Trade Center towers — a reference to antisemitic 9/11 conspiracy theories.

Posted to X on Oct. 27 by a verified user with about 220,000 followers who bills himself as an “America-first patriot,” the image garnered about 190,000 views, including 8,000 likes and 1,500 reshares. Content moderators at X took no action against the tweet, and the user posted it again on Nov. 16, racking up an additional 194,000 views. Both tweets remained on the site as of Wednesday, even after researchers flagged them as hate posts using the social network’s reporting system.

An antisemitic post on Elon Musk’s X is not exactly news. But new research finds the site has emerged as a conduit to mainstream exposure for a fresh wave of automated hate memes, generated using cutting-edge AI image tools by trolls on the notorious online forum 4chan. The research by the nonprofit Center for Countering Digital Hate (CCDH), shared with and verified by The Washington Post, finds that a campaign by 4chan members to spread “AI Jew memes” in the wake of the Oct. 7 Hamas attack resulted in 43 different images reaching a combined 2.2 million views on X between Oct. 5 and Nov. 16, according to the site’s publicly displayed metrics.

Examples of widely viewed posts include a depiction of U.S. Army soldiers kneeling before a Jewish man on a throne; Taylor Swift in a Nazi officer’s uniform sliding a Jewish man into an oven; and a Jewish man pulling the strings on a puppet of a Black man. The latter may be a reference to the “Great Replacement” conspiracy theory, which was cited as motivation by the 18-year-old white man who slaughtered 10 Black people at a Buffalo, N.Y., grocery store in May 2022, and which Musk seemed to endorse in a tweet last month.

More than half of the posts were made by verified accounts, whose owners pay X a monthly fee for special status and whose posts are prioritized in users’ feeds by the site’s algorithms. The verified user who tweeted the image of U.S. Army soldiers bowing to a Jewish ruler, with a tweet claiming that Jews seek to enslave the world, ran for U.S. Senate in Utah as a Republican in 2018 and has 86,000 followers on X.

The proliferation of machine-generated bigotry, which 4chan users created using AI tools such as Microsoft’s Image Creator, calls into question recent claims by Musk and X CEO Linda Yaccarino that the company is cracking down on antisemitic content amid a pullback by major advertisers. In a Nov. 14 blog post, X said it had expanded its automated moderation of antisemitic content and provided its moderators with “a refresher course on antisemitism.”

But the researchers said that of 66 posts they reported as hate speech on Dec. 7, X appeared to have taken action on just three as of Monday. Two of those three had their visibility limited, while one was taken down. The Post independently verified that the 63 others remained publicly available on X as of Wednesday, without any indication that the company had taken action on them. Most appeared to violate X’s hateful conduct policy.

Several of the same AI-generated images also have been posted to other major platforms, including TikTok, Instagram, Reddit, YouTube and Facebook, the researchers noted. But the CCDH said it focused on X because the site’s cutbacks on moderation under Musk have made it a particularly hospitable environment for explicitly hateful content to reach a wider audience. The Post’s own review of the 4chan archives suggested that X has been a favored platform for sharing the antisemitic images, though not the only platform.

X’s business is reeling after some of its largest advertisers pulled their ads last month. The backlash came in response to Musk’s antisemitic tweet and a report from another nonprofit, Media Matters for America, that showed posts pushing Nazi propaganda were running alongside major brands’ ads on the site.

Among the companies to pull its spending was Disney, whose brand features prominently in many of the AI-generated hate memes now circulating on X. Speaking at a conference organized by the New York Times last month, Musk unleashed a profane rant against advertisers who paused their spending on X, accusing them of “blackmail” and saying they’re going to “kill the company.” He mentioned Disney’s CEO by name.

The most widely shared post in the CCDH’s research was a tweet that read “Pixar’s Nazi Germany,” with a montage of four AI-generated scenes from an imaginary animated movie, depicting smiling Nazis running concentration camps and leading Jewish children and adults into gas chambers (Pixar is owned by Disney). It was one of the few posts in the study that had been labeled by X’s content moderators, with a note that read, “Visibility limited: this Post (sic) may violate X’s rules against Hateful Conduct.” Even so, as of Wednesday, it had been viewed more than half a million times, according to X’s metrics.

Another verified X account has posted dozens of the AI hate memes, including faux Pixar movie posters that feature Adolf Hitler as a protagonist, without any apparent sanction from the platform.

Musk, the world’s richest person, has sued both Media Matters for America and the Center for Countering Digital Hate over their research of hate speech on X. After the latest wave of criticism over antisemitism, Musk announced strict new policies against certain pro-Palestinian slogans. And he visited Israel to declare his support for the country, broadcasting his friendly meeting with the country’s right-wing prime minister, Benjamin Netanyahu.

Yaccarino, who was appointed CEO by Musk in May, said in a November tweet that X has been “extremely clear about our efforts to combat antisemitism and discrimination.” The company did not respond to an email asking whether the antisemitic AI memes violate its policies.

4chan is an anonymous online messaging board that has long served as a hub for offensive and extremist content. When Musk bought Twitter last fall, 4chan trolls celebrated by flooding the site with racist slurs. Early in October of this year, members of 4chan’s “Politically Incorrect” message board began teaching and encouraging one another to generate racist and antisemitic right-wing memes using AI image tools, as first reported by the tech blog 404 Media.

The 4chan posts described ways to evade measures intended to prevent people from generating offensive content. Those included a “quick method” using Microsoft’s Image Creator, formerly called Bing Image Creator, which is built around OpenAI’s Dall-E 3 software and viewed as having flimsier restrictions on sensitive content.

“If you add words you think will trip the censor, space them out from the part of the prompt you are working on,” one 4chan post advised, describing how to craft text prompts that would yield successful results. “Example: rabbi at the beginning, big nose at the end.”

After the Oct. 7 Hamas attack on Israel, the focus among 4chan users on antisemitic content seemed to sharpen. Numerous “AI Jew memes” threads emerged with various sub-themes, such as the “Second Holocaust edition” and the “Ovens Run All Day edition.”

Microsoft’s director of communications, Caitlin Roulston, said in a statement, “When these reports surface, we take the appropriate steps to address them, as we’ve done in the past. … As with any new technology, some are trying to use it in unintended ways, and any repeated attempts to produce content that goes against our policy guidelines may result in loss of access to the service.” Microsoft did not say how many people have been denied access to its imaging program because they violated its rules.

The ability to generate extremist imagery using digital tools isn’t new. Programs such as Adobe Photoshop have long allowed people to manipulate images without moderating the content they can produce from it.

But the ability to create complex images from scratch in seconds, whether in the form of a Pixar movie poster or a photorealistic war image, with only a few lines of text is different. And the ability of overt hate accounts to be verified and amplified on X has made spreading such messages easier than ever, said Imran Ahmed, CCDH’s CEO. “Clearly the cost of producing and disseminating extremist material has never been lower.”

Sara Aniano, disinformation analyst at the Anti-Defamation League’s Center on Extremism, said AI seems to be ushering in “the next phase of meme culture.”

The goal of extremists in sharing AI hate memes to mainstream social media platforms is to “redpill” ordinary people, meaning to lead them down a path of radicalization and conspiracism, Aniano added. “You can always expect this rhetoric to be in fringe spaces. But they love it when it escapes those spaces.”

Not all of the AI memes flourishing on X are antisemitic. Ashlea Simon, chair of the United Kingdom’s far-right Britain First party, has taken to posting apparently AI-generated images that target Muslim migrants, suggesting that they want to rape white women and “replace our peoples.”

The party and some of its leaders, boosted by Donald Trump on Twitter in 2017, had been banned from Twitter for hate speech under the previous ownership. But Musk reinstated them soon after buying the company, then gave the party its gold “official organization” verification label in April.

While Musk has said he’s personally against antisemitism, he has at times defended the presence of antisemitic content on X. “Free speech does at times mean that someone you don’t like is saying something you don’t like,” he said in his conversation with Netanyahu in September. “If you don’t have that, then it’s not free speech.”

Ahmed said the problem is that social media platforms, without careful moderation, tend to amplify extreme and offensive viewpoints, because they treat people’s shocked and outraged responses as a signal of engagement.

“If you’re Jewish, or if you’re Muslim, and every day you open up X and you see new images at the top of your timeline that depict you as a bloodsucking monster, it makes you feel like maybe these platforms, but also society more broadly, might be against you,” he said.

Sunday, November 05, 2023

Posted At Reddit


Most of this is reiteration, so there's not much that's new here. Still, it's worth reviewing, and it supports the assessment that AI could be quite a useful research tool.


I have been playing around with ChatGPT and asked it about the overlap between cults and conspiracy theorists

Thought you all would appreciate this...

Cults and conspiracy theories can sometimes overlap in several ways, although they are distinct phenomena. Here are some of the common points of intersection:

**Belief Systems**:
Both cults and conspiracy theories often revolve around belief systems that challenge mainstream or conventional perspectives. Cult members and conspiracy theory adherents may be drawn to alternative explanations and interpretations of reality.

**Us vs. Them Mentality**: Cults and conspiracy theories frequently foster an "us vs. them" mentality. They create a sense of belonging to a select group that possesses special knowledge or insights, while portraying outsiders or skeptics as ignorant or part of the conspiracy.

⬇︎ One of my favorites ⬇︎

**Secrecy and Hidden Knowledge**: Both cults and conspiracy theories often emphasize the possession of secret or hidden knowledge. Cult leaders and conspiracy theory promoters claim access to exclusive truths that are deliberately concealed from the general public.

**Recruitment and Indoctrination**: Cults and promoters of conspiracy theories employ recruitment and indoctrination techniques to attract new members or followers. They may use persuasive tactics to engage individuals and convince them of the validity of their beliefs.

**Isolation**: Cults may isolate their members from the outside world to maintain control and prevent exposure to conflicting information. Similarly, conspiracy theory adherents may avoid mainstream sources of information and socialize primarily with like-minded individuals in online echo chambers.

**Deceptive Manipulation**: Both cults and conspiracy theories can involve deceptive manipulation. Cult leaders may exploit the vulnerability of their followers, while conspiracy theories often rely on misleading or fabricated evidence to support their claims.

**Resistance to Rational Arguments**: Cult members and conspiracy theory adherents may be resistant to rational arguments or evidence that contradicts their beliefs. They tend to dismiss opposing viewpoints as part of the alleged conspiracy.

**Strong Emotional Attachment**: People involved in cults and those immersed in conspiracy theories often develop strong emotional attachments to their respective belief systems. These attachments can make it challenging for them to disengage from these ideologies.

It's important to note that while there are these points of overlap, cults and conspiracy theories are distinct in many ways. Cults typically involve a charismatic leader and a tightly controlled organization, while conspiracy theories are often more decentralized and widespread. Cults may have physical communities, while conspiracy theories often exist in the digital realm. Understanding the differences between the two is important for addressing the unique challenges and risks associated with each phenomenon.

I would add: A conspiracy fantasy (it's not a theory, BTW - it's not even a hypothesis) can easily be considered some asshole's stupid little joke - they just wanna see who's gullible enough to swallow it, and how viral it gets - but it's more than likely a tool to manipulate the devotees, further cementing them in place within the cult.

Sunday, July 30, 2023

Out Of Balance


I admit that I spent a lot of time and energy in the 70s and 80s hating on unions. I was convinced they'd outlived their usefulness. And maybe, at the time, I was at least partly right. There was plenty of corruption and feather-bedding going on.

But now I think I was getting suckered into believing it was just the unions that were fucked up, and that the companies had practically nothing to do with any of it - they were being victimized by union members who were just looking for a free ride.

As per usual, there's some truth to it, but mostly, it's PR - purely cynical and manipulative bullshit put out by coin-operated politicians and their plutocratic paymasters.

There's some pretty unsavory history to be written about American Economic Evolution over the last 50 years, and about Labor Relations in particular during that time. And if we survive the next few election cycles, I hope to see a little muckraking that would do Upton Sinclair and Ida Tarbell proud.

Anyway, the Hollywood strike may well be in the process of telling an important story that we all need to internalize.


Opinion by Joseph Gordon-Levitt

If artificial intelligence uses your work, it should pay you

Actors love residuals. Don’t worry, this is not some annoying Hollywood thing like quinoa or crystal healing. I actually do like quinoa — sorry. Anyway, what are residuals?

Residuals are an important part of how actors get paid. They’re checks you get long after you’re done shooting, because audiences are still watching the movie, perhaps on a new channel or in a new country. Some residual checks can be for pennies, some for thousands of dollars or more.

At the moment, Hollywood’s labor unions for writers and actors, the WGA and SAG-AFTRA, are on strike, and streaming residuals are one of the main deal points being negotiated. I’m a proud member of both of these unions, and I strongly support their demands. But zooming out, so to speak, I think there’s a larger conversation to be had about residuals that goes well beyond Hollywood.

Actors, writers and the rest of us film and television professionals might be some of the first (but not the last) to have our jobs threatened by so-called artificial intelligence — technology that ingests enormous amounts of “training data,” crunches the numbers and spits out new combinations of the data, following the patterns it was trained on. Tech giants, entertainment giants and every other profit-hungry giant will soon claim that AI can do human-level work at an astonishingly small fraction of the cost.

However, this claim will be hiding something, like the Wizard of Oz bellowing, “Pay no attention to that man behind the curtain!” What’s behind the curtain of AI? The cost of the human labor it took to produce the training data. “Generative AI” cannot generate anything at all without first being trained on massive troves of data it then recombines. Who produces that training data? People do.

And those people deserve residuals.

For example, let’s say a new AI program could generate a complete feature film at the click of a button. (Some industry insiders think this could actually happen pretty soon.) Building that AI will require a bazillion past movies to be used as training data. Therefore, anytime this AI yields revenue, the people who made those past movies will deserve a substantial piece. And I’m not just talking about actors and writers. I also mean people who don’t get residuals today: the camera operators, the costume designers, the sound mixers, everyone whose work the AI will be ingesting, mashing up and mimicking.

This principle holds true whether we’re talking about entertainers, doctors, engineers or pretty much anyone whose work involves a computer. AI can’t do our jobs yet, but it might be able to soon. And people whose jobs are threatened by AI will be the same people who produced the data used to train it. A new kind of residuals for these human data producers could potentially provide some much-needed economic relief.

And by the way, I’m not the only one advocating this kind of thing. Renowned technologists and economists, including Jaron Lanier and E. Glen Weyl, have long argued that Big Tech should not be allowed to monetize people’s data without compensating them. This concept of “data dignity” was largely responding to the surveillance advertising business models of companies such as Google and Facebook, but Lanier and Weyl also pointed out, quite presciently, that the principle would only grow more vital as AI rose to prominence.

Technologically speaking, the type of residuals program I have in mind would be a tall order. An AI system would have to track every piece of its training data, and then be able to determine which of those pieces influenced any given output generated and to what degree. On top of that, each of those pieces of data would have to be attributed to a verified human or set of humans, and there would need to be a channel of payment for those humans to receive their residuals. We did something a little bit like this at my company, HitRecord, producing pieces of art and entertainment with sometimes thousands of online collaborators and paying each contributing artist. Residuals from AI, in comparison, would require an astronomically huge scale, keeping track of zillions of bits and bytes, as well as billions of people, dollars, cents and microcents. (Disclosure: My wife serves on the board of directors of the OpenAI Nonprofit.)
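
To make the scale of that bookkeeping concrete, here's a toy Python sketch of the arithmetic Gordon-Levitt describes: each output carries influence scores over training items, each item maps back to a contributor, and revenue gets split proportionally. Every name and number here is hypothetical; reliably attributing an AI's output to specific training data remains an unsolved research problem, which is exactly his point.

```python
from collections import defaultdict

# Toy sketch of an AI-residuals ledger. All names and figures are
# hypothetical illustrations, not a description of any real system.

# Which contributor supplied each piece of training data.
CONTRIBUTORS = {"movie_a": "camera_operator_1", "movie_b": "costume_designer_2"}

def split_residuals(revenue: float, influence: dict[str, float]) -> dict[str, float]:
    """Split one output's revenue across contributors, proportional
    to how much each training item influenced that output."""
    total = sum(influence.values())
    payouts: dict[str, float] = defaultdict(float)
    for item, weight in influence.items():
        payouts[CONTRIBUTORS[item]] += revenue * weight / total
    return dict(payouts)

# One AI-generated film earns $100; attribution says movie_a
# influenced it three times as much as movie_b.
print(split_residuals(100.0, {"movie_a": 0.75, "movie_b": 0.25}))
# {'camera_operator_1': 75.0, 'costume_designer_2': 25.0}
```

The ten-line version is trivial; doing it honestly for zillions of training items, billions of people and fractions of a cent per payout is the "astronomically huge scale" he's talking about.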

So, now that I’ve said how hard the tech will be, let’s talk about the really hard part.

Tech companies won’t be incentivized all by themselves to build the necessary software. The law will have to make them do it. The U.S. Copyright Office recently issued an official statement that completely AI-generated works are not eligible for registration, saying, “These technologies ‘train’ on vast quantities of preexisting human-authored works and use inferences from that training to generate new content.” I imagine the copyright question will force tech companies to come up with a solution to attribute some sort of human authorship to an AI’s outputs. But here’s the question: Who owns the copyright to the training data? Who will therefore reap the benefits?

The answer to that key question, in the case of the film and television industry, is not the writers, actors, camera operators, costume designers, sound mixers or any of the other people who made the creations. The copyright is owned by the big studios. See the problem?

When I do a movie, and I sign my contract with a movie studio, I agree that the studio will own the copyright to the movie. Which feels fair and non-threatening. The studio paid to make the movie, so it should get to monetize the movie however it wants. But if I had known that by signing this contract and allowing the studio to be the movie’s sole copyright holder, I would then be allowing the studio to use that intellectual property as training data for an AI that would put me out of a job forever, I would never have signed that contract.

And again, this doesn’t apply just to the film and television industry but to many, many others as well. Take the medical industry. A doctor’s work produces lots of intellectual property in the form of medical records — and many doctors sign contracts allowing hospitals (or whoever) to take sole ownership of those medical records. But if the doctors had known this intellectual property would be used as training data for an AI that would put them out of a job forever, they, too, would never have signed those contracts.

So, if residuals from AI training data are based on existing copyright, then the aforementioned profit-hungry giants might indeed get to stop paying humans for their labor. How do we deal with this? Both of the Hollywood unions on strike today have expressed grave concern over AI, but their demands regarding AI will not address the pitfall I’m describing here. Nor do I think the problem can be solved by the courts. A number of independent visual artists have sued the AI companies Midjourney and Stability AI, claiming their work was used as training data without consent or compensation. Similar lawsuits are cropping up more and more. I hope these artists win these lawsuits. But I’m not sure they will without new intellectual-property law on their side.

I’m well aware that implementing this kind of residuals program would take a ton of work. It would require technology that doesn’t exist yet. It would also require new public policy, and most governments don’t exactly have a stellar track record when it comes to regulating new tech. Still, there are encouraging signs out there that the White House and Congress are moving toward regulating AI. So here’s my request: Residuals from AI training data should be made mandatory and intellectual-property law amended to make sure those residuals go to the people who deserve them. The seemingly miraculous outputs of generative AI should not be allowed to fill the coffers of tech giants or any other giants without paying the people whose data made those outputs possible.

AI is going to change things faster and harder than our intuition can predict. Will those changes be for the better or worse? I believe we still have a real shot at a world in which our careers are both more productive and more meaningful, and in which our compensation is both more honorable and more equitable than any we’ve ever known. But that kind of bright future won’t arrive automatically. As always, it’ll be us doing the work.

Tuesday, June 06, 2023

Today's Reddit

AI won't be the reason for the collapse of human civilization. AI is a tool - like a hammer - you can build a house with it, or you can bash in your neighbor's face with it.

Midjourney is a generative artificial intelligence program and service created and hosted by the San Francisco-based independent research lab Midjourney, Inc.

Midjourney generates images from natural language descriptions, called "prompts", similar to OpenAI's DALL-E and Stable Diffusion. (Wikipedia)


I think we need to talk about what is going on at Hobby Lobby... won't someone please think of the children?!
by u/Dead_Inside512 in midjourney

Thursday, April 27, 2023

More AI


There won't be any conscious toasters. No matter how cute and cuddly the engineers can make it seem, your technology will not be a living thing. Ever. Anything we build is - and will always remain - an inanimate and unfeeling thing. A gizmo. A gadget.

In the end:
  • your house is a well-ordered pile of sticks and rocks
  • your car is a well-ordered pile of steel and plastic
  • your laptop is a well-ordered pile of silicon and underpaid nerd sweat
  • Artificial Intelligence is a clever attempt to imitate thought
There is no Blue Fairy to turn your wooden puppet into a real boy.

Machinery simply cannot achieve sentience.

That, of course, is a bold statement. I usually hold back on the absolutes because no probability is ever either 0 or 100. But until we produce something that is a living thinking being, and not a facsimile, it's always going to be what the name implies - artificial.

And we already know how to make real live things - we call it reproduction - breeding - birth - goal-oriented copulation - fucking with purpose.

Anyway.


Opinion
The wizards of AI can’t give it a brain, or heart, or consciousness

ChatGPT is a fascinating piece of software based on artificial intelligence and built by a company called OpenAI. Chat’s specialty is reading and writing the English language, which is no easy task. Specify a topic and Chat will produce a short essay in any form you like, including rhymed verse. If research is needed along the way, Chat turns straight to the internet, which is swarming with information — some of it even true.
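
For readers curious what "specify a topic" looks like in practice: the chat product itself is just a web page, but the same engine is reachable through OpenAI's published Python client, roughly as sketched below. The model name and prompt are my own arbitrary choices, not anything from the column.

```python
# Minimal sketch of requesting an essay from ChatGPT via OpenAI's
# Python client (pip install openai; assumes OPENAI_API_KEY is set
# in the environment). Model choice and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user",
               "content": "Write a short essay on spoons, in rhymed verse."}],
)
print(response.choices[0].message.content)
```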

This is clever and impressive software, and might be useful to many people. Several first-rate software builders have told me recently how well Chat draws and, separately, how well it composes new software. A new Chat-written app might produce a video game, a browser or whatever else you’ve specified as the new software’s task.

ChatGPT has been celebrated most of all, however, for its ability to converse and write essays in good, clear English. So, how well does Chat do at this difficult, ambitious task? A task that, pre-Chat, only human beings have been able to handle?

Not well. There are several reasons why not.

This software — like all other software — is unconscious. Of course, building conscious software wasn’t the goal of this project — which is a good thing, because it can’t be done. To speak of a “conscious computer” (except metaphorically) is nonsense, like speaking of a “conscious toaster.” Both objects are machines designed and built by humans, capable of being assembled or disassembled — but not of living or dying. Long and interesting philosophical arguments have been made on these points, but none changes the common-sense conclusion. Those who put their trust in unconscious writers, assistants or colleagues ought to be careful.

Being unconscious, Chat feels nothing. It can’t feel. Therefore, it can’t understand English, or human beings, or anything at all. How is it supposed to understand the words “I feel happy” when it has never felt anything, and never will? “Understand” is not even a word that applies to machines. To say that “a computer understands” is like saying that your car is losing its sense of humor or mooning over an old girlfriend.

Consciousness seems to be inseparable from the physical body. And because we are conscious, we can feel and have emotions — physical emotions and feelings, which might (unpredictably) change our entire worldview and state of mind. Real physical emotions are required, not a data-processing analog. We must feel our emotions directly, not become aware of them as if we were reading a watch. We don’t say, “How do I feel? Let me check.” We experience our feelings, whether we want to or not.

Many impressive ChatGPT conversations have been posted online, and they speak for themselves. Sometimes Chat can actually chat. But if you challenge its assertions, it has no intuitions about whether you are wrong or bluffing. It’s eager to concede the point and backpedal. If you ask it to explain something again, because you didn’t understand the first time, it has no feeling for what might have confused you. It tends to repeat what it said before, with the phrases juggled and slightly changed. Chat is like a person who is barely paying attention, which is understandable because — being unconscious — Chat is definitely not paying attention. It couldn’t possibly be paying attention.

Its lack of consciousness, and its consequent lack of intuition or feeling, limits Chat’s ability to judge the quality of its own work. Here is Chat explaining how to make a success of your job. “By establishing trust and rapport, you will create a support network … ”

Stop right there, Chat. Phrases such as “trust and rapport” and “support network” are cliches. If Chat catches on (and it’s already catching), look for a deluge of bad prose in a nation that is already half-choked on it.

Chat once again: “Today, spoons are an essential part of our daily lives, and are used in a wide variety of settings, from the kitchen to the dining room.”

Why not, “Today, we need spoons in the kitchen, the dining room and in many other places”? Chat depends on canned phrases and writes like a bureaucrat: It is formal and officious. Naturally, an unconscious machine has no ear for language.

The strangest thing of all about Chat is that some intelligent people are actually afraid of it. Some demand a worldwide pause in AI research, although they know nothing about the field. This is ridiculous. Somehow, they believe, Chat might jump out of its computer and go rampaging across the globe. As a seasoned professional, I believe this to be unlikely. Chat, in my judgment, will stay quietly enclosed in its own computer and its own network — and behave itself.

In short, I would relax, at least for the time being. AI technology has been around since the 1950s, yet Chat isn’t remotely smart enough to do much damage. But if you are too scared to relax, try smashing your computer to little pieces and tossing them in the trash. Vanquishing Chat might be the least of your rewards.