Slouching Towards Oblivion

Thursday, August 04, 2022

Not Fooled



It's an old theme - when does something become someone?

When is a being (of any kind) considered "conscious"?

Can a computer ever be considered a "sentient being" - a discrete, separate entity?


But as AI improves, how do we know who's on the other end of an online chat, or a comments section flame war?


Bot or Not tests whether you can tell if you are talking to a human or a machine
Can you tell a human from a bot when you are using instant messaging?


We often use chatbots to answer simple questions when contacting customer service or booking a doctor's appointment. But with the rise of chatbots, can we be certain that the bot we are speaking to is a legitimate one, and not one designed to phish for our personal information and use it with bad intentions?

The rise of chatbots means that the internet is expected to evolve from a space primarily for use by humans to an ecosystem in which humans and non-humans interact in complex ways.

Although bots tend to be simplistic, as AI grows more sophisticated, you never quite know whether you are chatting with a human or not.

Chatbots live inside popular apps like Facebook Messenger and Slack. In this new environment, are you confident you can tell a human from a bot?

Formed "by four foreigners at MIT," New York, NY-based design and research studio Foreign Objects has launched Bot or Not.
The tool aims to raise awareness about the risks unchecked chatbots pose for society -- from more powerful forms of surveillance to increased scams and exploitation. The central idea behind Bot Or Not, is an AI that mimics real humans.

The online game engages people in thinking critically about artificial agents that pretend to be human. Potentially matched to either a bot or a person, players are forced to question not only the human-ness of their opponent, but also themselves as they engage in a two-way guessing game.

Foreign Objects has developed the game with a grant from Mozilla's Creative Media Awards, which uses art and advocacy to explore how AI is increasingly reshaping society in both good and bad ways.

Mozilla is trying to improve internet health and build a world of helpful -- not harmful -- AI. Many people do not yet fully understand how AI regularly touches their lives, and feel powerless in the face of these systems.

Using Bot or Not, players chat in real time over the course of three minutes, asking and answering a series of questions. At the conclusion of three rounds, each player guesses whether they were chatting with a fellow human or an AI.

Bot or Not is built on the Turing Test, a test devised by Alan Turing to determine whether a machine can pass as a human. Turing asked a simple question: can machines think? Or, could a machine convince a human, through conversation, that it was actually human?
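
To make the shape of that test concrete, here is a minimal Python sketch of an imitation-game round. Everything in it -- the questions, the scripted_bot and human_via_keyboard stand-ins, the play_round loop -- is invented for illustration; it is not the Bot or Not code, just the structure Turing described: a judge converses blind, then guesses.

```python
# Minimal sketch of a Turing-test-style round. All names and questions here
# are hypothetical; this only illustrates the structure of the imitation game.
import random

QUESTIONS = [
    "What did you have for breakfast?",
    "Describe the smell of rain.",
    "What's a joke only a human would find funny?",
]

def scripted_bot(question: str) -> str:
    """Stand-in 'machine' contestant: canned, slightly evasive answers."""
    canned = {
        "breakfast": "Just coffee, I was in a rush this morning.",
        "rain": "Fresh and earthy, like wet soil after a storm.",
    }
    for keyword, answer in canned.items():
        if keyword in question.lower():
            return answer
    return "Ha, good one. What about you?"

def human_via_keyboard(question: str) -> str:
    """Stand-in 'human' contestant: answers typed live at the keyboard."""
    return input(f"(hidden human) {question}\n> ")

def play_round(questions, interlocutor) -> str:
    """The judge poses each question, reads the replies, then guesses."""
    for q in questions:
        reply = interlocutor(q)
        print(f"Judge: {q}\nOther: {reply}\n")
    return input("Your guess -- 'human' or 'bot'? ").strip().lower()

if __name__ == "__main__":
    # The judge does not know which contestant they were matched with;
    # that uncertainty is the whole point of the imitation game.
    hidden_party = random.choice([scripted_bot, human_via_keyboard])
    guess = play_round(QUESTIONS, hidden_party)
    actual = "bot" if hidden_party is scripted_bot else "human"
    print(f"You guessed {guess}; you were actually talking to a {actual}.")
```

As the next paragraph notes, Bot or Not adds a twist to this setup: the hidden party is judging you back.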

The Bot or Not game updates the Turing Test with a contemporary concern: humans must also perform their human-ness in order to be trusted.

More trustworthy AI means holding the tech industry accountable when AI causes harm -- and ensuring that consumers have more control over the AI in their lives.

Players can then learn about chatbots -- their history, their dangers, and how to spot and fool them -- by exploring a guide for the bot-curious.
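
As a purely illustrative sketch of what "spotting" can mean in practice, the hypothetical looks_like_a_bot() heuristic below checks a few crude tells -- implausibly fast replies, verbatim repetition, stock deflections. The signals and thresholds are assumptions made for the example, not anything taken from the Bot or Not guide, and sophisticated bots defeat such simple checks easily.

```python
# A hypothetical, deliberately crude "is this a bot?" heuristic.
# The tells and thresholds are illustrative assumptions, not a real detector.
from collections import Counter

def looks_like_a_bot(messages, reply_delays_seconds) -> bool:
    """Flag a chat partner whose behaviour matches a few common bot tells.

    messages: the partner's messages, in order.
    reply_delays_seconds: how long they took to answer each time.
    """
    if not messages or not reply_delays_seconds:
        return False

    # Tell #1: replies arrive faster than a person could plausibly type.
    instant = sum(1 for d in reply_delays_seconds if d < 1.0)
    too_fast = instant / len(reply_delays_seconds) > 0.5

    # Tell #2: the same canned line keeps coming back verbatim.
    most_repeated = Counter(messages).most_common(1)[0][1]
    repetitive = most_repeated / len(messages) > 0.3

    # Tell #3: stock deflections instead of concrete answers.
    stock_phrases = ("great question", "tell me more", "i'm not sure i understand")
    deflections = sum(1 for m in messages if any(p in m.lower() for p in stock_phrases))
    evasive = deflections / len(messages) > 0.4

    # Any two tells together is enough to raise suspicion.
    return sum([too_fast, repetitive, evasive]) >= 2

# Example: a partner that answers instantly and keeps repeating itself.
print(looks_like_a_bot(
    ["Great question!", "Great question!", "Tell me more about that."],
    [0.2, 0.3, 0.1],
))  # -> True
```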

Chatbots might not be so smart, but many people are lonely, and new chatbots will be designed to be playful, entertaining, and personal, encouraging new markets in artificial companions.

Agnes Cameron, a freelance hardware and software developer and Foreign Objects collaborator, said: "The increasing presence of bots in both the domestic sphere and in the workplace presents a huge risk to privacy, so long as personal data remains the primary business model for most major tech platforms. In addition, the ease with which bots can be made and deployed presents the perfect opportunity to scammers, who use social media platforms as an easy context to exploit and manipulate."

You never quite know who you're chatting with online. But can you always tell when you're chatting with a bot?

Sophisticated bots are becoming more prevalent both online and off, and it is getting harder to tell who is human. Although bot technology can be useful, it can also be used to create scam bots on Tinder and Instagram, or corporate bots that could steal your data.

Chatbots are programmed so that we treat them as if they were people. Expressions like "OK Google" or "Hey Siri!" aim to weave these "virtual assistants" seamlessly into our everyday lives, while simultaneously collecting our data and logging our every interaction in the cloud.

Spotting and fooling a chatbot is one way to stop bots from being normalised as a form of surveillance over our lives.
