How to Use ChatGPT and Still Be a Good Person
It’s a turning point for artificial intelligence, and we need to take advantage of these tools without causing harm to ourselves or others.
The past few weeks have felt like a honeymoon phase for our relationship with tools powered by artificial intelligence.
Many of us have prodded ChatGPT, a chatbot that can generate responses with startlingly natural language, with tasks like writing stories about our pets, composing business proposals and coding software programs.
At the same time, many have uploaded selfies to Lensa AI, an app that uses algorithms to transform ordinary photos into artistic renderings. Both debuted a few weeks ago.
Like smartphones and social networks when they first emerged, A.I. feels fun and exciting. Yet (and I’m sorry to be a buzzkill), as is always the case with new technology, there will be drawbacks, painful lessons and unintended consequences.
People experimenting with ChatGPT were quick to realize that they could use the tool to win coding contests. Teachers have already caught their students using the bot to plagiarize essays. And some women who uploaded their photos to Lensa received back renderings that felt sexualized and made them look skinnier, younger or even nude.
We have reached a turning point with artificial intelligence, and now is a good time to pause and assess: How can we use these tools ethically and safely?
For years, virtual assistants like Siri and Alexa, which also use A.I., were the butt of jokes because they weren’t particularly helpful. But modern A.I. is just good enough now that many people are seriously contemplating how to fit the tools into their daily lives and occupations.
With careful thought and consideration, we can take advantage of the smarts of these tools without causing harm to ourselves or others.
Understand the limits (and consequences).
First, it’s important to understand how the technology works to know what exactly you’re doing with it.
ChatGPT is essentially a more powerful, fancier version of the predictive text system on our phones, which suggests words to complete a sentence when we are typing by using what it has learned from vast amounts of data scraped off the web.
Nor can it check whether what it’s saying is true.
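To make the “predict the next word” idea concrete, here is a deliberately tiny sketch in Python. It counts which word tends to follow which in a scrap of text, then autocompletes by always picking the likeliest next word. The training text and function names are invented for illustration; real systems like ChatGPT use neural networks trained on vastly more data, but the underlying task is the same.

```python
# A toy "predictive text" model: learn which word tends to follow which,
# then autocomplete by repeatedly picking the most common next word.
from collections import Counter, defaultdict

training_text = (
    "the fog rolls over the bay and the fog chills the city "
    "and the wind chills the bay"
)

# Count how often each word follows each other word.
next_words = defaultdict(Counter)
words = training_text.split()
for current, following in zip(words, words[1:]):
    next_words[current][following] += 1

def autocomplete(prompt, length=5):
    """Extend the prompt by repeatedly predicting the likeliest next word."""
    out = prompt.split()
    for _ in range(length):
        candidates = next_words.get(out[-1])
        if not candidates:
            break  # the model has never seen this word before
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

print(autocomplete("the fog"))
```

Fed its own training text, this toy happily produces fluent-looking nonsense such as “the fog rolls over the fog rolls”: grammatical momentum with no understanding, and no way of knowing whether any of it is true.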
If you use a chatbot to write code for a program, it draws on how similar code was written in the past. Because code is constantly updated to address security vulnerabilities, code written with a chatbot could be buggy or insecure, said Brian Christian, the author of “The Alignment Problem.”
Likewise, if you’re using ChatGPT to write an essay about a classic book, chances are that the bot will construct seemingly plausible arguments. But if others published a faulty analysis of the book on the web, that may also show up in your essay. If your essay was then posted online, you would be contributing to the spread of misinformation.
“They can fool us into thinking that they understand more than they do, and that can cause problems,” said Melanie Mitchell, an A.I. researcher at the Santa Fe Institute.
In other words, the bot doesn’t think independently. It can’t even count.
A case in point: I was stunned when I asked ChatGPT to compose a haiku poem about the cold weather in San Francisco. It spat out lines with the incorrect number of syllables:

Fog blankets the city,
Brisk winds chill to the bone,
Winter in San Fran.

OpenAI, the company behind ChatGPT, declined to comment for this column.
Similarly, A.I.-powered image-editing tools like Lensa train their algorithms with existing images on the web. Therefore, if women are presented in more sexualized contexts, the machines will recreate that bias, Ms. Mitchell said.
Prisma Labs, the developer of Lensa, said it was not consciously applying biases — it was just using what was out there. “Essentially, A.I. is holding a mirror to our society,” said Anna Green, a Prisma spokeswoman.
A related concern is that if you use the tool to generate a cartoon avatar, it will base the image on the styles of artists’ published work without compensating them or giving them credit.
Know what you’re giving up.
A lesson that we’ve learned again and again is that when we use an online tool, we have to give up some data, and A.I. tools are no exception.
When asked whether it was safe to share sensitive texts with ChatGPT, the chatbot responded that it did not store your information but that it would probably be wise to exercise caution.
Prisma Labs said that it solely used photos uploaded to Lensa for creating avatars, and that it deleted images from its servers after 24 hours. Still, photos that you want to keep private should probably not be uploaded to Lensa.
“You’re helping the robots by giving them exactly what they need in order to create better models,” said Evan Greer, a director for Fight for the Future, a digital rights advocacy group. “You should assume it can be accessed by the company.”
Use them to improve, not do, your work.
With that in mind, A.I. can be helpful if we’re looking for a light assist. A person could ask a chatbot to rewrite a paragraph in an active voice. A nonnative English speaker could ask ChatGPT to remove grammatical errors from an email before sending it. A student could ask the bot for suggestions on how to make an essay more persuasive.
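For readers comfortable with a little code, that kind of light assist can also be scripted. Here is a minimal sketch using the openai Python package’s older, pre-1.0 Completion interface; the model name, prompt wording and sample email are illustrative assumptions, not a recommendation.

```python
# A minimal "light assist" workflow: ask a model to clean up grammar,
# then review the result yourself before sending anything.
import openai

openai.api_key = "sk-..."  # your own API key goes here

draft = "Me and the team has finish the report, please to find attached."

response = openai.Completion.create(
    model="text-davinci-003",
    prompt=f"Fix the grammar in this email without changing its meaning:\n\n{draft}",
    max_tokens=100,
    temperature=0,  # keep the suggestion close to the original text
)

suggestion = response.choices[0].text.strip()
print(suggestion)  # a suggestion to review, not a final answer
```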
But in any situation like those, don’t blindly trust the bot.
“You need a human in the loop to make sure that they’re saying what you want them to say and that they’re true things instead of false things,” Ms. Mitchell said.
And if you do decide to use a tool like ChatGPT or Lensa to produce a piece of work, consider disclosing that it was used, she added. That would be similar to giving credit to other authors for their work.
Disclosure: The ninth paragraph of this column was edited by ChatGPT (though the entire column was written and fact-checked by humans).