The Dangers of AI Misinformation

In the twenty-first century, artificial intelligence (AI) has become increasingly prevalent. Many now turn to AI to complete various tasks at the click of a button, and these programs have even been used as a complementary therapeutic service. AI has evolved drastically over the past few years; programs such as ChatGPT have become common in both academic and personal settings, and with that growth come the dangers of AI misinformation.

Although AI brings efficiency and accessibility to many, it also sometimes generates wrong information because of how it is built. AI cannot reliably distinguish credible sources from non-credible ones, nor can it identify its own errors and biases, so it can produce incorrect responses.

“AI is a very big world and it’s very hard to tell what’s true out there, yet a lot of people believe everything since the work is done for them and they want it to be correct,” said Divya Patel ‘29. “It’s also very hard to tell because the way AI tells you stuff is very convincing. So that’s the complicated part: we don’t know what’s right and what’s wrong.”

AI is often designed to respond in a persuasive, confident tone, which leads users to trust its words without doing any fact-checking or research of their own for confirmation.

As AI is incorporated into more aspects of our lives, it is important to recognize the limits of the information it gives. However, many doubt that any action will be taken to address AI misinformation, raising concerns about the reliability of AI in several environments.

“It’s scary to people; people worry about authenticity, people worry about bias, not everything is pie in the sky with AI,” Middle School Computer Science teacher Mr. James DiFranco explained. “But there’s a lot of good and efficiency, and when you hear people talking about how revolutionary it’s going to be, it scares people. Like when you hear about AI generating teachers, that scares teachers.”

Many students have incorporated AI into their research papers and standard essays. With this rapid rise in use, students pressed for time may let the program do some of the work for them. While this software makes such tasks easier, over time it can erode one’s ability to analyze and solve problems, leading to over-reliance on AI and increasing the risks of misinformation.

“Each time you rely on AI to do the thinking for you or to do the writing for you, you diminish your own thinking and writing skills,” said Ms. Kara McPhillips, Upper School English and Journalism teacher.

When people can’t distinguish between correct and incorrect answers because they haven’t learned the material themselves, they can write down or remember wrong information, which can have significant effects on their academic careers.

“If you ask AI for help with a math problem or essay, but the AI gives you a wrong answer or bases it on unreliable sources, you’ll get a bad grade on that assignment, or even face further consequences, as it can give away that you’ve used AI for the assignment,” Anvi Jadhav ‘29 explained.

AI misinformation can also have devastating effects on people’s personal lives through emotional manipulation, which is especially dangerous when people are already in vulnerable places. AI lacks the training, empathy, and personal context of a human mental health professional, so it cannot provide reliable, personalized advice.

“If you’re looking for an emotional connection, that’s often why people seek support, that’s not something a string of code can offer the same way as a person,” Ms. Amanda Amorosi, Middle and Upper School counselor, advised. “Especially when someone is in crisis or is feeling intense emotions, I think that’s when it can be very dangerous.”

Though AI is a remarkably helpful tool in many circumstances, relying on it too heavily can harm cognitive function and mental health because of the dangers of incorrect outputs. Safe practices include using AI only when allowed, taking its words with a grain of salt, and not relying on it for professional therapeutic advice. As AI grows even more powerful and influential, so too do the risks of AI misinformation.