The rise of social media platforms has completely changed the way we receive information. While millennials adopted social media in their teens, members of Generation Z have never lived without it and rely heavily on these platforms for information.
Within the past year, artificial intelligence has greatly affected both social media and websites alike, especially when it comes to news.
A 2024 report by the Associated Press surveyed around 300 respondents who worked in the news industry, 70% of whom said they used generative AI, mainly for content production.
AI use comes with pros and cons. Though the tool can be efficient at generating articles and posts, the information it produces is not always accurate, and those errors can spread misinformation even further.
Brigitte Chenevert, a fourth-year marketing student at Oregon State University’s College of Business, spoke about how easy it has become to spread misinformation.
“All it takes is one person to put out a story or a belief,” Chenevert said.
Chenevert explained that the more time people spend on social media, the shorter their attention spans become, making it less likely they will fact-check the news they are given. Though we may not mean to, by skipping fact-checks we can all become a major part of spreading misinformation, even just by resharing posts or TikToks.
According to a 2022 survey by Deloitte, 50% of Gen Z teen respondents got their news from social media. Now, that number may be even higher.
Social media platforms like TikTok and Instagram do not have an efficient way to automatically fact-check posts, though they use AI in some capacity to try. This leaves it up to users to be cautious about what they are consuming.
Cross-referencing the information found on social media platforms is an effective way to fact-check, as is consulting sites such as Snopes or the Associated Press, which are known to be reliable sources.
Though AI has been present for years in social media algorithms, mainly to keep people engaged with a platform’s content, it is now being used in other ways.
“I’ve noticed little AI summaries coming out, which are actually quite useful,” said Bill Smart, professor and the associate director for academics at OSU’s Collaborative Robotics and Intelligence Systems (CoRIS) Institute, referencing Gemini, Google’s new AI search engine companion.
Smart warns that while summaries can make the tool helpful, they are not always completely factual.
“I tend not to pay it much attention for things which I need facts for, but things like ‘How do I fix the drain plug on my dishwasher?’ Where it feels like it’s a summarization of stuff it’s retrieved.”
Smart also spoke about people’s worries with AI, specifically how intelligent it could become.
“I have certainly seen websites which seem to have nothing but generated content on them. You know, the grammar is a little weird, it’s a little repetitive, it wouldn’t pass an editor,” Smart said.
Though AI is being used more every day, it is mainly scraping existing ideas from the internet rather than creating any of its own, which raises the question of how it will affect news and writing-related jobs.
“(AI) is going to be in our future, and the people and the companies that are expanding are gonna be people who’ve learned to use it to add to what they can currently do, as opposed to replace their own jobs,” Chenevert said.
Instead of worrying about the tool becoming dangerous, apply it to what it’s good at: summaries, prompts, analysis and overall improving efficiency.