Editor’s Note: This column does not represent the opinion of Beaver’s Digest. This column reflects the personal opinions of the writer.
Artificial intelligence is evolving every day, and it is omnipresent. Learning about AI through a neutral approach, and learning how to navigate it, can help people dispel unfamiliarity. The fear of AI that some people cling to, though valid, is a festering wound. We must learn to harness AI’s potential for growth and positive change.
Fearing artificial intelligence is understandable. The good news is we have the power to dispel our fears. I’m not asking people to look at what tech bros are doing in Silicon Valley and praise them. I’m not even asking people to love AI. It would just help if, when the subject came up, most people’s understanding went further than world-ending, human-replacement rhetoric.
To advance meaningful learning, Oregon State University offers faculty a guide, built on Bloom’s Taxonomy, for integrating AI into the classroom if they choose. The guide helps instructors assess what level of learning a course’s outcomes call for and align appropriate activities and assessments to support student learning.
OSU is the first, and currently the only, institution in the United States to offer an AI graduate program. Within the College of Engineering, the program allows students to shape the future with the power of AI.
AI has been with us for years and will continue to grow. It makes sense to strip it down and learn as much as possible, not just to advance AI but also to advance us as humans.
Concerns about AI disrupting jobs and livelihoods are undeniable, especially in the creative arts; how AI would be implemented was one reason for the writers’ strike in May 2023.
The Writers Guild of America went on strike demanding higher royalties, mandatory staffing of TV writing rooms and safeguards protecting their jobs from AI. In September 2023, the WGA reported it had won its contract, and it highlighted its terms for AI usage in projects. Safeguards and limitations are important, especially when protecting jobs.
AI, however, can help and has helped advance new methods in healthcare, technology, medicine, safety and more. As it stands now, OSU is at a frontier of sorts because AI is being studied broadly and precisely. After all, it intersects with so much of our lives.
Frank Hodges is a graduate research assistant in the Ramsey Lab at OSU. Hodges’ focus is AI in healthcare.
Hodges’s research is on large language models and how they can be used to expedite accurate diagnoses for patients.
“AI is any computer or device that takes an input and gives an intelligent output,” Hodges said.
AI is any machine that can perform tasks that normally require human intelligence. I know asking everyone to embrace AI is impossible, and frankly, naïve. However, I cling to the adage that knowledge is power.
Also, I want to clarify: I know AI is being used to strip people of their livelihoods. Artists, writers, photographers and other creatives have had their work stolen to train AI models and have lost jobs and job opportunities. I condemn this.
I’m a writer and journalist. I have seen the first-hand impact AI has had on my community.
Last year, Futurism published an article reporting that Sports Illustrated had published articles under bylines, with accompanying headshots, that had been AI-generated.
The report details how Sports Illustrated outsourced the content to a third party and how, in the end, no one wanted to take accountability for the screw-up.
Reading Futurism’s article left me feeling hollow and dismal. The issue with AI in journalism is put best in this quote from Futurism’s article:
“Undisclosed AI content is a direct affront to the fabric of media ethics, in other words, not to mention a perfect recipe for eroding reader trust. And at the end of the day, it’s just remarkably irresponsible behavior that we shouldn’t see anywhere — let alone normalized by a high-visibility publisher.”
AI has a place when reporting stories. When a journalist needs to break down complicated stats and data or understand a difficult concept, tools and software are available to help make this information easier to report. The way The Associated Press uses AI is one example.
The AP has been using AI since 2014, for articles about financial earnings reports and for some sports stories. A note appears at the end of those articles: “The Associated Press created this story using technology provided by Data Skrive and data from Sportradar.”
I realize there is bad and good here. The WGA winning its contract is a stride forward, but writers outside of the WGA still need to be protected. Sports Illustrated may have faced a public setback, but some journalists have had a tightrope as a career foundation and continue to balance in the uncertainty of an industry unwilling to cast a safety net.
I believe along with solid unions, creatives can find ways to build tools with AI to protect our work or assist in different ways, depending on the field.
Creatives aren’t the only ones hurting under the wire hands of AI. It’s negatively impacting the climate, and academics have expressed disdain for large language models like ChatGPT.
I know that’s scary because artificial intelligence theoretically can’t die, and that compromises humans. However, I say everything is forever on a planet where everything dies.
Quintin Pope, a computer science Ph.D. student studying AI at OSU, wrote an essay on LessWrong titled “Evolution provides no evidence for the sharp left turn,” arguing that evolution is a bad analogy for AI development.
Pope wrote the essay in response to those who see AI as a nefarious entity that could gain consciousness.
“My default reaction to any evolutionary analogy about AI alignment is skepticism,” Pope said.
But what about how AI is used day to day, regardless of industry? People fear what they don’t know, and they sensationalize scenarios based on what they want to perceive as reality.
For instance, pop culture plays a big role in how people form their relationships with AI. I get why watching a movie like “The Terminator” would get people’s gears spinning about evil corporations doing bad things and unleashing robots to do their bidding.
“People try to forecast the future of AI,” Pope said. “But they don’t have, in my opinion, a good reference model. They have poor points of comparison for how to do that.”
After all, it’s not hard to imagine corporations as evil. However, the money, time and resources to create an army of T-800s are still far off, according to the U.S. Government Accountability Office. The technology hasn’t reached Terminator status.
Jose Aguilar is a second-year Ph.D. student at OSU researching theoretically safe AI systems. His research focuses on ensuring we understand as much about AI safety as possible.
“I work on the safety side of things,” Aguilar said. “I can tell you with sincerity that there’s a lot of work to do, especially from a theoretical perspective of AI.”
Aguilar explained that researchers still don’t fully understand how AI can cause harm through its algorithms and what can cause it to behave strangely, leaving room for further in-depth looks.
Aguilar suggested that one way to approach learning about AI (besides watching sci-fi movies) is to watch condensed educational videos.
Aguilar recommended Kurzgesagt – In a Nutshell, a YouTube channel with over 22 million subscribers, and specifically a video titled “AI – Humanity’s Final Invention?”
Aguilar said the video explains AI in basic terms and helps answer what role AI plays in people’s daily lives and how the recent advancements impact the future.
To those curious, Hodges’s advice is to go straight to the source.
“(Go to ChatGPT) and talk to ChatGPT about what it is,” Hodges said.
Other options include signing up for the AI at OSU newsletter and attending events on campus.
Confronting AI is a great way to learn about it. This isn’t an entity forcing you into a Faustian bargain; humans have control here. Whenever AI advancement is discussed, the dogma of good versus bad is constant. Broadly speaking, what do we sacrifice when we advance? Who loses, and who wins?
“I think as humans, we have this mentality that (AI has) a potential for abuse,” Hodges said. “But almost everything can be abused and has potential for bad.”
Hodges and Aguilar both referenced the Manhattan Project, the research and development program that led to the creation of the first nuclear weapons during World War II, as an example of work that also created potential for abuse.
“I don’t condone nuclear bombs or warfare,” Hodges said. “However, we now have nuclear power, for electricity. That never would have happened without that. So, there are good and bad.”
AI isn’t an overlord of destruction wielding a power that will replace humans. AI will be a part of our expedited digitalized revolution, and people will suffer from this.
It’s inevitable.
At the same time, learning about AI through a lens of neutrality is a way to help people understand what is happening, what has happened before and what will happen again. I’m reminded of the Ouroboros. In short, it’s the symbol of the snake eating its tail. It is represented across many cultures, and it symbolizes destruction and rebirth.
“(AI is) going to change a lot of things,” Aguilar said. “It’s not the first time that this has happened. When computers came out, the same thing happened, and people freaked out.”
Just as the Industrial Revolution shifted the making of goods from hand to machine, AI will remove humans from some roles, yes. However, humans will never be removed completely.
“I don’t think AI will take jobs and leave people unemployed,” Hodges said. “I think AI is going to change the landscape of how we do our jobs.”
Hodges stated that though AI might cause issues for certain industries, it will also create needs and new jobs in different sectors: jobs involving AI oversight and training, and jobs involving creative collaboration.
At the end of the day, AI saturates our lives and is here to stay. For every AI tool forged to hurt people, another is created to help. That is precisely why we need to keep learning AI’s capabilities and how to use it.
You have to have a high degree of uncertainty to create something. You have to have an even higher degree of curiosity to want to solve a problem. That’s human.
Learning about AI in all its facets can help us prepare for the future. Practicing judicious goodwill should be standard when teaching, talking and learning about AI, to help ease periods of revolutionary transition. I truly hope it becomes so.
Neutrality is as much a part of artificial experience as the human one.