The Culver Public Policy Center hosted a panel discussion on artificial intelligence (AI) last Thursday, bringing together experts from information technology, cybersecurity and legislation.
The event, moderated by Simpson College alumnus Kyle Hauswirth (‘15), IT security engineer and information security officer at Ames National Corporation, provided an in-depth exploration of AI’s potential benefits and risks.
Panelists acknowledged AI’s potential in fields such as healthcare, climate modeling and business operations. However, they also raised significant ethical concerns, such as the misuse of predictive analysis and threats to data privacy.
Shanna Van Mersbergen, director of information technology at Parker’s Kitchen, emphasized the need for responsible AI usage.
“We need to be very cognizant as AI becomes utilized in a lot of spots to have better policies or regulations around what we’re feeding it, what we’re using it for and how that information is getting disseminated after it’s being generated,” Van Mersbergen said.
Despite AI’s promise, experts underscored serious risks, including the misuse of predictive modeling in law enforcement or healthcare. Brandon Blankenship, chief information security officer at ProCircular, warned that using AI to predict criminal behavior could lead to biased and unethical outcomes.
“The real value in AI is this much larger predictive modeling to anticipate a behavior or anticipate something than a problem, and doctors can use it as a genius personal assistant that could aggregate all this data and predict a possible diagnosis for somebody,” Blankenship said.
Culver fellow Serymar Matias McMillan attended the panel.
“I did not even know about the ways AI is being used in the medical field and how we need professionals constantly working to better our servers and what is inputted into the AI’s knowledge,” Matias McMillan said. “These AI machines are coming up with solutions and equations that would take us years to do. I think that’s incredible.”
Steve Billingsley, director of enterprise architecture at ITA Group, Inc., spoke about AI’s impact on human behavior through targeted advertising and ethical hacking.
“The number one edit is, how does this impact a person? What is that individual impact? That, to me, that’s the number one thing above all the others that we have to be focused on,” Billingsley said.
Alexis Diediker, offensive cyber operations consultant at ProCircular, elaborated on AI’s security risks, particularly deepfakes and data leaks. She stressed the importance of ethical considerations in AI development, questioning, “Was this tool made with the right intention?”
The second half of the panel turned to the complexities of AI regulation. Todd Little, associate professor of computer science at Simpson College, highlighted the challenge of creating laws that can adapt to emerging technologies.
“Legislation needs, in my opinion, to be beyond just a specific tech item,” Little said. “It’s the impact that the tech has on an individual, not necessarily the one specific type of tech.”
Matias McMillan shared a similar sentiment.
“I think conversations around AI are so important because it is part of our future. The discussion around AI needs to be more focused on the good it can do if we make it good,” Matias McMillan said.
Representative Ray Sorensen, chair of the House Committee on Economic Growth and Technology, acknowledged the difficulty of regulating AI.
“AI touches literally everything. It’s hard to legislate on something like AI because it touches every single industry,” Sorensen said.
He also expressed concerns about AI advancing beyond human control, stating, “The terrifying thing is, is AI going to get so advanced so quickly that we lose the human in the loop, or the human at the helm?”
Senator Liz Bennett, ranking member of the Senate Technology Committee, underscored the need for a federal data privacy framework akin to the European Union’s General Data Protection Regulation (GDPR).
The GDPR sets strict guidelines on how organizations collect, store, process and share personal data, giving individuals more control over their information. It also includes provisions for transparency, security and the right to request data deletion.
Bennett posed a critical question about AI’s role in society: “We have to think about what is the highest good. Does innovation serve us? Or do we serve innovation?”
Throughout the discussion, panelists emphasized the need for responsible AI development and increased public education. They called for stronger data governance regulations, ethical AI practices and legislative efforts to ensure that AI benefits society.
The event concluded with a reflection on AI’s growing influence and the necessity for humans to remain at the center of decision-making.
As Billingsley put it, “Technology exists to serve people.” The conversation left attendees with much to consider about AI’s role in shaping the future.
AI Panel at Simpson College Explores Innovation, Ethics and Regulation
by Hannah Rosenfeld, Staff Writer
February 26, 2025
AI Panelist Experts – From left to right: Kyle Hauswirth, Alexis Diediker, Steve Billingsley, Shanna Van Mersbergen, Brandon Blankenship