3 Iowa CIOs discuss business, policy implications of AI at TAI forum

Artificial intelligence is seen as a copilot and tool to help employees be more efficient, a panel of chief information officers said at the Technology Association of Iowa’s AI Public Policy Forum on Jan. 24.

All three panelists — Kathy Kay of Principal Financial Group, Ryan Schaap of Wells Enterprises and Laura Smith of UnityPoint Health — emphasized the need to keep a “human in the middle” in the areas where their businesses are using or testing AI-based tools or solutions.

Members of the technology industry and Iowa legislators attended the event to learn more about these companies’ views on AI as it relates to workforce, privacy and security, and the role of public policy.

Legislators who serve on the Senate Technology Committee and the House Economic Growth and Technology Committee addressed their committees’ stances on legislating AI and how they seek to collaborate with the business community.

Sen. Chris Cournoyer, chair of the Senate Technology Committee, said given AI’s potential to both improve lives and pose challenges, Iowa needs to be aware of where “guidelines and boundaries” are needed while not inhibiting companies’ ability to innovate.

“It’s very important to us as policy makers that we’re working with you as the innovators and the experts in that space; make sure that we’re working together in terms of providing those types of policies; and make sure we understand that no one-size-fits-all. … We are keeping an eye on what’s being done at the global level and the federal level to see what we need to do at the state level to support what you do, or basically just stay out of your way so you can innovate,” Cournoyer said.

Rep. Ray Sorensen, who chairs the House’s technology committee, echoed the need for guardrails and limited government involvement, adding that the Legislature can’t keep up with the pace at which AI is advancing and companies are innovating.

A priority for Sen. Izaah Knox, who represents the north side of Des Moines, is establishing a foundation for future generations focused on technology access and safety.

“It’s really important for me and my constituents and the people that I represent, especially Des Moines Public Schools, to make sure that we have a pipeline into the community, and there’s access, and there’s safety, and there’s accountability, and we’re going to make sure that happens at the Statehouse,” Knox said.

Below is some of what Kay, Schaap and Smith shared during the forum.

AI applications in finance, manufacturing and health care
Kay: 
Whenever we put content out, our compliance organization needs to review it to make sure it’s compliant with any of the regulations that we have. We’re able to run it through a model now that makes suggestions before the compliance people review it. They still have to review it. Again, we won’t take the human out of the middle, but it helps them go through text and content much faster than they have in the past.

Schaap: 
We’ve had a process automation team at Wells [Enterprises], and I think we’re going on year No. 4 with them, where we’re using robotic process automation with AI capabilities embedded. We’ve touched almost all of the areas of our back office, and we’ve recently moved into procurement functions as well. That’s been a huge win. I also have a team member here today who’s been responsible for our translation AI. We have a huge diversity in the languages spoken by our workforce, so we have an artificial intelligence engine that we feed our documents to, and we teach it to translate them into Spanish in a way that makes sense for Wells, remembering terms that are important to keep for ice cream manufacturing or the names of machines. We’ve been training that model over the years. That’s been highly successful in helping us produce Spanish documentation for our workforce.

Smith: 
I’m not going to be a futurist when it comes to fully autonomous surgery at some point. I do think that is many, many years away, but I think there are very specific use cases that might be right in front of us, and again, keeping that human, or in our case the clinician, in the middle to test and validate those use cases as they arise, and doing that in a very safe and responsible way, is really important. In the generative AI space, we’re probably focusing more on some of those back office-type areas, or lower-risk use cases, but I think there is great possibility in more clinical areas as well.

Strategies for a workforce equipped for advanced AI
Schaap: 
I remember sitting around with some members of my leadership team a couple of months ago, and we were talking about how we were going to create a few dedicated roles on our team that would focus 100% on AI solutions. We talked for a little bit and we were like, “That’s a really great idea.” And then we all looked around the room like, “Where are these people going to come from? How are we going to find these people? How are we going to hire these people?” And I think the consensus we came up with was that AI expertise is going to come from people who have broad technology skills, with an aptitude for process automation or an aptitude for learning about AI and a curiosity about it.

But I think for the state of Iowa in general, a lot of the same themes that apply to building a great technology staff or great cybersecurity team members are going to apply to AI as well. I think you could continue to build on those programs with a slant toward the AI workforce, competitive advantages to being in the state, all of those things. I think it just accelerates the need for more and more there. I would also say some of the best partner companies that I work with at Wells here in Iowa are Iowa startups that have come [up] over the last 20, 30 years, either in software development or cybersecurity. If we had incentives in Iowa to kick-start more startups in the AI space, I think that could be a big advantage to our state as well.

Kay: 
I’m reading a lot about certain universities not allowing students to leverage generative AI. I would say we need to teach them how to leverage it constructively. I think about how generative AI can help our people almost become superhuman: not to replace them, but to help them do their job much more effectively with assistance. Instead of preventing people from using it, I think teaching them how to use it effectively and securely, to make sure you’re looking for bias, things like that, is really important, and I think it’ll help prepare anybody, because whatever company you’re going to be in in 10 years, generative AI is going to be behind the scenes everywhere.

Developing an organizational culture around AI
Schaap: 
We’re trying to keep this culture of creativity and curiosity around it, especially when it comes to manufacturing. When we get together with our manufacturing leaders and we talk to them about real business problems … we get in the room with them on those use cases, and it’s a very collaborative environment when we’re working on the problem we’re trying to solve. I think if you take that approach with it, and it’s not just, “Hey, we’re bringing in AI to do this for us,” which is a much different message, I think you’ll have a much different reaction from your workforce.

Smith: 
The other word that comes to mind is literacy for executive teams when we’re trying to create culture within organizations. Just having a basic level of literacy as it relates to what AI is, what it isn’t, and what it means within your organization, I think that’s a little secret sauce, perhaps.

Kay: 
I have a philosophy that when there’s something new, and generative AI is one of them, instead of me trying to determine the answer in terms of how to try it out, I would rather unleash it to those who are most interested. We started at Principal by opening up a study group: if you’re interested in generative AI, if you want to know about it, if you want to try it out, come to this group. We have over 100 employees who’ve been members of the study group, and included in there were our legal team, our privacy people, compliance people and cyber people, and what they were all doing is learning. They’re closest to the problems. Let’s see if we can leverage something, and then make sure that as we’re learning, we’re putting the right guardrails around how to use it. … I think you learn a lot more by having teams come together from all over the company to try it and then, as you’re learning, continue to evolve the policies and how to leverage it.

Keeping up with bad actors’ use of AI
Smith: 
We have to be curious, we have to learn how we’re putting it to use within our organizations or in our personal lives, but we also have to be thinking about how the bad guys might be leveraging this as well, and strengthening our skepticism and our defenses in response to that technological change.

Kay: 
Bad actors are using generative AI to generate new threats, so what’s happening is cybersecurity companies and defense companies are leveraging generative AI to be able to defend; it’s being used both ways. I think we just have to stay close to it because it is evolving very quickly.

The business perspective on AI legislation
Schaap: 
I sat through a demo from a vendor; it was a company I had never heard of. They had this engine that was scraping Pinterest and Instagram and then all of the food service restaurant venues out there and putting together this large language model of the most popular trending flavors. … You can ask it questions like, “What flavor of ice cream would I make that would sell well with this demographic?” And it had this huge, expansive set of data that it was using.

Whatever we put in place from a legislative perspective, we have to be able to react to these new opportunities as they come up, because they’re going to come up fast like that. I can’t imagine this company had this capability even a year ago. It’s just moving so fast, and I feel like the privacy and protection aspects of legislation are important. But it has to keep us in a position as businesses to really take advantage of the fast-moving things that are happening out there and not restrict us too much.

Kay: 
We’re all trying to do the right thing and learn and make sure that we’re being ethical in how AI is being leveraged. We’re governing it, and we’re complying with all the laws and the regulations that we need to. [Businesses and legislators] working together, I think, would help. As you’re thinking about different types of legislation, keep getting feedback, like you have historically, from some of us to provide a perspective on what it could mean if that happens, and on some of the outcomes that you might not have thought of. I think if we can partner on that, we’ll get to some good places.