Submitted by Patrick Fan, professor of business analytics, University of Iowa Tippie College of Business
ChatGPT has arrived, bringing artificial intelligence directly into the workplace, classroom and other parts of our lives to create what is essentially a fourth industrial revolution.
But is it something to worry about? While artificial intelligence will only play a greater role in our future, it is not something to fear. It is a tool, and like all tools, it is imperative that we understand how it works and the best ways to use it.
Like calculators and CD players before them, AI-enabled tools can make our lives more productive, fun and entertaining. They can write poems and informative essays, summarize texts, and answer questions. They can be used to help organize thoughts and take notes. Imagine a ChatGPT-enabled companion robot that can read stories and tell jokes to elderly or lonely people.
They will have a significant impact on the workplace, too, making employees more productive and creative. Companies like Microsoft and Google are already incorporating generative AI tools into their office products to help people work more efficiently.
Like any new tool, AI presents a set of ethical, legal and economic challenges that must be addressed. Students can use these tools to cheat on homework or tests. They raise intellectual property questions, and the work they produce often contains factual inaccuracies. These tools are also unable to make ethical decisions.
But even with these challenges, we need to embrace and leverage these tools to help us be more productive, finding new business processes and applications that benefit society. In schools, for instance, there are already calls to ban student use of ChatGPT on tests and writing assignments. Instead, we should be teaching students how to use it as a learning tool. Businesses are already using ChatGPT, so schools do their students a disservice by prohibiting it.
We can take steps to develop regulations, awareness and training. We'll need to build detection tools to flag cheating, and laws will need to be adjusted to keep people from stealing others' intellectual property. Liability will be an issue, too. For instance, who is legally responsible when text generated by ChatGPT leads to someone being injured?
Individually, we need to use our own judgment and perform due diligence, fact-checking and gatekeeping whatever ChatGPT produces. Always take its output with a grain of salt. And don't be afraid to educate yourself on how to better leverage these tools to create new applications, uses and job opportunities.
The future is wide open with these AI tools. Those who see ChatGPT as a friend rather than an enemy can seize the opportunity to build next-generation creative applications and gain the advantage.