Submitted by John Osako, president and CEO, Informatics Inc.

It’s 2023 and artificial intelligence is suddenly churning out content everywhere you look. These days, you can have a computer create sophisticated copy (ChatGPT), original musical scores (MuseNet, SoundRaw), and even cute cat images (the highly recommended This Cat Does Not Exist).

Welcome to the future. Have you begun thinking through the implications for your business?

That’s right — your business. Whatever industry you are in, know that AI is absolutely going to change how you do business in the years ahead. It’s going to lower the cost and effort needed to produce natural-sounding content, product documentation and customer service responses. It will free up your teams to focus on more complex issues that move your business forward. It might even add some laughs to your day (punchlines.ai is trying, though it’s not quite there yet).

The exciting thing for technologists and sci-fi fanatics is the fact that AI-assisted business is no longer “five years away” — it’s here today. Your employees are excited to experiment with it, your competitors are implementing it, and you’ll soon be seeing and hearing AI-generated content everywhere.

Don’t believe me? Consider Microsoft’s recent announcement that it had incorporated ChatGPT technology, licensed from OpenAI, into its Bing search engine — a perennial doormat in the search market. That move earned Bing a surge of downloads in the Apple and Google app stores, and Google was forced to announce its own plans to add AI technology to search.

We are only in the first few steps of this race, and it’s already changing how we work in tech and nontech fields alike. Here are just a few examples of how innovative entrepreneurs are using AI to redefine their work:

  • Time-strapped Realtors, always focused on sales, have begun using ChatGPT to write listing descriptions and video tour scripts.
  • Coders are using AI chatbots like ChatGPT to create usable code in a variety of languages on the fly.
  • News outlets from the Associated Press to CNET are using AI technology to automate the creation of rote topic explainers and earnings reports.
  • Marketers (including those here at Informatics) are using AI generation tools to develop persona sketches, chatbot scripts and even images for use in video storyboards.

OK, but how does it all work?
For the sake of simplicity, we’ll focus here on ChatGPT and other experimental chatbots like Google’s LaMDA, which are examples of large language models (LLMs), but know that most AI generation tools are built in much the same way.

LLMs are complex algorithms that can recognize, summarize, predict and generate text, based on patterns gleaned from giant data sets of existing content. Their data sets span the sea of content available online — the good, the bad, and the ugly (more on this in a second).

Once the algorithms are “trained,” they can use those patterns to create text in response to user prompts, from the short paragraphs of ChatGPT to the free-flowing conversations in LaMDA. LLMs have now reached a point of sophistication where their responses generally sound like natural, human-generated content, even when presented with truly unique prompts.
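
If you want to see the mechanics for yourself, the sketch below uses GPT-2, a small and openly available forerunner of these models, to continue a prompt. It’s purely illustrative (the prompt is made up, and it assumes you have the Hugging Face transformers library installed):

```python
# Illustrative only: GPT-2 is a small, public stand-in for far larger
# models like ChatGPT. Requires: pip install transformers torch
from transformers import pipeline

# Load a pretrained language model for text generation.
generator = pipeline("text-generation", model="gpt2")

# The model continues the prompt using patterns learned from its training data.
prompt = "Artificial intelligence will change how small businesses"
outputs = generator(prompt, max_length=60, num_return_sequences=1)

print(outputs[0]["generated_text"])
```

ChatGPT and LaMDA do the same thing, just with vastly more training data and parameters — which is why their output sounds so much more polished.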

For example, ask ChatGPT to explain quantum mechanics in the style of Snoop Dogg or write a super embarrassing college essay. Go ahead, do it — if you haven’t seen it in action before, you’re likely to be amazed.
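
Developers can also send prompts like these from their own software. Here’s a rough sketch using OpenAI’s Python library as it existed in early 2023 (you’d need your own API key, and the library has changed since then):

```python
# A rough sketch of calling ChatGPT via OpenAI's Python library (circa
# early 2023). Requires: pip install openai, plus your own API key.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": "Explain quantum mechanics in the style of Snoop Dogg.",
    }],
)

print(response["choices"][0]["message"]["content"])
```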

But that doesn’t mean these models are always right (or even mostly right).

Human skills, judgment still needed
Because LLMs can only mimic words and patterns they have previously spotted online, they are susceptible to a variety of errors.

Journalists have uncovered a variety of AI-written articles filled with inaccurate information, plagiarism and even outright fabrications (a phenomenon researchers call “hallucination”) as the models try to construct a convincing response. Since LLMs are essentially a mirror of all the content floating around the internet, they can also reproduce hateful or offensive language, although researchers are working hard to implement safeguards to keep it out. OpenAI itself warns ChatGPT users that it may “produce harmful instructions or biased content,” emphasizing that it remains a research tool.

The bottom line: Before you use AI technology to generate content, make sure you have a human available to interpret, edit, and potentially correct its output. If you’re not willing or able to do that, you may want to wait until these AI tools are advanced enough to catch their own errors before publication — otherwise you’re just contributing to the sea of bad content.
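
One way to enforce that discipline is to build the human checkpoint directly into your publishing workflow, so nothing ships without sign-off. A hypothetical sketch (every name here is made up for illustration, not taken from any real publishing system):

```python
# A hypothetical human-in-the-loop publishing gate: AI drafts, a person
# reviews and corrects, and nothing is published without approval.

def human_review(draft: str) -> str:
    """Show an AI draft to a human editor and collect the corrected text."""
    print("AI DRAFT FOR REVIEW:\n" + draft)
    return input("Type the corrected text (or leave empty to reject): ")

def publish_with_review(draft: str) -> None:
    edited = human_review(draft)
    if edited:
        print("Publishing human-approved copy:\n" + edited)
    else:
        print("Draft rejected; nothing was published.")

publish_with_review("Our product is the best in the galaxy, guaranteed!")
```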

Likewise, the norms around AI-generated content are still being debated and formalized in the workplace, so we’ve found it’s always best to disclose when you’re using it. Some people may see it as “cheating,” while others consider it a form of “working smarter.” Tread carefully, but don’t let that scare you off from trying. The world will catch up soon enough.