Don’t Have an AI Internal Policy? Make It a Priority Today or Face Risk Tomorrow

Everybody in media is having conversations about AI. It will be one of the primary topics on the agenda at the 2024 MFM Annual Conference next month. There’s plenty to talk about, from content creation to automation. As a writer, I’m often asked if AI will render me obsolete. My response is, “Have you read what ChatGPT creates? I’m not worried.”

However, there is something every company should have some concerns about: how are your internal teams using AI? And do you even know if they are? That’s why I believe developing a policy around its use is critical. It’s not unlike the social media policies that many implemented 15 years ago. There had to be rules about what people could post associated with the brand because no one wanted rogue employees speaking for their organizations and having that amplified.

AI has the same type of implications. I’m not anti-AI at all. I see its value, especially in crunching and analyzing data or eliminating manual tasks. Those use cases are built into technology you’ve already vetted.

What media companies and others need to focus on is whether their employees are using generative AI tools like ChatGPT to create content, whether that’s copy for your website or the emails salespeople use to prospect. Adoption has been fast, and marketers are heavy users. One survey found that 48% use generative AI to create content, and another 10% plan to do so.

This is where things get a little messy.

OpenAI Has Multiple Legal Challenges

First, all of this is still a legal gray area. There are multiple pending copyright lawsuits against OpenAI, the maker of ChatGPT. OpenAI argues that without copyrighted material there was no way to “train” its models; it doesn’t deny that the technology has ingested copyrighted works.

Some of those suing include The New York Times, the Authors Guild and individual authors. OpenAI earned a partial win earlier this year, when a judge dismissed several of the claims, including unjust enrichment, though the core copyright infringement allegations remain.

Media companies are very aware of copyright regulations. You’re unlikely to get in legal trouble for employees using AI to draft emails, but you can hardly call the output of prompts submitted to ChatGPT original.

You should be more concerned about reputational harm or mistrust from audiences and customers. It’s evident to me when I get a cold email or read articles online that they are AI-generated. They are typically awkwardly formal, lacking a human touch.

ChatGPT Isn’t Always a Reliable Source

The second big issue with ChatGPT is that it hallucinates: it can make things up or provide inaccurate or false sources. An analysis from Stanford University found that it did so at least 3% of the time. The researchers also concluded that it’s getting less accurate, not more so. In other words, it’s not a trustworthy source.

Adding to this issue is that its “knowledge” only includes what it ingested during training, with a cut-off of September 2021. It isn’t connected to the internet, so it isn’t continuously learning new things. If employees use the tool to research or create content that relies on facts, you again open yourself up to the risk of spreading misinformation.

What Should Your Internal AI Policy Cover?

I’ve discussed the legal, ethical and reputational concerns. These factors should influence how you develop a policy. You’ll want to talk to legal and compliance experts from the start to determine which laws apply. They can advise on data privacy considerations as well.

The rest of the policy is more about how people can responsibly use generative AI. Here are a few suggestions that can help you craft this part:

  • Determine your generative AI needs: In what areas of your company could AI be an asset? You’ll need to consider multiple use cases.
  • Define what tools you’ll make available to employees: Clarifying approved platforms keeps people from turning to tools that may or may not be acceptable.
  • Build a governance framework: These are your do’s and don’ts. You already have rules governing technology usage and other areas of risk; you need the same for AI.
  • Talk about generative AI usage as a company: Many of your staff need education on its benefits and limitations. If you roll out a policy, be prepared to present it and to answer questions.
  • Create controls and guardrails for AI usage: Part of a policy is enforcing it, so these measures are necessary to ensure employees are following it.
  • Review and update the policy regularly: Generative AI is changing rapidly, so today’s policy may not align with tomorrow’s environment. Review and update it at least twice a year.

AI Internal Policies Are Foundational for Modern Media Companies

The internal usage of AI may not be on your radar, but it should be. Creating and implementing a policy helps you avoid legal, ethical and reputational risks. It also establishes consistent guidelines for using this advanced technology effectively. Don’t neglect this crucial step in leveraging AI to support human intelligence. That’s how it works best: as a supplement and productivity tool for human creativity and work.

This article was written by Beth Osborne, who leads content strategy and execution at Marketron. She’s been a professional writer for over 20 years, helping tech and software companies tell inspiring stories.


The opinions expressed here are the author's views and do not necessarily represent the views of MediaVillage.org/MyersBizNet.