Mr. Altman Goes to Washington


In the first of what promises to be many hearings in Congress around AI, a topic that was described as "a printing press moment," OpenAI CEO Sam Altman, IBM's Chief Privacy/Trust Officer and AI Ethics Chair Christina Montgomery, and NYU's Professor Emeritus Gary Marcus testified this week in front of the Senate Judiciary Subcommittee on Privacy, Technology and the Law. This hearing was a welcome change in both tone and approach from previous tech hearings, but the discussions also provided insights to which the advertising and media industries should pay close attention.

In a dramatic example of the power of AI, Senator Richard Blumenthal (D-CT) opened the hearing by stating, "Too often we have seen what happens when technology outpaces regulation, the unbridled exploitation of data, the deepening of societal inequalities. We have seen how algorithmic biases can perpetuate discrimination and prejudice, and how the lack of transparency can undermine the public's trust."

Except it was not Senator Blumenthal who wrote or said those words. It was an AI that had been trained on Blumenthal's speeches to capture his perspective and voice; an example of how, as AI learns and improves, the threat of political misinformation will grow and decades of precedent around what can be considered acceptable evidence in our judicial system will be undermined.

In their frequent visits to Congress over the past three years, Meta's Mark Zuckerberg, Amazon's Jeff Bezos, Google's Sundar Pichai and Microsoft's Satya Nadella were subjected by our elected officials to confrontational interrogations that revealed a striking lack of tech knowledge and were at times downright nonsensical and embarrassing. On the other end, the tech CEOs' responses were evasive, overly pedantic and decidedly, precisely "corporate" in their non-answer answers. In last month's disastrous testimony by TikTok CEO Shou Zi Chew, the subcommittee's demeanor and questions bordered on McCarthyism, and Chew's evasiveness did not help ease any concerns about the national security risk that TikTok represents.

In contrast, Altman, Montgomery and Marcus were transparent, open, judicious and thoughtful in their responses to questions that were equally curious, intelligent and wide-ranging. From the impact of AI on jobs (which Professor Marcus doesn't think will be immediate at a large scale, but will be when AGI is achieved) to political misinformation to autonomous targeting with drones to copyright protection for creators, these important topics were discussed not with hype or bombast, but with civility and a true desire to begin these significant conversations in earnest.

That's not to say the three witnesses were always on the same page. Professor Marcus in his opening statement questioned OpenAI's adherence to its original mission statement, which says, "Our goal is to advance AI in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return."

"Seven years later they are largely beholden to Microsoft ... embroiled, in part, in an epic battle of search engines which routinely make things up," Marcus continued. "That's forced Alphabet to rush out products and de-emphasize safety. Humanity has taken a back seat."

Also, when the discussion turned to whether or not a new federal agency should be established to "license" more-advanced models and audit their compliance, IBM's Montgomery appeared to be the sole dissenter.

In a data-driven advertising world, AI and the ad industry are growing ever closer to a symbiotic relationship. Advertising needs AI to create more targeted audiences, to drive less biased media mix models and to create more predictive models around business actions and outcomes. AI needs advertising to continue generating the human actions that feed those models. So, it's axiomatic that any policy or regulation around AI is destined to have an impact on how we can continue down the data-driven communications path. And make no mistake, the one very clear message of this week's hearing had less to do with AI and more to do with the lessons learned from the past laissez-faire, self-regulatory approach to social media, which Congress has no intention of repeating -- especially with a technology that some, including its creators, have called the biggest threat to humanity since the nuclear bomb.

Need proof of how pissed Congress is about their missteps on social media? Senator Dick Durbin (D-IL) brought up the wide bipartisan support that recent privacy-centric legislation has received, noting that in his time in the Senate he has never seen bills passed by unanimous roll calls.

Mr. Altman was repeatedly questioned on whether or not OpenAI would claim Section 230 protection; he strongly agreed that Section 230 does not apply. "I believe that people should have the right to sue companies for damages caused by AI," Altman said. "It will also help to deter companies from developing AI that is harmful."

Altman believes that any regulation of AI should require transparency around how the AI was trained and how it is used, but he also emphasized that consumers should be informed if what they are watching or using was generated by AI. Advertisers like Coca-Cola -- which recently released the mesmerizing ad "Coca-Cola Masterpiece," created in part using AI -- would be required to either add a disclaimer or some "yet to be created" industry-standard logo that would inform consumers that AI was involved. It would also mean that any number of journalists or press organizations that are using AI to write or produce "text to video" content would likely be required to do the same, with various levels such as "This article was written entirely by ChatGPT," "ChatGPT wrote a section of this article" or "This article was researched using ChatGPT, but written by a human."

After watching the hearing, I do believe that a new commission similar to the FCC will be created for AI and data. In particular, most participants seemed to agree that AIs more powerful than today's GPT-4 will likely need to be regulated. It's likely that you will need to state your use case for those models and obtain a license, which can be revoked if the AI is misused or its use deemed harmful.

But Wait, That's Not All …

This week's hearing was Day One of our national work session on how we, as a country, will deal with the reality of AI in our lives. Notice I didn't say conversation. A conversation implies all talk, no action. Action here is inevitable, and I believe it will be swift. It has to be, as every single second AI is evolving and learning. All aspects of the ad world, from the 4As and the ANA to the IAB, must be active participants in the process. While, of course, those organizations should advocate for the best interests of their members, they should also keep at the forefront that this is not simply a privacy or typical advertising lobbying effort, but one that has immense implications for the future course of how we live and provide for ourselves and our families -- truly a next stage of human evolution. And let's use this week's hearing as a guide on how we advocate for the needs of our industry, with civility, seriousness and a macro-level perspective of what this moment really means. Whether you like it or not, science fiction is becoming science fact. Let's commit ourselves to delivering on the optimistic path and not the dystopian option.


Copyright ©2023 MediaVillage, Inc. All rights reserved.