Introduction
On May 16, 2023, a handful of senators sat down with Professor Gary Marcus, Christina Montgomery, and Sam Altman. If you read the news about it, though, there is very little mention of the first two and much attention given to the last. This dynamic played out within the testimony itself, and even you are likely asking…who are those other randos? More on that in a moment.
I’m going to say something that I’m pretty positive has never been uttered. That congressional testimony was damn good television! You know something historic is happening when Lindsey Graham is nodding his head along with Sam Altman, even allowing him to finish a full sentence before interrupting! Truly a moment for the history books. Let’s give a quick background on Prof. Marcus and Christina Montgomery before diving into the testimony.
Christina is the Vice President and Chief Privacy & Trust Officer at IBM, and chairs its AI ethics board. She is definitely a leader in the AI ethics and safety space, but not one I had heard mentioned before. She’s a lawyer by trade (a Hahvad law grad), and it shows in the litany of policy-related advisory roles she holds. Also great to see a woman in this space, there aren’t many!
Prof. Marcus is definitely my people. He is currently at NYU, studying and teaching the intersection of cognitive psychology, neuroscience, and AI. He’s a huge proponent of AI safety and literacy, as well as a thoughtful and nuanced human being. As I said, my kind of people.
TESTIFY!
Now for the testimony. The group barely scratched the surface in over two hours of runtime, but compared to, say, the testimony with Mark Zuckerberg, or with TikTok’s CEO Shou Zi Chew, the vibe was completely different. In both of those testimonies, it was clear the senators largely had no idea what they were talking about. The tack was unabashedly adversarial on both sides: pointed fingers, snide remarks, not-so-subtle character assassination, and CEOs offloading responsibility for the tools they created onto the users.
This testimony had a different flavor, though, since it was clear most if not all of the senators had some experience playing with ChatGPT. I also think the Senators were generally quite taken with Sam, who held court the day before. I think that one of the magical things about this technology is how accessible and natural it is to use, even if you have no prior experience with it. If you can type, you can use it.
In fact, many of the senators tested out prompts to use in their questioning. Sen. Blackburn of Tennessee asked ChatGPT whether Congress should regulate AI, and she was pleased with the nuanced answer, which gave her both sides of the argument.
Most of the concerns throughout the testimony revolved around three topics: misinformation, the impact on the economy (and creators), and the regulations Congress should consider.
On misinformation, nothing new here, although it is funny when the senators kind of blame OpenAI for things like the fake pictures of Donald Trump getting arrested, when that was not their technology. There were a lot of fears about the upcoming election and how this technology will be used to influence it; many sitting senators were obviously scared they might fall victim to its disruptive effects themselves. Worries about China, sure, but worries domestically as well.
The economy was also a concern, one no one really had an answer to beyond Sam’s optimism that we will figure it out. The majority of the conversation around the economy, though, focused on creators and protecting their work.
They were not able to get into more serious concerns, such as the use of AI in military applications, mental health (although it was touched on slightly), or the deeper problem of alignment, but this topic is so huge you have to start somewhere.
Solutions
There were several solutions to the coming problems of AI discussed, many of which were incisively and succinctly articulated by the good Prof. Sam, the star of the show, was also ready with answers…at least when the senators would let him get to the end of his point. Christina, unfortunately, was a bit of a robot, repeating herself and not providing anything particularly insightful…that being said, very few questions were actually directed toward her, so she didn’t exactly have the same opportunities Marcus and Sam got.
Here are a few of the discussed solutions.
Create a Regulating Body
The idea behind this should come as no surprise. Think of the FCC or the FDA, but for AI. Everyone seemed more or less in agreement on it, although Christina from IBM leaned more toward letting companies create the governance themselves, much to Sen. Graham’s interrupting chagrin. Marcus and Sam were all in on one, and one of the senators essentially offered Sam the job of leading it, which was pretty hilarious. He declined the informal job offer, stating he liked where he worked.
Transparency
An oldie but a goodie. That being said, transparency is starting to become a bit of a buzzword. Within the context of the testimony, the focus was mostly on the training data of these systems. As many of us know, these systems have exhibited quite a lot of bias because of the data they are trained on. The idea would be to have companies disclose exactly what data they trained their models on, so the models could be effectively regulated and vetted for bias.
No mention of interpretability, though, or needing to have a deeper understanding of how these models actually make decisions for their outputs.
Lawsuits
Several times, perhaps deliberately, the conversation turned to Section 230 as a point of comparison for this new wave of AI technology. For those who don’t know, this is the law that shields social media companies from liability for the content their users post and its impacts. It’s why you can’t sue Facebook if your child gets cyberbullied to the point of attempted suicide or beyond. It’s a similar argument to gun laws, where the gun manufacturer or seller cannot be held liable for how the gun is used after a purchase.
Sen. Hawley of Missouri (with whom Sam shared a funny aside about both being from St. Louis) brought up the extreme version of this approach: repeal that law and create legal pathways for holding companies responsible for their products.
Interestingly, Prof. Marcus, who again is heavily on the side of regulation and AI safety, actually cautioned against this as a solution, ironically because lawsuits would take too long! There’s also the problem of…do any of the laws we have actually apply to AI right now? TBD.
Licensing
I think one of the strongest and most actionable strategies delivered during the testimony was licensing larger models. The general idea is that you treat LLMs like you would a package store: you need a liquor license from the government to operate. This would allow the government to place clearer boundaries around the pre-release of an LLM, as well as revoke the license (and therefore a company’s ability to operate) if the model causes enough harm or otherwise breaks whatever laws or regulations are in place.
A concern with any regulation is that you end up screwing the little guy, who doesn’t have the means to remain in compliance, thereby stifling innovation and inadvertently concentrating power in the hands of a few profitable companies. Sam had an answer for this, stating that licenses should ONLY be required for sufficiently advanced LLMs, such as GPT-4, with some set of parameters (e.g., compute power) determining what counts as advanced enough. He even had the presence of mind to note that something like compute might not remain a good indicator, since those barriers could shift drastically with advancing technology.
Conclusion
Although I doubt you will watch the entire testimony, I recommend you at least check out the highlights. This was a fairly nuanced and respectful conversation that was genuinely open and interesting. Yeah, there were some questionable moments and questions, but by the end it gave me hope that we might actually be on the right path toward regulation of these technologies.
The wildest aspect of all of this is how much everyone was more or less in agreement, both across party lines and across the industry. I know Sam is a bit different than most, but you know it’s a serious matter when the people creating this technology are almost begging for guardrails. My concern is…will we be able to move fast enough before regulatory capture by private interests sets in, or will we continue the theme of accelerating faster than we can control?
Watch the testimony here, and tell me what you think!