Congress is holding hearings on how to regulate emerging AI technology
LEILA FADEL, HOST:
Another thing lawmakers are focused on today - how to regulate artificial intelligence. After a dinner with members of the House, the CEO of the company behind ChatGPT, Sam Altman, is appearing before a Senate Judiciary panel. We called up Democratic Senator Richard Blumenthal of Connecticut, who chairs that subcommittee. Good morning, Senator.
RICHARD BLUMENTHAL: Good morning. Thanks for having me.
FADEL: Thank you for being here. So Sam Altman has said he's, quote, "a little bit scared" of AI and the tech his own company has created with ChatGPT and what it would do to jobs. Does AI scare you?
BLUMENTHAL: AI has some very scary potential consequences. And we've seen some of them already. The powerful spread of disinformation, harassment of women, the voice cloning software that can impersonate and falsely mimic individual voices. But there's also immense promise for new cures for cancers, developing understandings of physics and biology. It has a lot of potential for both good and ill. And one of the nightmares is that it will replace a lot of jobs. Of course, it could create a lot of jobs, as well. And what's needed is a framework, some rules of the road and protections, such as we have so far failed to do for social media, which also involves algorithms.
FADEL: Right. I wanted to ask you about that, I mean, because there is still a lot of talk about how to regulate social media companies years after it's been a part of daily life. What - with this being an emerging technology, what kind of guardrails do you want to see?
BLUMENTHAL: I think there is a lot of agreement around a number of needed guidelines for transparency. AI companies ought to be required to test their systems, disclose known risks and allow independent researcher access. There are also limitations on use. There are places where AI simply should not be permitted because the risk is so high of disinformation or falsehood and criminal use. And then responsibility and accountability - AI companies and their clients should be held responsible if they cause harm that could be foreseen and perhaps was foreseen. We shouldn't repeat the mistakes of Section 230.
FADEL: Now, you're convening today's hearing with Republican Senator Josh Hawley of Missouri. And he said, for him right now, the power of AI to influence elections is a huge concern. Do you share that concern?
BLUMENTHAL: I think there's a lot of potential for bipartisan cooperation here, just as we've done on social media. The bill that Senator Marsha Blackburn, Republican of Tennessee, and I have authored with 31 co-sponsors to protect children from toxic content on the Internet is bipartisan. I think there's also the potential for that kind of cooperation on AI. In fact, I think there is an absolute need for it because there are such huge consequences for both good and bad.
FADEL: Now, Congress does actually lag behind the European Union when it comes to regulating AI. Has Congress already dropped the ball when it comes to even starting the talk now instead of earlier?
BLUMENTHAL: There is a need for presidential leadership here - no question about it - because there are international implications. For example, we are potentially involved in an AI race with China. There are issues of national security. As a member of the Armed Services Committee, I worry a lot about the question of whether our military is moving quickly enough to adopt AI and its potential. So I think that the president has to be a part of it. And he has, in fact, proposed an AI bill of rights. Senator Schumer has suggested a framework. And I think there will be a lot of cooperation here, both bipartisan and among the branches of government.
FADEL: Senator Richard Blumenthal, Democrat of Connecticut. Just to be clear, ChatGPT did not write my questions for you today. Thank you so much.
BLUMENTHAL: Thank you.

Transcript provided by NPR, Copyright NPR.