Artificial Intelligence comes with risks. How can companies develop AI responsibly?
A MARTÍNEZ, HOST:
Vice President Kamala Harris is set to meet today with tech leaders to discuss artificial intelligence. CEOs from Alphabet, Microsoft and OpenAI have also been invited to the White House to talk about how to develop AI responsibly. So what does responsible AI look like? Joining us to talk about this is Ifeoma Ajunwa. She's a law professor at the University of North Carolina, Chapel Hill, where she directs an artificial intelligence research program. Professor, welcome back. There's been a lot of speculation about the dangers of artificial intelligence: taking away jobs, maybe, promoting misinformation. Even Drake got dragged into this. So how much of this is hype?
IFEOMA AJUNWA: (Laughter) Well, I mean, some of it is hype. But we also want to be careful about the capabilities of artificial intelligence, especially its potential to be used for discriminatory purposes and also the effect that it can have on the labor market and also the workplace.
MARTÍNEZ: So there are actual risks here. I know that we're having fun with this, but you have identified real risks.
AJUNWA: Yes, of course. There are risks. I mean, the writers' strike that's going on now is proof of that, because really at the crux of the matter of the strike is the idea that writers want to be able to curtail the use of AI in writing shows, which would essentially exclude human writers. And so that is something as a society that we have to grapple with. And we have to come to an understanding that we still want to carve out a space for human workers and not, you know, just replace everybody with robots.
MARTÍNEZ: So in a capitalistic system, professor, how can companies develop AI responsibly?
AJUNWA: I think, you know, in a capitalist system, companies will always still want to make profit. But you still want to ask yourself, if workers are not able to make good wages and if, you know, jobs are taken away from workers, then who will be your consumers? So we still have to keep the bottom line of making sure that the economy works for the humans.
MARTÍNEZ: What about regulating AI? I know the Federal Trade Commission is interested in doing that. How much do you think, if at all, should the government be involved in something like this?
AJUNWA: Oh, I think the government should definitely directly be involved with something like this because what is happening is that technology is developing at a fast pace. We don't yet know really the full capability, the full potential of how AI can grow. And without the government there as a, you know, safeguard, as a check and balance, we could have AI that potentially gets too big to really regulate, essentially. And we don't want that. We want to be at the forefront of making sure that AI is being developed responsibly, and that the way that AI is being designed is with fairness in mind for humans.
MARTÍNEZ: Thing is, though, professor, I mean, we've seen with social media that the government typically is very slow to keep up with tech. I mean, just last June, you and I were on this show speaking about how the law might apply to AI if it ever becomes self-aware. It felt unlikely then, but now it's inside a year and we're here. So do you think that people have a clear understanding of what artificial intelligence can and cannot do?
AJUNWA: Yes, I think it's remarkable, right? In just a few short months, we've seen ChatGPT and various editions of it already. So I think we do want to be really cognizant that AI will continue to develop. And although the law is slow to change, we do have to try to get the law to keep up with it.
MARTÍNEZ: Ifeoma Ajunwa is a law professor at the University of North Carolina, Chapel Hill, where she directs an artificial intelligence research program. Professor, thanks.
AJUNWA: Thank you so much for having me. Transcript provided by NPR, Copyright NPR.