I spent five days last week in Kyoto, Japan, at the 18th annual United Nations Internet Governance Forum. I was there moderating one of several panels on Generative AI (GAI) because, just like the tech industry, policy experts and government officials around the world seem to be fixated on how artificial intelligence can be used to create content as well as the risks it introduces.
My session was titled "Exploring the Risks and Rewards of Generative AI" and featured Brittan Heller, senior fellow at the Atlantic Council and affiliate at the Stanford Cyber Policy Center; Janice Richardson, senior advisor to Insight, a European internet safety education organization; Zoe Darme, a senior manager at Google; and Daniel Castaño, a law professor at the Universidad Externado de Colombia. Heller and Castaño joined remotely.
Heller talked about how conversational AI systems, such as chatbots, "can inadvertently propagate moral and ethical biases, potentially eroding trust and leading to public dissatisfaction." She also worries about a "lack of accountability in environments where it could be difficult to authenticate content or indicate provenance," leading to misinformation or bias.
Heller also warned about malware and cybersecurity risks, foreign interference, deepfakes and "profound economic and societal consequences, affecting public safety, causing panic and disrupting critical services."
Richardson talked about both the opportunities and challenges that AI brings to K-12 education. On the positive side are its abilities to enhance personalized learning, overcome physical and other disabilities, provide data-based feedback to students and educators, better understand group vulnerabilities, reduce skill gaps, automate repetitive tasks and help create what she called "smart content."
Richardson also worries about the negative geopolitical impacts of AI, along with its potential to exacerbate systemic racism and inequality and the possibility that bias in AI algorithms could harm marginalized communities.
Both Heller and Richardson raised the issue of AI having a negative impact on human rights, though both acknowledged it could also be used to expand and enforce those rights.
Darme demonstrated how seemingly logical heuristic shortcuts for validation are often not reliable. There was a time, for example, when security experts advised consumers to beware of unsolicited emails with spelling and grammatical errors, but spell checkers and generative AI now empower scam artists to more easily create authentic-looking messages.
She pointed to research showing that people are not great at discerning accuracy and authenticity. To prove her point, she showed the audience two images and challenged them to identify which was created by humans and which was generated or edited by AI. Most people got it wrong.
Even something as simple as labeling content as AI generated doesn’t indicate whether or not it’s trustworthy. There is plenty of misinformation created by humans with no help from AI, as well as AI-generated content that is accurate.
And there's nothing new about people jumping to conclusions based on what appears to be convincing evidence. She pointed to a study by Dartmouth computer scientist Hany Farid, who used modern forensic computer technology to analyze a famous photo of Lee Harvey Oswald holding a gun. The photo had apparent inconsistencies in lighting and shadows that many believed proved it was manipulated, but Farid concluded that the shadow "is consistent with the 3-D geometry of the scene and position of the sun."
Castaño talked about how AI could affect the digital divide for global majority countries. There is a danger that it could further increase inequality both within and between countries, as well as the distinct possibility that algorithms could favor wealthy countries and negatively affect people living in developing countries.
In my remarks, I tried to put our concerns and excitement over generative AI into a historical perspective. I have no doubt that GAI will have both positive and negative consequences. But, as someone who has been involved in internet policy for decades, I've seen numerous moral panics and exaggerated fears along with plenty of hype over technologies that didn't deliver nearly what their advocates had promised. It remains to be seen how much impact GAI will have on our society. I suspect it will be significant, but it's too early to know for sure. Likewise, while I have my concerns, I do not agree with the contention by some experts that it poses an "extinction level threat" akin to nuclear war, climate change and pandemics.
Controversy over next year’s IGF
The IGF is held each year in a different country. This year's IGF in Japan was one of several I have attended and spoken at, in person or virtually, in Kenya, France, Switzerland, Lithuania, Azerbaijan, Germany, Turkey, Indonesia, Mexico, Brazil, India, Poland and Ethiopia. But next year, I probably won't attend, because for reasons I am having trouble understanding, the UN decided to hold it in Saudi Arabia. I realize that some of the countries where it's been previously held have authoritarian leaders, but I agree with the Committee to Protect Journalists "and more than 70 digital and human rights organizations" who are urging United Nations Secretary-General António Guterres to reverse that decision.
The letter points to Saudi Arabia's criminalization of same-sex relations, denial of women's rights and "censorship, surveillance, forced disappearance, extrajudicial killings, detention, and torture," along with the murder of Jamal Khashoggi, a Saudi journalist, U.S. resident and human rights advocate.
Larry Magid is a tech journalist and internet safety activist. Contact him at [email protected].