The co-chairs of the United Nations AI Advisory Board, Carme Artigas and James M. Manyika, discussed the geopolitics of developing and governing artificial intelligence at a Harvard Institute of Politics forum on Wednesday evening.
The talk, moderated by Meghan L. O’Sullivan, director of the Belfer Center for Science and International Affairs, centered on the findings of the AI Board’s recent report, “Governing AI for Humanity.”
Artigas, who served as Spain’s Secretary of State for digitalization and artificial intelligence from 2020 to 2023, and Manyika, the Senior Vice President of Research, Technology and Society at Google, worked with a coalition of 39 AI experts to produce both an internal draft answering baseline questions about the significance of AI and a final brief offering recommendations for global governance.
“The first internal report was just answering the two questions: ‘Why do we need to govern AI globally?’” Artigas said, “and ‘What exactly do we want to govern?’ And then what we have to gain consensus on in the last nine months — which has been probably the hardest — is ‘Okay, now we know why and what. How do we do it?’”
In the report, the board offered seven recommendations to close AI governance gaps, with an emphasis on channeling AI benefits toward the UN’s Sustainable Development Goals.
“The reason we liked anchoring a lot of the benefits around the SDGs is at least it gave us a list of things that the world has generally agreed on already, so you didn’t have to go renegotiate ‘What are those benefits? What are the problems to solve?’” Manyika said.
Still, Manyika said it was a “daunting” task for him and Artigas, who were chosen in October 2023 to co-chair the board of 39 members representing 33 countries. The board conducted briefings with the UN’s 193 member states, consulted over 2,000 experts, ran numerous surveys, and received multiple written submissions with feedback on the drafted report.
“The work itself was very complicated because, as you might imagine, the world’s a very big place. We needed to listen to all the members, their different views. We also had to consult with all the member states,” Manyika said.
Artigas said it was “enriching” to integrate different perspectives and ideas in the board’s work, saying that each member offered their “own principles, values, and beliefs.”
“The important thing is that we have all the freedom to work independently with no interference in our work,” Artigas added.
Given the diversity of individuals on the board, Manyika said he and Artigas were both concerned that any agreement would be “watered-down.” Still, he said the group was able to achieve “substantive” consensus in the report.
“We were able to get agreement with some foundational principles that everybody kind of agreed to,” Manyika said. “Things like this technology should be grounded in the public interest — that, in fact, it should benefit everybody inclusively.”
Manyika and Artigas also discussed the board’s inclusion of perspectives from the Global South when assessing the future of AI governance, identifying gaps in capacity and governance that they say should be addressed in developing countries.
“We had to pay attention to the Global South,” Manyika said. “We’re worried that there’s already a digital divide, regardless of AI. What we didn’t want is for the digital divide to turn also into an AI divide.”
Tom E. Nelligan ’27, who attended the forum, said he appreciated the chance to better understand AI from the geopolitical perspective.
“It seems very hopeful for the future, which is kind of nice to hear, because I think a lot of the discussion on AI is doom and gloom without a lot of sunshine and rainbows,” Nelligan said.
—Ariadna Cinco contributed reporting.