Yoshua Bengio, a prominent computer scientist who is widely regarded as one of the “Godfathers of AI,” discussed how increased computing power can be harnessed to improve aspects of artificial intelligence safety at a Monday afternoon webinar.
Bengio is a professor at the Université de Montréal and the founder and scientific director of Mila – Quebec AI Institute. He received the 2018 Association for Computing Machinery’s A.M. Turing Award — described on the ACM’s website as “the Nobel Prize of Computing” — for his pioneering work in deep learning alongside Geoffrey E. Hinton and Yann A. LeCun.
The talk, titled “Towards AI Safety that Improves with More Compute (rather than the other way around),” was co-organized by the AI Safety Student Team and the Harvard Machine Learning Foundations Group, which is composed of faculty, graduate students, and postdoctoral fellows at the Harvard School of Engineering and Applied Sciences.
Bengio opened the talk by outlining the central concerns around mitigating AI risk and the questions surrounding AI safety amid a rapidly advancing field.
“Technology is generally dual-use. It could be used for good. It could be used with malicious intentions. And the more powerful it is, the more this dual-use aspect is something we should worry about,” he said.
With escalating computational capabilities, Bengio said there may soon be an AI agent that would pose potential dangers “for which we need democratic oversight.”
“When we build very powerful tools — and we will build more and more powerful AI — whoever controls those tools gets to have more power, because intelligence gives power,” he said.
“Democracy is the opposite of power concentration,” Bengio added.
Bengio proceeded to outline the three primary recommendations he presented to the U.S. Senate last July, encompassing “regulation,” “research investment,” and “countermeasures.”
First, he highlighted the importance of establishing regulatory measures and boundaries around access to AI systems to ensure they “behave according to the democratic will.”
He then advocated for increased investment in AI safety research by both governments and companies, proposing that “companies should have the burden to demonstrate that their systems are safe before those systems can be deployed, or even potentially built.”
Finally, he urged governments to prepare countermeasures “against the possibility that there will be some dangerously powerful AI out there,” even in the presence of regulations and treaties.
Bengio said that throughout his career, he has supported open source software, citing its major advantages of speeding up development and making systems safer, especially in cybersecurity. Open source refers to computer software or projects whose source code is publicly available, allowing users to collaboratively view, modify, and distribute it.
“It is a form of democratization in the sense that more people can have those tools,” he said.
But Bengio said open source software can also raise important ethical questions.
“Who decides whether we release a system open source or not? What kinds of systems do we want to share because the benefits outweigh the risks?” Bengio asked. “And what kinds are not socially acceptable? Who should take those decisions, corporations or governments?”
Ultimately, for effective regulation and democratic decision-making regarding AI usage, Bengio emphasized the need for governments to deepen their understanding of AI.
“In order to regulate AI, to be able to have a democratic process to decide how we use it, governments need to acquire the capability to understand it, to master it — to even use it to defend themselves,” he said.
—Staff writer Camilla J. Martinez can be reached at camilla.martinez@thecrimson.com. Follow her on X @camillajinm.