The federal government must move quickly to regulate artificial intelligence, said a top AI pioneer, who warned that the technology’s current trajectory poses huge societal risks.
Yoshua Bengio, known as the “godfather” of AI, told members of Parliament on Monday that Ottawa should immediately enact a law, even if that law is not perfect.
The scientific director of Mila, the Quebec AI Institute, said a “superhuman” intelligence – one at least as smart as a human – could be developed in the next two decades, or even within the next few years.
“We are not ready,” Bengio said.
He said one short-term risk of AI is the use of deepfake videos to spread disinformation. Such videos use AI to make it appear as if a public figure is saying something they didn’t say, or doing something that never happened.
“This technology can also be used to interact with people through text or dialogue,” Bengio said, “thus fooling a social media user and making them change their mind on political questions.”
“There are real concerns about the use of AI in politically oriented ways that go against the principles of our democracy.”
Within a year or two, the concern is that more advanced systems could be used for cyberattacks.
AI systems are getting better and better at programming.
“When these systems become strong enough to defeat our existing cybersecurity and our industrial digital infrastructure, we are in trouble,” Bengio said.
“Especially if these systems fall into the wrong hands.”
Bill proposes to regulate AI systems
The House of Commons Industry Committee, where Bengio testified on Monday, is studying a Liberal government bill that would update privacy law and begin regulating some artificial intelligence systems.
The drafted bill would give the government time to develop rules, but Bengio said some provisions should take effect immediately.
“With the current approach, it will take about two years to implement,” he said.
He said one of the initial rules he would like to see implemented is a registry that would require systems with a specified level of capability to report to the government.
Bengio said the responsibility and cost of demonstrating safety would be placed on the big tech companies developing these systems rather than on taxpayers.
Bill C-27, first introduced in 2022, was drafted to target what it describes as “high-impact” AI systems.
Bengio said the government should change the law’s definition of “high-impact” to include technology that poses national security and societal threats.
These could include any AI system that bad actors could use to design dangerous cyberattacks or weapons, or systems that find ways to self-replicate despite programming instructions to the contrary.
Generative AI systems like ChatGPT, which can generate text, images, and video, emerged for widespread public use after the bill was first introduced.
The government said it planned to amend the law to reflect this.
The Liberals say they aim to force companies to take steps to ensure that the content they create can be identified as AI-generated.
“It is very important to cover general-purpose AI systems because these are the systems that can be most dangerous if misused,” Bengio said.
Professor Catherine Regis of the Université de Montréal also told the committee on Monday that the government needs to take immediate action, citing recent “meteoric developments in AI that we are all familiar with.”
Speaking in French, she explained that AI regulation is a global effort, and that if Canada is to have a voice, it must first figure out what to do at the national level.
“Decisions will be taken at the global level that will impact all countries,” she said.
“Establishing a clear and concrete approach at the Canadian level is one of the prerequisites for building a credible structure and playing an influential role in global governance,” she said.