London, Dec 11 – The deal on rules regulating artificial intelligence (AI), struck by EU lawmakers and member states on Friday, has been welcomed by British academic experts, who see it as a positive step towards broader global regulation but emphasize the need for a global standard and cooperation.
Alison Lui, associate dean in global engagement and reader in corporate and financial law at Liverpool John Moores University (LJMU), called the deal the world's first attempt to regulate AI, which is “definitely a good start.”
“In relation to AI, there’s just so many risks and many of them are unknown to us, so the general public and consumers need to be protected … The advantage of having regulation in the form of legislation is that it gives clarity to everybody. We would know what are the consequences of breaching, for example, any safety standard,” she said.
It is now likely to be several months before the provisionally agreed measure passes the European Parliament.
Jennifer Graham, lecturer in law and legal technology at LJMU, told Xinhua that the regulation could provide a strong blueprint for other nations around the world.
“We might see some nations taking the same approach that the EU has taken, or other countries perhaps taking inspiration from the EU’s approach and adapting it to fit their needs or their desires or their interests more specifically,” she said.
“One of the real issues that lawmakers are faced with is how to future proof AI and how to future proof legislation to make sure the world can continue to adapt them as the tech develops,” Graham said, noting that the rapid advance of AI technology could quickly render whatever rules are approved obsolete.
Lui, who has spent years researching AI controls at an academic level, said worldwide regulation is inevitable, but the big question now is whether the world will see the EU law as a global gold standard.
“The EU kept saying this is the first piece of legislation to regulate AI … But it is very much politically driven,” said Lui.
“If we put aside self-interest and look at what is AI trying to achieve for global humanity, then there should be a global strategy for the world, and that’s really important … AI can do a lot of good for humankind, but yes, they need to look at fundamentally what are the main risks and how we’re going to regulate them,” Lui added.
Graham, who specializes in the law, ethics, and regulation of AI, pointed out that widely accepted AI legislation would be difficult to achieve.
Still, she thought there was potential to arrive at some globally recognized principles and standards for AI.
What is needed is harmonization between nations and organizations, said Graham, noting it is better to work together than to work separately, and any collaboration “should be seen as beneficial.”