International regulatory experts weigh up AI opportunities and risks

Rapid innovation in AI has presented policymakers with a complex balancing act: crafting
regulation that facilitates and encourages innovation whilst reducing risk. The scope for AI
technology to benefit society, for example through healthcare innovation or by driving economic
growth, is matched by its potential for negative impacts.

Analysis by the IMF at the start of this year showed that in advanced economies AI could
impact up to 60 percent of jobs. Equally, in a year with 64 elections, many have raised
concerns about the ways AI could undermine democracy. Valerie Wirtschafter, a fellow at the
Brookings Institution, wrote earlier in the year that “generative AI content can act more as an
amplifier for the spread of disinformation.”

These two areas illustrate the kind of landscape regulators must consider when creating
frameworks for AI governance. As with any transformative technology, navigating it requires
continued open discourse between government, industry and civil society to chart an effective
course.

Against this backdrop, RAID convened a panel of experts from industry and government in April
to share key insights on these issues from various perspectives. Chaired by Meta’s Privacy
Policy Manager, Nicolas de Bouville, the panel included:

  • Dragoș Tudorache MEP, who chairs the Special Committee on Artificial Intelligence in a Digital
    Age, European Parliament
  • Yordanka Ivanova from the AI Policy Development and Coordination Unit within the
    European Commission
  • Dr. Anna Christmann, Member of the Bundestag, Federal Government Coordinator for
    Aerospace and BMWK representative for the digital economy and start-ups, Federal Ministry
    for Economic Affairs and Climate Action, Germany
  • Laurent Gobbi, KPMG France

Nicolas opened by situating the discussion within the central challenge for global AI regulation: “Over
the past two years, we’ve seen countries around the world designing and adopting AI governance
frameworks. They’ve taken different approaches, including comprehensive legislation, focussed
legislation for specific use cases, national AI strategies and even voluntary guidelines.”

“One of the key challenges with these different approaches has been to find the right balance
between innovation on the one hand and managing risk on the other. We are also seeing a
number of different multilateral efforts to coordinate approaches at an international level.”

“The role and work of the OECD, for example, has been central; the work of the G7 with the
Hiroshima process has also been very important in the last two years, as has the work of the
Council of Europe with the international treaty, the work of the African Union, and the work of the
United Kingdom with the AI Safety Summit and the potential follow-up event which may take place
in France.”

Nicolas asked Anna about the effectiveness of the current global regulatory approaches and what
improvements could be made.

Anna commented: “On a global level I think the EU AI Act can be an important example of effective
regulation because it is a risk-based approach where we see that there are different kinds of AI and
they do not all have the same risk.

“We differentiate between things like social scoring, which is very harmful to individual rights, and
industrial applications which might not be that risky and can be very helpful. This risk-based
approach is also very important from a global perspective.”

“It also shows how important it is for governments to come together when it comes to regulating AI
and not just regulating the technology but also regulating its application.”

Yordanka discussed how the EU AI Act can help position the jurisdiction as a leader in regulating
and facilitating innovation in AI.

“Europe is a frontrunner in the regulation of AI, implementing smart and innovation-friendly
regulation that promotes competitiveness, provides legal certainty and aims to enable all sectors to
benefit from this transformative technology.

“It is important that we will have one single set of rules applicable to all AI in the EU, which will allow
companies to grow and spread over the entire single market whilst having a level playing field for
placing those systems in Europe, whether they are developed by companies based in, for example,
China, the US or Europe.

“Our approach, as highlighted by Anna, has been a very targeted one where we avoid overregulation
and follow a risk-based approach that will enable innovation and only affect a limited number of
applications that are the most risky.”

Yordanka later added: “We also have specific measures to support the smaller players, to support
innovation with research exemptions and support SMEs in particular with regulatory sandboxes.

“From a competitiveness perspective, we find it important to have these light-touch rules for general-
purpose AI models, which will allow every company in the EU to integrate these promising and
capable models into numerous innovative downstream applications, spurring growth and benefiting
users across all sectors.”

Nicolas asked Dragoș to discuss some of the considerations relating to which regulators are best
placed to enforce the AI Act in different member states, a step that will be taken at a national level
either by creating a new authority or designating an existing authority to oversee enforcement of the act.

Dragoș said: “I am convinced we will see different solutions adopted by different member states.
Some of them will go down the route of creating a special entity dedicated to implementing the AI
Act nationally. Others will be going towards the national data protection authorities, others towards
other existing authorities, for example, those dedicated to regulation in the tech sector.

“I think what is fundamental in how the implementation of this act will go is how authorities at a
national level will understand the spirit of this legislation. Where the authorities will be sitting and
whether they will be overlapping with others or be independent is less important as long as they
understand what their role is as regulators.”

“Regulators should not understand their role to be implementing this law with a stick in their hand.
There are numerous provisions in this text which speak of the national authorities’ obligation to
facilitate, to create conditions in the way they operate sandboxes, in giving access to SMEs, in
making sure SMEs have what they need and that the right dynamic is fostered within sandboxes.

“There is a lot in this text which speaks of the need to foster innovation and that is going to play out
in the way that national authorities implement it.”

Nicolas asked Laurent, based on his experience working with international organisations, what the
main challenges are for organisations working with AI and what steps organisations can take to
address these challenges.

Laurent said: “There are three main ideas that are coming up in terms of what we are seeing on the
market and what people are telling us.

“The first point is that a lot of companies are looking at what the best use cases are for generative
AI. I think it is important to keep in mind that we are still in a kind of gold rush period, similar to the
late 90s for e-commerce, for example.”

“The second question we have from a lot of companies is: ‘How can I be sure that this will be secure
and safe so that I can put data in this kind of system without it being at risk?’

“The third question is: ‘How should I organise my team and company in terms of governance for
AI?’ And this is not an easy question, because for AI you have a lot of skills that have to be
organised as part of the solution and its delivery.

“So now if you come to the regulation I think it’s a good thing to have a risk-based approach
because it makes sense to adapt this approach to the size of the company, to the number of AI
solutions and also the position of the company in the value chain.

“One part of the regulation that needs attention is the documentation and testing, and the things
that will demonstrate the reliability and explainability of AI systems. Those kinds of things are
really time-consuming and resource-intensive. We anticipate, based on our conversations with a
lot of companies, that those things will be automated as much as possible.

“Beyond compliance, one important point is to leverage the regulatory requirements as an
enabler for companies, because there is a good basis there to act as an accelerator for
innovation.”

RAID’s editor Ben Avison reflected: “The rapid development of AI innovation and regulation
highlights the importance of RAID’s biannual gathering of international policymakers and
regulators. Our Brussels conference in September will take this conversation, on how to balance
the opportunities and risks of AI, to the next level.

“Since this panel discussion in April, the European Commission has been quick to establish the
AI Office to implement the AI Act. We are delighted to welcome key figures involved in its
implementation, including Yordanka Ivanova and many more, alongside Dragoș Tudorache, who
was so central to getting the AI Act through the European Parliament.”

The RAID 2024 conference takes place over two days from 23 to 24 September at the
Stanhope Hotel in Brussels. To book a ticket, follow the link here.

Write-up by Nick Scott. Editing by Ben Avison.