In this exclusive interview, Henrik Trasberg, who is speaking at the RAID 2023 conference on 26th September, discusses the opportunities and challenges around the regulation of AI and other emerging technologies. Henrik is a Legal Advisor on AI and New Technologies at the Estonian Ministry of Justice and a lecturer in intellectual property law at the University of Tartu. His government legal advisory work has included representing the Estonian Ministry of Justice in negotiations with the EU on the AI Act, as well as working more broadly on effective policy for the safe deployment of AI.
I’d argue that in many cases, the regulatory measures that help to mitigate the risks of new technologies can be designed so that they don’t stifle innovation. Even the AI Act can be an example of that – the key requirements are around having a better understanding of how the systems work, ensuring that data is of sufficiently high quality and ensuring oversight of the systems. These requirements are not in conflict with the kind of innovation that we would want to see in Europe.
However, there is a major challenge on a more nuanced level – how to make sure that the requirements (for example, on data quality) are sufficiently clear to market actors, but at the same time flexible enough to account for the context in which the AI system is used. Various support measures, such as standardisation, sandboxes and good guidelines, can do a lot to achieve that.
One more remark – as regulators and supervisory authorities, we could be braver in saying whether particular technological solutions are legally OK or not. This has been an especially notable problem in the domain of personal data protection, where we’re seeing that companies don’t get enough clarity from regulators and authorities on whether their proposed approach is compliant with the GDPR.
I would outline three major challenges that are currently at the forefront. Firstly, a number of cases have already been filed against generative AI models, claiming that these models used copyrighted content in their training without permission to do so.
The second challenge is whether the output generated by AI – e.g. an image – might be infringing copyright. For example, there are several cases pending in the US on whether generative AI systems that mimic an artist’s style in their output might be infringing copyright.
Thirdly, the question of who the author is and who owns the intellectual property rights to content created by generative AI is a major discussion point. In the EU, copyright requires an expression of intellectual creation by a human author, which pretty much means AI-generated content would not be protected by copyright.
All three examples share a common underlying policy challenge: ensuring that creators’ interests are fully respected while retaining an environment in which generative AI technologies can innovate and evolve.
In terms of the borderless element of these technologies – having harmonised regulation on the EU level and avoiding fragmentation in different countries benefits everyone. For companies, it considerably lowers the barriers to entering new markets. For consumers, it ensures uniform protection, whether they are abroad or using services from another country. For governments, particularly in smaller countries, it significantly increases their capacity to enforce the rules against the big tech companies.
The other aspect of having a coordinated approach is consistency between the different regulations that target different digital technologies. In an increasingly complex regulatory environment, we should strive as much as possible to ensure that the same risks in different domains are tackled in a harmonised manner, and that the balancing of the same conflicting interests is done similarly.
I think we are sometimes struggling with this aspect in the EU. For example, we’re now seeing some domain-specific legislative proposals whereby companies would be thoroughly restricted from processing personal data in order to better tackle a somewhat secondary risk, while in other domains more problematic data processing practices are acceptable. Paying more attention to harmonising domain-specific regulation would add a lot to the consistency and comprehensibility of our regulatory regime.
There is more talk about AI impacts and risks – and the need to address them sufficiently – than we’ve seen with any previous novel digital technology. It makes me optimistic that a societal expectation is brewing which helps to nudge companies to be genuinely prepared for the AI Act. Once the regulation is in force, I do believe the increased transparency, data quality standards and risk management processes it envisions will make AI-based services and products better for consumers and will thereby bring more trustworthiness into the AI landscape.