EU Approves Draft Law To Regulate AI – Here’s How It Will Work
While far from perfect, it will be the first legislation in the world dedicated to regulating AI in almost all sectors of society – although defence is exempt.
The word “risk” is often seen in the same sentence as “artificial intelligence” these days. While it is encouraging to see world leaders consider the potential problems of AI, along with its industrial and strategic benefits, we should remember that not all risks are equal.
On Wednesday, June 14, the European Parliament voted to approve its own draft proposal for the AI Act, a piece of legislation two years in the making, with the ambition of shaping global standards in the regulation of AI.
After a final stage of negotiations, to reconcile different drafts produced by the European Parliament, Commission and Council, the law should be approved before the end of the year. It will become the first legislation in the world dedicated to regulating AI in almost all sectors of society – although defence will be exempt.
Of all the ways one could approach AI regulation, it is worth noticing that this legislation is entirely framed around the notion of risk. It is not AI itself that is being regulated, but rather the way it is used in specific domains of society, each of which carries different potential problems. The four categories of risk, subject to different legal obligations, are: unacceptable, high, limited and minimal.
Systems deemed to pose a threat to fundamental rights or EU values will be categorised as having an “unacceptable risk” and be prohibited. An example of such a risk would be AI systems used for “predictive policing”. This is the use of AI to make risk assessments of individuals, based on personal information, to predict whether they are likely to commit crimes.
A more controversial case is the use of face recognition technology on live street camera feeds. This has also been added to the list of unacceptable risks and would only be allowed after the commission of a crime and with judicial authorisation.
Those systems classified as “high risk” will be subject to obligations of disclosure and expected to be registered in a special database. They will also be subject to various monitoring or auditing requirements.
The types of applications due to be classified as high risk include AI that could control access to services in education, employment, financing, healthcare and other critical areas. Using AI in such areas is not seen as undesirable, but oversight is essential because of its potential to negatively affect safety or fundamental rights.
The idea is that we should be able to trust that any software making decisions about our mortgage will be carefully checked for compliance with European laws to ensure we are not being discriminated against based on protected characteristics like sex or ethnic background – at least if we live in the EU.
“Limited risk” AI systems will be subject to minimal transparency requirements. Similarly, operators of generative AI systems – for example, bots producing text or images – will have to disclose that users are interacting with a machine.
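The tiering described above can be summarised in a short sketch. This is an illustrative simplification based on the obligations the article describes for each tier, not the legal text itself:

```python
# Illustrative summary of the AI Act's four risk tiers and the obligations
# the draft attaches to each, as described in the article. A simplification
# for illustration only, not the legal text.

RISK_TIERS = {
    "unacceptable": "prohibited (e.g. predictive policing of individuals)",
    "high": "disclosure, registration in a database, monitoring and auditing",
    "limited": "minimal transparency requirements",
    "minimal": "no specific obligations",
}

def obligations(tier: str) -> str:
    """Return the obligations associated with a risk tier."""
    return RISK_TIERS[tier.lower()]
```

Note that what is regulated here is the application domain, not the underlying technology: the same model could fall into different tiers depending on where it is deployed.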
During its long journey through the European institutions, which started in 2019, the legislation has become increasingly specific and explicit about the potential risks of deploying AI in sensitive situations – along with how these can be monitored and mitigated. Much more work needs to be done, but the idea is clear: we need to be specific if we want to get things done.
Risk Of Extinction?
By contrast, we have recently seen petitions calling for mitigation of a presumed “risk of extinction” posed by AI, giving no further details. Various politicians have echoed these views. This generic and very long-term risk is quite different from what shapes the AI Act, because it does not provide any detail about what we should be looking out for, nor what we should do now to protect against it.
If “risk” is the “expected harm” that may come from something, then we would do well to focus on possible scenarios that are both harmful and probable, because these carry the highest risk. Very improbable events, such as an asteroid collision, should not take priority over more probable ones, such as the effects of pollution.
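The prioritisation argument above can be made concrete with a toy calculation. The scenarios and numbers below are entirely hypothetical, chosen only to show why a probable, moderate harm can carry a higher expected harm than an improbable, extreme one:

```python
# Toy sketch of risk as expected harm (probability x harm).
# All probabilities and harm scores are hypothetical, for illustration only.

scenarios = {
    # name: (annual probability, harm on an arbitrary 0-100 scale)
    "asteroid collision": (1e-8, 100),
    "effects of pollution": (0.9, 30),
    "biased credit scoring": (0.2, 40),
}

def expected_harm(prob: float, harm: float) -> float:
    """Expected harm = probability of the event times the harm if it occurs."""
    return prob * harm

# Rank scenarios from highest to lowest expected harm.
ranked = sorted(scenarios.items(),
                key=lambda kv: expected_harm(*kv[1]),
                reverse=True)

for name, (p, h) in ranked:
    print(f"{name}: expected harm = {expected_harm(p, h):.6f}")
```

On these made-up numbers, the improbable catastrophe ranks last: a vanishingly small probability dominates even a maximal harm score, which is the point the paragraph above makes about prioritising probable harms.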
In this sense, the draft legislation that has just been approved by the EU parliament has less flash but more substance than some of the recent warnings about AI. It attempts to walk the fine line between protecting rights and values and not stifling innovation, while specifically addressing both dangers and remedies. While far from perfect, it at least provides concrete actions.
The next stage in the journey of this legislation will be the trilogues – three-way dialogues – where the separate drafts of the Parliament, Commission and Council will be merged into a final text. Compromises are expected in this phase. The resulting law will be voted into force, probably at the end of 2023, before campaigning starts for the next European elections.
After two or three years, the act will take effect and any business operating within the EU will have to comply with it. This long timeline does pose some questions of its own, because we do not know how AI, or the world, will look in 2027.
Let’s remember that the president of the European Commission, Ursula von der Leyen, first proposed this regulation in the summer of 2019, just before a pandemic, a war and an energy crisis. This was also before ChatGPT got politicians and the media talking regularly about an existential risk from AI.
However, the act is written in sufficiently general terms that it may remain relevant for some time, and it may well influence how researchers and businesses approach AI beyond Europe.
What is clear, however, is that every technology poses risks, and rather than wait for something negative to happen, academic and policymaking institutions are trying to think ahead about the consequences of research. Compared with the way we adopted previous technologies – such as fossil fuels – this does represent a degree of progress.
Nello Cristianini is a Professor of Artificial Intelligence at the University of Bath, and the author of “The Shortcut – Why Intelligent Machines Do Not Think Like Us”. His research interests include machine learning, artificial intelligence, computational social science, data science, and the philosophical foundations and implications of AI.