Response to the Public Consultation on the White Paper ‘On Artificial Intelligence’


Jens-Peter Schneider, Project Reporter of the ELI project on Artificial Intelligence (AI) and Public Administration, together with Co-Reporters Marc Clément and Paul Craig, and Christiane Wendehorst, European Reporter of the ALI-ELI project on Principles for a Data Economy, have submitted a response to the European Commission’s public consultation on the White Paper: On Artificial Intelligence – A European approach to excellence and trust.

The White Paper ‘On Artificial Intelligence’ (COM(2020) 65 final) was published by the European Commission in February 2020 to present policy options for promoting the uptake of AI while addressing certain risks connected to it. The policy framework focuses on creating excellent solutions (‘ecosystem of excellence’) that will enhance trust among various stakeholders (‘ecosystem of trust’). This was welcomed by the authors of the response, who focused particularly on the legal aspects of the ‘ecosystem of trust’.

The response addresses the regulation of AI applications in both public and private use. It points out some of the challenges, such as lack of knowledge and the public-private technology gap, as well as options for a legal framework on the public use of AI. Impact assessments in particular could serve as a source of trust in the public use of AI. Model rules on such impact assessments are currently being developed in the framework of the new ELI project on Artificial Intelligence (AI) and Public Administration – Developing Impact Assessments and Public Participation for Digital Democracy.

When it comes to the private use of AI, the response points out that AI applications, and the corresponding risks, fall within two different dimensions: the ‘physical’ and the ‘social’. The response advocates a targeted regulatory approach to both dimensions. It underlines that ‘physical’ risks (such as death, personal injury or damage to property caused by unsafe products and services) could best be addressed by fully adjusting existing regulatory frameworks to the challenges of digital ecosystems, including AI. ‘Social’ risks (such as discrimination, exploitation, manipulation or loss of control resulting from inappropriate decisions or the exercise of power with the help of AI) are much more AI-specific and challenging to regulate. After expanding on different regulatory techniques, the response recommends a combination of horizontal principles, a list of blacklisted AI practices and a more comprehensive regulatory framework for defined high-risk applications.

The full response is available here.