On 1 August 2024, the AI Act entered into force, marking a significant milestone in the EU's efforts to regulate artificial intelligence. This followed the establishment of the European AI Office in May 2024, which is tasked with ensuring the consistent application of the Act across Member States and providing guidance on its implementation. To prepare for the application of the rules on the definition of an AI system and on AI practices posing unacceptable risks, set to take effect on 2 February 2025, the European Commission launched targeted stakeholder consultations. These consultations aim to refine guidelines and support the development of codes of practice, paving the way for the Act's full application by 2026.

In furtherance of the above, on 13 November 2024 the Commission launched a Multi-Stakeholder Consultation for Commission Guidelines on the Application of the Definition of an AI System and the Prohibited AI Practices Established in the AI Act, to which the ELI contributed.
ELI’s Response, authored by ELI Scientific Director, Prof Dr Christiane Wendehorst, and Mr Bernhard Nessler, Research Manager for Intelligent Systems and Certification of AI at the Software Competence Center Hagenberg (SCCH), tackles the ambiguities in the AI Act's definition of an ‘AI system’ by introducing a ‘Three-Factor Approach’ to better distinguish AI systems from other IT systems. The Response is available here.
ELI will host a webinar on 19 February 2025, from 12:30–14:00 CET, to discuss its Response and its Three-Factor Approach. The session will bring together experts to explore the implications of these developments and offer further insights into the evolving landscape of AI regulation under the AI Act. Register here.
More information will be available soon.