Compliance through Assessing Fundamental Rights: Insights at the Intersections of the European AI Act and the Corporate Sustainability Due Diligence Directive
di Federica Paolucci, Oreste Pollicino*, Giovanni De Gregorio**, Marco Fasciglione**
*Founding member of IAIC
**Fellows of IAIC
Artificial intelligence systems, including general purpose AI systems (GenAI), have become commonplace. The rampant commodification of generative models over the last year is only one example of the developments that pushed European lawmakers to accelerate the adoption of the European approach to AI regulation, primarily represented by the AI Act. Unlike other sections of the Regulation, such as those outlining prohibited uses of AI systems, which had been part of the European Commission’s initial proposal, the norms specific to GenAI systems were introduced by the European Parliament in the version published just before the start of the trilogue negotiations in June 2023. With the provisions governing GenAI systems under the AI Act set to take effect twelve months after the Regulation’s entry into force, companies have already taken proactive steps to prepare for compliance.
However, the political rush to expand the scope of fundamental rights protection in the AI Act has raised questions about the compliance process. The AI Act is oriented not only towards organisational and technical standards but also towards the protection of fundamental rights. The Fundamental Rights Impact Assessment (FRIA) is just one example of this process, requiring deployers of AI systems to adopt a different approach to balancing risks in the digital age. Understanding this evolution is also relevant with respect to the broader normative framework stemming from the adoption of Directive (EU) 2024/1760 on corporate sustainability due diligence (CS3D). The Directive is in fact meant to address the adverse impacts of corporate activities on human rights (and the environment) by ensuring that EU companies operate responsibly throughout their own operations as well as their entire supply chains. In particular, the Directive – which also applies to the private technology sector – imposes on in-scope companies the duty to undertake human rights due diligence in order to identify, assess and manage the risks of human rights violations. This due diligence process likewise encompasses a human rights impact assessment (HRIA) methodology, which companies, including those in the AI sector, must perform in order to identify and prevent human rights infringements in their business operations. While the Directive represents a crucial step towards more sustainable business practices, compliance by corporate actors poses several challenges. Companies will need to invest in developing robust due diligence systems, which may require significant resources and expertise. These efforts will be needed to ensure effective supply chain mapping and stakeholder engagement, particularly for complex global supply chains.
As a result, assessing and making decisions on how to protect the fundamental rights and public interest goals addressed by these two instruments requires careful evaluation of how to comply with this broader and evolving regulatory environment.