EU Big 3 Edge Toward AI Self-Regulation

France, Germany, and Italy have reached an initial agreement on how to regulate artificial intelligence (AI). According to a leaked joint policy paper, the three countries back mandatory self-regulation through codes of conduct, while opposing the imposition of untested regulations.

The paper argues that risks arise from how AI is applied, not from the technology itself. Under the proposal, developers of AI systems such as foundation models would be required to publish “model cards” that explain how their systems work and what they can and cannot do. An AI governance body could help develop guidelines and verify that model cards are applied appropriately.
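To make the model-card idea concrete, here is a minimal sketch of what a machine-readable model card might contain. The field names and example values are hypothetical illustrations, not requirements from the policy paper; any actual schema would be defined by the codes of conduct themselves.

```python
from dataclasses import dataclass, field


@dataclass
class ModelCard:
    """Hypothetical model card: a structured summary of a model's
    capabilities and limits, as the joint paper envisions."""
    model_name: str
    developer: str
    intended_uses: list = field(default_factory=list)
    known_limitations: list = field(default_factory=list)
    training_data_summary: str = ""


# Illustrative example (all values are placeholders)
card = ModelCard(
    model_name="example-foundation-model",
    developer="Example Lab",
    intended_uses=["text summarization", "question answering"],
    known_limitations=["may produce inaccurate output",
                       "not suitable for legal or medical advice"],
    training_data_summary="publicly available web text (illustrative)",
)

print(card.model_name, "-", len(card.intended_uses), "intended uses")
```

A governance body, as described above, could then check that such cards are published and kept accurate, rather than auditing the underlying technology directly.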

While the paper does not propose specific penalties, a sanctions regime could be introduced if violations of the codes of conduct occur. Germany’s Digital Affairs Minister praised the agreement for targeting AI uses rather than the technology itself, with the goal of balancing innovation and responsible development.

The news comes as the EU approved its AI Act earlier this year, making it one of the first jurisdictions to pass comprehensive AI legislation. Experts argue that, beyond regulating AI systems themselves, policymakers must also ensure the accuracy of the data used to train them through proper data governance, with shared standards for traceability and quality.

In summary, the major EU powers are taking initial steps toward self-regulating AI through ethical codes rather than top-down legislation, focusing on applications rather than banning technologies. Robust data governance, however, is seen as key to ensuring AI lives up to its promise responsibly.

#AIRegulation #EU #AIEthics #AIFoundationModels #EUAIAct
