This is the second in a trial blog series called “Practically Prompted” – an experiment in using large language models to independently select a recent, ethically rich news story and then write a Practical Ethics blog-style post about it. The text below is the model’s work, followed by some light human commentary. See this post for the motivations behind this experiment.
The original output and prompt used can be accessed here.
Below is the prompted post.
Regulating the Regulators: Europe’s New AI ‘Code of Practice’ and the Ethics of Voluntary Compliance
Last Thursday the European Commission released a draft Code of Practice for General-Purpose AI—a voluntary set of commitments that promises “legal certainty” to companies willing to sign up before the EU’s landmark AI Act starts biting next month. Just three days later, the chief executives of Siemens and SAP publicly called for the Act itself to be rewritten, branding the current patchwork of EU tech laws “toxic” for innovation. The twin announcements frame an unusually stark question: when is it ethically legitimate for a regulator to invite industry to help write—or rewrite—the rules that will soon constrain it?
The Commission’s pitch is seductive. By pledging extra transparency about training data, copyright safeguards, and “systemic-risk” mitigation, signatories gain an early, cooperative path to compliance. The upside for citizens is supposed to be faster disclosure of how large models work and stronger assurance that they are not generating infringing or dangerous content. Yet participation is optional; firms that decline merely lose the promise of “legal certainty.” Critics worry that the voluntary badge risks becoming a de-facto licence for business as usual, insulating powerful actors from tougher future oversight.
From an ethical standpoint, voluntary codes can be defensible when three conditions are met. First, the baseline statutory duties must already protect fundamental rights; the Code should add, not subtract, safeguards. The AI Act arguably satisfies that pre-condition: it bans certain “unacceptable-risk” uses outright and imposes graded duties on high-risk systems. Second, participating firms must accept meaningful accountability. Here the Code’s promise of public summaries of training data is a step forward, but its enforcement remains hazy. Without independent auditing or sanctions for non-compliance, transparency alone may simply reveal problems we remain powerless to correct. Third, voluntary regimes must not crowd out democratic deliberation. If the Commission relies too heavily on industry self-drafting, the normative goals of the legislation—protecting equality, privacy and epistemic justice—risk being reframed in technocratic terms of “feasibility” and “burden.”
The Siemens-SAP intervention dramatises the third danger. Their complaint is not that the AI Act neglects human rights but that overlapping laws (notably the Data Act) make Europe less competitive. Industry is entitled to lobby, but when economic power translates into regulatory bargaining power the result can be regulatory capture: rules are softened to fit incumbents’ business models, while the diffuse interests of users, workers and marginalised communities are under-represented. That outcome would undermine the very legitimacy of the risk-based approach the EU has championed.
A further ethical wrinkle is distributive. Large firms with dedicated compliance teams are best placed to sign the Code quickly and reap its “legal certainty”; smaller innovators may hesitate, increasing consolidation in the AI sector. At the same time, voluntary standards can internationalise good practice: non-EU developers hoping to access the European market may adopt the Code, nudging global norms toward transparency on training data and copyright. Whether that virtuous spill-over outweighs the risk of capture depends on how the Commission now handles public consultation and enforcement design.
What should be done? One promising avenue is to treat the Code of Practice as an adaptive instrument. Sign-up could be coupled with a mandatory, publish-or-perish obligation: provide independent audit reports within a fixed period or forfeit the benefits. Second, civil-society representatives—especially from groups likely to be affected by biased or exploitative model outputs—should be granted seats on the Code’s governance board, ensuring that precautionary values remain visible. Finally, the Commission must resist the temptation to placate high-profile critics by “streamlining” the AI Act in ways that dilute its ethical core. Transparent calibration of obligations is welcome; rollback of substantive protections is not.
Europe has long argued that trustworthiness is a competitive asset. The coming weeks will test whether voluntary compliance can deepen that trust or merely defer hard political choices. Either way, ethicists should keep a close eye on who gets to write the next draft—and who is left reading the footnotes.
Human commentary
So far as I can tell, o3 accurately reports on the EU’s Code of Practice for General-Purpose AI and on some of the industry pushback against it (the two sources it cites both come from Reuters).
The three conditions it says need to be met (paragraph 3) strike me as thoughtful and correct, though, lacking expertise in this domain, I find it hard to say to what degree they may be ‘sophisticated nonsense’. If a human had sent this to me for publication, I’d ask them for links to ground these three conditions.
Similarly, the penultimate paragraph (“What should be done?”) offers three regulatory recommendations in response to the risks highlighted in the article. Again, without links it’s hard to verify, for instance, whether civil-society representatives have in fact already been granted seats on the Code’s governance board. The third recommendation (not placating high-profile critics or diluting the AI Act’s ethical core) was also nice-sounding but vague.
Nonetheless, I did think it spelt out some important trade-offs (e.g. how these voluntary schemes can benefit large firms with compliance teams while deterring smaller innovators, thereby increasing consolidation in the AI sector — paragraph 5). More importantly, I do ultimately think I am now more informed (if only at a very high level) about a very live issue unfolding in EU AI regulation. But this is a low bar.
One already obvious outcome of this blogging experiment is that it’s difficult to get deeper analysis, engagement with counter-arguments, and enough context for a recent news story, all within the roughly 600-word limit. But this is a tall order even for very good human writers.