Mapping the Future: Reconciling the AI Act with EU’s Competitive Edge
What was pure fiction for most of us until a couple of years ago has now become a reality. The release of advanced chatbots based on large language models (LLMs), such as ChatGPT, has taken the world aback, unveiling an unsettling glimpse of where human progress may lead. With it have come recitals of unprecedented benefits, ethical dilemmas, and gloomy narrations of a proximate human extinction. What is true is that Artificial Intelligence (AI) advances have been swift and nothing short of extraordinary. Progress is now knocking at the door of human-level intelligence, and Artificial General Intelligence (AGI) might be just a few years away. Once again, humanity amazes.
At the onset of this revolutionary AI chapter, some of us rejoiced, some aired concerns, and some others did not even notice the breakthrough. Amidst all of this, the European Union (EU) and its bureaucrats were already trying to envision what the future of AI would look like a few years down the line. Brussels’ attempt to regulate the technology and rein in progress for the better peaked with the AI Act, the world’s first comprehensive AI law.
The EU legal framework for AI adopts a stratified approach based on different risk levels. Unacceptable-risk AI systems, including social scoring, cognitive behavioural manipulation, and certain forms of biometric identification, will be banned. High-risk AI systems that pose a threat to safety and fundamental rights will be strongly regulated and subject to thorough evaluation: these include AI technologies integrated into medical devices, as well as technologies affecting sensitive domains such as education, worker management, law enforcement, or migration. Limited-risk AI systems will only need to comply with minimal transparency requirements. By means of these complementary, proportionate rules the EU seeks to address possible AI risks while leveraging the technology’s potential. In this way, the single market aims to place itself at the forefront of next-frontier legislation.
As a matter of fact, both developmental progress and regulation matter. But is it possible to regulate AI technologies while ensuring that the single market does not lose its competitive edge? Is it possible to justify and reconcile the existence of the AI Act with the global scramble for AGI? Unfortunately, there is no straightforward answer. What is sure is that higher efficiency (at its extreme, an unregulated AI domain) comes at the expense of human safety (understood not merely in existential terms, but also in socioeconomic ones). This efficiency-safety tradeoff lies at the heart of the AI Act: Brussels aims to trade a certain degree of efficiency for higher safety levels by means of checked, regulated AI growth. Paternalistic intervention is therefore needed: the idea that the AI market can regulate itself is naive and far-fetched. Entrusting equilibrium to competitive forces alone risks justifying, if not encouraging, an unrestrained and highly dangerous laissez-faire economic approach; and while this would expedite progress, it would also spread an array of potential risks and negative externalities in its wake.
On the flip side of this restrictive approach, though, lies a troubling loss of competitive capacity that worries many across Europe (France, Germany, and Italy among them). The Union, in fact, runs the risk of falling behind if it imposes excessive regulations on its market, especially if other nations, most notably the US, do not adopt comparable legislative measures (and good luck striking a similar deal in Washington given the current congressional gridlock). Therefore, considering that regulation is imperative and that some loss of competitiveness on the part of the EU is inevitable (at least so long as AI regulation remains a European peculiarity), what Brussels can do is structure a coherent regulatory framework capable of pulling other nations into emulative compliance.

Emulative compliance is possible because AI technology can be deemed an inelastic target, and because the AI market is far closer to a consumer market than to a capital one: unlike capital, consumers cannot relocate to less regulated areas to flee stringent regulation. On the one hand, this means that companies might find it beneficial to adhere to EU standards altogether; on the other, that single nations might find it beneficial to coordinate AI governance. Two caveats apply, however. First, companies will embrace EU standards on a global scale only if the advantages of doing so outweigh the benefits of exploiting more lenient standards in other markets. Second, nations will embrace coordinated AI governance only if they expect widespread global compliance (which would ensure a regulated but still competitive playing field). For this to work, politics must trump blind market dynamics, and Brussels must make sure that the world sees value in it.
Recent research from McKinsey estimates that, across the 63 use cases analysed, generative AI could contribute up to $4.4 trillion annually to global GDP (by comparison, Italy’s GDP is roughly $2.1 trillion). The study also estimates that between 2030 and 2060 approximately half of existing work activities may become automated, and that generative AI may boost labour productivity by 0.1 to 0.6 percentage points annually through 2040. This is a clear illustration of the technology’s potential, and it is against this backdrop that the EU must calibrate its policy decisions. If the bloc has decided to lead the way on regulation, so be it. Let’s embrace this choice and take pride in it. But let’s also make sure that legislative action is echoed internationally and that beneficial innovation is not depressed along the way.
Undoubtedly, the world cannot entirely rely on profit-driven firms for a safe, organic AI development. In competitive markets, these companies generate an array of negative externalities and indirect costs (not included in the firms’ utility functions) that are ultimately shouldered by society at large. The existence of these Pareto-inefficient externalities threatens the very nature of a beneficial market economy. A suitable alternative could involve governments funding (and overseeing) AI development, potentially nationalising research labs. However, this risks according excessive power to central authorities, which might align AI capabilities with authoritarian objectives. Once again, international synergy and cooperation are needed. Regular AI summits and multilateral dialogues (involving not only nations and firms, but also independent experts and civil society) can be an initial step towards a wider governance structure based on shared objectives and democratic oversight.
AI offers foreseeable, tangible advantages that stifling regulation might impair. Nevertheless, when confronted with the prospect of human extinction (however remote), traditional cost-benefit analysis loses its relevance. Regulation is therefore required, even if it comes at the expense of innovation. The EU has understood this well, and it is now time for it to enlighten the global community.
Bibliography:
A European approach to artificial intelligence. (2023). Shaping Europe’s Digital Future. European Commission. https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence
Artificial Intelligence Act: deal on comprehensive rules for trustworthy AI | News | European Parliament. (2023). https://www.europarl.europa.eu/news/en/press-room/20231206IPR15699/artificial-intelligence-act-deal-on-comprehensive-rules-for-trustworthy-ai
Bengio, Y. (2023). For true AI governance, we need to avoid a single point of failure. Financial Times. https://www.ft.com/content/cbd92347-83cc-41a3-ae48-3927eb866a70
Bradford, A. (2019). The Brussels effect. In Oxford University Press eBooks (pp. 25–66). https://doi.org/10.1093/oso/9780190088583.003.0003
Chui, M., Hazan, E., Roberts, R., Singla, A., Smaje, K., Sukharevsky, A., Yee, L., & Zemmel, R. (2023). The economic potential of generative AI: The next productivity frontier. In McKinsey & Company. https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier#key-insights
EU AI Act: first regulation on artificial intelligence | News | European Parliament. (2023). https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
Evans, B. (2023). The ‘AI doomers’ have lost this battle. Financial Times. https://www.ft.com/content/a2c29506-4a38-47a3-8775-beb5e488c169
Letter: Precautionary principle might just save us from AI. (2023). Financial Times. https://www.ft.com/content/594cd722-bb36-47ef-ae5c-52418ea96060
Regulatory framework proposal on artificial intelligence. (2023). Shaping Europe’s Digital Future. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
Schaake, M. (2023). The route to AI regulation is fraught but it’s the only way to avoid harm. Financial Times. https://www.ft.com/content/99ff5c84-ece7-43bf-bed1-2edb30d4d6df
Thornhill, J. (2023). Europe should worry less and learn to love AI. Financial Times. https://www.ft.com/content/a402cea8-a4a3-43bb-b01c-d84167d857d5