Issues, Ethics, and Policy on Artificial Intelligence: Labour Market Distortions
“As we ponder our uncertain AI future, our goal should not merely be to predict that future, but to create it.” David Autor, labour economist and professor at MIT.
Digital technological innovation has brought the labour force seasons of great change: depending on whom you ask, it could be described as a blessing or as a curse. On one hand, it has greatly enhanced the capabilities and productivity of certain professions, such as those in management, finance, design, and engineering. It is hard to imagine covering these roles today without digital tools for spreadsheets and databases, presentations, complex calculations, and so on. On the other hand, innovation caused a multitude of tasks and jobs to disappear, putting workers in peril, as converting to a new profession is often not a straightforward process.
The category hit hardest was that of low-skilled, low-educated workers, whose duties consisted mainly of routine tasks. The new technologies made it possible to automate these tasks, an opportunity that firms promptly exploited to shrink their costs: investing once in new equipment and then paying to keep it up to date was cheaper than holding on to a larger workforce. Viewed through the lens of wages rather than skills, the strongest impact fell on middle-income workers, whose labour cost firms more than that of low-income workers. Furthermore, certain tasks associated with the lowest wages, such as the delicate handling of fabric or produce picking, were more difficult to automate.
As a result, existing digital technologies polarised employment, hollowing out the middle of the income distribution and, along with other factors, driving the increase in wage inequality.
The advent of AI is a phenomenon whose features make parallels with older digital technologies inaccurate. Firstly, generative AI has a substantially broader scope of application, as it makes it possible to automate tasks that had always been thought of as exclusively human faculties. Secondly, it is broader in terms of industries, as the automatable tasks, such as writing or understanding a simple piece of text, are essential to all fields. Thirdly, the speed of technical progress in AI is unprecedented, far greater than any previous rate of innovation. All of these characteristics are intensified by the development of generative AI, which has made great leaps forward in the past couple of years. From here on, whenever we refer to AI we mean both non-generative and generative AI, with generative AI making the effects more extreme.
It is therefore not surprising that the job categories most at risk are different from before: AI seems to pose the biggest threat to high-skilled, high-educated workers, though with great variability among them. Generative AI makes it possible to automate fine psychomotor and nonroutine cognitive tasks, such as information ordering, memorisation, perceptual speed and deductive reasoning, which usually take humans years of studying or training to hone to a satisfactory level. Just as earlier digital technology was a complement to white-collar tasks and a substitute for blue-collar ones, generative AI can now substitute for more complex tasks as well.
Source: Georgieff, A. and R. Hyee (2021), "Artificial intelligence and employment: New cross-country evidence", OECD Social, Employment and Migration Working Papers, No. 265, OECD Publishing, Paris.
In particular, a study published in June 2023 by McKinsey (McKinsey, “The economic potential of generative AI: The next productivity frontier”, 2023), which assessed the expected productivity gains from AI and its impact across industries and professions, identifies four business functions with the greatest potential for growth: customer operations, marketing and sales, software engineering, and research and development. All of them are characterised by a multitude of nonroutine cognitive tasks involving writing, coding and computation, which automation by means of AI could make more efficient without necessarily worsening the quality of output. The study further estimates that AI could automate tasks that currently absorb 60% to 70% of employees’ time. Of course, the most relevant applications will vary from industry to industry: software engineering, for example, will matter most to the high-tech industry, while customer operations and marketing and sales will matter more to retail.
Automation is therefore the source of both economists’ and employees’ worries about AI in the labour market: if cheaper capital-intensive opportunities come along, firms will be sure to take them, moving resources away from labour and towards capital. In simpler words, unemployment may rise. Yet the picture is substantially more complex, and not necessarily so dire.
The impact of technological innovation on employment has been modelled, for example, by Acemoglu and Restrepo (2019, 2022a, 2022b) through three different effects:
The first is the productivity effect: automation reduces costs and raises productivity, which increases the demand for labour in the tasks that are not automated and thus pushes up the marginal product of labour.
The second is the displacement effect: as automation advances, tasks are reallocated away from labour and towards capital, reducing the marginal product of labour.
The third is the reinstatement effect: new technologies create new opportunities, and with them new tasks and new jobs that displaced workers, or other workers, can take up.
There are therefore two positive effects (productivity and reinstatement) pushing the marginal product of labour up, and one negative effect (displacement) pushing it down: the net effect depends on their relative strength. The marginal product of labour is one of the most important determinants of the equilibrium wage, and therefore of the labour share (the share of nominal domestic income allocated to labour) and of employment. As a result, it is difficult to predict how employment will change, not only because the effects pull in opposite directions, but also because the intensity of each is challenging to estimate. Furthermore, the impacts of AI on the labour market do not happen in a vacuum: employment is also shaped by automation that has nothing to do with AI, and by factors unrelated to technology altogether, such as minimum wages.
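To make the interplay of the three effects concrete, here is a minimal numerical sketch, loosely inspired by the task-based framework of Acemoglu and Restrepo; the function and all parameter values are illustrative assumptions, not estimates from the literature.

```python
# A toy sketch of the three effects. Loosely inspired by the task-based
# framework of Acemoglu and Restrepo (2019); names and numbers are
# illustrative assumptions, not estimates.

def labour_demand(task_upper, automation_frontier, productivity):
    """Stylised labour demand: labour performs the tasks above the automation
    frontier, capital the tasks below it; demand scales with the measure of
    labour tasks times an aggregate productivity term."""
    labour_tasks = max(task_upper - automation_frontier, 0.0)
    return productivity * labour_tasks

baseline = labour_demand(task_upper=1.0, automation_frontier=0.4, productivity=1.0)

# Displacement effect alone: the automation frontier advances, labour demand falls.
displaced = labour_demand(1.0, 0.5, 1.0)

# Adding the productivity effect: cheaper production raises output, and with it
# the demand for labour in the tasks that remain human.
with_productivity = labour_demand(1.0, 0.5, 1.15)

# Adding the reinstatement effect: new labour-intensive tasks extend the task range.
with_reinstatement = labour_demand(1.1, 0.5, 1.15)

for name, value in [("baseline", baseline),
                    ("displacement only", displaced),
                    ("+ productivity", with_productivity),
                    ("+ reinstatement", with_reinstatement)]:
    print(f"{name:>20}: {value:.3f}")
```

In this toy setup, displacement alone lowers labour demand (0.600 to 0.500), while sufficiently strong productivity and reinstatement effects more than offset it (0.690): exactly the ambiguity in the net effect described above.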
The novelty of generative AI and, once again, its speed of development mean that the collection and formal evaluation of evidence is still scarce. This scarcity is further exacerbated by the relative slowness of most firms in adopting AI, due to cost and skill barriers. Still, the evidence gathered and analysed so far paints a not-so-gloomy picture: a study published in July 2023 by Albanesi et al. (Albanesi et al., 2023) on the labour markets of 16 European countries between 2011 and 2019 finds a positive association between AI-enabled automation and employment, especially significant for high-skilled and younger workers. It must be kept in mind, however, that in the period considered generative AI was at a substantially earlier stage of development. The effects of its recent advancement may instead have been captured in December 2023 by Hui et al. (Hui et al., 2023), whose study of the online labour market Upwork examined the impact of ChatGPT, released in November 2022, and of DALL-E and Midjourney, released in April 2022. They analysed changes in both the extensive margin of labour (whether or not to work) and the intensive margin (how much to work) across freelance occupations such as writing and design, and found that generative AI had an adverse effect both on the number of jobs per month a freelancer could obtain and on the earnings those jobs generated.
Overall, it is too early to tell in which direction AI will push employment in the long run: as already mentioned, evidence is still scarce, generative AI is young and rapidly evolving, and firms are slow to adopt it. Furthermore, the effects on employment are likely to take even longer to manifest fully than AI adoption itself: most firms prefer to let their workforce decline naturally, through employees quitting voluntarily or retiring without being replaced, rather than through mass layoffs. Moreover, labour reallocation due to technological change is correlated with business cycle fluctuations (Faia and Shabalina, 2023): during periods of growth the effects of technological shocks on employment are quite tame; when a recession hits, however, the employees who have been rendered least productive quickly start losing their jobs.
To recap what has been said so far, understanding the effects of AI on employment requires answering three fundamental questions, none of which currently has an answer:
How much will AI be employed to automate tasks, as opposed to complementing workers?
How large will the productivity gains be from the employment of AI, whose current use is biased towards automation?
Will AI create more jobs than it destroys? In other words, will the reinstatement effect prevail over the displacement effect?
The last question in particular is at the heart of employees’ worries, especially for high-skilled workers, who have watched the effects of previous digital innovation on their lower-skilled counterparts. Although there is currently no precise prediction of whether the famous Schumpeterian creative destruction will once again prevail, some new roles are certainly bound to emerge. Firstly, firms will need “AI trainers”: some will need data science and other technical skills, but there will also be a need for people who simply interact with chatbots to teach them how to be perceived as compassionate and sympathetic. Secondly, firms will need “AI explainers”, professionals who understand how AI models work and can therefore explain the outcomes they produce to non-experts. Thirdly, firms will need “AI monitors”, whose task will be to make sure that AI systems work as intended, detecting their mistakes and biases and reporting them.
All these professions will require an update of employees’ skills, potentially through training programmes provided by their employer or by the public sector. We are already witnessing large wage premia for AI-related skills, such as statistics, database management and machine learning. Notably, the premia are highest not for engineers or data scientists but for managers: this shows that one of the crucial requirements for employees is an understanding of how AI fits, or could fit, within the larger production process. Skill demand is thus set to shift towards broader scopes and transversal abilities, such as communicating effectively and collaborating both with machines and with other humans of different expertise.
From a qualitative rather than quantitative perspective, AI could have both positive and negative impacts on workers’ well-being. On the positive side, AI can improve labour matching, and when the costs of labour matching are lower, the natural rate of employment increases. On an individual level, employees have so far reported that AI makes their jobs less repetitive and tedious, but also less dangerous. This is achieved, for example, through better monitoring of the workplace environment, with sensors alerting workers when they get too close to a machine or fail to follow safety procedures correctly. Beyond physically risky jobs, monitoring employees’ performance can also lead to fairer performance evaluations. However, there is a flipside: giving AI more leeway in the recruitment process could worsen the welfare of disadvantaged workers, since biases in the training data risk being systematised into discriminatory outcomes. Furthermore, both the removal of simpler tasks and more pervasive monitoring can make the work environment more intense and stressful; monitoring also poses critical ethical questions that are worth diving deeper into.
Let us now change focus and concentrate on how AI will affect the labour market from an ethical perspective. Will the distortions that AI implementation brings about leave workers better or worse off? What implications will the massive use of AI have for how humans interact with knowledge? Is AI a “normal” technology, or is it something more, something different? All these questions arise when pondering an after-AI labour market, and philosophy is perhaps our best ally in trying to answer them. We think this quote from Martin Heidegger’s Being and Time (§26) is the inevitable starting point.
With regard to its positive modes, solicitude has two extreme possibilities. It can, as it were, take away 'care' from the Other and put itself in their position in concern: it can leap in for them. The Other is thus thrown out of their own position; they step back so that afterwards, when the matter has been attended to, they can either take it over as something finished and at their disposal, or disburden themself of it completely. In such solicitude the Other can become one who is dominated and dependent, even if this domination is a tacit one and remains hidden from them. In contrast to this, there is also the possibility of a kind of solicitude which does not so much leap in for the Other as leap ahead of them in their existentiell potentiality-for-Being, not in order to take away their 'care' but rather to give it back to them authentically as such for the first time.
Artificial intelligence would probably fascinate Martin Heidegger: halfway between the hammer and the Other, neither a mere instrument ready-to-hand nor a real person. What does it mean to interface with AI? What threats does it pose? The line between domination and authentic care is a fine one, as Heidegger clearly shows. AI must not leap in, take over and throw out of their position the people who use it, for otherwise they would become dominated and dependent, even if tacitly and hidden from them. The technological progress we are discussing cannot be explained as just one of many industrial revolutions: AI is not the same as the steam engine, because it does not substitute for simple manual work but for complex intellectual activity. Whenever technology is used, alienation is a serious threat; now that we are talking about using technology to replace human intelligence, prudence should be far greater. AI cannot be left free to pervade our labour market without control: it has to be directed, tamed, governed. That is why we propose policies aimed at developing and implementing AI in the labour market in a gradual and sustainable way. AI is a very powerful tool that has to leap ahead of us, anticipating and soliciting our potentiality, the very opposite of leaping in for us and stealing our awareness of being.
There is another threat that AI poses to ethics in the labour market: surveillance. Since the 19th century, firms and governments have implemented many devices and policies to monitor and control workers and citizens as they perform their tasks and duties. These systems of surveillance come with morally ambiguous consequences, making workers feel spied upon. Very often, the limits of such surveillance practices coincided with the physical limits of the person tasked with carrying them out. Now, with the development of AI, surveillance could become a serious threat to people’s privacy, leading to more frequent and easier violations of boundaries. Policymakers should intervene so that AI does not become too powerful an instrument of worker surveillance. Of course, AI can properly be used to reduce moral hazard and monitor workers, but a clear and definite limit has to be set. Michel Foucault worked extensively on surveillance as a practice of power in his masterpiece Discipline and Punish: The Birth of the Prison. Surveillance is not merely a system of monitoring: it influences and changes people’s behaviour; it is a productive form of power. Foucault’s analysis of the panopticon (a prison in which a single corrections officer can observe all the prisoners at all times, without the inmates knowing whether they are being watched) helps us understand how much the fear of being surveilled deprives people of their freedom. Surveillance should be allowed only within certain limits, with certain purposes and for certain reasons. It is up to policymakers to prevent such technologies from making our society resemble Orwellian dystopias.
As we have discussed, AI can constitute a positive historical turn, leading to a better and more developed society. It does, however, pose threats that need to be prevented. These menaces are mainly two: AI can take over too much human work too abruptly and rapidly, and it can reinforce unfair practices, giving rise to perilous social distortions. What can we do to make sure that these threats do not materialise in the near future? How can the labour market change so as to be better prepared for the massive implementation of AI?
Surely, policymaking is a tool to protect and compensate those who will be most negatively impacted by the employment of AI. It is anything but a novelty for a technology to have its “bite-backs”, as historian Edward Tenner calls them (Tenner, “Why Things Bite Back”, 1996). One cannot hope to overcome a bite-back through a Luddite approach or, less violently, by simply banning the new technology; nor can one idealistically hope that firms will refrain from adopting it out of the goodness of their hearts. Innovation cuts costs and increases efficiency: if a firm does not adopt AI, some other firm, at home or abroad (where regulations may differ), will, and the first will be outcompeted. Economic historian Joel Mokyr, in an article for the Institute for European Policymaking at Bocconi University (Mokyr, “Is Technology Our Enemy?”), recalled the six laws of technology formulated by historian Melvin Kranzberg. The fourth law is particularly relevant here: “… many complicated sociocultural factors, especially human elements, are involved, even in what might seem to be ‘purely technical’ decisions”; “Technologically ‘sweet’ solutions do not always triumph over political and social forces” (Kranzberg, “Kranzberg’s Laws”, available here). Policy, at the national or European level, can therefore be employed to ensure that firms can adopt AI without losing out on its productivity gains, while workers also receive some protection. Furthermore, policy intervention may also usefully remind firms that too narrow a focus on cost-cutting would be detrimental to them as well: once a task is automated, it becomes a commodity, and value creation does not happen at the level of commodities. If firms want to create value, they need to step up to a higher level: they need to find ways to use AI to complement their employees’ abilities and reach new potential, not just to replace what can already be done at a lower cost (Satell, “How to Win with Automation (Hint: It’s Not Chasing Efficiency)”, 2017). After all, technology is the true mother of necessity.
One area of policy intervention is therefore skills: firms provide training to their employees, but only after they have already implemented AI. The lack of AI literacy and basic digital skills, however, is one of the main reasons why firms may delay adopting the new technology, losing productivity gains in the meanwhile. The mechanism is a vicious cycle, which better public policy on education and employee (re)training programmes could break. The OECD Employment Outlook for 2023 (OECD Employment Outlook 2023: Artificial Intelligence and the Labour Market, 2023) reports on several initiatives already put in place by European countries: Finland created a free programme called “Elements of AI”, offering online courses for non-experts to increase AI literacy and the social acceptance of AI; the “AI Strategic Vision for Luxembourg” includes the integration of courses on AI into other disciplines, such as business, law, health and human sciences, at both secondary and post-secondary level; Italy implemented the “Training 4.0 Tax Credit” in 2021 and 2022 for firms that provided their employees with training to consolidate or acquire skills related to the digital transformation.
Tax systems are another area for regulation and policy intervention. As Acemoglu points out, tax systems place heavier burdens on firms that hire labour than on those that invest in algorithms to automate work. This obviously incentivises firms to replace workers with AI, particularly when innovating. In the long run, if nothing is done to fix this disparity, distortions will materialise in the labour market: AI would replace more human work than is socially optimal, profiting from the distortion generated by the heavier tax burden on human employees. As Acemoglu suggests, we should aim at a more symmetric tax structure, in which the marginal taxes on hiring (and training) labour and on investing in equipment and software are equated. This would shift incentives towards human-complementary technological choices by reducing the tax code’s bias towards physical capital over human capital. At the European level, action by the Commission could be advocated; however, the Union lacks a common fiscal system, and the tax systems of its member states differ profoundly. The absence of a fiscal union is surely one of the main difficulties of intervening in this field; nonetheless, directives and policies still exist that could push governments into reducing the fiscal discrepancy between workers and software. Whether by reducing the tax burden on labour or by increasing it on the implementation of algorithms, governments can ensure a slower and safer transition to a new economy in which AI will assist and ease human work complementarily.
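To see the distortion in numbers, consider a minimal back-of-the-envelope sketch; the tax and deduction rates below are illustrative assumptions, not figures from any actual tax code.

```python
# Back-of-the-envelope sketch of the labour-vs-software tax asymmetry.
# All rates are illustrative assumptions, not actual tax figures.

def effective_cost(base_cost, tax_rate, deduction_rate=0.0):
    """Cost to the firm of getting a task done, after taxes and deductions."""
    return base_cost * (1 + tax_rate) * (1 - deduction_rate)

task_cost = 100.0  # assumed pre-tax cost of the task, done by a worker or by software

# Hypothetical asymmetry: labour carries payroll-style taxes, while software
# investment enjoys a deduction (e.g. accelerated depreciation).
labour_cost = effective_cost(task_cost, tax_rate=0.30)                         # 130.0
software_cost = effective_cost(task_cost, tax_rate=0.05, deduction_rate=0.10)  # 94.5

print(f"labour:   {labour_cost:6.1f}")
print(f"software: {software_cost:6.1f}")
# At identical pre-tax cost, the tax wedge alone makes automation look roughly
# 27% cheaper, even when it brings no genuine productivity advantage.
```

Equating the two effective rates, as the symmetric structure suggested above would do, removes this artificial cost advantage without banning automation outright.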
Artificial intelligence will revolutionise our economies: it is up to us to ensure that the changes it imposes are positive and fruitful. These changes will not be comparable to the technological advancements of previous centuries, such as the industrial revolutions, as their impact will be more radical and different in nature. Unregulated AI would lead to drastic negative distortions in the labour market and to various adverse social consequences. Both from an economic standpoint and from an ethical perspective, policy is needed to prevent AI implementation from crushing the already frail equilibria of the labour market. Unemployment, consumer surplus, wages and productivity can all seem like mere numbers, cold data; but let us remember that behind every economic parameter there are people, or at least there should be. Artificial intelligence will change everything, not only in economics. That is why policymaking cannot look only at productivity gains and the efficient allocation of resources: it has to acknowledge the historical significance of the digital transition. The implementation of artificial intelligence has to be centred on complementarity and gradualness, allowing for the safe adaptation of social and economic equilibria. Ethics has to guide this transition, fuelling policymaking.
Bibliography:
Acemoglu, D. and P. Restrepo (2019), “Automation and New Tasks: How Technology Displaces and Reinstates Labor”, Journal of Economic Perspectives 33(2): 3–30. Available here.
Acemoglu, D. (2021), “Harms of AI”, SSRN Electronic Journal. Available here.
Acemoglu, D. and P. Restrepo (2022a), “Tasks, automation, and the rise in US wage inequality”, Econometrica 90(5): 1973–2016. Available here.
Acemoglu, D. and P. Restrepo (2022b), “Demographics and Automation”, Review of Economic Studies 89(1): 1–44. Available here.
Acemoglu, D., D. Autor and S. Johnson (2023), “Can We Have Pro-Worker AI? Choosing a path of machines in service of minds”, CEPR Policy Insight No. 123, CEPR Press, Paris & London. Available here.
Adukia, A., J. Evans et al. (2024), “The AI Trends That Will Define Society and Political Economy in 2024”, ProMarket (Chicago Booth), January 12, 2024. Available here.
Albanesi, S. et al. (2023), “Artificial Intelligence and jobs: Evidence from Europe”, VoxEU, 29 July 2023. Available here.
Faia, E. and E. Shabalina (2023), “Cyclical Move to Opportunity”, CEPR Discussion Paper 18546. Available here.
Georgieff, A. and R. Hyee (2021), “Artificial intelligence and employment: New cross-country evidence”, OECD Social, Employment and Migration Working Papers, No. 265, OECD Publishing, Paris. Available here.
Hui, X., O. Reshef and L. Zhou (2023), “The Short-Term Effects of Generative Artificial Intelligence on Employment: Evidence from an Online Labor Market”. Available here.
Ilzetzki, E. and S. Jain (2023), “The impact of artificial intelligence on growth and employment”, VoxEU. Available here.
Krämer, C. and S. Cazes (2022), “Shaping the transition: Artificial intelligence and social dialogue”, OECD Social, Employment and Migration Working Papers, No. 279, OECD Publishing, Paris. Available here.
Kranzberg, M. (1986), “Technology and History: ‘Kranzberg’s Laws’”, Technology and Culture 27(3): 544–560. Available here.
McKinsey (2023), “The economic potential of generative AI: The next productivity frontier”, report of June 14, 2023. Available here.
Mokyr, J., “Is technology our enemy?”, Institute for European Policymaking, Bocconi University. Available here.
OECD (2023), OECD Employment Outlook 2023: Artificial Intelligence and the Labour Market, OECD Publishing, Paris. Available here.
Satell, G. (2017), “How to Win with Automation (Hint: It’s Not Chasing Efficiency)”, Harvard Business Review, March 30, 2017. Available here.
Tenner, E. (1996), Why Things Bite Back: Technology and the Revenge of Unintended Consequences, Knopf. Available here.