At the height of the pandemic, RingCentral, a communications software company, hired 4,000 people to deal with a surge of clients as remote working took hold.
But over the past two years, the company has cut its pandemic-era human resources team of 300 by nearly half. Alvin Lam, its head of HR, has told his superiors that he cannot lose more people. But, if it came to it, he says he could probably cut back again. Artificial intelligence tools such as the company’s HR chatbot Ringo would probably allow him to “figure out a way to continue to produce the same level of service [for] all our stakeholders and still be able to deliver”.
Lam’s admission goes to the heart of the challenges facing companies as they grapple with the possibilities of generative AI. Executives in all areas are examining how, and how fast, they can use the technology in their own teams, while also defending themselves against the assumption that AI agents could perform many of their teams’ duties more efficiently and more cheaply.
The removal of humans from human resources is emblematic of what could happen across many more corporate functions. “Almost all of my [HR] peers in [Silicon] Valley . . . are really struggling because [of the] consistent mandate from the C-suite: ‘Leverage AI to reduce workforce. Make sure the entire workforce are AI equipped’,” says Lam. “That is something that we all struggle with.”
It is still relatively early days. US Census Bureau
second-quarter data on 1.2mn companies show just over 9 per cent reporting they had used generative AI in production of goods and services, though that figure is rising quickly.
Chief executives at large companies such as Salesforce, Amazon and JPMorgan Chase, however, are increasingly vocal about how jobs will be lost and productivity will improve as AI changes the nature of work.
In May, Marianne Lake, CEO of consumer and community banking at JPMorgan Chase,
told investors that the group’s operations department, which provides support in areas such as tackling fraud and processing statements, was “at the tip of the spear on using and leveraging new AI tools and capabilities”. She added: “We expect headcount will trend down by about 10 per cent over the next five years or so, even as the business grows by another more than 25 per cent.”
Executives are trying to resolve the tension between the need to comply with such top-down corporate productivity goals and the risk that AI could hollow out vital operations.
In the rush to apply the technology, one fear is that the core of important corporate activities such as HR will be delegated prematurely to the equivalent of RingCentral’s Ringo and other administrative automata.
Laszlo Bock, former head of people operations at Google and a serial founder of HR technology companies, says the big economic question is whether AI’s productivity benefits are claimed by capital — feeding through to bigger profits or higher executive pay — or shared with labour, in the form, say, of shorter working hours or higher salaries.
Where people managers come down in this debate is “the ultimate test”, he says, of whether HR executives are “the secret police of corporations [or], as they claim to be, champions of employees . . . Post-Covid, it’s the best opportunity they’re going to have to prove who they are.”
Traditionally, the department formerly known as “personnel”, now sometimes called “people”, helps operating executives by handling the minutiae of staff policies as well as advising about the hiring and firing, welfare and wellbeing of employees. HR managers were heavily involved in tackling the disruption of the shift towards remote working and the mental and physical toll of the pandemic, for instance.
HR staff are also, however, regularly derided as paper-pushing tools of senior management, hatchet-wielding enforcers of mass lay-offs, and, lately, purveyors of “woke” diversity initiatives.
Human resources departments are not only responsible for redesigning how staff work with AI. They have to train the workforce to implement the new technology or shift to new jobs. Where necessary, they must also push through the job losses caused by widespread rollout of AI tools.
By January 2024, nearly two-thirds of organisations using AI in HR were applying it to “talent acquisition” (that is, recruitment) according to a
survey by the Society for Human Resource Management, which represents HR professionals. Learning and development — such as training courses or coaching — and performance management were the next most common areas of application.
At Vendasta Technologies, HR team members review all incoming applications. But the Canadian software company plans to use an automated talent agent, Scout, to screen by phone all candidates who reach the next stage, using a standard set of questions. Kim Coutts, director of people operations, says that could save more than 1,000 hours of human recruiters’ time annually, which Vendasta can reinvest in more demanding tasks.
Companies were using automated screening before OpenAI’s release of ChatGPT to general users in November 2022, in part to overcome the risk of human bias. Now they are also having to use AI to fight fire with fire, because candidates have
learnt how to deploy generative AI to draft convincing applications and résumés, and submit them automatically at scale.
Inside many companies, AI chatbots now respond to common staff inquiries about the location of specific policies or how to book time off. At IBM, 94 per cent of such queries are handled by its tool AskHR, which, since August 2024, has used generative AI to produce answers based on a vast pool of HR policy documents.
HR’s use of AI is hemmed in, however, by a tangle of regulation. This includes existing data protection measures, but also new legislation such as the
EU’s AI Act, which deems some HR-related AI activity to be “high risk” because such systems “may have an appreciable impact on future career prospects, livelihoods of those persons and workers’ rights”. The state of California is considering a
“No Robo Bosses” Act, which would bar companies from using automated decision-making systems such as AI to promote, punish or sack workers without any human oversight.
Litigation is already looming. In the US, technology worker Derek Mobley has
sued Workday, alleging an algorithm in its ubiquitous candidate-screening software discriminated against him on the basis of age, race and disability, throwing out his applications for more than 100 posts at different companies since 2017. The potential implications for the whole HR technology industry as it rolls out more sophisticated tools have “got everyone scared”, according to Stacia Garr, co-founder of RedThread Research, an HR technology research consultancy. (Workday has said the suit is “without merit”.)
Jon Lester, IBM’s vice-president of HR technology, data and AI, says: “A lot of [chief human resource officers] don’t want to go to Gen AI, because they have to get it compliant. Our view is, if we can make it compliant, we can make it innovative.” For instance, by testing different applications, IBM discovered that its own large language model, Granite, provided better answers on sensitive questions about staff benefits while still meeting legislative and regulatory standards.
Bock, the tech entrepreneur, says HR should also defend its role within companies, by submitting clearer evidence of the return on its investment in AI, such as the relative performance of employees recruited via an AI-enabled screening process against those hired by human recruiters.
As for the impact on jobs, recruitment site Indeed’s
index of postings in big economies such as the US, UK, France and Germany shows hiring for HR departments has slowed more than the overall jobs market since the advent of generative AI. But it is hard to disentangle such data from general business trends, let alone link the decline to the advance of technology.
While IBM will not give precise numbers, its HR function now employs fewer people than it did in 2016. The amount IBM spends on HR has dropped by 40 per cent over the past four years, including savings made by cutting nearly nine out of 10 of the systems the department was using.
That there is scope for more efficiency is not in doubt. A March
report on the state of AI by McKinsey puts HR in the top four of business units that reported cost reductions from generative AI use in the second half of last year, above marketing and sales and product development, the two areas where generative AI use is most widespread. This is despite the fact that the same report ranks people operations, on average, low in a list of corporate functions that said they were regularly using AI.
HR heads pioneering AI use say that as well as saving money, the technology is helping them improve the mix of work done by their teams and, in some cases, the rewards for taking on more complex tasks.
IBM HR staff told Lester: “If I’m too busy answering queries, managing data, I’m too busy to do my own job better.” He says they now have “space to think”. At US regional bank Citizens Financial, which has used AI to create an interactive internal marketplace for skills, recruiters have turned into “talent advisers” who help with workforce planning: “We make it clear that these jobs are going to change quite dramatically, so if you don’t shift, there will be some degree of displacement over time,” says chief human resources officer Susan LaMonica.
Human resources teams’ adaptation to, and integration of, AI suggests five main ways other divisional heads might tackle the unbending top-down mandate to improve productivity.
First, implementation of AI, like the previous wave of digitisation, has to be handled strategically. The data on which the technology feeds and the processes it is designed to improve must be in order. Healthcare group Johnson & Johnson spent more than five years reorganising and refining its global HR organisation and procedures as it gradually introduced AI. Peter Fasolo, who has recently stepped down as chief human resources officer, says: “It had nothing really to do with technology per se . . . The mistake that can be made is [if] you’re applying those technologies to inefficient processes or broken processes.”
Second, the simplest savings can come in support and administrative functions. Just as HR is introducing automation of internal queries, companies are upgrading customer service chatbots with AI, making them more conversational, and simultaneously shrinking their call centres. In July, Marc Benioff, chief executive of Salesforce, which, like IBM, has an interest in promoting its AI tools, wrote
in the FT that the cloud software group’s AI agents were already “resolving 85 per cent of [customers’] incoming queries”.
Third, senior executives must be engaged in the rollout of internal AI tools. McKinsey found that the proportion of “C-level” executives (those with “chief” in their title) regularly using generative AI for work was greater than for mid-level managers. At IBM, Lester says 98 per cent of the group’s managers and 97 per cent of executives use AskHR. That includes leaders who have their own (human) personal assistants. Chief executive Arvind Krishna likes to tell clients how he used the tool to prepare a job verification letter he needed to act as guarantor on the lease on his child’s student accommodation.
Fourth, AI can be used to blur the boundaries between different divisions and make visible previously unseen inefficiencies. A recent article in
MIT Sloan Management Review describes how leaders from different operations use AI to interrogate data quickly and directly — what it calls “vibe analytics”. One south-east Asian telecoms company brought together account and product managers, as well as executives from finance and marketing, to apply large language models to customer data, budgets, research and product plans, to identify high-margin, high-risk and high-cost contracts. What would otherwise have taken 90 days took 90 minutes.
Pharmaceutical company Moderna has gone so far as to merge its technology and human resources functions under HR head Tracey Franklin. “The traditional model that separates talent from technology is increasingly outdated,”
she said in a recent interview with Unleash.ai, an HR technology events business.
Fifth, AI requires a redesign of how work is done. Moderna used to plan what the future workforce would look like separately from the group’s technology needs. Franklin now aims to design “how tasks, information and decisions get done” and create the optimal mix of people and technology, such as ChatGPT agents.
Evidence of AI-induced job losses is so far largely anecdotal and has had only a small effect on macroeconomic statistics. A July report from Goldman Sachs analysts, citing the US Census data, pointed out that “the vast majority of [US] companies still have not incorporated AI into regular workflow”.
US employment growth in areas such as marketing consulting, graphic design, office administration and call centres has fallen “well below trend” since ChatGPT’s public release. But the bank’s economists are sceptical about warnings that AI will cause long-term mass unemployment. They suggest AI models’ limitations and the fact humans still perform many tasks better than machines will give people “a meaningful comparative advantage” in lots of areas “for the foreseeable future”.
Automation could also lead to increases in jobs in some areas, and may create new, as yet unimagined, roles as workers acquire new skills. “Humans must remain at the centre of the story,” Benioff wrote last month, adding that AI use was “freeing [Salesforce’s] human teams to accelerate projects and deepen relationships with customers”.
In human resources, Bock, formerly of Google, predicts that 80 per cent of functions will end up being automated. But that last fifth of tasks will always be handled by a core group, “not necessarily because machines can’t do it”, but because in some critical cases “people will just prefer to deal with people . . . It’s why people go to a priest for confession. You could pray to the sky, but you’re going to feel a little better telling a human being the darkest things.”