Algorithms may start deciding who gets fired
Artificial intelligence is changing not just how we work, but how we lose work. It’s not only chatbots coming for human jobs; it’s also software that can determine which employees get pink slips when companies decide to downsize.
Whether any employers have used algorithms to conduct layoffs in recent months has been a topic of speculation, though none have disclosed doing so.
But Capterra, a business-oriented tech review platform, recently surveyed 300 leaders in human resources and found that 98% said they would rely on software and algorithms to reduce costs during a recession.
Marketplace’s Meghan McCarty Carino spoke to Brian Westfall, the author of that Capterra report. He said HR is much more data driven today than it was during the Great Recession 15 years ago.
Below is an edited transcript of their conversation.
Brian Westfall: We asked HR leaders how much these layoff decisions are going to be driven by data versus by gut instinct, and 46% said it would be an equal divide. But I think at the end of the day, being data driven is a good thing. It can help HR departments make decisions based on evidence instead of unconscious biases. The big red flag, and the reason we want HR departments to proceed with caution, is the data itself: if they’re using bad data, or not understanding how that data is being used by an algorithm to make these decisions, that’s where bad things can come into play.
Meghan McCarty Carino: Tell me more about how these algorithms work. What kinds of data might be considered?
Westfall: It depends. They could look at skills data and performance data. Or they could look at salary data, flight risk data or work status. The algorithm will do the analysis and say, based on that data, the recommendation is that you lay off these employees. I would highly encourage HR departments to be really critical about determining what data they are going to use and what data they are not going to use.
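To make that concrete, here is a minimal sketch, in Python, of how a tool might combine the kinds of data Westfall lists into a layoff-recommendation score. Every field name, weight and formula below is an invented assumption for illustration, not a description of any real HR product.

```python
# Hypothetical layoff-recommendation score. All fields, weights and the
# formula are invented for illustration; no real product is described.
from dataclasses import dataclass

@dataclass
class Employee:
    name: str
    performance: float   # 0-1, from performance reviews
    skills_match: float  # 0-1, overlap with the skills the company needs
    salary: float        # annual salary in dollars
    flight_risk: float   # 0-1, modeled likelihood of quitting anyway

def layoff_score(e: Employee, max_salary: float) -> float:
    """Higher score = stronger recommendation to include in the layoff.
    The weights are arbitrary judgment calls baked into the tool."""
    cost_saved = 0.4 * (e.salary / max_salary)       # normalize to 0-1
    leaving_anyway = 0.2 * e.flight_risk
    value_kept = 0.5 * e.performance + 0.3 * e.skills_match
    return cost_saved + leaving_anyway - value_kept

staff = [
    Employee("A", performance=0.9, skills_match=0.8, salary=150_000, flight_risk=0.1),
    Employee("B", performance=0.5, skills_match=0.4, salary=120_000, flight_risk=0.6),
]
top_salary = max(e.salary for e in staff)
for e in sorted(staff, key=lambda e: layoff_score(e, top_salary), reverse=True):
    print(e.name, round(layoff_score(e, top_salary), 3))  # B: 0.07, A: -0.27
```

Each weight here encodes a value judgment: nudge the salary weight up and the ranking can flip, which is exactly why Westfall urges HR departments to know what data a tool uses and how it weighs it.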
McCarty Carino: What are some of the concerns with using this technology and the data that goes into it?
Westfall: Technology and data can be biased. There’s a famous example from a few years ago where Amazon got a ton of job applications coming in. They wanted to know, “How do we quickly identify the top candidates that we should consider for a position?” So, they created an algorithm and decided to look at their current top performers and compare them to the job applicants who came in, thinking it would make it easier to surface the top candidates. But what they found was, because their workforce was predominantly male, the algorithm was penalizing female applicants. So this is an example of how, if you’re not careful and don’t understand how these algorithms work, they can do real harm.
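Here is a toy sketch of the failure mode Westfall describes: a model scoring resume words by how much they resemble a historically male set of “top performers.” The training snippets and scoring rule are invented; Amazon never published the details of its system.

```python
# Toy illustration of learned bias: words are weighted by how much more
# often they appear in past hires' resumes than in rejected ones. Because
# the past hires skew male, gendered terms get penalized by accident.
# All data here is invented; this is not Amazon's actual system.
from collections import Counter

hired = ["captain chess club python", "software engineer python",
         "software engineer java", "rugby captain java"]
rejected = ["captain women's chess club python",
            "women's college software engineer"]

def word_weights(pos, neg):
    """Weight = smoothed ratio of a word's frequency in hired vs. rejected
    resumes (add-one smoothing so unseen words don't divide by zero)."""
    p, n = Counter(" ".join(pos).split()), Counter(" ".join(neg).split())
    return {w: (p[w] + 1) / (n[w] + 1) for w in set(p) | set(n)}

weights = word_weights(hired, rejected)
print(weights["python"])   # > 1: favored, plausibly job-relevant
print(weights["women's"])  # < 1: penalized, despite saying nothing about skill
```

The model never sees gender directly; it simply inherits the skew in its training data, which is the harm Westfall warns about.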
McCarty Carino: When companies do use technology like this to let go of people, what message does it send to employees?
Westfall: I think it’s going to be interesting to see how transparent companies will be about these decisions if they do use technology and algorithms. On the one hand, it says this wasn’t necessarily a biased human decision. It shows they weighed the data and came to a very logical conclusion about the decisions made. On the other hand, it does open up companies to people who are laid off saying, “An algorithm made this decision. How does this algorithm work? What data did it use?” And then companies become vulnerable if they really don’t understand it and they just plucked a product off the shelf, plugged in some data and moved forward that way. So it sends a mixed message. I think the human element plays such a crucial role in keeping your workforce intact and keeping engagement sound. Layoffs can be a real hit to that.
McCarty Carino: There’s never a good way to lay off employees, but there are better ways and worse ways, and we’ve seen those play out among the different companies that have had mass layoffs. How a company frames these decisions seems to be an important element. I can see the use of this technology going both ways in that argument.
Westfall: Right. Going back to the last recession, there was a lot of reliance on these old, tired stereotypes or fables like “last in, first out,” as in, “We should just lay off the last people we hired because they’re not providing as much value to the organization as tenured employees.” But there’s no data behind that. You could have made a great hire three weeks ago who is going to provide more value to your organization than someone who’s been there for 10 years or more. So I think the shift to data, and the analysis of that data to arrive at these decisions, is a good thing, because we want to get away from those old, tired tropes that really didn’t arrive at the best decisions. It just requires a bit of knowledge, a bit of skepticism and a bit of talking to technology vendors and asking how these algorithms work. HR departments can better understand how they’re using this stuff when they know what data it is using and how it weighs that data.
McCarty Carino: How would you advise companies to use these kinds of tools ethically?
Westfall: No. 1, you have to perform a data audit. These algorithms can be a black box, and the user can’t always see how they work. But critical decisions are on the line, and these algorithms could be making biased or improperly weighted recommendations without people knowing it. It’s important to ask questions and get as much information as possible about how the sausage is made with these algorithms.
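One concrete check such an audit can include is the “four-fifths rule” from U.S. equal-employment guidance: if one group’s retention rate after a layoff falls below 80% of the best-off group’s rate, the outcome deserves scrutiny. Below is a minimal sketch; the group labels and counts are invented.

```python
# Minimal four-fifths (80%) rule check on layoff outcomes.
# Group labels and counts are invented for illustration.

def retention_rates(outcomes):
    """outcomes maps group -> (num_laid_off, group_size)."""
    return {g: (size - laid_off) / size for g, (laid_off, size) in outcomes.items()}

def four_fifths_flags(outcomes):
    """Flag any group retained at less than 80% of the best group's rate."""
    rates = retention_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best < 0.8 for g, rate in rates.items()}

audit = {"group_a": (12, 100), "group_b": (30, 100)}
print(retention_rates(audit))    # {'group_a': 0.88, 'group_b': 0.7}
print(four_fifths_flags(audit))  # group_b flagged: 0.7 / 0.88 = 0.795... < 0.8
```

A check like this doesn’t prove or disprove bias on its own, but it is the kind of question a data audit should be able to answer before an algorithm’s recommendations are acted on.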
Related links: More insight from Meghan McCarty Carino
Westfall mentioned the gender bias in the hiring AI at Amazon.
Reuters broke that story back in 2018. Sources said Amazon was testing an experimental recruiting tool that gave applicants a score between 1 and 5 stars, but, as Westfall said, the algorithm learned from previous applications that male candidates were more successful. So the system gave fewer stars to resumes that included phrases like “women’s chess club captain” or mentioned attendance at a women’s college.
Hiring AI can also amplify all kinds of problematic patterns from the past that many companies say they’re trying to reverse, like a preference for college graduates for jobs that shouldn’t require a four-year degree.
Here on “Marketplace Tech,” we recently reported on an effort to debias hiring AI so that it can better predict which applicant skill sets match the needs of a position without falling back on a familiar alma mater.