Using Artificial Intelligence the Right Way: How to Limit the Risks of AI-Driven Hiring

(This article originally appeared on Recruiter.com)

Artificial intelligence (AI) has the potential to revolutionize the way we find and hire talent. From helping companies identify candidates who match super-specific sets of criteria to reducing bias in the hiring process, AI can transform key aspects of recruiting for the better.

However, AI also has a dark side. If developed or used improperly, AI can actually increase discrimination, which could lead to reputational damage; legal action; and less diversity, productivity, and innovation.

This doesn’t mean companies are flying blind when it comes to the deployment of AI in hiring. For example, companies can use “glass box” algorithms that rely on data and processes that are intelligible and transparent (as opposed to “black box” algorithms, which are too complex to understand).

Companies should also resist the urge to consider applicant information that isn’t relevant to the job at hand. The fact that you have access to data doesn’t mean you should always use it. Even if you’re impressed by a digital hiring solution, you shouldn’t deploy it unless you have full confidence in its ability to make selections accurately and impartially.

AI is a powerful tool in the hiring process, and it’s only going to become more user-friendly and accessible in the coming years. As long as recruiters and hiring managers remember to use AI responsibly, it will continue to help companies build more diverse and talented workforces.

Transparency Should Always Be Your Top Concern

In 2018, Reuters reported that Amazon had abandoned an AI-powered recruiting tool that was biased against female candidates. Because the algorithm relied on a pool of resumes that had been submitted to Amazon over a 10-year period, its conception of an ideal candidate was skewed toward men, who had been hired at a disproportionate rate up to that point. While Amazon tried to make the algorithm gender-neutral, people familiar with the effort told Reuters there was “no guarantee that the machines would not devise other ways of sorting candidates that could prove discriminatory.”

This is a reminder that companies need to understand how their hiring algorithms function — not only because they should be able to identify problems when they arise, but also because they need to be capable of explaining their methodologies in the event of a lawsuit or some other challenge. A 2019 paper published by researchers from Cornell and Microsoft found there isn’t much publicly available information about hiring algorithms, and some of the companies that produce these algorithms don’t even mention efforts to combat bias. The paper concludes, “Transparency is crucial to further our understanding of these systems.”

AI gives companies the ability to move beyond traditional hiring methods, which are rife with bias and human error, but that makes it all the more important to ensure the AI is as fair and equitable in its functions as possible.

Know What You Are (and Aren’t) Looking For

The implementation of a transparent and effective AI-powered hiring platform should begin with a well-defined idea of what you’re looking for in candidates. This allows you to develop a concrete set of criteria and gather only the information that is relevant to the position, which decreases the likelihood that the algorithm will be led astray by extraneous and spurious data (such as the gender of previous successful applicants). A hiring algorithm’s job is to separate the signal from the noise, so reduce the noise as much as possible before it ever reaches the algorithm.

In one study published in the Journal of Applied Psychology, a simple algorithm consistently outperformed human decision-making across many different applicant evaluations. In a summary of the study in Harvard Business Review, the researchers explain that people are “easily distracted by things that might be only marginally relevant, and they use information inconsistently.”

This is an argument for stripping unnecessary inputs from your algorithm and making it as transparent as possible. When you know exactly what you want and how you’ll go about finding it, you ensure that you’re hiring candidates with the right traits and abilities for your company.

Building a Foundation for the Future of Hiring

Despite some of the problems presented by AI-powered hiring, it’s clear that the pre-AI status quo wasn’t working. From the existence of racial bias to the inefficiency and unreliability of sifting through piles of resumes and conducting interviews, traditional hiring methods have more than their fair share of problems. Just consider the sheer number of great candidates you’ve missed because you simply couldn’t process applicant information on a large enough scale!

AI can make the hiring process more consistent, affordable, and inclusive, but to take full advantage of the opportunity, companies have to be as scrupulous and transparent as possible with the implementation of AI. If companies use opaque algorithms that don’t allow them to understand, analyze, and potentially reevaluate their own hiring decisions, they’ll only be putting themselves in a weaker position.

On the other hand, if companies take control of the hiring process by using proven techniques to rigorously assess candidates on the basis of specific criteria, they’ll build the most diverse and productive workforces they’ve ever had.
