What's in Store for AI Hiring Bias Regulation?

Humans make mistakes, particularly when judging other humans. The technology sector has spent considerable time and money attempting to correct those human errors in hiring. And although the industry has made great strides in areas such as document management and data processing, tech tools for hiring professionals still fall short in critical ways.

Artificial intelligence (AI) has become an increasingly common resource for hiring, and because AI tools require data sets and inputs generated by humans, they can still exhibit human character flaws. A key example: bias.

Understanding bias in hiring

Humans are inherently biased. Despite our best efforts, researchers have catalogued well over 100 cognitive biases built into how our brains work. They are a critical part of how we process information. In a workplace context, however, some biases can prove harmful, particularly those that discriminate against people. On a practical level, biases can come into play when we’re hiring new people, conducting performance reviews, bringing teams together, looking for “expert input” for a project, or promoting someone to a leadership role.

One purpose of AI is to mitigate that human bias. But when AI-driven hiring technology is trained on historical data sets shaped by that bias, the machines reproduce the same tendencies in their work. And when AI is used in the hiring process, the tech, through no fault of its own, winds up perpetuating generations of hiring bias.

The New York AEDT Rule

In an effort to curtail bias in hiring, New York City’s Department of Consumer and Worker Protection (DCWP) recently implemented new legislation governing automated employment decision tools, most commonly referred to as the AEDT rule. According to the DCWP, this legislation “prohibits employers and employment agencies from using an automated employment decision tool unless the tool has been subject to a bias audit within one year of the use of the tool, information about the bias audit is publicly available, and certain notices have been provided to employees or job candidates.”
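To make the audit requirement concrete, here is a minimal sketch in Python of the kind of impact-ratio calculation a bias audit typically centers on: comparing each group’s selection rate to the highest group’s rate. The category names, rates, and the 0.8 flag threshold (borrowed from the EEOC’s four-fifths rule of thumb) are illustrative assumptions, not figures prescribed by the DCWP.

    # Hypothetical selection rates: the fraction of applicants in each
    # demographic category whom the automated tool advanced.
    selection_rates = {
        "category_a": 0.45,
        "category_b": 0.30,
        "category_c": 0.42,
    }

    # The impact ratio compares each category's rate to the highest rate.
    highest_rate = max(selection_rates.values())

    for category, rate in selection_rates.items():
        impact_ratio = rate / highest_rate
        # 0.8 is the four-fifths rule of thumb, used here only as an example flag.
        status = "review" if impact_ratio < 0.8 else "ok"
        print(f"{category}: selection rate {rate:.2f}, "
              f"impact ratio {impact_ratio:.2f} ({status})")

In this sketch, a ratio well below 1.0 for any category would be the kind of disparity an audit is meant to surface and publish.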

On the surface, the AEDT rule sounds fair and equitable. However, the new rule can be interpreted quite broadly, extending regulation beyond AI and basic technology to longstanding HR tools that are considered fundamental to the hiring process. Its broad definition of automated employment decision tools means that audits will be required of “any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues … a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making for making employment decisions.”

So, what tools might this cover? In theory, any evidence-based approach or tool developed on the basis of data, including traditional cognitive and personality assessments.

At the other end of the spectrum, a tool where the scoring rubric is based on intuition that has never been checked against actual data would have no compliance requirements. That tool could be very biased, but the absence of a professional development and validation process would mean that the law does not apply.

The AEDT legislation could impose additional costs and responsibilities on companies, potentially slowing hiring decisions. New roles or departments may also be required to ensure compliance. It could also steer organizations away from objective, evidence-based approaches, which would in turn inevitably amplify bias and could result in far worse diversity and business outcomes.

What the future holds for AI hiring bias regulation

The New York rule sets a strong precedent for the need for hiring bias regulation. To ensure consistent compliance without stretching companies’ resources, more research and, eventually, a codified approach are necessary. Identifying the biases in human hiring inputs and removing them from (or correcting them in) affected data sets is the quickest and most reliable way to reduce bias and promote fairness in hiring. If the goal is to reduce bias in the hiring process, upending the status quo should be the top priority, because a system that leaves too much room for subjectivity and that favors factors like experience and educational pedigree (which yield negative diversity outcomes and subpar business results) does not serve organizations well either.
