We spent our first webinar talking about what AI and machine learning are and how they are being used in hiring. In the next webinar, we are going to talk about ways AI is being regulated, issues of fairness, and how organizations can develop their AI systems ethically. More specifically, the purpose of the webinar is to answer three key questions:
- What are the current regulations and what is in the pipeline?
- Why is there a call for regulations?
- What ethical frameworks can help guide companies using AI in hiring?
What are the current regulations and what is in the pipeline?
Current regulations are rare and, with the exception of the Illinois Artificial Intelligence Video Interview Act, those enacted tend to broadly assert the potential impact of AI or establish committees to study or advise on it. Take Delaware’s House Concurrent Resolution 7, enacted in 2019, which “recognizes the possible life-changing impact the rise of robotics, automation and artificial intelligence will have on Delawareans and encourages all branches of state government to implement plans to minimize the adverse effects of the rise of such technology.”¹ Or Alabama’s Senate Joint Resolution 71, also enacted in 2019, aimed at “establishing the Alabama Commission on Artificial Intelligence and Associated Technologies.”² Most of the state-level proposals introduced in 2020 failed, and those introduced in 2021 are pending.³
The most well-known state-level regulation is the Artificial Intelligence Video Interview Act enacted in 2019 in Illinois (House Bill 2557).⁴ This Act requires companies to disclose the use of artificial intelligence to applicants in video interviews, limits the sharing of the videos, and requires the destruction of the videos within 30 days (at the request of the applicant). Illinois legislators have since proposed an amendment (House Bill 53) requiring that, when an organization relies solely on AI interview scores to select applicants for in-person interviews, it collect demographic information and report it annually to the Department of Commerce and Economic Opportunity (proposed in 2021 and pending).⁵
More legislation appears to be coming down the pipeline (see Burt’s April 2021 Harvard Business Review piece on the topic⁶), but until then organizations are largely responsible for self-imposing restrictions on their development and use of AI in hiring. This limited regulation is in line with the federal stance on AI, however. In February 2019, Executive Order 13859 established the American AI Initiative, which states, “It is the policy of the United States Government to sustain and enhance the scientific, technological, and economic leadership position of the United States in AI R&D and deployment through a coordinated Federal Government strategy,” to be carried out through five principles you can read about in detail here.⁷ More recently, President Biden launched ai.gov to oversee and support AI advancements.⁸ This is all to say that there is currently minimal regulation in the U.S.
Meanwhile, the European Commission proposed a regulatory framework in April 2021 that explicitly bans certain types of AI uses (e.g., “social credit scoring”) and identifies high-risk AI (that which may challenge someone’s fundamental rights), a category that includes use cases in employment and work management.⁹ Good overviews of the European Commission’s proposal can be found here¹⁰ and here¹¹.
Why is there a call for regulations?
Those calling for regulation are most concerned about bias. The reality is that bias can occur in models. The most famous examples are Amazon’s resume screening model¹², lending discrimination¹³, and differences in recidivism risk scores¹⁴. The question you should be asking is: How does bias occur?
If you watched the first webinar, then you heard me say “garbage in, garbage out.” This is a foundational concept of statistical modeling: if your data are poor, then don’t bother trying to interpret your results. Unfortunately, people sometimes think of statistical models as a composter where you can load in scraps of information and it creates something useful. It’s more like putting your garbage into a blender. Afterward, you just have blended garbage. Instead, you should be thinking about whether you are training your model against clean, high-quality data to improve validity. For example, in your training data set, did you collect human ratings of candidates by simply asking raters to score the candidate responses on a 1 to 5 scale? Or did you structure your digital interview process and train human raters to respond to behaviorally anchored scales that captured specific competencies (e.g., leadership)? The latter affords cleaner data, likely less bias in human ratings¹⁵, and, importantly, defensibility. Clean, high-quality data aren’t going to solve all issues related to bias, but they are a necessary condition as we work toward solutions.
Of course, not understanding how artificial intelligence works often creates fear, and one of our best tools for addressing that fear is applying an ethical framework to how we approach, build, train, test, and operationalize our models.
What ethical frameworks can help guide companies using AI in hiring?
There are a number of ethical AI frameworks out there, but no single overarching and generally agreed-upon one. In my family’s practice, we use a framework we’ve curated over the years to critique and improve our own models, as well as our clients’ models. We developed this framework specifically for AI in hiring. We assess models according to five criteria, including:
- A set of model-building “best” practices (e.g., leaving out of the model information we know can be biasing and is unrelated to the job, such as zip code)
This framework, which I will discuss in more detail in the webinar, allows us to assess models according to the principles and guidelines in our domain (I-O Psychology; e.g., Uniform Guidelines on Employee Selection Procedures¹⁶; Principles for the Validation and Use of Personnel Selection Procedures¹⁷; Standards for Educational and Psychological Testing¹⁸) as well as the practical use (i.e., usability) of the model. This framework approaches the ethicality of the model through its prioritization of validity. Put differently, we have more trust in models that meet these criteria because model developers provide evidence that they are measuring what they are supposed to be measuring, which is a primary mandate of science.
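One concrete check the Uniform Guidelines give us is the four-fifths rule: adverse impact is generally indicated when one group’s selection rate falls below 80% of the highest group’s rate. A minimal version of that audit (with hypothetical applicant counts) might look like:

```python
# Adverse-impact check per the four-fifths rule from the
# Uniform Guidelines on Employee Selection Procedures.
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants who were selected."""
    return selected / applicants

# Hypothetical outcomes from a model-assisted screen.
groups = {
    "group_a": selection_rate(48, 100),
    "group_b": selection_rate(30, 100),
}

highest = max(groups.values())
for name, rate in groups.items():
    impact_ratio = rate / highest
    flagged = impact_ratio < 0.8  # four-fifths threshold
    print(f"{name}: rate={rate:.2f}, ratio={impact_ratio:.2f}, "
          f"adverse impact flagged={flagged}")
```

Here group_b’s impact ratio is 0.30 / 0.48 ≈ 0.63, well under the four-fifths threshold, so the model’s outcomes would warrant closer validity scrutiny before use.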
If you are interested in hearing more about regulations, bias, and ethical frameworks, join our next conversation.