We spent our first webinar talking about what AI and machine learning are and how they are being used in hiring. In the next webinar, we are going to talk about ways AI is being regulated, issues of fairness, and how organizations can develop their AI systems ethically. More specifically, the purpose of the webinar is to answer three key questions:

  1. What are the current regulations and what is in the pipeline?
  2. Why is there a call for regulations?
  3. What ethical frameworks can help guide companies using AI in hiring?

What are the current regulations and what is in the pipeline?

Current regulations are rare, and with the exception of the Illinois Artificial Intelligence Video Interview Act, those enacted tend to broadly assert the potential impact of AI or establish committees to study or advise on it. Take Delaware’s House Concurrent Resolution 7, which “recognizes the possible life-changing impact the rise of robotics, automation and artificial intelligence will have on Delawareans and encourages all branches of state government to implement plans to minimize the adverse effects of the rise of such technology.”¹ This was enacted in 2019. Or Alabama’s Senate Joint Resolution 71, aimed at “establishing the Alabama Commission on Artificial Intelligence and Associated Technologies.”² This was also enacted in 2019. Most of the state-level proposals in 2020 failed, and those in 2021 are pending.³

Illinois enacted the Artificial Intelligence Video Interview Act in 2019

The most well-known state-level regulation is the Artificial Intelligence Video Interview Act, enacted in 2019 in Illinois (House Bill 2557).⁴ This Act requires companies to disclose the use of artificial intelligence to applicants in video interviews, limits the sharing of the videos, and requires the destruction of the videos within 30 days (at the request of the applicant). Illinois legislators have since proposed an amendment (House Bill 53) requiring that, in instances where an organization relies only on AI interview scores to select applicants for in-person interviews, it must collect demographic information and report it to the Department of Commerce and Economic Opportunity each year (proposed in 2021 and pending).⁵

More legislation appears to be coming down the pipeline (see Burt’s April 2021 Harvard Business Review piece on the topic⁶), but until then organizations are largely responsible for self-imposing restrictions on their development and use of AI in hiring. This limited regulation is in line with the federal stance on AI, however. In February 2019, Executive Order 13859 established the American AI Initiative, which states, “It is the policy of the United States Government to sustain and enhance the scientific, technological, and economic leadership position of the United States in AI R&D and deployment through a coordinated Federal Government strategy,” to be carried out through five principles you can read about in detail here.⁷ More recently, President Biden launched ai.gov to oversee and support AI advancements.⁸ This is all to say that there is currently minimal regulation in the U.S.

Meanwhile, the European Commission proposed a regulatory framework in April 2021 that explicitly bans certain types of AI uses (e.g., “social credit scoring”) and identifies high-risk AI—that which may challenge someone’s fundamental rights—which includes use cases in employment and worker management.⁹ Good overviews of the European Commission’s proposal can be found here¹⁰ and here¹¹.

Why is there a call for regulations?

Those calling for regulation are most concerned about bias. The reality is that bias can occur in models. The most famous examples are Amazon’s resume screening model¹², lending discrimination¹³, and differences in recidivism risk scores¹⁴. The question you should be asking is: How does bias occur?

Not to be used in your statistical models nor your blender

If you watched the first webinar, then you heard me say “garbage in, garbage out.” This is a foundational concept of statistical modeling: if your data are poor, then don’t bother trying to interpret your results. Unfortunately, people sometimes think of statistical models as a composter where you can load in scraps of information and it creates something useful. It’s more like putting your garbage into a blender. Afterward, you just have blended garbage. Instead, you should be thinking about whether you are training your model on clean, high-quality data to improve validity. For example, in your training data set, did you collect human ratings of candidates by simply asking raters to score candidate responses on a 1 to 5 scale? Or did you structure your digital interview process and train human raters to use behaviorally anchored rating scales that capture specific competencies (e.g., leadership)? The latter affords cleaner data, likely less bias in human ratings¹⁵, and, importantly, defensibility. Clean, high-quality data aren’t going to solve every issue related to bias, but they are a necessary condition as we work toward solutions.
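One practical way to act on this is to screen the training labels themselves before any modeling. Below is a minimal Python sketch of that idea; the ratings, rater names, and the 0.70 cutoff are my own illustrative assumptions, not values from any standard. It checks how well two trained raters agree before their scores are trusted as training data:

```python
from math import sqrt

# Hypothetical scores from two trained raters evaluating the same eight
# candidate responses on a behaviorally anchored 1-5 scale (made-up numbers).
rater_a = [4, 3, 5, 2, 4, 3, 5, 1]
rater_b = [4, 3, 4, 2, 5, 3, 5, 1]

def interrater_quality(a, b):
    """Return (Pearson r, exact-agreement rate) for two raters' scores."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    var_a = sum((x - mean_a) ** 2 for x in a)
    var_b = sum((y - mean_b) ** 2 for y in b)
    r = cov / sqrt(var_a * var_b)
    agreement = sum(x == y for x, y in zip(a, b)) / n
    return r, agreement

r, agreement = interrater_quality(rater_a, rater_b)
print(f"Pearson r = {r:.2f}, exact agreement = {agreement:.0%}")

# Arbitrary screen for illustration: if raters correlate poorly, revisit the
# rating scale and rater training before using these labels to train a model.
if r < 0.70:
    print("Low inter-rater reliability: review scale anchors and rater training.")
```

In real practice you would use a proper reliability statistic (e.g., an intraclass correlation across many raters), but even a quick check like this can catch label quality problems before they are baked into a model.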

Of course, not understanding how artificial intelligence works often creates fear, and one of our best tools for addressing that fear is applying an ethical framework to how we approach, build, train, test, and operationalize our models.

What ethical frameworks can help guide companies using AI in hiring?

There are a number of ethical AI frameworks out there, but no single overarching and generally agreed-upon one. In my family’s practice, we use a framework we’ve curated over the years to critique and improve our own models, as well as our clients’ models. We developed this framework specifically for AI in hiring. We assess models according to the five following criteria:

  1. A set of model building “best” practices (e.g., leaving information we know can be biasing and unrelated to the job out of the model such as zip code)
  2. Accuracy
  3. Fairness
  4. Explainability
  5. Usability

This framework, which I will discuss in more detail in the webinar, allows us to assess models according to the principles and guidelines in our domain (I-O Psychology; e.g., Uniform Guidelines on Employee Selection Procedures¹⁶; Principles for the Validation and Use of Personnel Selection Procedures¹⁷; Standards for Educational and Psychological Testing¹⁸) as well as the practical use (i.e., usability) of the model. This framework approaches the ethicality of the model through its prioritization of validity. Put differently, we have more trust in models that meet these criteria because model developers provide evidence that they are measuring what they are supposed to be measuring, which is a primary mandate of science.
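To make the fairness criterion concrete, the Uniform Guidelines cited above include the well-known four-fifths (80%) rule: a group whose selection rate is less than 80% of the highest group’s selection rate is generally regarded as showing evidence of adverse impact. Here is a minimal sketch of that check; the group labels and counts are made up for illustration:

```python
def adverse_impact(selected, applicants):
    """Four-fifths (80%) rule check from the Uniform Guidelines: flag any
    group whose selection rate falls below 80% of the highest group's rate.
    Both inputs are dicts mapping group name to a count."""
    rates = {g: selected[g] / applicants[g] for g in applicants}
    top = max(rates.values())
    return {g: (rate, rate / top < 0.8) for g, rate in rates.items()}

# Hypothetical applicant pools (groups and counts are illustrative only).
applicants = {"group_1": 200, "group_2": 150}
selected = {"group_1": 60, "group_2": 27}

for group, (rate, flagged) in adverse_impact(selected, applicants).items():
    note = "  <- potential adverse impact" if flagged else ""
    print(f"{group}: selection rate {rate:.0%}{note}")
```

Here group_2’s 18% selection rate is only 60% of group_1’s 30% rate, so it would be flagged for further review. The rule is a screening heuristic, not a verdict: flagged results call for deeper validity and statistical analysis, which is exactly where the framework’s other criteria come in.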

If you are interested in hearing more about regulations, bias, and ethical frameworks, join our next conversation.

Join us in July as we discuss regulations, bias, and ethical AI

Register Now

¹ Delaware House Concurrent Resolution 7, 150th General Assembly (2019 – 2020) (Enacted). https://legis.delaware.gov/BillDetail?LegislationId=47166

² Alabama Senate Joint Resolution 71, (2019 – 2020) (Enacted). http://alisondb.legislature.state.al.us/ALISON/SearchableInstruments/2019RS/PrintFiles/SJR71-int.pdf

³ National Conference of State Legislatures. https://www.ncsl.org/research/telecommunications-and-information-technology/2020-legislation-related-to-artificial-intelligence.aspx

⁴ Illinois House Bill 2557, 101st General Assembly (2019 – 2020) (Enacted). https://www.ilga.gov/legislation/BillStatus.asp?DocNum=2557&GAID=15&DocTypeID=HB&LegId=118664&SessionID=108&GA=101

⁵ Amendment to The Artificial Intelligence Video Interview Act. Illinois House Bill 0053, 102nd General Assembly (2020 – 2021). https://www.ilga.gov/legislation/fulltext.asp?DocName=&SessionId=110&GA=102&DocTypeId=HB&DocNum=53&GAID=16&LegID=127865&SpecSess=0&Session=0

⁶ Burt, A. (2021, April). New AI Regulations Are Coming. Is Your Organization Ready? Harvard Business Review.

⁷ Exec. Order No. 13859. https://www.federalregister.gov/documents/2019/02/14/2019-02544/maintaining-american-leadership-in-artificial-intelligence

⁸ “The Biden Administration Launches AI.gov …” (2021 May 5). https://www.whitehouse.gov/ostp/news-updates/2021/05/05/the-biden-administration-launches-ai-gov-aimed-at-broadening-access-to-federal-artificial-intelligence-innovation-efforts-encouraging-innovators-of-tomorrow/

⁹ Proposal for a Regulation laying down harmonised rules on artificial intelligence. (2021). European Commission. Retrieved from https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-laying-down-harmonised-rules-artificial-intelligence

¹⁰ Mueller, B. (2021, May 4). The Artificial Intelligence Act: A Quick Explainer. Center for Data Innovation.

¹¹ “European Commission Publishes Proposal for Artificial Intelligence Act”. (2021 April 22). National Law Review, 11(112). Retrieved from https://www.natlawreview.com/article/european-commission-publishes-proposal-artificial-intelligence-act

¹² Dastin, J. (2018 October 10). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. Retrieved from https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G

¹³ Bartlett, R., Morse, A., Stanton, R., & Wallace, N. (2021). Consumer-lending discrimination in the FinTech era. Journal of Financial Economics. Retrieved from https://www.sciencedirect.com/science/article/pii/S0304405X21002403?casa_token=azvK7Z2WNJcAAAAA:4TWCf6fFaWZZKuawG9CKy65Za4FlAgaDuwkBfSRZgCVZYYdUo4H9Uw2ywgnrqT3COsrxtBBDCw

¹⁴ Dressel, J., & Farid, H. (2018). The accuracy, fairness, and limits of predicting recidivism. Science Advances, 4(1). Retrieved from https://advances.sciencemag.org/content/4/1/eaao5580

¹⁵ McCarthy, J. M., Van Iddekinge, C. H., & Campion, M. A. (2010). Are highly structured job interviews resistant to demographic similarity effects? Personnel Psychology, 63, 325 – 359. Retrieved from https://webapps.krannert.purdue.edu/sites/Home/DirectoryApi/Files/13ca31f2-4208-476d-8db1-1396866af998/Download?_ga=2.263706252.1476646013.1622807012-1448426644.1577825067

¹⁶ Uniform Guidelines on Employee Selection Procedures. (1978). Office of Personnel Management. Retrieved from https://www.opm.gov/FAQs/QA.aspx?fid=a6da6c2e-e1cb-4841-b72d-53eb4adf1ab1&pid=402c2b0c-bb5c-44e9-acbc-39cc6149ad36

¹⁷ Principles for the Validation and Use of Personnel Selection Procedures. (2018). American Psychological Association. Retrieved from https://www.apa.org/ed/accreditation/about/policies/personnel-selection-procedures.pdf

¹⁸ Standards for Educational and Psychological Testing. (2014). American Educational Research Association. Retrieved from https://www.aera.net/Publications/Books/Standards-for-Educational-Psychological-Testing-2014-Edition

Emily D. Campion, PhD.

Emily D. Campion is an Assistant Professor in Management in the Strome College of Business at Old Dominion University and a Consultant for Campion Services. Her research falls under the “future of work” umbrella and includes topics related to machine learning and natural language processing in personnel selection, alternative and remote work experiences, and workforce diversity. Her consulting work includes improving personnel selection systems using machine learning and natural language processing, evaluating and reducing employment discrimination, and assessing pay equity. Prior to academia, she was a reporter for a daily newspaper in Indiana and an AmeriCorps member in Washington, D.C. She earned her B.A. in Journalism from Indiana University and her Ph.D. in Organization and Human Resources from the University at Buffalo, The State University of New York.