Human-resource (HR) managers, ethicists, and politicians continue to raise concerns that artificial intelligence (AI) hiring tools will harm marginalized groups.
Key Details
- Google faced backlash in 2020 after firing computer scientist Timnit Gebru, who had submitted a research paper warning that large language models can encode biases against marginalized groups.
- Amazon’s experimental recruitment tool came under scrutiny in 2018 when it was found to favor male candidates, rejecting female applicants more frequently.
- A U.S. National Bureau of Economic Research study of Fortune 500 companies found that applicants with stereotypically Black names received fewer responses than those with stereotypically white names.
- In October 2022, the Biden administration released a blueprint for an AI Bill of Rights to help protect people from bias and discrimination in emerging technologies.
- In February, a class-action lawsuit was filed against Workday, Inc. over claims that its AI systems and screening tools disproportionately screened out Black and older candidates, violating federal civil-rights protections.
Why It’s Important
With the launch of ChatGPT on November 30, 2022, the world entered a new age. Seven months of rapid AI innovation have transformed the business world, with leading companies such as Google parent Alphabet, Microsoft, Amazon, Meta Platforms, and Apple racing to bring AI applications to market first.
The rapid pace of adoption has many critics worried that old fears about AI are coming to pass, with multiple studies and lawsuits suggesting that software algorithms discriminate against racial minorities, gender minorities, and disabled applicants.
HR departments have come to use AI as a screening tool, allowing hiring managers to sort through applicants’ resumes faster and select candidates more efficiently. Critics argue that the inherent biases of software programmers bleed into the process, with baseline assumptions about ideal candidates reflecting able-bodied white male expectations.
Technologies like HireVue record interviews and offer analysis of candidates’ word choice and facial expressions, an approach that stands to disadvantage applicants with disabilities.
Possible Solutions
Analysts are beginning to advocate for more human intervention in the HR process to counter these biases, allowing managers to conduct interviews and assess soft skills while relying on established methods to mitigate bias in hiring. “Some things cannot be measured by data alone,” says author and career coach Della Judd. “Great care will be needed to ensure that the right questions are asked of the AI tool and that there are checks and measures in place to review the output.”
Dr. Di Ann Sanchez is an HR expert with DAS HR Consulting, LLC in Hurst, Texas. She tells Leaders Media that hiring managers have spent the past half-decade implementing processes for reducing discrimination, processes that AI developers and users ought to heed.
“From a human perspective, HR understands those scenarios, and we know we can’t discriminate based on these features, but the AI doesn’t do that. The HR world has already embraced blind resumes; we take names off resumes when we send them to managers so they can’t tell the ethnicity of the applicant. From an AI perspective, HR is responsible that their tools do not screen out ethnicity, disability, or gender. I am a human being, so I go based on what I see and hear. Algorithms can put an applicant through a pre-determined interview process with questions. We’ve got to check our tools to make sure they’re not being discriminatory,” says Dr. Sanchez.