Does AI Make Hiring More Biased?

Many corporations and business organizations now use some form of artificial intelligence to recruit new employees. Some of these AI tools scan resumes and select the most promising candidates to invite for an interview, with the goal of making hiring faster and less tedious. Critics argue, however, that such tools may amplify rather than reduce discrimination and bias in recruitment.

What Is Bias in Hiring?

Bias in hiring means selecting or rejecting applicants based on characteristics such as race, gender, or age rather than on their qualifications and likely job performance. A sound selection process excludes these factors from employment decisions. In fact, AI was brought into recruitment partly to filter out such human biases. Unfortunately, despite the best efforts of their designers, algorithms can absorb the prejudices of the people and data behind them and produce biased results.

How AI Hiring Tools Become Biased

Here is the problem: AI systems acquire bias from their training data. For example, if a resume-screening tool is trained on a company's past hiring data, and that data shows the company hired mostly white men, the tool may conclude that being a white male predicts a successful hire. The AI will then favor candidates with those attributes and penalize others, even when the others are arguably more qualified. In effect, the algorithm discriminates against women and minorities, not by deliberate design but as a by-product of the data it learned from.
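The mechanism above can be sketched in a few lines. This is a deliberately simplified, hypothetical model with made-up data: a naive scorer that weights a candidate's qualification signal by the historical hire rate of their demographic group, which is exactly how skew in past data leaks into future decisions.

```python
# Minimal sketch (hypothetical data) of how a model trained on skewed
# historical hires can learn a protected attribute as a hiring signal.

# Past hiring records: (group, qualified, hired). The skew: group "A"
# was hired far more often historically than group "B".
history = [
    ("A", True, True), ("A", True, True), ("A", False, True),
    ("A", True, True), ("B", True, False), ("B", True, False),
    ("B", False, False), ("A", True, True),
]

def hire_rate(group):
    """Fraction of past applicants from `group` who were hired."""
    rows = [h for h in history if h[0] == group]
    return sum(1 for _, _, hired in rows if hired) / len(rows)

def naive_score(group, qualified):
    """A naive model: qualification signal weighted by the group's
    historical hire rate. The group term is where bias leaks in --
    it has nothing to do with merit."""
    return (1.0 if qualified else 0.3) * hire_rate(group)

# Two equally qualified candidates score differently purely by group.
print(naive_score("A", True))  # 1.0 (group A was always hired)
print(naive_score("B", True))  # 0.0 (group B was never hired)
```

A real screening model is far more complex, but the failure mode is the same: any feature correlated with past hiring outcomes, including a protected attribute or a proxy for one, can end up driving the score.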

This also raises a transparency problem: hiring managers and boards want to know why the AI rejected certain candidates or ranked others higher. Yet many AI models are difficult to audit for bias, because the intricate processes behind their decisions are hard to inspect or explain.

Can Bias in AI Recruitment Be Reduced?

However, experts also acknowledge that there is still merit in using such systems in hiring, and that their fairness can be improved if they are deployed properly. Key steps include:

– Training the AI on broader, more representative datasets that do not encode the biases of past hiring decisions.

– Having humans regularly audit the AI's recommendations for signs of favoritism, rather than treating de-biasing as a one-time fix.

– Making the AI's algorithms, and the reasoning behind the decisions they produce, as transparent as possible.

– Presenting the AI's output as advice for a human decision-maker to consider, rather than as a final, authoritative verdict.
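The audit step above can be made concrete. The sketch below is a hypothetical example of one common check: comparing selection rates across groups and flagging the output for human review when one group's rate falls below four-fifths of the highest group's rate, a threshold drawn from the "four-fifths rule" used in US employment-discrimination auditing.

```python
# Hypothetical audit sketch: flag an AI's screening output for human
# review when group selection rates diverge too far (four-fifths rule).

def selection_rates(decisions):
    """decisions: list of (group, selected) pairs -> rate per group."""
    totals, selected = {}, {}
    for group, picked in decisions:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + (1 if picked else 0)
    return {g: selected[g] / totals[g] for g in totals}

def needs_human_review(decisions, threshold=0.8):
    """True if any group's selection rate is below `threshold` times
    the highest group's rate -- a disparate-impact red flag."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return any(r < threshold * best for r in rates.values())

# Made-up screening results: group A selected at ~67%, group B at 25%.
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False), ("B", False)]
print(needs_human_review(decisions))  # True: 0.25 < 0.8 * 0.667
```

A check like this does not fix the model, but it operationalizes the "humans seek out favoritism" step: the AI's recommendations only pass through unreviewed when they clear the fairness threshold.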
