When Artificial Intelligence Stepped Into the Hiring Process
Corporate culture, for all its ‘profession above all’ rhetoric, still has a long way to go in shedding the personal biases people carry in the door. This is visible even in the hiring process, where personality and likeability tend to outweigh the qualifications and skillset that actually define a job role. While likability is often a legitimate requirement in the professional arena, making critical decisions based on favoritism is both unethical and unprogressive. Artificial Intelligence, however, promises to take the bias out of hiring.
There are certainly many places to start. Employee referrals are common in most companies, and these processes let recruiters and hiring managers bring their own biases into the pipeline. According to recent studies, favoring people with “right-sounding” names and educational backgrounds is the norm at most organizations across cultures globally.
The same research also puts forth some statistics on how companies lack racial and gender diversity, with the ranks of underrepresented people thinning toward the top of the corporate ladder. Notably, fewer than 5 percent of chief executive officers at Fortune 500 companies are women. Racial diversity on Fortune 500 boards is similarly dismal: four out of five new board appointees in 2016 were white.
“Identifying high-potential candidates is very subjective,” said Alan Todd, CEO of CorpU, a technology platform for leadership development. “People pick who they like based on unconscious biases.”
There is substantial support for the notion that AI can eliminate some of these biases. Instead of letting hiring decisions be driven by people’s feelings, companies such as Entelo and Stella.ai use machine learning to screen applicants, detect the skills a given job demands, and then match candidates possessing those skills with open positions. These companies claim not only to find better candidates but also to surface more of those who would have gone unrecognized in the traditional process.
Rich Joffe, founder of Stella, affirms that their algorithm assesses candidates on skills alone. The algorithm is allowed to match only on the data dictated and determined by the company. The AI confines its assessment to skills, industries, and tiers of companies, thus limiting bias.
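The skills-only matching described here can be sketched in a few lines. This is a hypothetical illustration of the idea, not Stella’s actual algorithm; the scoring rule (fraction of required skills covered) is an assumption chosen for simplicity.

```python
def match_score(candidate_skills, role_skills):
    """Score a candidate purely on skill overlap with the role's requirements.

    Nothing else about the candidate (name, school, photo) enters the score,
    which is the point of a skills-only matcher.
    """
    required = set(role_skills)
    if not required:
        return 0.0
    return len(set(candidate_skills) & required) / len(required)

# A candidate covering 2 of 3 required skills scores ~0.67.
role = {"python", "sql", "etl"}
print(match_score({"python", "sql", "excel"}, role))
```

A production system would weight skills by importance and infer related skills rather than demand exact string matches, but the principle of scoring on skills alone is the same.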
Entelo has also released an Unbiased Sourcing Mode to anonymize hiring. To prevent discrimination, the software lets recruiters hide names, photos, schools, employment gaps, and gender-specific pronouns.
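Anonymized sourcing of this kind amounts to masking identifying fields before a recruiter ever sees the record. The sketch below is a minimal illustration of that idea, assuming a flat dictionary per candidate; the field names and redaction list are assumptions, not Entelo’s actual schema or implementation.

```python
# Fields assumed to carry bias-inducing signals; illustrative only.
REDACTED_FIELDS = {"name", "photo_url", "school", "pronouns"}

def anonymize(candidate):
    """Return a copy of the candidate record with identifying fields masked,
    leaving job-relevant fields (skills, experience) untouched."""
    return {k: ("[REDACTED]" if k in REDACTED_FIELDS else v)
            for k, v in candidate.items()}

candidate = {
    "name": "Jane Doe",
    "school": "State University",
    "skills": ["python", "sql"],
    "years_experience": 6,
}
print(anonymize(candidate))
```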
AI is also widely used to develop internal talent. CorpU partnered with the University of Michigan’s Ross School of Business to build a 20-week online course that uses machine learning to identify high-potential employees. The candidates who ranked highest were usually not the ones already on the promotion track; interestingly, they were often individuals exhibiting qualities such as introversion, who tend to be overlooked during recruitment.
Solon Barocas, an assistant professor in Cornell’s Information Science department, studies fairness in machine learning. In his view, human decision-making is pretty awful, but it would be unwise to overestimate the neutrality of technology, either. In his research, he has found that machine learning in hiring, much like in facial recognition, can produce unintentionally discriminatory outcomes. Algorithms can carry the implicit biases of their programmers, or be skewed to favor the qualities and skills present in a particular data set. As Barocas puts it, “If the examples used for training the system fail to include certain types of people, then the model developed won’t be perfect in assessing those individuals.” Not all algorithms are created equal, and there is considerable disagreement within the AI community about whether an algorithm can ever be entirely fair in hiring.
There is also a distinction in how these systems learn. “Supervised” algorithms rely on programmers to decide which qualities should be prioritized during hiring; they could, for example, be programmed to scan for individuals with esteemed academic backgrounds and extroverted personalities. “Unsupervised” algorithms, by contrast, determine which data to prioritize on their own: based on the qualities and skills of existing employees, the machine draws its own inferences about the qualities desired in future hires.
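The supervised/unsupervised contrast above can be made concrete with a small sketch. This is an illustrative toy, not any vendor’s implementation; the feature names and weights are assumptions. Note how the unsupervised variant learns its target profile from current employees, which is exactly the channel through which historical bias can leak into the model.

```python
import statistics

def supervised_score(candidate, weights):
    """Supervised flavor: a recruiter explicitly chooses features and weights,
    e.g. {"gpa": 0.7, "extroversion": 0.3}."""
    return sum(w * candidate.get(f, 0.0) for f, w in weights.items())

def unsupervised_profile(employees, features):
    """Unsupervised flavor: infer the 'desired' profile as the average of
    current employees' features, with no human-chosen target."""
    return {f: statistics.mean(e[f] for e in employees) for f in features}

def similarity(candidate, profile):
    """Negative distance to the inferred profile: closer means higher score."""
    return -sum(abs(candidate.get(f, 0.0) - v) for f, v in profile.items())

employees = [{"tenure": 4.0, "extroversion": 0.8},
             {"tenure": 6.0, "extroversion": 0.6}]
profile = unsupervised_profile(employees, ["tenure", "extroversion"])
print(profile)
```

If the existing workforce skews extroverted, the inferred profile does too, and introverted candidates score lower regardless of skill, illustrating Barocas’s point about training data.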
Since most companies admit that they cannot eliminate bias and preference from hiring on their own, Barocas argues that tackling the issue with AI, imperfect as it is, is still better than the status quo.