For about a decade now, information technology has been increasingly used in recruitment procedures. However, this practice is not unanimously accepted. In Switzerland, a software program is attempting to reconcile artificial intelligence (AI) and ethics.
In the field of human resources (HR), information technology first demonstrated its potential by preselecting the most promising job applications. Later, certain larger organizations began to use it to conduct video job interviews. This development drastically reduced the number of face-to-face interviews requiring the presence of HR specialists and promised more informed choices, free of human evaluation biases.
However, this approach quickly showed its limitations. As early as 2015, Amazon observed that its CV-sorting software favored male applicants, thus reproducing the very biases it was supposed to minimize. In early 2020, the video recruitment platform HireVue also announced that it would abandon facial analysis for fear that its AI had built-in racist biases.
"From a legal standpoint, the greatest risk posed by this technology is that it may commit discrimination", says Prisca Quadroni-Renella, a lawyer specializing in artificial intelligence. "Of course, no one is safe from discrimination by a human being. But given the scale at which AIs are deployed and the number of applications processed, the consequences can be even more serious." Some HR practices incorporating new technologies have evidently been adopted without the necessary consideration. However, should the use of AI in recruitment be completely ruled out?
Re-prioritizing the human factor
"The aim is to enable candidates to reveal themselves as they truly are, without the risk of being misunderstood or misinterpreted", explains Caroline Matteucci. This former police inspector turned profiler founded the company CM Profiling in 2017. Its flagship product, called "Cryfe," is a software that analyzes facial expressions and body language during job interviews. It was developed in collaboration with the Fribourg School of Engineering and Architecture (HEIA-FR) and the Idiap Research Institute in Martigny.
In order to avoid the mistakes made by giants like Amazon or HireVue, the company, based in Muri (BE), wanted to develop a solution that combines computer and human intervention. "It is not about sending the interview video to a machine for it to analyze coldly and make a decision", explains Caroline Matteucci. "The solution works only as a tandem of an HR specialist and the AI, which provides assistance in identifying inconsistencies between spoken language and body language."
The machine analyzes in real time the verbal and nonverbal expressions (facial expressions, tone of voice, and gestures) with the aim of providing HR personnel with a more comprehensive understanding of candidates' responses and limiting the risk of biased impressions.
Easy to use
"The software requires some adaptation", admits Carole Piller, director of the Valjob HR placement agency in Fribourg. "But the purchase of Cryfe includes training course to help you get the hang of it."
During a test phase, the Fribourg-based SME was the first to use the software. "For the candidate, the method is relatively easy to use: you have to agree to be filmed and to show your hands – the machine also analyzes gestures. Apart from that, it is just an ordinary job interview", says Carole Piller. However, the software allows the recruiter to go back to a passage of the interview if a bodily expression detected by the system does not seem to match the candidate's words. "The machine is a great help as it detects details that we might miss", adds the director.
"By weighing the impressions of the human and the analysis of the machine, the goal is to minimize biases on both sides", adds Caroline Matteucci. "Moreover, its usefulness is not limited to personnel recruitment. It could also be applied in the context of psychiatric examinations, always with the aim of assisting human observation."
Lawyer Prisca Quadroni-Renella is generally in favor of these different types of AI usage. "The fact that the software provides assistance, as a colleague might, but is not the sole authority, is an excellent point. However, the challenge will be to ensure that operators are properly trained and, if necessary, supervised."
On the theme
AI and ethics: an unsolvable issue?
Long subject to little regulation, AI is now the object of a proposed regulation in the European Union, which classifies facial analysis as a "high-risk" practice. In Switzerland, the law is currently more flexible, but ethical considerations are set to animate the debate in the coming years. According to Prisca Quadroni-Renella, a lawyer specializing in AI, "In the absence of legally binding regulations, it may be useful to engage in ethical reflection before adopting certain practices to ensure they align with the company's values."
Regarding behavioral analysis in recruitment, the Zurich-based lawyer is concerned about potential intrusions into privacy. "Take someone who has an autism spectrum or attention deficit disorder – some people may not even be aware of it. If an AI reveals it without the person's knowledge, then there is a real ethical problem. Especially since some companies could use this technology to resell data anonymously, as the law allows them, under certain conditions, to use it for monetary purposes."
Last modification 05.04.2023