Analysis: the use of AI to help in recruitment shows some of the problems that can occur when human decision-makers are replaced by AI

Artificial Intelligence is currently creating massive changes in many institutions. A German editor was recently fired over a fake AI interview with racing legend Michael Schumacher. Students are finding it increasingly easy to use AI tools such as ChatGPT to write essays and even to complete examinations. AI is used in the workplace for functions ranging from ensuring compliance with increasingly complex regulations to managing compensation and benefits.

The Terminator franchise was based on the premise that AI systems might gain awareness and decide to defend themselves by getting rid of humans. While this idea still seems to be a stretch, there is disturbing evidence that the distinction between AI and human intelligence might be shrinking.


From RTÉ Radio 1's Morning Ireland, is AI a threat to humanity?

One growing application of AI in the workplace is in screening job applicants and sometimes even in making hiring decisions. Instead of using supervisors or hiring managers to screen, interview and hire or reject job applicants, organisations are turning to automated methods, especially when the number of job applicants is large. This application of AI to hiring provides a neat example of how AI systems are developed and used and of some of the problems that can be encountered when human decision-makers are replaced by AI.

At the most basic level, AI systems work by "learning" to mimic human decision-makers. The process often starts by identifying consistent patterns in data to form a provisional rule for making decisions. For example, suppose you feed the last several hundred hiring decisions into a computer algorithm and the algorithm determines that graduates of TCD get job offers while graduates from other universities do not. This creates a potential rule: hire TCD graduates.

You can then apply this rule to a further set of hiring decisions to see if it successfully predicts who is or is not hired. With more information, the rule might be modified (eg hire TCD graduates, but also graduates from the University of Limerick with degrees in the following fields…). At some point, the AI program will be able to model human decision-makers and apply that hiring strategy reliably (and at very little cost) to future applicants.
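The learn-then-validate loop described above can be sketched in a few lines of Python. Everything here is illustrative: the data is made up and the function names (`learn_rule`, `apply_rule`) are invented for this example, not drawn from any real hiring system.

```python
from collections import defaultdict

# Past hiring decisions (synthetic): (university, field, hired?)
training_data = [
    ("TCD", "engineering", True),
    ("TCD", "history", True),
    ("UL", "engineering", True),
    ("UL", "history", False),
    ("UCC", "engineering", False),
]

def learn_rule(decisions):
    """Form a provisional rule: for each (university, field) pattern,
    predict whatever outcome was most common in past decisions."""
    tally = defaultdict(lambda: [0, 0])  # pattern -> [hired, rejected]
    for uni, field, hired in decisions:
        tally[(uni, field)][0 if hired else 1] += 1
    return {pattern: hires >= rejects
            for pattern, (hires, rejects) in tally.items()}

def apply_rule(rule, uni, field, default=False):
    """Apply the learned rule to a new applicant."""
    return rule.get((uni, field), default)

rule = learn_rule(training_data)

# Validate the provisional rule against a held-out set of past decisions;
# in a real system this is where the rule would be revised and refined.
holdout = [("TCD", "history", True), ("UCC", "engineering", False)]
accuracy = sum(apply_rule(rule, u, f) == hired
               for u, f, hired in holdout) / len(holdout)
```

A real screening system would use many more features and a statistical model rather than exact pattern matching, but the cycle is the same: fit a rule to past decisions, test it against decisions it has not seen, and revise.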


From RTÉ Brainstorm, is it really bonkers to use an algorithm to hire a person?

One of the current controversies over the use of AI in hiring helps to illustrate the strengths and weaknesses of this approach. Despite claims that automated hiring can eliminate biases and discrimination in hiring decisions, there is clear evidence that the use of AI can perpetuate, and perhaps even "bake in", discriminatory hiring practices, leading to continuing discrimination against women, older workers, members of disadvantaged groups, etc. In the United States, agencies charged with enforcing laws designed to guarantee civil rights in employment have announced major initiatives to monitor the use of AI in important decisions about job applicants and employees and to act against violations of those rights.

But how can AI hiring systems produce and perpetuate bias and discrimination? After all, aren't computer algorithms objective? Can’t they review applicants without considering their age, gender, attractiveness, race or other factors that bias hiring decisions?

It turns out that eradicating bias is very difficult because AI systems learn how to make decisions by studying and modelling human decisions. If these human decisions have consistently discriminated against applicants who are old, female, overweight, the "wrong" race, etc, an AI system developed to mimic these decisions will perpetuate the same biases. The better the system is at using available objective cues to reproduce human decisions, the more likely it is to capture the systematic biases in those decisions as well.
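A toy illustration of that point, using entirely synthetic data: the "past decisions" below rejected every female applicant regardless of qualification, and a learner that faithfully models those decisions inherits exactly that rule.

```python
from collections import Counter

# Synthetic past decisions: (qualified, gender, hired?).
# Note that every qualified woman was rejected.
past_decisions = [
    (True,  "M", True),
    (False, "M", False),
    (True,  "F", False),   # qualified, but rejected
    (True,  "F", False),   # qualified, but rejected
    (False, "F", False),
]

def fit_majority(decisions):
    """Predict the majority past outcome for each (qualified, gender) pair."""
    votes = Counter()
    for qualified, gender, hired in decisions:
        votes[(qualified, gender)] += 1 if hired else -1
    return {key: vote > 0 for key, vote in votes.items()}

model = fit_majority(past_decisions)

# The model now "accurately" rejects qualified women: it reproduces past
# decisions precisely because it has captured their bias.
```

The model is doing exactly what it was asked to do: match the historical record. The discrimination is in the record, so it ends up in the model.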


From RTÉ Radio 1's The Business, tech journalist Elaine Burke on the huge developments in the world of human-like chatbots

For example, the computer might "know" what university you attended. But if the people who graduate from TCD (continuing with our example) differ from people who graduate from other universities in terms of things like academic success, parents' education, the quality of the schools in their towns and the like, a hiring rule that says "hire TCD graduates" might inadvertently favour applicants with all of the other characteristics that separate TCD graduates from other applicants.
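This proxy effect can be shown with a small simulation. The numbers and group labels below are hypothetical: the rule never sees group membership, yet it produces very different selection rates for the two groups, because the one feature it relies on is correlated with group.

```python
import random

random.seed(1)  # fixed seed so the simulation is repeatable

# Synthetic applicants: (group, attended_favoured_university).
# Hypothetical correlation: group A attends the favoured university
# 80% of the time, group B only 20% of the time.
applicants = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    elite = random.random() < (0.8 if group == "A" else 0.2)
    applicants.append((group, elite))

def rule(elite):
    """'Hire graduates of the favoured university' - no group attribute used."""
    return elite

rate = {g: sum(rule(e) for grp, e in applicants if grp == g)
           / sum(1 for grp, _ in applicants if grp == g)
        for g in ("A", "B")}

# rate["A"] lands near 0.8 and rate["B"] near 0.2, even though the rule
# never mentioned group membership.
```

Removing the protected attribute from the data is therefore not enough; any feature correlated with it can quietly carry the same information.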

This issue is especially difficult to resolve because AI rules are rarely as explicit or simple as "hire Trinity graduates". It might not even be possible to articulate the rules the AI system follows, especially in AI systems where rules are continuously revised and improved.

AI is likely neither the villain nor the saviour when it comes to hiring discrimination. If AI systems are trained to mimic decisions that have historically discriminated against members of a particular group, it might be hard to get them to stop this pattern of discrimination.


From RTÉ Radio 1's The Business in 2021, Science Gallery curator Julia Kaganskiy on their Bias : Built This Way exhibition

On the other hand, AI systems can empirically test for systematic discrimination in ways that can be difficult when studying human decisions. Suppose an organisation has only screened a couple of hundred job applicants in recent years. It might be hard to determine whether there is reliable evidence of discrimination in favour of or against members of specific groups.

AI systems make it possible to model future decisions with very large numbers of applicants and to simulate the effects of changing the composition of the applicant pool to see what might happen, for example, if more women were to apply to a job traditionally held by men. Simply switching from human decision-makers to AI will not make hiring discrimination go away, but the smart use of AI can help you understand when and where it occurs and how to retrain AI systems that once mimicked biased decision-makers into systems that follow more equitable rules.
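That auditing idea can be sketched as follows, again with synthetic data and an invented stand-in for a trained model: run the hiring rule over a large simulated applicant pool and compare selection rates across groups.

```python
import random

random.seed(0)  # fixed seed so the simulation is repeatable

def biased_rule(qualified, gender):
    """Stand-in for a model trained on historically biased decisions:
    it only ever hires qualified men."""
    return qualified and gender == "M"

def selection_rates(rule, pool):
    """Selection rate per gender over a pool of (qualified, gender) applicants."""
    rates = {}
    for gender in ("M", "F"):
        group = [a for a in pool if a[1] == gender]
        rates[gender] = sum(rule(*a) for a in group) / len(group)
    return rates

# Simulate a pool where women make up about half the applicants, with the
# same qualification rate as men.
pool = [(random.random() < 0.5, random.choice("MF")) for _ in range(10_000)]
rates = selection_rates(rule=biased_rule, pool=pool)

# Equally qualified groups with very different selection rates: evidence
# that the rule, not the applicants, drives the disparity.
```

Because the rule can be run cheaply over any simulated pool, an auditor can test "what if more women applied?" scenarios that would be impossible to observe directly with a few hundred real decisions.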


The views expressed here are those of the author and do not represent or reflect the views of RTÉ