Is your AI-powered recruiting technology ethical?

Editor’s note: Alina Zhiltsova is an NLP Data Scientist on the iCIMS Talent Cloud AI team.

When it comes to AI, HR professionals have a long list of questions about how it works, whether it's ethical, and how reliable it is. As a result, many may be missing out on using this powerful technology to their advantage. In fact, according to ZDNet, almost 70% of C-level leaders don't know how AI model decisions or predictions are made, and only 35% said their organization uses AI in a way that's transparent and accountable. These challenges are real.

However, implementing ethical AI tools in your recruiting and hiring processes, with buy-in from all levels of your organization, can help to reduce bias and improve your diversity, equity, and inclusion (DEI) outcomes.

At iCIMS, we evolve our AI-enabled recruiting software according to best practices and global regulations, so you can cut through the confusion and feel confident you're recruiting ethically as you build your winning workforce. Our AI teams follow initiatives spelled out by industry leaders such as the Responsible AI Institute (formerly AI Global), an organization that continually assesses major responsible AI initiatives worldwide to create a framework called the Responsible AI Trust Index.

You can count on AI in iCIMS to uphold these best practices so you can use these tools to help reduce bias and meet your DEI goals—all while explaining how it works.

What makes AI ethical?

You can consider an algorithm fair when each candidate is assessed solely on their skills and experience. A fair AI-enabled tool will surface and prioritize two CVs from candidates with similar skills and experience equally, regardless of demographic information. An unfair algorithm, on the other hand, will favor one candidate based on gender, race, or ethnicity simply because that group was historically more represented in a particular role.

Algorithmic bias is difficult to identify in deep learning because it is not clear at what stage the bias is introduced. For this reason, most vendors have a difficult time showing how their AI/ML algorithms make decisions; in other words, the algorithms don’t show their work. Since your recruiting teams can’t fix what they can’t see, it’s iCIMS’ priority to be transparent with our AI and consistently measure the fairness of the algorithms we use in our products. Our team constantly tests our models to make sure they are fair.
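Fairness testing can take many forms. As a minimal sketch of one widely used check (demographic parity and the "four-fifths rule" from U.S. employment guidance; the group labels and numbers below are invented for illustration and are not iCIMS's actual methodology), you can compare the rate at which a model surfaces candidates across demographic groups:

```python
# Demographic parity check: compare selection rates between two groups.
# Outcomes are 1 if the model surfaced the candidate, 0 otherwise.
# All data here is toy data for illustration only.

def selection_rate(outcomes):
    """Fraction of candidates in a group that the model surfaced."""
    return sum(outcomes) / len(outcomes)

group_a = [1, 1, 0, 1, 0, 1]  # hypothetical outcomes for group A
group_b = [1, 0, 0, 1, 0, 0]  # hypothetical outcomes for group B

# The "four-fifths rule": if one group's selection rate is below 80%
# of the other's, the disparity warrants investigation.
ratio = selection_rate(group_b) / selection_rate(group_a)
flagged = ratio < 0.8
print(f"selection-rate ratio: {ratio:.2f}, flagged: {flagged}")
```

Checks like this run on model outputs, so they work even when the model's internals are opaque; they catch disparate outcomes rather than explaining their cause.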

How do technology teams build ethical AI?

To understand how our data scientists create ethical AI technology, it's important to highlight that many of our models use word vectors. Word vectors are a way to represent words as numbers in a vector space, which makes it possible to perform predictions and calculations on text documents. They can be a common source of bias in natural language processing, which deals with texts such as resumes, CVs, and cover letters.

The easiest way to understand word vectors is to imagine a map. On a map, you can find the names of cities; some are physically closer together than others. For example, the geographic coordinates of Glasgow and Stuttgart are closer together than those of Glasgow and New York. In a word vector space, Glasgow and Stuttgart would likewise sit closer together than either would to New York.

Map showing proximity of Glasgow and Stuttgart

Source: Caliper


Like coordinates on a map, word vectors are numeric representations for common words people use to communicate daily. Just like cities on a map, word vectors will have different distances between them on a vector space. Words that are closer in meaning will also be closer to each other on the vector space.

For example, “kitten” will be closer to “puppy” and “cat” than to “constellation.” A machine learning algorithm determines how related those words are. If one vector moves, the vectors around it adjust their positions in the vector space; one tiny change can set off a domino effect.
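A rough sketch of how "closeness" is computed: cosine similarity scores how aligned two vectors are, with values near 1 meaning similar and values near 0 meaning unrelated. The three-dimensional toy vectors below are invented for illustration; real embeddings such as word2vec or GloVe typically have 100 to 300 dimensions.

```python
import math

# Toy 3-dimensional word vectors (illustrative values only).
vectors = {
    "kitten":        [0.90, 0.80, 0.10],
    "cat":           [0.95, 0.75, 0.15],
    "puppy":         [0.80, 0.90, 0.10],
    "constellation": [0.10, 0.05, 0.95],
}

def cosine_similarity(a, b):
    """Higher value = closer in meaning (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity(vectors["kitten"], vectors["cat"]))            # near 1
print(cosine_similarity(vectors["kitten"], vectors["constellation"]))  # near 0
```

With these toy values, "kitten" and "cat" score roughly 0.99 while "kitten" and "constellation" score roughly 0.19, mirroring the map analogy above.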

The same principles apply to recruitment AI. Depending on the modification, an algorithm could reduce or increase bias. For example, in the word vector image below, you'll notice that nurse, librarian, nanny, stylist, and dancer all cluster closely together: all jobs society historically associated with women. So, unless taught better, an algorithm might use that historical data to predict that women will make better nurses or librarians, regardless of skill.

Word vector showing gender bias in clusters of profession words

Source: ResearchGate
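One simple way to quantify this kind of clustering is to compare how close a profession word sits to explicitly gendered words. The sketch below uses invented toy vectors in which the first coordinate plays the role of a "gender" dimension, purely so the effect is visible; it is not real embedding data or iCIMS's measurement method.

```python
import math

# Illustrative toy embeddings (not real data): the first coordinate is
# an artificial "gender" dimension so the bias is easy to see.
vectors = {
    "he":       [ 1.0, 0.0, 0.2],
    "she":      [-1.0, 0.0, 0.2],
    "nurse":    [-0.7, 0.5, 0.3],
    "engineer": [ 0.6, 0.6, 0.2],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def gender_bias(word):
    """Positive = leans toward 'he'; negative = leans toward 'she'."""
    return cosine(vectors[word], vectors["he"]) - cosine(vectors[word], vectors["she"])

for w in ("nurse", "engineer"):
    print(w, round(gender_bias(w), 3))
```

In this toy space, "nurse" scores negative (closer to "she") and "engineer" scores positive (closer to "he"), which is exactly the kind of historical association an untreated model could pick up.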


At iCIMS, we put a lot of effort into pre-processing and post-processing our models to reduce the possibility of bias. One way to do this is to adjust the distances between word vectors using industry best practices. By doing so, we ensure that the hundreds of thousands of words in our vector space will not unfairly influence the algorithm based on gender, race, ethnicity, or other demographic identifiers.
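To make "adjusting the distance between word vectors" concrete, here is a minimal sketch of the "neutralize" step from published embedding-debiasing research (Bolukbasi et al.'s hard debiasing): remove a word's component along an estimated gender direction. The toy vectors and the simple he-minus-she direction are assumptions for illustration; production pipelines estimate the direction from many gendered word pairs, and this is not presented as iCIMS's exact method.

```python
# Helper vector operations (pure Python, no external libraries).
def subtract(a, b): return [x - y for x, y in zip(a, b)]
def scale(a, s):    return [x * s for x in a]
def dot(a, b):      return sum(x * y for x, y in zip(a, b))

# Toy vectors; the gender direction is approximated as he - she.
he, she = [1.0, 0.0, 0.2], [-1.0, 0.0, 0.2]
g = subtract(he, she)  # direction that encodes gender in this toy space

def neutralize(v, direction):
    """Remove the component of v that lies along the bias direction."""
    coeff = dot(v, direction) / dot(direction, direction)
    return subtract(v, scale(direction, coeff))

nurse = [-0.7, 0.5, 0.3]
nurse_debiased = neutralize(nurse, g)
# After neutralizing, nurse has (near-)zero component along g, so its
# distance to "he" and "she" is equalized while its other semantic
# coordinates are untouched.
print(nurse_debiased)
```

The key property is that the debiased vector is orthogonal to the gender direction, so gendered words can no longer pull profession words toward one group.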

How does the iCIMS Talent Cloud help?

iCIMS teams consistently measure bias and research bias mitigation using the methods discussed above, and we stay current with state-of-the-art research, including guidance from the Responsible AI Institute.

To give employers more confidence in their hiring technology, iCIMS promotes DEI with responsible AI across the iCIMS Talent Cloud, and it remains transparent about how that AI works. This way, you know how your AI-powered recruitment technology works and that it's being used ethically.

Now that you know how AI can reduce bias in your recruiting process, you can more easily present to your leadership team the value of using AI responsibly and contribute to building a more equitable workforce.
