You’ve seen your TA counterparts use artificial intelligence (AI) and get incredible results. From finding better-quality candidates, as The Wendy’s Company has done, to reducing recruiter effort, AI has made a real impact on the hiring process.
But if you’re like many of your peers, you may struggle to make the case for AI to your business or compliance teams because of the potential risks. There are plenty of scary news articles out there about discrimination, and it seems like new laws are proposed every day. It can feel like the weight is on your shoulders to ensure that AI recruiting tools won’t do something wrong, even though you’re not an AI expert.
And while every person or organization using AI does have a responsibility to use it ethically, the good news is that you don’t have to navigate these questions alone.
The best way to make a strong commitment to responsible AI — and achieve stakeholder buy-in — is to partner with an expert vendor.
That’s why we’re sharing the story of how iCIMS built its Responsible AI program. You’ll learn how we adopted global standards, turned principles into policies and set the bar for responsible AI programs in the TA software industry.
The beginning of iCIMS’ AI journey
iCIMS has sold enterprise talent acquisition software since 2000 and adopted its first AI technology through our acquisition of TextRecruit in 2018. But our AI journey truly accelerated in 2020 after acquiring Opening.io. Their technology applied sound data science at the top of the recruiting funnel to help recruiters prioritize candidates and understand why each candidate was a match. Even before formal AI regulations existed, Opening.io centered its design on ethical principles, using only relevant experience and skills data for matching and minimizing bias. About six years (and a lot of innovation) later, that foundation evolved into the responsible AI capabilities you know as iCIMS AI Talent Explorer.
Building a governance framework
iCIMS took steps to formalize AI governance practices and build our Responsible AI program beginning in 2020. At the time, few AI-specific standards existed, but two frameworks stood out. The European Commission’s High-Level Expert Group on Artificial Intelligence (AI HLEG) published its Ethics Guidelines for Trustworthy AI in 2019, and the OECD AI Principles outlined a path for innovative, trustworthy AI that respects human rights and democratic values the same year. We modeled our program on these standards, anticipating that comprehensive EU regulation would set a global benchmark, much like GDPR did for privacy.
Policies that put people first
iCIMS translated these principles into action through internal policies and guidance. Our Artificial Intelligence and Machine Learning Policy and companion guidance documents set development guardrails for our internal teams. We condensed our ethical standards into our Responsible AI Principles, which define six pillars:
- Human-led
- Transparent
- Private and secure
- Inclusive and fair
- Technically robust and safe
- Accountable
Our policies are reviewed regularly against established frameworks such as the NIST AI Risk Management Framework, and they reflect both existing and emerging laws including NYC Local Law 144 and the EU AI Act.
Our human-led pillar is especially important in the era of agentic AI. We ensure that humans are in the loop at appropriate times, so you can use autonomous AI in your hiring process while minimizing risk. Because agents are a newer aspect of AI, we continuously monitor for regulatory changes so you have access to the latest technology without taking on new compliance exposure.
Designed for compliance
Regulations are evolving quickly. We design for compliance and provide the artifacts, settings, and information your compliance teams need.
- NYC Local Law 144: If you use Automated Employment Decision Tools (AEDTs) to assist with hiring decisions in New York City, you must meet annual bias audit and candidate notice requirements. We publish guidance on how iCIMS capabilities support AEDT compliance.
- EU AI Act: The EU AI Act received final approval in May 2024 and entered into force on August 1, 2024. Obligations are phased, with full application for high-risk systems approaching in August 2026. iCIMS is preparing the necessary policy updates, documentation, and collateral to assist our customers with their compliance efforts as enforcement takes effect. You can read more about it here.
Committees that keep us accountable
Responsible AI is not just a policy. It is a practice. We established cross-functional committees so that governance is woven into our day-to-day work.
- The Responsible AI Committee reviews AI developed by iCIMS and aligns it with ethical standards.
- The AI Governance Committee oversees product AI functionality, including annual bias audits, Privacy by Design reviews, and AI risk assessments.
Independent validation
In March 2025, iCIMS became the first (and, to date, only) enterprise recruiting software provider to earn TrustArc’s TRUSTe Responsible AI Certification. This recognition validates fairness, transparency, privacy, and accountability across the hiring lifecycle. It also reflects the way we embed AI throughout iCIMS, so your team can adopt AI confidently.
We kaizen: Continuous innovation in responsible AI
Kaizen is the Japanese principle of continuous improvement. Our Responsible AI program is always evolving to keep pace with changing regulations and rapid advances in AI. We maintain processes for bias audits, transparency reporting and human oversight, and we continue to align with updated frameworks such as the revised OECD AI Principles, the NIST AI Risk Management Framework and ISO 42001.
Technology and regulations may change, but the goal remains the same. Build adaptive, scalable, and ethical AI that helps you hire faster and smarter, without compromising trust.
Why it matters
Responsible AI is how you protect your brand and build trust with candidates and employees. When you choose iCIMS, you choose a partner with a long track record of responsible innovation, independent validation, and practical governance you can trust.
Learn more about iCIMS AI here.