As AI becomes a core part of talent acquisition workflows, employers face growing obligations to use these tools responsibly and lawfully. State legislatures are acting to prevent these technologies from perpetuating illegal discrimination, often by updating existing state anti-discrimination laws. Two of the most significant recent regulatory updates, from Illinois and California, are now in effect and have major implications for employers.
Below is a breakdown of these laws, what they mean for talent acquisition, and how iCIMS supports its customers' compliance efforts. Keep in mind that while some states are updating their laws in this area, others, such as New Jersey, have taken a different approach and clarified that existing anti-discrimination laws apply to AI tools to the same extent they apply to human decision-making.
The Illinois Human Rights Act (IHRA) prohibits discrimination, harassment, sexual harassment, and retaliation against individuals in connection with employment, real estate transactions, access to credit, public accommodations, and education. The IHRA was amended in August 2024 to prohibit employers from using AI that has a discriminatory effect on employees “[w]ith respect to recruitment, hiring, promotion, renewal of employment, selection for training or apprenticeship, discharge, discipline, tenure, or the terms, privileges, or conditions of employment.” These updates took effect January 1, 2026.
What Employers Need to Know:
California’s Fair Employment and Housing Act (FEHA) prohibits employment discrimination and harassment in California based on protected characteristics such as age, disability, and gender. New regulations promulgated by the California Civil Rights Council in September 2025 explicitly extended these employment discrimination prohibitions to “automated decision systems” (ADS) used in hiring. Beginning October 1, 2025, employers may not use an ADS that has a discriminatory effect on protected groups.
What Employers Need to Know:
What this means for employers:
Put simply: employers may continue using AI tools, but must ensure these tools do not create discriminatory outcomes.
The IHRA and FEHA do not prohibit the use of AI in recruitment or hiring. Rather, they apply existing anti-discrimination principles to technology in the same manner those principles apply to people. The direction from the states is clear: anti-discrimination laws apply to recruitment technology just as they do to human recruiters.
Employers subject to these laws and regulations must regularly test the AI tools they use in recruiting and hiring to confirm that those tools do not have the effect of discriminating against people based on protected characteristics. It will be incumbent on organizations to use AI and other technology whose providers can demonstrate that the tools do not subject candidates and employees to discrimination.
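For illustration only, here is a minimal Python sketch of one common screening heuristic, the EEOC “four-fifths rule,” which compares each group’s selection rate to the highest group’s rate and flags ratios below 0.8 for further review. The group names, counts, and function names are hypothetical, the heuristic is not a legal safe harbor, and any real audit should involve legal counsel and statistical expertise; this is not a description of how iCIMS or any specific vendor performs its testing.

# Illustrative sketch of an adverse-impact screen using the EEOC
# "four-fifths rule" heuristic. All group names and counts below are
# hypothetical; this is not a compliance tool or legal advice.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Map each group to its selection rate; outcomes maps group -> (selected, total)."""
    return {group: selected / total for group, (selected, total) in outcomes.items()}

def adverse_impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Ratio of each group's selection rate to the highest group's selection rate."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {group: rate / top for group, rate in rates.items()}

if __name__ == "__main__":
    # Hypothetical screening-stage outcomes: (candidates advanced, candidates screened)
    outcomes = {
        "group_a": (48, 100),
        "group_b": (33, 90),
        "group_c": (25, 80),
    }
    for group, ratio in adverse_impact_ratios(outcomes).items():
        status = "flag for review" if ratio < 0.8 else "ok"
        print(f"{group}: impact ratio {ratio:.2f} ({status})")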
There are a number of ways in which iCIMS can support our customers’ compliance efforts.
As Illinois and California take the lead, other states are expected to adopt similar frameworks and regulations, and still others are likely to clarify that their existing laws apply to AI tools, as New Jersey has done. As iCIMS designs and develops AI technology in our platform, we will continue to do so in accordance with our Responsible AI principles, foremost of which is Inclusivity & Fairness. We will continue to monitor and evaluate our AI tools for bias through regular disparate impact testing, to ensure that iCIMS AI models and technology do not unfairly discriminate based on protected characteristics. As regulations continue to evolve, iCIMS remains committed to supporting customers with transparent, tested, and responsible AI solutions.
In her role as AGC, Product and Strategic Programs, Christine serves as a key liaison between the product development, engineering, and legal teams and as a trusted advisor to iCIMS’ internal teams across multiple legal areas.
She also serves on iCIMS’ Responsible AI and ESG Committees and provides support and guidance across the business for commercial transactions, partnership programs, and policy development. Christine is licensed to practice law in New York and New Jersey and holds multiple professional certifications, including CIPP/E, CIPP/US, CIPT, AIGP, and FIP.