
Illinois and California AI hiring laws: How iCIMS supports compliance

February 20, 2026
4 min read

As AI becomes a core part of talent acquisition workflows, employers face growing obligations to ensure these tools are used responsibly and lawfully. State legislatures are taking action to ensure these technologies don’t perpetuate illegal discrimination, often by updating existing state anti-discrimination laws. Two of the most significant recent regulatory updates emerging from Illinois and California are now in effect and have major implications for employers.

Below is a breakdown of these laws, what they mean for talent acquisition, and how iCIMS helps support its customers' compliance efforts. Keep in mind that while some states are updating their laws in this area, others, like New Jersey, have taken a different approach and clarified that existing anti-discrimination laws apply to AI tools to the same extent they apply to human decision-making.

Illinois Human Rights Act

The Illinois Human Rights Act (IHRA) prohibits discrimination, harassment, sexual harassment, and retaliation against individuals in connection with employment, real estate transactions, access to credit, public accommodations, and education. The IHRA was amended in August 2024 to prohibit employers from using AI that has a discriminatory effect on employees “[w]ith respect to recruitment, hiring, promotion, renewal of employment, selection for training or apprenticeship, discharge, discipline, tenure, or the terms, privileges, or conditions of employment.” These updates took effect January 1, 2026.

What Employers Need to Know:

  • Applies anti-discrimination principles across the entire candidate lifecycle—from sourcing to hiring, promotion, and beyond.
  • The law covers true AI systems (including generative AI), meaning machine‑based tools that generate predictions or recommendations based on input data.
  • Candidates and employees must be provided with specific notices.

California FEHA

California’s Fair Employment and Housing Act (FEHA) prohibits employment discrimination and harassment in California based on protected characteristics like age, disability, gender, and others. New regulations promulgated by the California Civil Rights Council in September 2025 explicitly extended these employment discrimination prohibitions to “automated decision systems” (ADS) used in hiring. Beginning October 1, 2025, employers may not use any ADS that has a discriminatory effect on protected groups.

What Employers Need to Know:

  • The regulations are not limited to AI; any automated decision system is covered, regardless of the underlying technology.
  • ADS is defined as “a computational process that makes a decision or facilitates human decision making regarding an employment benefit.” An ADS may be derived from and/or use artificial intelligence, machine-learning, algorithms, statistics, and/or other data processing techniques.

What this means for employers:

Put simply: employers may continue using AI tools, but must ensure these tools do not create discriminatory outcomes.

The IHRA and FEHA do not prohibit the use of AI in recruitment or hiring. Rather, they apply existing anti-discrimination principles to technology in the same manner those principles apply to humans. The direction from the states is clear: anti-discrimination laws apply to recruitment technology just as they do to human recruiters.

Employers subject to these laws and regulations must regularly test the tools they use in recruiting and hiring to confirm that these AI tools do not have the effect of subjecting people to discrimination based on protected characteristics. It will be incumbent on organizations to use AI and other technology that can demonstrate candidates and employees are not being discriminated against.

How iCIMS supports compliance

There are a number of ways in which iCIMS can support our customers’ compliance efforts.

  • Responsible AI and Certification: First, we commit to our Responsible AI Principles, which are woven throughout our software development process, from the design stage through to production. We back this commitment to Responsible AI development with our Responsible AI certification from TrustArc, ensuring third-party accountability for our Responsible AI governance practices.
  • Third-Party Audit: iCIMS also instituted regular and frequent disparate impact testing of our Candidate Ranking feature to help our customers comply with NYC LL 144/2021. Our third-party bias audit is available in our Trust Portal for customers to rely on for compliant use of that feature. Importantly, our independent third-party auditors also audit our overall governance practices in that report, providing further assurance of our commitment to Responsible AI governance. This means customers can rely on independent evidence that our tools meet emerging regulatory standards.
  • Disparate Impact Testing: In response to legislative updates, iCIMS has instituted regular disparate impact testing for our other AI tools used in recruitment and hiring. iCIMS customers may visit our Trust Portal for regular updates on these test results.

Looking Ahead

As Illinois and California take the lead, other states are expected to adopt similar frameworks and regulations; still more are likely to clarify that their existing laws apply to AI tools, as New Jersey has done. As iCIMS designs and develops AI technology in our platform, we will continue to do so in accordance with our Responsible AI principles, foremost of which is Inclusivity & Fairness. We will continue to monitor and evaluate our AI tools for bias through regular disparate impact testing, to ensure that iCIMS AI models and technology do not unfairly discriminate based on protected characteristics. As regulations continue to evolve, iCIMS remains committed to supporting customers with transparent, tested, and responsible AI solutions.




About the author

Christine Raniga

Christine serves as a key liaison between the product development, engineering, and legal teams in her role as AGC, Product and Strategic Programs, and serves as a trusted advisor to iCIMS’ internal teams in multiple legal areas.

She also serves on iCIMS’ Responsible AI Committee, ESG Committee, and provides support and guidance across the business for commercial transactions, partnership programs, and policy development. Christine is licensed to practice law in New York and New Jersey, and holds multiple professional certifications including CIPP/E, CIPP/US, CIPT, AIGP, and FIP. 
