Starting October 1, 2025, California's new regulations on Automated Decision Systems (ADS) will impose strict compliance obligations on employers using AI in hiring, promotions, evaluations, and other employment decisions. Meanwhile, a federal court has allowed a nationwide collective action against Workday, Inc. to proceed on allegations that its AI hiring tools caused age discrimination. Together, these developments significantly raise the stakes for HR teams, demanding immediate attention to AI-related risk and regulatory compliance.
1. What California’s ADS Rules Require
Under the regulations, California’s Fair Employment and Housing Act (FEHA) applies to any automated tool that materially influences employment outcomes. Employers must:
- Ensure AI and algorithmic tools comply with FEHA’s anti-discrimination standards.
- Guard against disparate impact—even if there’s no intent to discriminate.
- Remain liable for biased results from third-party vendors, who may be deemed employer “agents.”
- Retain records of decision logic, inputs and outputs, and bias audits for four years (a minimal record-keeping sketch follows this list).
- Conduct bias audits and outcome monitoring to build a potential affirmative defense.
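The regulations specify what must be retained, not how to store it. As one illustration only, the sketch below logs each automated decision as an append-only JSON line; every name in it (ADSDecisionRecord, candidate_id, the file path, and so on) is a hypothetical choice for this example, not anything the regulations or any vendor prescribe.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ADSDecisionRecord:
    """One retained record per automated employment decision (hypothetical schema)."""
    candidate_id: str           # internal identifier; avoid raw PII where possible
    tool_name: str              # which ADS produced the result
    tool_version: str           # version matters if the model is retrained
    inputs_summary: dict        # the features the tool actually consumed
    output: dict                # the score, rank, or pass/fail it returned
    human_reviewer: str | None  # who reviewed or overrode the result, if anyone
    decided_at: str             # ISO-8601 timestamp that starts the four-year clock

def log_decision(record: ADSDecisionRecord, path: str) -> None:
    """Append the record as one JSON line; retain the file for at least four years."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_decision(
    ADSDecisionRecord(
        candidate_id="cand-0042",
        tool_name="resume-screener",
        tool_version="2025.09",
        inputs_summary={"years_experience": 12, "skills_matched": 8},
        output={"score": 0.73, "advanced": True},
        human_reviewer=None,
        decided_at=datetime.now(timezone.utc).isoformat(),
    ),
    "ads_decisions.jsonl",
)
```

Whatever format you choose, the point is the same: for every automated decision, you should be able to reconstruct what went in, what came out, and who, if anyone, reviewed it.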
2. Why the Workday Lawsuit Matters
In Mobley v. Workday, a federal court allowed a collective action under the Age Discrimination in Employment Act (ADEA) to proceed on behalf of job applicants aged 40 and over. The plaintiffs alleged that Workday's AI hiring systems, which many employers use, systematically excluded older applicants.
The court found:
- Workday may be liable as an “agent” if its tech influences hiring decisions.
- Allegations of bias in training data were sufficient to justify collective treatment.
- Centralized evidence of how the AI screens applicants supported claims of systemic age discrimination.
The case sends a clear message: Employers can face liability for discriminatory AI—even if the system is vendor-supplied.
3. What Employers Need to Do
Assess Your AI Tools
Inventory all AI and algorithmic systems used in hiring and HR. Understand their decision-making roles.
Audit Vendors and Contracts
Confirm that vendors have completed bias audits, that they will share the data you need for your own monitoring, and that contracts include appropriate legal protections.
Implement Oversight and Testing
Conduct regular bias audits, monitor outcomes by protected class, and require human review of AI-driven decisions.
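Neither the California regulations nor the Mobley court prescribes a particular audit methodology. A common starting point is the EEOC's four-fifths rule of thumb, which compares each group's selection rate to the most-favored group's rate. The sketch below is a minimal illustration assuming you already have pass/fail outcomes labeled by group; the group labels, sample data, and function name are invented for the example.

```python
from collections import defaultdict

def adverse_impact_ratios(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """Return each group's selection rate divided by the highest group's rate.

    outcomes: (group_label, was_selected) pairs from one decision stage.
    Under the EEOC four-fifths rule of thumb, a ratio below 0.8 flags
    potential adverse impact and warrants closer statistical review.
    """
    selected: dict[str, int] = defaultdict(int)
    total: dict[str, int] = defaultdict(int)
    for group, was_selected in outcomes:
        total[group] += 1
        selected[group] += was_selected  # bool counts as 0 or 1
    rates = {g: selected[g] / total[g] for g in total}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical screening outcomes: (age_band, advanced_to_interview)
sample = ([("under_40", True)] * 70 + [("under_40", False)] * 30
          + [("40_and_over", True)] * 45 + [("40_and_over", False)] * 55)
for group, ratio in adverse_impact_ratios(sample).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

The four-fifths rule is a screening heuristic, not a safe harbor: ratios near the threshold usually call for formal statistical testing, and any flagged result should go to counsel before conclusions are drawn.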
Train HR Teams
Educate recruiters and managers on how AI tools function, when to intervene, and how to respond to candidate concerns.
Watch the Legal Landscape
Stay ahead of emerging federal and state regulations—California’s model is likely to spread to other jurisdictions.
4. Final Thoughts
California’s new ADS rules and the Mobley lawsuit mark a shift toward AI accountability in employment. Employers can no longer treat these tools as neutral or outside of legal risk. Whether developed internally or sourced from vendors, AI systems must be actively governed, audited, and aligned with anti-discrimination laws.
With the October 1, 2025, effective date approaching fast, now is the time to assess tools, vet vendors, document compliance efforts, and train your HR teams. This isn't just about avoiding litigation; it's about building a fairer, legally sound hiring process for the future.
This publication is designed to provide general information on pertinent legal topics. The statements made are provided for educational purposes only. They do not constitute legal or financial advice nor do they necessarily reflect the views of Holland & Hart LLP or any of its attorneys other than the author(s). This publication is not intended to create an attorney-client relationship between you and Holland & Hart LLP. Substantive changes in the law subsequent to the date of this publication might affect the analysis or commentary. Similarly, the analysis may differ depending on the jurisdiction or circumstances. If you have specific questions as to the application of the law to your activities, you should seek the advice of your legal counsel.