Updated March 27, 2026
California AI Restrictions Target Hiring Bias, Employers Face Legal Risk
Artificial intelligence now screens millions of job applications, but California's new AI restrictions demand accountability when these systems perpetuate hiring bias. The state has approved comprehensive regulations that hold employers legally responsible for discriminatory outcomes in automated hiring processes, even when they use third-party AI tools. The legal exposure is significant because these regulations extend beyond traditional employment law, requiring extensive record-keeping and anti-bias testing. Understanding these requirements is essential to protect both your rights as an employee and your organization's compliance obligations.
California Approves Sweeping AI Employment Regulations
The California Civil Rights Council finalized regulations that amend the Fair Employment and Housing Act to address automated decision systems in workplace settings. These regulations clarify how existing anti-discrimination laws apply to employers using artificial intelligence, machine learning, algorithms, statistics, and other data processing technologies to facilitate employment decisions.
When Do the New Rules Take Effect?
California's Civil Rights Council employment regulations regarding automated-decision systems took effect on October 1, 2025. Employers operating in California had until this date to ensure their AI hiring tools and automated systems comply with the state's anti-discrimination framework.
Separately, the California Privacy Protection Agency approved regulations on automated decisionmaking technologies used in significant decisions about consumers, including employees. The Office of Administrative Law finalized these CPPA regulations, requiring businesses to achieve full compliance by January 1, 2027. This deadline applies to any automated decisionmaking technology already in use prior to that date.
Who Must Comply with These Regulations?
The regulations apply to all employers in California covered by the Fair Employment and Housing Act. This includes employers using artificial intelligence, machine learning, algorithms, statistics, or other data processing methods to facilitate human decision-making regarding recruitment, hiring, and promotion of job applicants or employees.
The rules define an automated decision system broadly as any computational process that makes or assists in making employment decisions, such as hiring, promotions, selection for training programs, or similar activities. Third-party vendors providing services related to hiring decisions, applicant screening, recruiting, or administering automated systems for employers also fall within the regulatory scope. According to the regulations, vendors operate as agents acting on behalf of employers, directly or indirectly.
What Prompted California to Act Now?
The Civil Rights Council developed these regulations in response to the growing use of AI tools in employment settings. According to the California Civil Rights Department, automated-decision systems are increasingly used to facilitate a wide range of decisions related to job applicants or employees, including recruitment, hiring, and promotion.
The Council stated that while these tools can bring numerous benefits, they can also exacerbate existing biases and contribute to discriminatory outcomes. The regulations aim to protect against potential employment discrimination resulting from the use of artificial intelligence, algorithms, and other automated-decision systems.
The regulations resulted from extensive public input. The Civil Rights Council conducted a series of public discussions, including an April 2021 hearing, and carefully considered input from experts and the public, along with federal reports and guidance. Following the release of proposed regulations in May 2024, the Council initiated a public comment period with a July 18, 2024 deadline and held a public hearing on that same date.
The regulations do not codify new anti-discrimination laws. Instead, they clarify how the state's existing anti-discrimination laws apply to employers' use of emerging AI systems to make employment decisions. Furthermore, the regulations do not prohibit employers from using AI tools in employment decision-making; rather, they prohibit discriminatory use of those tools.
How Automated Decision Systems Discriminate in Hiring
Automated hiring systems discriminate through multiple pathways that replicate and amplify human biases embedded in training data. An estimated 99% of Fortune 500 companies now use some form of automation in their hiring process, yet research demonstrates these tools systematically disadvantage applicants based on protected characteristics.
Resume Screening Tools Replicate Existing Biases
AI resume screening discriminates by learning patterns from historical hiring data that reflects decades of workplace inequality. Amazon scrapped its experimental resume screening program in 2018 after discovering the system penalized resumes containing the word "women's" and downgraded graduates of all-women's colleges. The algorithm trained on ten years of resumes submitted primarily by men, teaching itself that male candidates were preferable.
University of Washington researchers tested three large language models across 554 resumes and found white-associated names were preferred 85% of the time versus Black-associated names 9% of the time. Male-associated names received preference 52% of the time compared to female-associated names only 11% of the time. The systems never preferred Black male-associated names over white male-associated names in any comparison.
Intersectional bias creates unique harms beyond single identity categories. Black female-associated names were preferred 67% of the time versus Black male-associated names at 15%. This demonstrates that examining race or gender independently fails to capture the full extent of algorithmic discrimination.
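Disparities like those above are the kind of pattern a routine selection-rate audit can surface before a regulator does. Below is a minimal sketch of the EEOC's classic four-fifths (80%) rule, the traditional screen for disparate impact; the group labels and counts are hypothetical illustrations, not figures from the studies cited here.

```python
# Hedged sketch: a four-fifths (80%) rule check an employer might run as part
# of anti-bias testing of an automated screening tool. All group names and
# counts below are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who passed the automated screen."""
    return selected / applicants

def adverse_impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Compare each group's selection rate against the highest-rate group.

    outcomes maps group label -> (selected, total applicants).
    A ratio below 0.8 is the traditional EEOC red flag for disparate impact.
    """
    rates = {g: selection_rate(s, n) for g, (s, n) in outcomes.items()}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical screening outcomes: (selected, applicants) per group.
outcomes = {
    "group_a": (60, 100),  # 60% selection rate
    "group_b": (30, 100),  # 30% selection rate
}
ratios = adverse_impact_ratios(outcomes)
flagged = [g for g, r in ratios.items() if r < 0.8]  # groups below the 80% line
```

In this sketch, group_b's 30% rate is half of group_a's 60%, yielding a 0.5 ratio and a flag; a real audit would also test intersectional subgroups, since the research above shows single-axis checks miss combined race-and-gender effects.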
The Equal Employment Opportunity Commission settled its first AI hiring discrimination lawsuit against iTutorGroup in August 2023. The company's system automatically rejected female applicants age 55 or older and male applicants age 60 or older, screening out over 200 applicants. iTutorGroup paid $365,000 to the rejected applicants and agreed to adopt anti-discrimination policies.
Video Interview Analysis Creates Unfair Barriers
AI video recruitment has grown from 58% adoption among employers in 2024 to 72% in 2025. These systems analyze speech patterns, facial expressions, and vocal tone to assess candidates, creating barriers for applicants with disabilities and those speaking with accents.
Research participants reported that non-native English speakers and individuals with speech-affecting disabilities experienced their words being transcribed incorrectly, resulting in lower algorithmic ratings. One AI systems company disclosed that only 6% of its training data came from Australia or New Zealand, and 36% of job applicants in the training data were white.
Facial recognition systems perform worse at identifying the gender of women and people of color compared to white males. IBM withdrew from facial recognition technology in 2020 due to concerns about racial profiling and discriminatory performance. Video interviewing software that analyzes speech patterns to assess problem-solving abilities fails to score applicants fairly when speech impediments cause significant differences in speech patterns.
Job Advertisement Targeting Reinforces Stereotypes
Recruiters increasingly use AI algorithms for customized job postings designed to appeal to different applicant types. Large language models trained on limited datasets reproduce gender bias and racial stereotyping when generating job advertisements and targeting specific candidate populations.
Assessment Games Trigger Unlawful Medical Inquiries
Gamified hiring assessments claim to measure personality traits, aptitudes, and cognitive skills through electronic games. These tools ask questions likely to elicit information about medical conditions or directly screen out applicants with certain conditions, violating ADA restrictions on pre-employment disability inquiries.
Performance tracking software fails to account for reasonable accommodations, measuring employees against preset specifications designed for workers without disabilities. Timed assessments create barriers for individuals with intellectual disabilities or dexterity limitations affecting mouse and keyboard use. Videos without closed captions exclude candidates with hearing impairments, while poor color contrast disadvantages those with low vision or color blindness.
Employers Face Legal Liability Even for Third-Party AI Tools
Purchasing AI hiring tools from third-party vendors does not shield employers from legal responsibility. California's AI restrictions under FEHA establish that employers remain accountable for discriminatory outcomes regardless of who developed the algorithmic system.
Vendors Operate as Employer Agents Under FEHA
California regulations explicitly extend liability to third-party vendors or staffing agencies used by employers. If an employer's staffing partner or AI software provider uses an automated decision system on the employer's behalf and it produces discriminatory impact, the employer remains responsible.
The regulations define an agent as anyone acting on behalf of an employer, directly or indirectly, to exercise a function traditionally performed by the employer or any other FEHA-regulated activity. This broad definition stems from the California Supreme Court's decision in Raines v. U.S. Healthworks Medical Group, which expanded FEHA's employer definition to include third-party business entities performing employment-related functions.
The Raines plaintiffs received job offers conditioned on pre-employment medical screenings conducted by a third-party company using automated decision-making. The screening form contained intrusive medical history questions that violated FEHA. The Court examined FEHA's definition of "employer" as "any person regularly employing five or more persons, or any person acting as an agent of an employer, directly or indirectly." The ruling established that recognizing the medical provider as an agent extended liability to the company most directly responsible for the FEHA violation.
Discrimination Claims from Deterred Applicants
A federal judge allowed Mobley v. Workday to proceed as a nationwide class action in May 2025, ruling that Workday's AI-powered hiring tools may have discriminatory impact on applicants over age 40. The court granted preliminary certification of a collective action under the Age Discrimination in Employment Act, permitting the lead plaintiff to notify other job seekers age 40 and older who applied through Workday's system.
Derek Mobley, an African-American over age 40, applied for dozens of jobs with employers using Workday's AI tools and received rejections every time. He reported rejections within hours or minutes of applying, suggesting automated screening tools made rejection decisions on behalf of employers. Workday argued that employers make final hiring decisions, not their tools, but Mobley's rapid rejections contradicted that claim.
New Consumer Reporting Lawsuits Target AI Hiring Platforms
A January 2026 class action against Eightfold AI alleges the platform violates the Fair Credit Reporting Act by generating Match Scores without proper disclosures. The lawsuit argues Eightfold collects applicant data from resumes, LinkedIn profiles, social media, and internet activity, then uses that data to generate proprietary Match Scores that employers use to automatically filter candidates before human review.
Plaintiffs claim Eightfold operates as an unregistered consumer reporting agency. One plaintiff alleged that only 0.3% of thousands of applications submitted through Eightfold-powered portals progressed to follow-up or interview. The complaint states applicants received no disclosure that Eightfold existed or would evaluate them, gave no written authorization, never saw their Match Scores, and received no adverse action notices when not selected.
Contract clauses that vendors write to limit their own exposure effectively transfer that exposure onto employers. Misconceptions about vendor responsibility run deep, but regulators treat employers as the liable parties regardless of who built the algorithm.
New Record-Keeping Mandates Extend to Four Years
Record retention obligations under California's AI restrictions doubled from two years to four years for all personnel and employment records. This expansion applies specifically to automated-decision system data, defined as any data used in or resulting from an ADS and any data used to develop or customize an ADS for use by an employer or covered entity.
What Data Must Employers Preserve?
The regulations require preservation of ADS-related records including dataset descriptors, scoring outputs, and audit findings for four years. The mandate extends beyond traditional personnel files to encompass applications, personnel records, membership records, employment referral records, selection criteria, automated-decision system data, and other records created or received by the employer dealing with any employment practice affecting any employment benefit of applicants or employees.
The definition of automated-decision system data captures three categories: data used to customize the system, data used by the system during operation, and data generated by the system as output. Organizations using automated systems in employment contexts must retain a broader scope of data that may not have previously been subject to record retention requirements.
Retention requirements include data inputs, outputs, decision criteria, audit results, and correspondence. Employers must maintain comprehensive records of decision-making logic, input and output data, and results from bias audits for at least four years. Companies screening thousands or tens of thousands of job applicants each year face particularly significant data volume challenges. The expanded retention requirements will likely have a more significant impact on record retention obligations for job applicants compared to active employees.
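One way to reason about the compliance burden is to organize retention around the regulation's three ADS data categories: data used to customize the system, data it consumed during operation, and data it generated as output. The sketch below models a per-applicant retention record with a four-year purge check; the field names and the 4 × 365-day approximation are illustrative choices, not regulatory terms of art.

```python
# Hedged sketch: a retention record for automated-decision system (ADS) data,
# grouped by the regulation's three data categories. Field names are
# hypothetical; consult counsel before relying on any purge logic.
from dataclasses import dataclass, field
from datetime import date, timedelta

RETENTION_YEARS = 4  # FEHA regulations: four-year minimum retention

@dataclass
class ADSRecord:
    applicant_id: str
    decision_date: date
    customization_data: list[str] = field(default_factory=list)  # data used to customize the ADS
    input_data: list[str] = field(default_factory=list)          # data the ADS consumed (e.g. resume fields)
    output_data: list[str] = field(default_factory=list)         # data the ADS produced (e.g. scores, rankings)

    def retain_until(self) -> date:
        # Approximates four years as 4 * 365 days for this sketch.
        return self.decision_date + timedelta(days=RETENTION_YEARS * 365)

    def may_purge(self, today: date) -> bool:
        """True only once the four-year retention window has elapsed."""
        return today > self.retain_until()

rec = ADSRecord("app-001", date(2025, 10, 1), output_data=["match_score=0.42"])
```

A record dated October 2025 could not be purged in 2026 but could be by 2030 under this sketch; at the scale of tens of thousands of applicants per year, multiplying a structure like this across every screening event illustrates why the data-volume and storage concerns below follow directly from the retention mandate.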
Storage and Cybersecurity Cost Implications
Data breach research underscores the stakes: breaches involving shadow data took 26.2% longer to identify and 20.2% longer to contain, averaging 291 days. These incidents also carried higher costs, averaging $5.27 million where shadow data was involved, with lost business and reputational damage accounting for an average of $1.47 million.
Job applicant data has become an increasingly attractive target for hackers. Employers storing four years of AI hiring records face exposure to lawsuits, regulatory fines, and reputational damage from data breaches.
Recognizing Discriminatory Practices
Identifying discriminatory practices can be challenging, as they may not always be overt. Here are some common signs of workplace discrimination:
- Unequal Treatment: If you notice that you are being treated differently than your colleagues for similar work, this may indicate discrimination.
- Harassment: Any form of harassment based on protected characteristics can be grounds for a discrimination claim.
- Retaliation: If you report discriminatory behavior and face negative consequences, such as demotion or termination, this may constitute retaliation.
- Failure to Promote: If you are consistently overlooked for promotions despite qualifications, it may be a sign of discrimination.
If you experience any of these situations, it is essential to document your experiences and gather evidence to support your claim.
Filing a Charge of Discrimination
Before you can file a lawsuit for discrimination, you must first file a charge with the Equal Employment Opportunity Commission (EEOC) or the California Civil Rights Department (CRD, formerly the Department of Fair Employment and Housing). Here’s how to proceed:
Step 1: Gather Evidence
Collect any relevant documentation that supports your claim, including:
- Emails or messages that demonstrate discriminatory behavior.
- Performance reviews that highlight your qualifications.
- Witness statements from colleagues who can corroborate your experiences.
Step 2: File a Charge
You can file a charge of discrimination through the EEOC or CRD. This process typically involves:
- Completing a charge form that outlines your allegations.
- Submitting the form within the required time frame (generally 180 days for the EEOC, extended to 300 days where a state agency such as the CRD also enforces; California law allows up to three years to file a CRD complaint).
- Participating in an interview with an investigator to discuss your case.
Step 3: Await Investigation
Once your charge is filed, the agency will investigate your claims. They may contact your employer for a response and gather additional evidence. The investigation process can take several months.
Step 4: Receive a Right to Sue Letter
Once the agency concludes its investigation, or upon your request, it can issue a "right to sue" letter, allowing you to pursue legal action in court. Even if the agency does not find sufficient evidence, you may still have the option to file a lawsuit, but it is advisable to consult with an attorney first.
Legal Options After Filing a Charge
If you receive a right to sue letter, you can proceed with filing a lawsuit against your employer. Here are some key considerations:
Types of Claims
You may pursue various claims, including:
- Disparate Treatment: Claims based on unequal treatment due to discrimination.
- Hostile Work Environment: Claims arising from a workplace that is intimidating or abusive due to discriminatory practices.
- Retaliation: Claims based on adverse actions taken against you for reporting discrimination.
Seeking Damages
If you win your case, you may be entitled to various forms of compensation, including:
- Back Pay: Compensation for lost wages due to discriminatory practices.
- Emotional Distress Damages: Compensation for the emotional impact of discrimination.
- Punitive Damages: Additional damages intended to punish the employer for egregious behavior.
The Importance of Legal Representation
Navigating the legal landscape of discrimination claims can be complex and overwhelming. Hiring an experienced employment attorney can significantly enhance your chances of success. Here’s why:
- Expertise: An attorney understands the intricacies of discrimination law and can help you build a strong case.
- Negotiation Skills: Your attorney can negotiate on your behalf, whether with the agency or in court.
- Emotional Support: Legal disputes can be emotionally taxing. Having an advocate can provide reassurance and guidance throughout the process.
Alternative Dispute Resolution
Before pursuing a lawsuit, consider alternative dispute resolution methods, such as mediation or arbitration. These processes can be less adversarial and may lead to a quicker resolution. Here’s how they work:
- Mediation: A neutral third party facilitates a discussion between you and your employer to reach a mutually agreeable solution.
- Arbitration: A neutral arbitrator hears both sides of the case and makes a binding decision.
These options can save time and resources while still addressing your concerns.
Conclusion
If you believe you have been a victim of workplace discrimination, it is crucial to understand your rights and the steps you can take to seek justice. California law provides robust protections against discrimination, and you have the right to pursue legal action if necessary. Remember to document your experiences, file a charge with the appropriate agency, and consider seeking legal representation to navigate the complexities of your case.
Taking action not only helps you but also contributes to a fairer workplace for all employees. If you have questions or need assistance, do not hesitate to reach out to a qualified employment attorney who can guide you through the process.
Call Setyan Law at (213)-618-3655 to schedule a free consultation.