By: Daniel Cohen, Esq., Founding Partner, Consumer Attorneys
“I feel like I’m arguing with a ghost,” a client once told me. They applied for a job. Everything looked normal until the background check came back wrong. Not “kind of wrong.” Wrong person, wrong record. The employer’s portal showed a short notice, nothing else. The screening company had a dispute process that moved at its own pace, built around a 30-day investigation cycle that can stretch to 45. The employer’s hiring timeline moved at theirs. By the time the file was corrected, the position was gone.
That story is becoming ordinary because discrimination has gotten easier to distribute. Automation was supposed to eliminate unfairness. Did it?
It didn’t. It made unfairness scalable, and then it hid the machinery behind “proprietary systems,” vendor contracts, and decisions that arrive with no human attached.
I call this denial without appeal, and it is the defining consumer rights problem of the automated economy.
Denial Without Appeal Is the Real Crisis
Most public debate about algorithmic discrimination focuses on biased data and biased outcomes. That is real, but it is not the whole danger. The bigger danger is procedural. Automation increasingly produces high-impact decisions while removing the practical path to challenge them.
Three problems keep showing up together:
- Volume: decisions are made at scale, faster than humans can meaningfully review.
- Opacity: the reasons are vague, or the system cannot translate them into plain language.
- Delay: dispute rights exist, but the process moves too slowly to protect people in real time.
That is how discrimination becomes hard to see and even harder to prove: it stops looking like discrimination and starts looking like “the system.”
Federal agencies have started warning about exactly this. In a 2023 joint statement, the FTC Chair and officials from DOJ, CFPB, and EEOC said they already see AI tools that “automate discrimination” and emphasized that “there is no AI exemption” from the law. And that is the legal and moral line we draw in the sand.
If You Cannot Explain a Decision, You Should Not Be Allowed to Scale It
Here’s the standard I think we should adopt across credit, housing, employment screening, tenant screening, insurance, and fraud flagging:
No explanation, no deployment.
If a tool can deny someone a job, apartment, credit line, or access to their own money, then the company using it must be able to explain the decision in plain English, defend it with evidence, and fix mistakes quickly.
Regulators are moving in that direction already. The CFPB’s 2023 guidance on AI in credit denials makes the point sharply: creditors must give accurate and specific reasons for adverse action, and they cannot hide behind check-the-box explanations that fail to reflect the real drivers of the decision, even when the model is complex or “black box.”
That matters far beyond lending. A meaningful explanation is the beginning of accountability. Without it, consumers cannot correct errors, cannot improve eligibility, and cannot detect discrimination.
Two Enforcement Examples That Show What “Discrimination Without a Face” Looks Like
You can see the problem clearly when you look at the enforcement record.
1) Automated suspicion treated as truth
In its Rite Aid action, the FTC described how facial recognition systems produced thousands of false positives, leading to people being confronted, searched, ordered to leave, or accused, sometimes in front of family, and the FTC alleged disproportionate impacts on people of color. That is a machine-generated accusation turned into a real-world humiliation, at scale.
2) Algorithms steering opportunity away from protected groups
In the Justice Department’s housing advertising case against Meta, the government alleged that algorithms used to determine which users receive housing ads relied in part on Fair Housing Act-protected characteristics. DOJ described it as its first case challenging algorithmic bias under the FHA, and the settlement required Meta to stop using a “Special Ad Audience” tool for housing and develop a new system to address disparities. Software has no malice, no motive, and no bias in the human sense. What it has is impact, repeated millions of times, and that impact can be discriminatory.
These cases come from different industries, but the lesson is the same: when automated systems operate at scale, harm stops being an exception and starts being a normal feature of the workflow.
Where Consumer Attorneys See the Damage Every Day
In consumer protection, the most brutal versions of “denial without appeal” happen when bad data meets fast decisions:
- A tenant screening error that functions like a housing ban.
- A background check mismatch that becomes a job loss.
- Identity theft that morphs into a long-term credibility problem.
- Credit reporting issues that trigger cascading denials.
And this is not just theory. The CFPB and FTC actions related to TransUnion’s rental background checks and credit reporting practices show how tenant screening and credit reporting can trigger legal scrutiny when accuracy and process break down.
The lived experience is simpler than the policy debate: a person gets blocked, the clock is running, and the dispute path is too slow to matter.
What Consumer Due Process Should Require in an Automated Economy
If we’re serious about civil rights and consumer protection in the AI era, we need to modernize the procedural baseline. I am not talking about slogans; I am talking about requirements. At a minimum, consumer due process should mean:
Explain. Appeal. Audit.
- Explain: disclose when automation materially contributed to the outcome and provide specific, usable reasons.
- Appeal: provide a dispute process that works at the speed of real life, with meaningful and direct human review and authority to override.
- Audit: test for disparate impact and retain records so compliance can be proven, not merely claimed (a minimal example of such a test is sketched after this list).
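To make the "audit" item concrete, here is a minimal sketch of one common screening test, the four-fifths (80%) rule, which compares selection rates across groups. The function names, group labels, and threshold are illustrative assumptions, not a standard drawn from the authorities cited in this article.

```python
# Minimal sketch of a disparate-impact check using the four-fifths rule.
# Group labels, the 0.8 threshold, and all names are illustrative assumptions.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, approved: bool) pairs."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

def adverse_impact_ratios(decisions, threshold=0.8):
    """Return each group's ratio to the most-favored group and a flag if it falls below the threshold."""
    rates = selection_rates(decisions)
    benchmark = max(rates.values())  # approval rate of the most-favored group
    return {g: (rate / benchmark, rate / benchmark < threshold)
            for g, rate in rates.items()}

# Example: group B is approved 55% of the time versus 80% for group A,
# so its ratio (0.69) falls below 0.8 and gets flagged for review.
sample = ([("A", True)] * 80 + [("A", False)] * 20
          + [("B", True)] * 55 + [("B", False)] * 45)
print(adverse_impact_ratios(sample))
```

A ratio test like this is only a first-pass screen; the point is that a deployer can run it routinely and keep the results, rather than claiming compliance without evidence.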
States are starting to formalize parts of this. For example, California’s Civil Rights Council regulations, approved in 2025, clarify how existing anti-discrimination rules apply to automated decision systems and emphasize recordkeeping, including retaining automated-decision data for at least four years.
But even good rules can fail if enforcement is weak. A New York State Comptroller audit of NYC’s Local Law 144 enforcement found major gaps in how compliance was identified and how complaints and reviews were handled, highlighting how hard it is to enforce transparency regimes when noncompliance is difficult to detect.
That is the point: accountability cannot depend on consumers guessing what tool was used against them.
What Businesses Should Do if They Want to Stay On the Right Side of This
If your automated system can deny a person a job, housing, credit, or access to their own money, then compliance cannot be a footnote or a nice-to-have. Implement the safeguards below in your operations; they’re the risk controls that make automation lawful and defensible:
- Treat vendors as suppliers, not liability shields. If you use the tool, you own the outcome.
- Build reasons that map to reality, not generic categories.
- Track false positives, false negatives, reversal rates, dispute timelines, and disparate impact, and learn from them (a minimal tracking sketch follows this list).
- Keep decision logs and training data governance tight enough to support audits and investigations.
- Slow down where the harm is irreversible: housing, jobs, account closures.
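As a rough illustration of the tracking item above, here is a minimal sketch of how a deployer might log dispute outcomes and compute a reversal rate and an average dispute duration. The record fields and helper names are assumptions for illustration, not requirements taken from any statute or guidance discussed here.

```python
# Minimal sketch of operational metrics for an automated decision tool.
# All field and function names are illustrative assumptions.
from dataclasses import dataclass
from datetime import date
from statistics import mean
from typing import Optional

@dataclass
class DecisionRecord:
    decision_id: str
    denied: bool
    disputed: bool = False
    reversed_on_review: bool = False   # human review overturned the denial
    dispute_opened: Optional[date] = None
    dispute_resolved: Optional[date] = None

def reversal_rate(records):
    """Share of disputed decisions that were reversed after human review."""
    disputed = [r for r in records if r.disputed]
    return (sum(r.reversed_on_review for r in disputed) / len(disputed)
            if disputed else 0.0)

def mean_dispute_days(records):
    """Average days from dispute opened to resolved, for resolved disputes."""
    durations = [(r.dispute_resolved - r.dispute_opened).days
                 for r in records
                 if r.dispute_opened and r.dispute_resolved]
    return mean(durations) if durations else None

records = [
    DecisionRecord("a1", denied=True, disputed=True, reversed_on_review=True,
                   dispute_opened=date(2025, 3, 1), dispute_resolved=date(2025, 3, 20)),
    DecisionRecord("a2", denied=True, disputed=True,
                   dispute_opened=date(2025, 3, 5), dispute_resolved=date(2025, 4, 30)),
]
print(reversal_rate(records), mean_dispute_days(records))
```

A high reversal rate or a long average dispute timeline is exactly the kind of signal that should slow a deployment down, especially where the harm is irreversible.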
If a company cannot do these things, then it’s definitely not ready to deploy automation at scale.
What Consumers Can Do When a Black-Box Decision Hits Them
If you’ve been denied and the explanation feels generic or automated, assume time is not on your side. The hardest part is that you often don’t even know who, or what, you’re arguing with. And while you’re figuring that out, the system will keep moving. Your task is to create leverage: records, deadlines, and a dispute that can’t be ignored. This is not legal advice, just survival guidance:
- Ask for the underlying report or record: consumer report, tenant screening report, adverse action notice, reason codes.
- Save everything: screenshots, emails, timestamps. Portals change. “Evidence” disappears.
- Dispute in writing when possible and keep copies.
- Move fast in housing and employment. These timelines are brutal.
- If the damage is urgent, talk to a consumer rights attorney early.
The Line We Draw
You can almost predict it: build systems to move fast and deny at scale, then weaken the paths to explanation and correction, and “discrimination without a face” becomes the default.
DOJ’s Civil Rights Division, the CFPB, the FTC, and the EEOC have all said the same thing in plain language: there is no “AI exemption” from the laws on the books. The next step is making the procedural floor non-negotiable. No explanation, no deployment.
Because if the future of consumer life is automated decisions, then the future of consumer rights has to be automated accountability, too.
About Consumer Attorneys
Consumer Attorneys is a BBB A+ rated national consumer protection law firm specializing in Fair Credit Reporting Act (FCRA) litigation. With over $100 million recovered for clients, the firm represents consumers in disputes involving credit reporting errors, background check mix-ups, identity theft, and other violations of federal consumer protection laws. Founded by Daniel Cohen, Esq., Consumer Attorneys maintains offices in New York and serves clients nationwide. For more information, visit consumerattorneys.com.
Contact:
Consumer Attorneys
Email: pr@consumerattorneys.com
Address: 68-29 Main St, Flushing, NY 11367
Website: consumerattorneys.com
Disclaimer: The content of this article is for informational purposes only and does not constitute legal advice. It is not intended to replace professional consultation or guidance. Readers should seek advice from a qualified legal professional for specific legal matters or concerns.



