Why Poor AI Is Now a Bigger Threat to Customer Loyalty Than Poor Support

Photo Courtesy: Simply Contact

Three months after replacing its entire frontline support team with an AI chatbot, a mid-size e-commerce company was making urgent calls to bring its human agents back. CSAT had dropped 22 points. Refund requests were climbing. And the support inbox was overwhelmed with messages from customers who had each spent a quarter of an hour looping through a bot incapable of processing their actual problem.

It’s a story that Konstantin Ryzhov, Co-Founder and CEO of Simply Contact, knows well and returns to often.

“I tell this story not because AI failed them. It didn’t,” he says. “The technology worked exactly as configured. The problem was what it was configured to do.”

That distinction, between AI that functions and AI that serves, sits at the center of what Konstantin Ryzhov believes is one of the most underappreciated risks in business today. “The biggest threat to customer loyalty right now is AI deployed with the wrong objective,” he says. “Systems built to deflect contact, creating a convincing surface of service while delivering none of the substance.”

When the Bot Works Against the Customer

Most chatbot failures follow a pattern that will be familiar to anyone who has tried to resolve a billing dispute or track a missing parcel through an automated system. A customer arrives with a problem. The bot recognizes keywords, serves an FAQ, and considers the interaction handled. The customer, whose situation fits none of the FAQ options, tries to rephrase. The bot serves another FAQ. The customer asks for a human. The bot offers another self-service option.

The logic is deliberate. Bots built on deflection are optimized to prevent escalation, and every customer who gives up and leaves is recorded as a successful containment. The metric looks clean. The revenue doesn’t.

According to McKinsey, 90% of consumers are likely to abandon a purchase if they cannot get quick answers. Qualtrics data identifies customer effort, how hard someone has to work to resolve an issue, as one of the strongest predictors of churn. The math is straightforward: when a bot makes resolution harder than picking up the phone, it is accelerating the very attrition the business is trying to avoid.

The Pressure That Produces Bad AI

The conditions that create these deployments are, Konstantin argues, entirely predictable. “The C-suite sees the AI investment case: labour cost reduction, faster response times, 24/7 availability. The numbers are compelling. The directive comes down.”

What follows is a familiar organizational squeeze. The CX team, under pressure to implement quickly, deploys what they can. They know the bot is not ready for complex queries. They know the escalation path is clunky. But without the data to push back and without the runway to do it properly, the customer ends up with a bot that was never designed with their problem in mind.

Gartner found that 91% of customer service leaders feel pressure from senior management to implement AI, a statistic that Konstantin says reflects something he has observed consistently across organizations. The people signing off on AI deployments and the people who will live with the consequences are rarely in the same room when the decision gets made.

The Cost of Getting It Wrong

There is an important asymmetry in how customers respond to human failure versus automated failure, one that rarely appears in the business case for automation.

“When a human agent is slow, or gets something wrong, or transfers a customer twice, the customer is frustrated,” Konstantin Ryzhov explains. “But they understand, on some level, they are dealing with a person having a difficult shift. They extend a degree of patience no machine receives.”

Bad AI earns no such patience. When a customer realizes they are trapped in a loop with a bot that refuses to connect them to a human, the response is a judgment about the company itself. “It feels like a deliberate choice the company made to prioritize its own cost structure over the customer’s time,” says Konstantin Ryzhov. “A perception nearly impossible to recover from.”

The retention data bears this out. Qualtrics found that customers who choose a brand for service quality report 92% satisfaction, compared to 78% among those choosing on price. Every customer who churns after a poor automated interaction takes their lifetime value with them and, as Konstantin Ryzhov notes, most will never explain why they left.

“Slow support loses you the interaction,” he says. “Bad AI loses you the account.”

A Different Model: Co-Intelligent CX

Konstantin Ryzhov is careful to separate his critique of how AI is deployed from any argument against AI itself. “Criticism without an alternative is just noise,” he says.

The model he advocates starts with a fundamental repositioning of where AI sits in the service architecture. “The most effective deployments use AI behind the agent, not in front of the customer,” he explains. In practice, this means AI that surfaces the right knowledge at the right moment, suggests responses across languages in real time, and flags sentiment shifts before a conversation deteriorates, reducing handle time without removing human judgment from complex problems.

On escalation, his view is equally direct. When a customer asks to speak to a human, most systems treat this as a failure to be minimized. Konstantin Ryzhov sees it differently. “The system hasn’t failed. It has produced information.” That information, whether the problem was too nuanced for automated handling or the emotional stakes too high for a scripted response, is, in his view, more valuable when tracked and acted upon than when suppressed.

He is also unambiguous on the question of transparency. “Customers who know they are talking to AI, and who can switch to a human without friction, consistently show higher satisfaction than those subjected to synthetic voices and fake background office noise designed to simulate a human interaction.” The deception, he argues, is never neutral. “When customers realise it, and they usually do, the retention cost is disproportionate.”

The Metric That Changes Everything

Konstantin Ryzhov’s argument comes down to what companies choose to measure. “If your primary AI metric is the percentage of contacts deflected from human agents, you are measuring cost avoidance, not service quality. Those two things occasionally overlap. Often, they don’t.”
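The gap between those two measurements can be made concrete with a toy calculation. The contact log below is entirely hypothetical, a minimal sketch assuming each contact records only whether the bot contained it (no human escalation) and whether the customer's issue was actually resolved:

```python
# Hypothetical contact log. "contained" means the bot prevented escalation
# to a human agent; "resolved" means the customer's problem was fixed.
contacts = [
    {"contained": True,  "resolved": True},   # bot genuinely solved it
    {"contained": True,  "resolved": False},  # customer gave up; still "contained"
    {"contained": True,  "resolved": False},
    {"contained": True,  "resolved": False},
    {"contained": False, "resolved": True},   # escalated to an agent, then resolved
]

# Deflection rewards containment regardless of outcome; resolution does not.
deflection_rate = sum(c["contained"] for c in contacts) / len(contacts)
resolution_rate = sum(c["resolved"] for c in contacts) / len(contacts)

print(f"Deflection rate: {deflection_rate:.0%}")  # Deflection rate: 80%
print(f"Resolution rate: {resolution_rate:.0%}")  # Resolution rate: 40%
```

In this sketch the deflection metric reports 80% success while only 40% of customers were actually helped, which is exactly the divergence the argument describes: the same log, read through two different metrics, tells two different stories.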

Working with multilingual support teams at Simply Contact, he has seen the difference play out directly. “Agents using AI as a copilot, not a replacement, consistently outperform fully automated flows on both resolution rate and customer satisfaction. The technology is the same. The architecture is different.”

The business case for getting this right is, he insists, as strong as the case for AI adoption itself. Research suggests well-implemented AI generates around $3.50 for every $1 invested, but poorly implemented AI generates negative returns through churn, reputational damage, and the operational cost of remediation. “The qualifier matters enormously,” he says.

His broader argument is that the framing of support as a cost centre is itself part of the problem. “Every enterprise I know with genuinely loyal customers has treated support as a revenue protection function in how they staff it, measure it, and fund it.”

Where This Lands

As AI deployment accelerates across the industry, Konstantin Ryzhov believes the companies that will emerge strongest are those willing to hold a clear line between what AI should handle and what it shouldn’t, and to maintain that line when financial pressure pushes in the opposite direction.

Before any AI deployment, or any quarterly review of existing automation, he offers a single question worth sitting with:

“Is your AI measured by how many customers it deflected, or by how many it actually helped? Those are not the same number. For most companies right now, they are not even close.”

At Simply Contact, we help companies design AI + human support systems that are genuinely effective. Explore partnership opportunities and reach out to us.


This article features branded content from a third party. Opinions in this article do not reflect the opinions and beliefs of CEO Weekly.