November 09, 2025

Recently I received an email saying that the flight departure time for an upcoming family vacation had changed from 6:30 a.m. to 6 a.m. My wife and children — who already thought the 6:30 departure time was too early but were overruled by dad — revolted and insisted we take an 8:30 a.m. flight instead.
However, this major airline wanted 14,000 more air miles per passenger to change to the later flight. Its stated policy is that it can modify any flight by up to 60 minutes (in either direction) without offering a free change to another available flight.
I checked online using several self-service portal options, and workable flight changes were available but expensive, which seemed unfair to me.
Naturally, I called the 800 number for customer service support. Their “easy-to-use” AI chatbot walked me through the routine steps of finding my trip with my booking code, informing me that the departure time for my return flight had been changed, thanking me for being a member of their frequent flyer program and so on.
But over the next 15-plus minutes, I was routed around to various automated menus and was told several times that I could not speak with a customer service agent until I provided more details about what my problem was. I was ultimately offered my desired changes, at a cost of 14,000 miles per traveler — which was why I was calling in the first place.
As my frustration grew, I was told that the wait time to speak with a live agent was greater than 30 minutes.
I will spare you the many details regarding all that happened over the next hour. I finally spoke to a human agent and eventually was escalated to a manager who authorized the free change for the four of us to move to the 8:30 a.m. flight.
The bottom line: I wanted (and ultimately received) an exception to the policy, granted by a manager along with a sincere apology.
Sound familiar?
This is just one example, but I have had similar customer service experiences over the past year with cellphone companies, football game ticket purchases, warranty company incidents, and yes, government call centers.
No doubt, some readers may be thinking the AI chatbot and the overall process I described worked just the way they were intended. The airline does not want to give exceptions, and they want to make it difficult to get one. Perhaps all of this was well thought out, and the “one-hour policy” remains correct, based upon metrics, enhanced analytics and careful analysis.
I disagree with this process, but I also recognize that this may in fact be the case. If so, at least they have thoughtfully re-examined the process prior to applying AI automation. But even if that is the case, planners must also recognize that implementing chatbots and AI agents (agentic AI) that are sometimes tasked with delivering bad news to customers will not resolve the problem. Customers will only be further irritated until they eventually talk with someone who can act on their behalf (and show genuine empathy), or worse, perhaps even take their business to another company.
But this blog is not about criticizing AI or chatbots or automated call trees. The online portal, AI chatbot and initial agent at the airline were all polite, professional and adamant about enforcing the policy they had been given.
So what’s my point?
Policies and procedures are vital as we roll out automated, AI-enabled solutions to help clients interact with us and solve problems at home and work.
True, policies have always been important. Nevertheless, the huge investments being made in AI-enabled chatbots, agentic AI and other transformative business practices will be doubly wasted if customer service quality drops and clients become unhappy (or worse) with their interactions. AI is an accelerator of policy implementation and process enforcement.
I asked ChatGPT if AI improves customer service, and here was the answer:
“Yes, AI significantly improves customer service by handling routine tasks, providing 24/7 support, and personalizing interactions, which frees up human agents to manage complex issues requiring empathy and judgment. It boosts efficiency through automation, intelligent routing and sentiment analysis, leading to faster resolutions and higher customer satisfaction when implemented effectively. However, challenges like AI ‘hallucinations’ and the potential for harmful advice necessitate careful oversight and a human-in-the-loop approach.
Let me start this wider discussion by clearly stating that I am an advocate for AI overall, and my goal is to “get to yes” for the thoughtfully identified business cases. I also believe the AI boat has left the dock in most cases, and CxOs need to enable the good while disabling the bad.
Nevertheless, there are plenty of valid concerns that must be addressed to implement AI successfully in customer service areas like call centers, and we can’t put our heads in the sand and ignore these issues. For example, see this piece: “The Great AI Customer Service Con: Why 70% of Customers Will Leave.” Here’s an excerpt:
“Right now, boardrooms are approving massive AI investments based on consultant projections that ignore a fundamental truth: 70% of customers will abandon your brand after just one …
“Multiple independent studies reveal customers overwhelmingly reject AI customer service.
“The Verizon CX Annual Insights Report, surveying 5,000 consumers across seven countries, found a damning 28-percentage-point satisfaction gap: 88% of customers are satisfied with human agents versus just 60% with AI interactions.
“Harvard Business Review research shows 77% of people find chatbots frustrating, with 88% still preferring human agents—even when AI achieves near-perfect technical performance.”
Further, there are plenty of Reddit discussion threads that dive deeper into this topic. Here is one example:
The post said, “Lately, I’ve noticed more companies relying on AI chatbots, voice assistants and automated responses for support. On one hand, it feels great when a quick FAQ is solved instantly without waiting in a queue. On the other hand, when the issue is even slightly complicated, the AI just loops me around until I finally reach a human.”
One response said, “Honestly, I think we’re still in the awkward teenage years of AI customer service. Most companies rushed to deploy chatbots without thinking through the actual customer journey, and that’s why you’re getting those frustrating dead-end conversations.
“The real issue isn’t AI capability, it’s implementation. …
“But here’s what I think is really happening: Companies are using AI to reduce costs first and improve experience second. That’s backwards. The best implementations focus on actually solving customer problems faster, and the cost savings follow naturally.”
I was watching CNBC on the morning of Nov. 7, 2025, and …
The same is true for poorly aligned business policies. All AI will do is get customers to the complications (the problems in the policy) faster, while making them unhappier because they can’t talk to an actual person about a fix.
I believe that this problem is widespread right now across the public and private sectors, with AI being deployed in situations where the underlying business processes and policies are broken.
In the 1990s, when business process re-engineering was popular, we called this issue “repaving the cow path.”
Sadly, the same concerns today can derail transformations and undercut expected savings with AI.
So my (seemingly boring but vital) message is this: Get your policies right — or your AI customer service transformations will fail.

Daniel J. Lohrmann is an internationally recognized cybersecurity leader, technologist, keynote speaker and author.