Prologue: The Human Thread in AI Transformation
Across industries, one truth is becoming impossible to ignore: AI transformation doesn’t fail because of algorithms, it fails because of people.
At the recent Zartis AI Summit, leaders from healthcare, compliance, and technology gathered to share their journeys in bringing artificial intelligence into real-world operations. What emerged wasn’t a story about code or compute power, it was a story about culture, courage, and human adaptation.
From Sudha Regmi’s patient-first transformation in healthcare and Rob Meaney’s insights on using technology for better merger management, to the concept of creating a ‘code red’ moment to ignite cultural transformation, the pattern was unmistakable. Success with AI depends far less on technical mastery than on emotional intelligence.
As partners in digital and AI transformation, we at Zartis have seen this repeatedly: the organizations that thrive with AI are not those with the largest datasets, but those with the deepest trust between their people.
This article weaves together three human stories from the Summit and the research that supports them to reveal what it truly takes to build an AI-ready culture.
The AI Paradox: The Technology Works — People Don’t
By now, the technical side of AI is astonishingly capable.
Large language models can generate reports, analyze complex data, and even write code. Yet, despite the technological leap, most organizations are stuck in pilot mode.
A 2023 MIT Sloan study found that 95% of AI pilots fail to scale, not because of performance or accuracy, but because of cultural resistance, unclear ownership, and fear of disruption.
The paradox is simple: the technology works, but the people and processes around it often don’t.
AI isn’t just a system you install, it’s a shift in how people think, decide, and work. And as the leaders at our summit made clear, that shift can’t be outsourced to machines.
The Human Core of AI Transformation
“We didn’t start with the algorithm, we started with people. Because in care, trust is the true data layer.” – Sudha Regmi, Nourish Care
When Sudha Regmi, Director of Data and AI at Nourish Care, began exploring how AI could improve patient outcomes, he knew the technology would only succeed if caregivers trusted it.
Nourish operates in one of the most human-centered sectors imaginable: social care. Mistakes can affect lives, not just margins. In that environment, AI isn’t a shiny innovation. It’s a tool that must earn its place in a clinician’s workflow.
So Regmi’s team started with empathy. They trained staff, clarified ethical boundaries, and introduced AI not as a replacement for judgment but as an augmentation of it. Their goal wasn’t to automate the carer’s instinct, it was to amplify human care through better data and insight.
This approach echoes what Harvard Business School professor Amy Edmondson describes in The Fearless Organization (2019): psychological safety is the foundation of innovation. In psychologically safe cultures, people are willing to experiment, make mistakes, and learn, all of which are essential when integrating AI.
Regmi’s insight reframes the challenge: AI transformation isn’t about teaching machines to think, it’s about teaching people to trust again.
Creating a Code Red Moment
If trust builds AI foundations, urgency lights the fire.
You need to communicate to your company that “AI is existential,” and you must make it believable. To achieve that, you can do what great change leaders do: make the need impossible to ignore.
This could be described as a “code red” for the organization. It is a deliberate act of cultural disruption designed to shake people awake. Sure, morale may dip for a short period of time as uncertainty arises and fears surface. But something else happens too: curiosity. People start asking, What does this mean for me? How do we fit into this new world?
That curiosity becomes the energy that powers transformation.
And you can’t just rely on the hype. You need to have a strategy and you need to provide your team with the tools and freedom to change. Invest heavily in structured research, mapping customer pain points, testing prototypes, and iterating based on feedback.
Leadership scholar John Kotter famously wrote in Leading Change (1996) that the first step in any transformation is establishing a sense of urgency. Without it, change stalls. But the “code red” approach adds a modern nuance: urgency alone can burn teams out unless it’s matched with transparency, psychological safety, and purpose.
Once the smoke clears, you will find that your company has reinvented itself.
Enhancing People Relations with AI
If Sudha’s story shows how to build trust from within, Rob Meaney’s story from ABC Glofox shows what happens when you have to build it between cultures, and how AI can enhance a process as delicate as merging two companies.
The central argument presented is that acquisitions are fundamentally a “very human matter.” The process generates a significant amount of energy, both positive (excitement, opportunity) and negative (uncertainty, fear). Successfully navigating this requires prioritizing human concerns.
- Managing Uncertainty: Employees’ primary concerns revolve around job security, identity (e.g., “we go from working for this little Irish startup working with this big big American company”), and the potential loss of a cherished culture.
- The Power of Being Heard: The key to success is creating channels for employees to “bubble up those concerns.” The analysis stresses the importance of leadership genuinely listening, taking concerns on board, and communicating transparently about how they will be addressed, even if the news isn’t universally positive. As stated, “It’s communication. It’s feeling-heard.”
- Building Personal Connection: The contrast between the two acquisitions highlights the importance of approachability. In the successful case, executives from the acquiring company made an effort to “get to know people” on a personal level.
While people remain at the heart of successful integrations, technology is the engine that sustains momentum and scale. The most effective acquirers don’t just digitise workflows — they engineer visibility, predictability, and adaptability into the entire integration process.
1. Creating a Single Source of Truth
Post-acquisition, teams often struggle with fragmented data across systems. Successful integrations prioritise early alignment around a unified data model — consolidating operational, financial, and HR information into shared dashboards. This ensures decisions are made on consistent, real-time information rather than siloed assumptions.
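To make the “single source of truth” idea concrete, here is a minimal sketch of consolidating per-employee records from two hypothetical post-acquisition systems (an HR system and a finance system) into one unified view keyed by employee ID. The system names, fields, and data are invented for illustration, not a prescribed data model.

```python
# Minimal sketch: consolidating fragmented post-acquisition data into one
# unified view. The source systems, field names, and records are hypothetical.

def build_unified_view(hr_records, finance_records):
    """Merge per-employee records from two systems, keyed by employee ID.

    Fields from both systems are combined into one record per person; missing
    data simply stays absent, so gaps are visible rather than silently assumed.
    """
    unified = {}
    for record in hr_records:
        unified.setdefault(record["id"], {}).update(record)
    for record in finance_records:
        unified.setdefault(record["id"], {}).update(record)
    return unified

# Example: two systems each holding a different slice of the same people.
hr = [{"id": 1, "name": "Aoife", "team": "Engineering"}]
finance = [{"id": 1, "cost_centre": "ENG-01"}, {"id": 2, "cost_centre": "OPS-02"}]

view = build_unified_view(hr, finance)
# Employee 1 now has HR and finance fields in a single record;
# employee 2 appears with finance data only, exposing the HR gap.
```

The point of the sketch is the principle, not the code: one merged view makes inconsistencies and gaps between the two companies’ systems immediately visible, instead of leaving each team to reason from its own silo.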
2. Embedding Observability and Metrics
Integrations that succeed treat metrics as a feedback loop, not a post-mortem. Leveraging monitoring tools and AI-driven analytics enables leadership to spot risks (e.g., cost overruns, delayed milestones, attrition trends) and intervene early. The key is observability by design. Every system and process should be instrumented to provide measurable insights.
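As a loose illustration of “observability by design”, the sketch below checks each integration metric against a risk threshold and surfaces anything out of bounds for early intervention. The metric names and thresholds are invented examples, not a prescribed dashboard.

```python
# Minimal sketch of a metrics feedback loop for an integration programme.
# Metric names and thresholds are illustrative, not prescriptive.

def flag_risks(metrics, thresholds):
    """Return the metrics that have crossed their risk threshold.

    `metrics` maps a metric name to its current value; `thresholds` maps the
    same names to (limit, direction), where direction is "max" (value must
    stay at or below the limit) or "min" (value must stay at or above it).
    """
    flagged = {}
    for name, (limit, direction) in thresholds.items():
        value = metrics.get(name)
        if value is None:
            continue  # an unreported metric is a gap, not a confirmed risk
        if (direction == "max" and value > limit) or \
           (direction == "min" and value < limit):
            flagged[name] = value
    return flagged

# Example: attrition has crossed its ceiling; the others are within bounds.
current = {"attrition_pct": 9.2, "budget_overrun_pct": 3.0, "milestones_on_time_pct": 92}
limits = {
    "attrition_pct": (8.0, "max"),
    "budget_overrun_pct": (5.0, "max"),
    "milestones_on_time_pct": (90, "min"),
}
risks = flag_risks(current, limits)  # {"attrition_pct": 9.2}
```

Run on a cadence (weekly, or on every data refresh), a loop like this turns metrics into the feedback mechanism the text describes, rather than a post-mortem artefact.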
3. Intelligent Automation and AI as Force Multipliers
AI is becoming central to reducing complexity across M&A processes, from due diligence and integration planning to cultural alignment and employee engagement. Well-designed systems leverage automation to remove manual friction, while AI agents augment decision-making by surfacing context, detecting risks, and recommending next best actions.
In short, technology doesn’t replace the human element — it amplifies it, enabling leaders to make faster, more informed, and more empathetic decisions at scale.
Building the AI-Ready Culture
(The Zartis Synthesis)
When you step back, the stories of these tech leaders point to the same truth: AI readiness isn’t a technical maturity, it’s a cultural one.
At Zartis, we often describe AI transformation as a team sport. You can’t delegate it to the data scientists or the IT department. It requires product leaders, engineers, designers, domain experts, and executives all rowing in the same direction.
From our work and what we heard at the summit, five principles consistently separate organizations that scale AI from those that stall:
1. Communicate with radical clarity
People don’t fear AI, they fear ambiguity. Leaders who speak clearly about goals, ethics, and expectations remove that fear.
The “code red” worked because it made uncertainty explicit, then gave people a path through it.
2. Empower through learning
AI anxiety fades when people are trained to use it. Reskilling isn’t optional — it’s oxygen.
McKinsey’s State of AI Report (2023) found that organizations investing heavily in reskilling are 2.5x more likely to report positive ROI from AI initiatives.
3. Model vulnerability
In AI transformations, leaders don’t need to have all the answers, they need to ask better questions.
In Sudha’s words:
“We didn’t know everything, but we knew who to ask, our people.”
4. Reward experimentation
Innovation flourishes when failure isn’t fatal.
Zartis’ own internal experiments echo this; success came from dozens of small iterations, not one perfect model. The last 5%, the part where people align around new workflows, took the most time, but delivered the most value.
5. Build bridges, not silos
Every successful AI project sits at the intersection of disciplines. Engineers, designers, compliance experts, and domain specialists each hold a piece of the puzzle.
At Compliance & Risks, cross-functional collaboration turned abstract AI ideas into tangible, market-ready products.
Together, these principles form the foundation of what we call an AI-ready culture, one where curiosity outweighs fear, learning is continuous, and people see AI not as a threat, but as a teammate.
The Future of AI Belongs to Human-Centered Leaders
It’s easy to think of AI as a technological revolution. But the leaders who will truly harness its potential know it’s also a cultural revolution.
The next decade won’t be defined by who builds the biggest models, it will be defined by who builds the most adaptive organizations.
In healthcare, compliance, and software, we’ve already seen it:
- Trust beats automation.
- Urgency beats complacency.
- Transparency beats control.
AI, at its core, is a mirror. It reflects our organizational strengths and amplifies our weaknesses. It forces us to ask: Are we ready to work differently? Are we willing to learn publicly, fail safely, and rebuild continuously?
When we answer “yes,” AI stops being a threat and becomes a catalyst.
At Zartis, we believe the future of AI isn’t built on silicon, it’s built on trust.
Because every algorithm learns from data, but every great company learns from people.
Discover our AI Services and how we can help you adopt AI across your whole organisation.
References
- MIT Sloan Management Review (2023). Why So Many AI Pilots Fail to Scale.
- Amy C. Edmondson, The Fearless Organization (Harvard Business Review Press, 2019).
- John P. Kotter, Leading Change (Harvard Business Press, 1996).
- Google Re:Work Project Aristotle (2015). What Makes a Team Effective?
- McKinsey & Company (2023). The State of AI Report.