Faisal Hoque's Blog
October 10, 2025
Begin Within

KEY POINTS
- Authentic connection requires connecting with yourself first.
- Mindfulness is the foundation that enables genuine self-awareness and presence.
- Accepting impermanence transforms how we relate to ourselves and others.
We’re more linked than ever and somehow more alone than we’ve ever been.
Our screens pulse with notifications and digital chatter that promise belonging but leave us feeling drained, fragmented, and unseen.
We move faster but not necessarily forward. We communicate constantly, yet we rarely connect.
When I wrote the first edition of Everything Connects a decade ago, I wasn’t crafting another leadership manual. I was tracing something more essential, something that has shaped my journey from studying Eastern philosophy to building businesses. I kept returning to a truth that is as ancient as it is urgent: Everything arises because of everything else.
The Buddha called this truth pratītyasamutpāda (dependent origination). Nothing exists on its own; we are all part of an intricate, complex interdependent web.
When we forget that truth, fragmentation follows. Anxiety rises, collaboration falters, burnout festers. This isn’t abstract spirituality. I’ve lived it, chasing outcomes and mistaking motion for meaning, all while losing sight of the human pulse.
Modern neuroscience echoes both my own experience and many spiritual traditions on this topic. Research suggests that our brains thrive on connection; for example, practicing empathy strengthens neural circuits that support creativity and problem-solving. When we feel disconnected, the stress response spikes and, over time, weakens the brain regions responsible for emotional regulation.
Science confirms what we instinctively know: Our brains weren’t made for constant digital stimulation but for the slower, richer rhythms of human connection.
The solution isn’t to create more apps and new ways of connecting. Rather, as so often, the best way to approach the future is by encouraging ourselves to wake to the present. We might say that the task is not so much to build a web of connection as it is to inhabit the one we already have.
Interdependence: The Web We Already Inhabit
We are not self-contained units but open systems. Every breath we take, every thought we think, depends on what surrounds us. The language we speak, the ideas we inherit, the people who shape our sense of what is possible all flow into us and through us.
And yet, despite living in this vast web of interdependence, we feel isolated because we’ve lost touch with the most fundamental connection of all, the one with ourselves.
We can meet another person only to the extent that we are in touch with our own experience. Deep listening becomes possible only when we have learned to be still with our thoughts. And authentic presence arises not from performance but from the courage to permit ourselves to be unguarded and attentive.
Mindfulness: The Gateway to Self-Connection
The web of interdependence is always there. But we can perceive and participate in it consciously only when we’re present enough to notice it. And that presence must begin with ourselves.
Mindfulness is often sold as a relaxation tool. In truth, it is revolutionary—a disciplined presence that cuts through illusion and creates the capacity for authentic connection. When we practice mindfulness, we learn to notice our thoughts without being consumed by them, to feel our emotions without being controlled by them, and to recognize our patterns without being imprisoned by them.
Such self-awareness is the foundation of all genuine connection. When we’re present with ourselves, we create the capacity to be present with others.
The Obstacle: Why We Can’t Let Go
The solution seems simple: Be present. So why do we struggle so much to practice it?
The answer lies in attachment.
We cling. We grasp. We hold on to outcomes, identities, expectations, and stories about who we are and who we should be. We grip the future so tightly we can’t feel the present.
When we’re attached, we can’t be present. We’re too busy managing, manipulating, and maintaining. We’re too afraid of loss to fully inhabit the moment. This fear blocks both self-connection and connection with others.
Attachment keeps us at arm’s length from our own experience—and from genuine relationships with others.
Impermanence: Learning to Let Go
The antidote to attachment is accepting the Buddhist insight of anicca—impermanence.
Impermanence teaches us that everything is temporary: our thoughts, our emotions, our circumstances, even our sense of self. This isn’t a reason for despair; it’s an invitation to let go. And when we do let go, when we stop clinging to what must change, we discover that we can be more present to what actually is.
I’ve learned this in moments of loss and reinvention—shutting down companies, walking away from projects, watching identities I’d built dissolve. Each ending, painful as it was, revealed what wanted to emerge next. Accepting the impermanence of things turned out to be the foundation of growth.
Four Practices for Connecting Within
Here are four practices you can begin today that will help you start being present to the connection with yourself:
- Start with the breath. Spend five minutes each morning simply noticing your breath. No agenda, no improvement, just awareness of breathing in and breathing out.
- Name what you’re holding. Write down one thing you’re gripping too tightly: an outcome, an identity, a fear. Ask: What would happen if I loosened my grip?
- Practice the pause. Before responding in conversation or reacting to news, count three breaths. Notice what shifts in that space.
- Reflect on impermanence. Once a day, notice something that has changed—a season, a relationship, your own mood. Observe without judgment.

Returning to Wholeness
When we’re connected to ourselves—aware, present, unburdened by grasping—we naturally connect more authentically with others. We are more able to recognize that the person across from us is not a means to achieving our goals but instead a living, breathing miracle of existence, and one we are deeply connected to.
We begin to see, in the words of the poet John Donne, that “no man is an island, entire of itself.” And over time, we internalize Donne’s closing lines, beautifully expressive of the ancient Buddhist truth of dependent origination:
“Any man’s death diminishes me,
Because I am involved in mankind.
And therefore never send to know for whom the bell tolls;
It tolls for thee.”
[Photo: YURIMA/Adobe Stock]
Original article @ Psychology Today.
October 9, 2025
Navigating America’s AI Action Plan – a guide for business leaders

by Faisal Hoque, Pranay Sanklecha, and Paul Scade
The US government has published a blueprint for maintaining American dominance in the age of AI. Here’s how global firms can respond while managing opportunities and risks.

Artificial intelligence is reshaping the competitive landscape, and governments are racing to position their economies for leadership. In the United States, the recently announced AI Action Plan signals a decisive shift in policy, favoring deregulation and rapid innovation over precaution and control.
Winning the Race: America’s AI Action Plan marks an important shift in both policy and philosophy. Rather than the government coordinating and safeguarding AI development – as under previous US administrations and as continues to be the case in most other countries – the plan sets out an approach that emphasizes deregulation, private-sector development, and a “try-first” mentality.
In pursuit of this goal, it outlines dozens of federal policy actions distributed across three pillars:
- Accelerate AI innovation
- Build American AI infrastructure
- Lead in international AI diplomacy and security

While the plan still includes room for national standards and approaches to evaluating AI tech and usage, such as NIST’s AI Risk Management Framework, the underlying philosophy is striking in its openness. Its goal is to “dismantle regulatory barriers” and support faster and more far-reaching AI innovation than was previously possible.
For multinationals operating in, or seeking to operate in, the United States, these principles create both opportunities and risks.
On the one hand, a minimalist, light-touch regulatory environment will enable businesses to test minimum viable products (MVPs), implement new tools, and bring new products to market more rapidly than ever before. At the same time, with fewer prescriptive federal guardrails, there will be a heightened risk that flawed algorithms or systems that are rushed into production might fail to meet consumer needs or may even act directly counter to their interests. Such outcomes carry significant ethical and brand risks for the company responsible.
As outlined in an I by IMD article ‘From consumers to code: America’s audacious AI export move’, senior leaders cannot afford to ignore the new environment created by the Action Plan, if only because competitors will move quickly to exploit it. Effective engagement requires a systematic approach that works carefully to make the most of the full range of opportunities the plan offers while simultaneously minimizing the risks involved.
Dual mindsets: radical optimism and deep caution
Responding effectively to the AI Action Plan requires a dual mindset that encompasses both radical optimism and deep caution. In our recent book TRANSCEND and an accompanying article in Harvard Business Review, we set out two complementary frameworks designed to help companies operationalize this dual mindset while thinking systematically about how to implement AI-driven transformation.
The OPEN framework – Outline, Partner, Experiment, Navigate – helps companies fully harness AI’s enormous potential, while the CARE framework – Catastrophize, Assess, Regulate, Exit – ensures they effectively manage AI’s equally enormous risks.

These frameworks provide valuable scaffolding for developing approaches to implementing AI in any environment. But four components – Partner, Experiment, Catastrophize, and Exit – offer particular value when responding to the US AI Action Plan.
Partner
In the OPEN framework, the Partner stage offers a tool for choosing and shaping relationships so you can bridge resource and knowledge gaps – whether what’s missing is data or hard-won know-how. The goal here is to narrow the gap between ambition and capability. In the context of America’s AI Action Plan, progress will often run through partnerships. These include conceptual partnerships between government and the private sector – with government coordinating priorities and standards while firms do the building – and practical collaboration among companies to use common open-weight AI stacks across the supply chain, consistent with the plan’s push for interoperability.
Partnerships also make you more resilient. The plan emphasizes the importance of standards, evaluation, and secure supply chains. Working with the groups shaping those norms will help ensure that your compliance evidence is reusable across business units, boards, and regulators. Understanding future export-control expectations can also reduce the chances that critical components or export paths become unavailable.
- Co-develop evaluation. Join NIST/CAISI convenings in your domain; adopt shared test suites, failure modes, and incident reporting formats so results are easily shared within your business and with regulators.
- De-risk the supply chain. Build an export-control-aware vendor map; require attestations on chip origin, model provenance, and data lineage, with audit and step-down clauses if a partner’s status changes.
- Encourage open-weight AI stack adoption. Partner with SMEs in your network to jointly adopt the same open-weight models.

Experiment
In OPEN, Experiment means running small, real-world trials to answer practical questions, such as “What value does this create?” “What are the risks and costs?” and “What would it take to run it at scale?” The aim is to learn quickly and inexpensively on the way to making an actionable decision about whether to take a program further or kill it. In the regulatory environment created by America’s AI Action Plan, the US market will provide uniquely beneficial conditions for effectively running “AI labs in the wild.” It will be possible to put new features in front of customers more rapidly and with fewer restrictions, dramatically shortening the path from proof-of-concept to working products.
- Design reversible trials. Ship production-adjacent pilots with a small footprint. Decide in advance which results will signal “scale,” “fix,” or “stop.”
- Build with the customer. Co-design pilots with customers and schedule fast feedback cycles so changes happen rapidly, and users see progress. Aim to make customers feel like you are experimenting alongside them, not on them.

Catastrophize
In the CARE framework, Catastrophize means identifying the worst ways an AI system could plausibly fail so that it is possible to prepare for the risk and avoid or mitigate it. With the light-touch, try-first environment envisioned in America’s Action Plan, the responsibility for catastrophizing shifts decisively to businesses. As the government limits its regulatory requirements, businesses need to take up the slack by becoming the primary custodians of responsible AI implementation.
This is not just an ethical obligation. It is also sound business practice. A proactive approach to identifying risks and defining what levels are acceptable and what are not means leaders can approve appropriate plans with greater speed and confidence.
- Run pre-mortems and red-team sprints. List concrete harm hypotheses, stress-test with realistic inputs and adversarial prompts, estimate potential impact, and record explicit stop/rollback triggers.
- Set risk appetite and binding guardrails. Define unacceptable uses, threshold metrics (e.g., error rates, bias, safety), and required human-in-the-loop points. Publish them where teams will see them.

Exit
In CARE, Exit means determining well in advance of need precisely under what conditions and how you will stop or unwind an AI implementation. While the AI Action Plan includes a range of guardrails and standards, it leaves control over the exit process almost entirely in the hands of businesses. Pre-defined exit plans shorten crises, limit harm, and preserve value, so it is important to treat them as part of the design, not an afterthought.
Think of the Exit step as the development of a three-part architecture, with technical, reputational, and legal layers. The goal is simple: if an AI system misfires, or if public sentiment requires an implementation to come to an end, your teams must know who pulls the plug, what gets rolled back, and how fast normal service resumes.
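To make the question of who pulls the plug concrete, here is a minimal sketch of a kill-switch-plus-fallback wrapper. It is an illustration only, under assumed interfaces: the ai_route_order and rule_based_route functions and the AI_ENABLED flag are hypothetical stand-ins, not drawn from the Action Plan or any specific system.

```python
import logging

# Hypothetical kill switch: flip to False to unwind the AI path instantly.
AI_ENABLED = True

def ai_route_order(order):
    """Stand-in for an AI-driven supply chain decision (assumed interface)."""
    raise RuntimeError("model regression detected")  # simulate a misfire

def rule_based_route(order):
    """Deterministic backup process that humans already understand."""
    return {"order": order, "route": "default-warehouse", "decided_by": "rules"}

def route_order(order):
    """Use the AI path only while enabled; always fall back cleanly."""
    if AI_ENABLED:
        try:
            return ai_route_order(order)
        except Exception as exc:
            logging.warning("AI component failed (%s); reverting to rules", exc)
    return rule_based_route(order)

print(route_order("PO-1042"))  # service continues on the rules path
```

The design point is that the fallback path exists, and is exercised, before a crisis, so reverting becomes a routine branch rather than an emergency rebuild.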
- Layer 1: Technical resilience. Build every system with a parallel process that can immediately take over if the AI component fails. If your AI-driven supply chain optimization goes haywire, can you revert to human decision-making within hours, rather than days?
- Layer 2: Reputational firebreaks. Keep experiments at arm’s length from the core brand: use “lab” labels, limited cohorts, and explicit opt-ins. Prepare customer comms, refund/credit policies, and de-publishing steps in advance.
- Layer 3: Legal isolation. Separate high-risk trials in distinct entities where appropriate. Update Terms & Conditions to flag experimental status and data use, add audit trails to show reasonable care, and secure insurance that names AI failure modes (e.g., model error, IP/data leakage). Include supplier obligations in contracts, such as prompt breach notice, quarantine, artifact sharing, and cooperation during wind-down.

America’s AI Action Plan represents a watershed moment for global multinationals, offering unprecedented freedom to innovate while demanding equally robust responsibility in implementation. Success in this new landscape requires that companies move beyond traditional risk-reward calculations and embrace a sophisticated dual approach – pursuing transformative opportunities while vigilantly managing risks.
By adopting these complementary twin tracks while focusing on strategic partnerships, rapid experimentation, proactive risk identification, and clear exit strategies, multinationals can position themselves not just to navigate the AI revolution but to help shape its trajectory.
Original article @ IMD.
October 8, 2025
Want to Prepare for the Future?

KEY POINTS
- We can’t eliminate threats, but we can cultivate agency to navigate an uncertain world.
- Autonomy without morality is dangerous; we need a moral compass.
- Stoicism builds autonomy and history provides moral wisdom. Together they create principled humans.
Right now, the future somehow manages to feel both deeply uncertain and terrifyingly dystopian. Society is polarized, and communities are fracturing. The news is dominated by reports of war. The job market is threatened by AI. Digital addiction is rising, and mental health is suffering.
We can barely predict the threats, let alone eliminate them. The forces driving phenomena such as social polarization, wars, digital addiction, and AI are more powerful than any individual, and we cannot, as individuals, materially affect their overall direction.
So what can we do? How can we prepare ourselves to not just cope but actively thrive in a world that sometimes seems like it is on the verge of falling apart?
Intention Is the Foundation
As the Buddhist monk Pema Chodron emphasizes, we must start where we are. In this context, it means accepting—truly accepting—that we can neither predict nor control the future.
Once we do this, we begin to experience what Chodron talks about: a sudden liberation of energy, because all the energy we put into denial is suddenly no longer needed for that purpose. Instead, we can use it to do what we need to do to thrive in whatever situation we are in.
What might this be?
Well, I don’t have a complete answer. But I am convinced that the foundation is developing the ability to live with intention. Intention is nothing other than the ability to make free and conscious choices, and when we are able to do this consistently, we become the authors of our own stories, protagonists who co-create the world rather than being determined by it. Intention allows us to live proactively rather than reactively—on our terms rather than someone else’s.
In a way, this is simply what freedom means: the ability to act with intention. And developing this ability is fundamental for meeting every threat we’ll face. For example, will we think critically or get swept up in tribal consensus? Will we recognize online manipulation or become digitally addicted? Will we use AI as a tool or become dependent on it?
In each case, the determining factor will be whether we are able to be free, whether we can respond with intention, with deliberate choice rather than impulse, with autonomy rather than compulsion.
In the end, this is about something more than success or even survival. It is about living an honorable, meaningful, and authentic life, a life that is truly ours and that we are proud to stand by.
Start With Stoicism
So how do we develop this ability?
One good place to start, I think, is with an ancient philosophy that has never been more relevant than it is today: Stoicism.
The Stoics taught a fundamental distinction: There are things within our control and things beyond it. We cannot control whether wars break out, whether economies collapse, whether technology disrupts everything we know. But we can control how we respond. We can control what we do and we can control who we are.
People who are able to live this truth have internal sovereignty. And that’s what Stoicism is designed to develop.
Stoicism teaches us to ask: What is within my control right now? Where can I act? What can I choose? And this simple practice helps us transform from reactive victims into intentional agents.
Cultivate a Moral Compass: History as the Learning Mechanism
History is full of people who were highly intentional actors but who chose to do terrible things with that autonomy. These people had internal sovereignty, but they lacked moral direction.
This points to something crucial: We don’t just want to make free choices. We also want to make good choices.
So, alongside internal sovereignty, we need something else: a moral compass. And history is a fantastic learning mechanism for developing this.
Why? Because history shows us the consequences of moral choices—not in theory but for real, across time and space, with real human beings affected by the outcomes. History provides us with an almost infinite source of real-life case studies from which we can learn. We see how individuals reacted to difficult circumstances and also how they took advantage of opportunities. And all of this is incredibly rich information that we can then apply in our own lives.
It is not that studying history will provide us with one perfect moral truth. Rather, the point of studying history is that it shows us possibilities, teaches us to recognize patterns, and thereby gives us knowledge that helps us make our choices today.
As the saying goes, wise men learn from other men’s mistakes, fools from their own. The student of history uses the whole world’s experiences to become wiser.
Practices to Start Today
How do we actually develop a moral compass and the ability to live intentionally?
Here are three practices you can begin immediately:
- Daily Stoic Reflection: End each day by asking: What was in my control today? How did I respond? What would I do differently?
- Ask the Pattern Question: When you encounter any strong claim—cultural, social, technological—ask: Has humanity tried this before? What happened? What patterns do I recognize?
- Create Space for Hard Questions: Make time weekly to discuss one hard question about ethics, morality, or meaning with someone you respect. No right answers required, just honest thinking about difficult issues.

Facing the Future
Whatever the future may bring, we can try to meet it as free and moral people, people who act with intention and maintain their principles even when the world feels like it may be falling apart.
That preparation begins with practicing Stoicism, which helps us live with intention. And it continues with the study of history, which helps us develop moral discernment and wisdom. Together, these practices develop something powerful: the ability to make free choices and the ability to use that freedom in service of things that really matter.
[Photo: Prokopchuk/Adobe Stock]
Original article @ Psychology Today.
October 1, 2025
Leadership in a world of autonomous algorithms

A new business infrastructure is emerging with enormous potential impact but almost no conscious design. In this new world, algorithms negotiate with algorithms, making decisions that shape markets, determine the course of careers, and decide whether companies succeed or fail. Humans, meanwhile, risk being left to watch from the sidelines.
On LinkedIn, posts written by AI models are liked by bots and commented on by AI assistants. In recruiting, candidates use AI to draft résumés while companies use AI to evaluate them. In procurement, some organizations are already using AI to draft requests for proposals, or RFPs—detailed documents that invite vendors to bid on supplying goods or services—while vendors are turning to AI to generate the proposals they have been invited to submit.
The efficiency gains that AI can deliver are very real—automation can save time, cut costs, and improve consistency. But this does not mean we should ignore the dangers that those gains obscure. If we want to avoid slipping into a world in which humans are increasingly irrelevant, we need to be both alert to the risks and intentional about designing processes and tools to mitigate them.
WHAT CHANGES WHEN ALGORITHMS INTERACT
In order to navigate this new reality, business leaders must first understand it more precisely. Here are four important features of our algorithmically abstracted world:
The Audience Changes
New technologies often transform business, but what’s happening now is different. The new technology isn’t just providing new tools; it’s providing a new audience. This isn’t an entirely new phenomenon. Humans have been tuning content for algorithms in some areas for years, as in the case of search engine optimization for websites. But now not only is the scale changing; the algorithmic audience is also taking over both sides of the conversation.
When algorithms speak to other algorithms, language changes from a medium for human understanding into code for machine processing. For a job seeker writing an application today, the best path forward is not always to try to tell their professional story in a way that will be compelling to a human audience. Instead, it will often be better for them to encode keywords and phrases to maximize their score in the applicant tracking system (ATS). And, ironically, the best tools for creating this kind of optimized application are often algorithmic themselves: generative AI models.
This does not mean that communication has stopped. It has not. Rather, it has changed. In addition to, and sometimes in place of, human meaning, a different kind of meaning is becoming increasingly important, one that is measured in match scores, engagement rates, and ranking positions. Humans are still involved in the loop, but only at certain points, and much of the process goes on without human intervention.
Metrics Are Replacing Reality
In 1975, the British economist Charles Goodhart came up with what is now known as Goodhart’s Law—the idea that when a measure becomes the target for action, it ceases to be a good measure. The idea is that once people make decisions with the goal of meeting certain metrics, the underlying behavior that the metric was meant to measure is changed as people shift from focusing on the real, underlying goal to trying to optimize their score.
Briefly put, once we understand there is a system, we always try to game it.
Goodhart’s Law becomes increasingly relevant as we move toward autonomous algorithmic interactions. For example, ATS systems score candidates based on keyword matches, years of experience, and educational credentials. Candidates respond by using AI tools to optimize for exactly these metrics.
But high scores in the assessment system then lose their intended meaning: Where a high score once meant that a candidate was probably a good fit for the job, now it may just mean that the candidate has access to tools that are good at gaming the scoring system.
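As a toy illustration of this dynamic, consider a naive keyword scorer. This is a deliberately simplified sketch, not how any real applicant tracking system works; the keyword list and resume snippets are invented for the example.

```python
import re

# Invented keywords a hypothetical ATS is configured to reward.
KEYWORDS = {"python", "leadership", "agile", "stakeholder", "analytics"}

def ats_score(resume_text: str) -> int:
    """Score a resume by counting distinct keyword matches."""
    words = set(re.findall(r"[a-z]+", resume_text.lower()))
    return len(KEYWORDS & words)

honest = "Led a small team building reliable data pipelines in Python."
stuffed = "Python leadership agile stakeholder analytics " * 3

print(ats_score(honest))   # 1 -> a strong candidate, weakly scored
print(ats_score(stuffed))  # 5 -> an empty resume with a perfect score
```

Once applicants optimize against the proxy, a high score stops tracking the fit it was meant to measure: Goodhart's Law in a single function.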
Tacit Knowledge Erodes
Teachers and sports coaches have long known that much of the most important learning for their students or athletes happens in the process of doing the work rather than in a flash of insight when an explanation is given.
When managers write performance reviews, they aren’t just documenting performance; they are also developing their ability to observe, evaluate, and articulate feedback. When teams craft project proposals, in addition to bidding for work, they are clarifying their thinking, discovering gaps in logic, and building shared understanding.
This tacit knowledge—the skills and insights that emerge from doing rather than consuming information—erodes when AI takes over the process.
Purpose Shifts
Our current business functions evolved in a human-driven world. They contain processes designed by humans, for humans, to achieve some human goal. When these processes are outsourced to autonomous algorithmic interactions, often they stop serving the original purpose. In fact, the whole point of doing them can be lost.
Take performance reviews. These originally had the clear goal of assessing employee capabilities to support actions aimed at increasing the effectiveness of the human worker. But if we end up with AI on both sides of the interaction, the whole process becomes performative. For instance, if a knowledge worker uses AI to write his reports, and his manager uses AI to generate the worker’s performance reviews, the original purpose of the review process is no longer being served.
This doesn’t mean that nothing valuable is taking place: an AI assessment of the quality of AI outputs can still tell us something useful. But it does mean that the reason for carrying out the reviews is now a pretense—improving the effectiveness of the human worker has become irrelevant to the process that is actually being conducted.
FOUR STRATEGIC RESPONSES
As algorithms increasingly transact with algorithms, business now operates on two levels at once: an algorithmic layer where signals are exchanged between machines, and a human layer where meaning and value are created. Leaders must guide the interaction between these layers so that efficiency gains do not come at the expense of judgment, learning, or purpose. Here are four practical steps:
- Protect Human Judgment: Not every decision can or should be automated. Leaders must deliberately ring-fence certain domains—final hiring calls, creative development, setting organizational purpose—and ensure that human judgment retains the final say in these areas. Generally, where values, creativity, and culture are at stake, a human should be the final decision maker.
- Translate Between Worlds: As business language splits into two distinct tracks—signals for machines and meaning for humans—leaders will need translators. These are people and processes that can interpret ATS scores, SEO rankings, or engagement metrics and reconnect them with human insight. A résumé may score well, but does the candidate bring originality? A post may “perform,” but did it actually persuade? Translation layers stop organizations from mistaking algorithmic proxies for real understanding.
- Design for Learning: Some activities are valuable not only for their output but also for the tacit knowledge they generate. Leaders must protect key processes as sites of practice, even if they are slower or less polished. Short-term efficiency gains should never come at the cost of eroding the capabilities on which long-term success depends.
- Protect the Purpose: When business activities shift into algorithmic exchanges, it’s easy for the form to survive while the function disappears. A performance review still gets written, but the developmental conversation never happens. A proposal gets generated, but the shared thinking never occurs. Leaders must continually bring activities back to their underlying purpose and ensure that the process still serves that purpose rather than becoming an empty performance.

Algorithms are now part of the basic fabric of business. Resisting this shift is as pointless as commanding the tide not to come in. But while this change is inevitable, it must still be managed and steered by leaders who are aware of what is at stake. By protecting judgment, translation, learning, and purpose, organizations can ensure that automation delivers efficiency without erasing the human meaning that business depends on.
[Source Illustration: Freepik]
Original article @ Fast Company.
September 29, 2025
Three myths that undermine AI success

The real AI story in most organizations isn’t about algorithms; it’s about habits. New tools arrive with impressive demonstrations and confident promises, yet the day-to-day routines that decide what gets attention, who can take a risk, and what counts as a “good job” tend to remain the same. Leaders set up special units, roll out training, or look for quick savings, only to find that the old culture quietly resets the terms. When that happens, early gains fade, adoption stalls, and cynicism grows.
This article draws on our books TRANSCEND and REIMAGINING GOVERNMENT to look at three recurring myths that help prop up existing cultures and prevent the deep transformations that are needed to support successful AI implementations. Transforming a business to make the most of AI means moving past these comfortable stories and changing the conditions under which the whole organization works.
MYTH 1: ‘INNOVATION UNITS WILL SAVE US’
After five years of operation, the U.K.’s Government Digital Service (GDS) seemed untouchable. Created in 2011, the GDS revolutionized Britain’s digital services. With the goal of reenvisioning “government as a platform,” it consolidated hundreds of websites into a single, easy-to-use portal, cut waste by forcing departments to unify their platforms, and showed that, with the right attitude, even government agencies could move with the speed of a startup. In 2016, the U.K.’s digital services were ranked the best in the world. Yet by 2020, the GDS had disappeared as a force within the U.K. government.
This pattern repeats regularly across corporate innovation labs: create an elite unit, give it special rules, celebrate early wins, watch it die. An innovation unit can deliver extraordinary results so long as it has senior leadership protection, free-flowing resources, and an internal culture that attracts exceptional talent. But the model also contains the seeds of its own demise. The outsider status that enables breakthrough innovation makes large-scale sustainability nearly impossible. When executive sponsors move on, the shield drops, and organizational antibodies start reasserting cultural norms.
This predictable lifecycle applies to AI-focused teams as much as those driving any other type of technological change. Leadership transitions are inevitable. New executives question special rules. The innovation unit that draws its power from being outside the system gets pulled back in again, and the flow of novel ideas slows to a trickle.
The lesson to take from this isn’t that we should abandon innovation units—it’s that we should use them strategically and follow up on the gains they make. Innovation units should be seen as catalysts, not permanent solutions. While these teams are forging ahead with quick wins and proving new approaches, organizations also need to transform their broader culture in parallel. The goal shouldn’t be protecting the innovation unit indefinitely but aligning organizational culture with the innovative approaches it pioneers. If innovation units are sparks, culture is the oxygen. You need both—at the same time—or the flame dies.
MYTH 2: “OUR PEOPLE JUST NEED TRAINING”
Companies spend millions teaching employees to use AI tools, then wonder why transformation never happens. The reason is that the underlying problem isn’t just about skills—it’s about the imagination needed to use them effectively. You can train your workforce to operate the new technology, but you can’t train them to be excited about it or to care where it will take the business. That requires change at the cultural level.
When it comes to AI, the real gap is conceptual, not technical. Employees need to shift from seeing AI as a better calculator to understanding the role it can play as a thought partner. This requires more than tutorials. It means showcasing how AI can transform workflows and then rewarding its creative use. Show a sales team how AI can predict client needs before calls, not just transcribe them afterward. Demonstrate how legal teams can shift from document review to strategic counseling.
When organizations tell employees to “use the tools” but don’t change the social norms around using them, people can be punished for doing exactly what leadership asked. A recent experiment with 1,026 software engineers found that when reviewers believed code was produced with AI assistance, they rated the author’s competence lower by about 9% even though the work was identical. Even more concerning was that the penalty was larger for women and older engineers, groups who tended to be treated negatively in assessments already. In a companion survey of 919 engineers, many reported hesitating to use AI for fear that adoption would be read as a lack of skill—illustrating why access and training don’t translate into uptake when the culture signals that visible AI use will harm credibility.
MYTH 3: “AI MAKES IT EASY TO SLIM DOWN THE WORKFORCE”
There’s a seductive promise being sold to companies right now: the way to realize AI’s value is simply to replace as many workers as you can. Fire half your staff, pocket the savings, let machines handle the rest. Simple arithmetic for simple minds.
The messy truth is that AI can and will replace many human jobs, but it won’t do it cleanly and it won’t do it easily. In most cases, the idea that you can simply swap out the human component and replace it with a machine just doesn’t work. Humans work together as parts of multilayered social structures that have evolved as ecosystems. It’s often the case that if you change one part, there will be major consequences for another. If we rush into automation too quickly, we risk pulling away the pillars that hold the whole structure up.
Think about the tedious hours that junior analysts spend cleaning data, checking figures, and building models from scratch. Or the work a newly appointed manager will do overseeing performance and filling in paperwork. We call it grunt work, but it’s actually how humans develop the skills they will need in more senior roles. Take away the entry-level jobs and you lose the career path that delivers the highly skilled senior leaders you need. Allow AI-powered “deskilling” to take place and you lose the human judgment and oversight that institutions rely on.
Klarna’s trajectory shows both sides of this equation. In early 2024, its AI assistant handled two-thirds of customer chats, delivering resolution times under two minutes and a 25% drop in repeat inquiries. By 2025, Klarna’s leadership was publicly acknowledging the limits of an AI-only approach and began reopening human roles and emphasizing the customer experience alongside automation.
The real question isn’t how many people you can eliminate. For effective AI implementation, you need to understand that humans make essential contributions that don’t appear in their job descriptions.
THE CULTURE TRANSFORMATION PLAYBOOK: FIXING THE MYTHS
Culture change depends on habits, incentives, and expectations, not just adding new tools. The playbook that follows presents concrete steps that leaders can take now to avoid the pitfalls many companies are running into.
- Run Parallel Transformations (Fixes Myth 1). The innovation unit delivers quick wins while a separate initiative transforms broader culture. These must happen simultaneously, not sequentially. Use the innovation unit’s protected status and early victories to create organizational belief in change but invest equally in preparing the mainline culture for what’s coming. Without parallel tracks, the innovation unit becomes an isolated island of excellence that will eventually be washed away.
- Transform the Middle Layer (Fixes Myth 2). Middle managers are the real gatekeepers of culture change. Stop wasting energy trying to convert skeptics. Instead, identify the curious and give them authority to experiment, budget to fail, and cover when they fall short of traditional metrics. Try giving selected managers a micro-charter to implement change in their team, along with a weekly “show-the-work” session (what AI was used, what was accepted or overruled, and why) to share what they’ve learned with peers.
- Build Alternative Learning Paths (Fixes Myth 3). If AI eliminates the experiences that build judgment, you must consciously re-create them. High-fidelity simulations, rotation programs, and “human days” working without AI become existential necessities. Explicitly preserve activities that develop pattern recognition and business instinct. The investment might seem wasteful until you realize the alternative is a workforce that can operate tools but can’t respond when something breaks.

THE CHOICE
Culture transformation is harder than technology implementation. It’s messier, slower, and impossible to fully control. Most companies will choose the easy path: buy the AI, train on the tools, create an innovation lab, and hope for the best.
The few who choose the hard path—parallel transformation, cultural evolution, preserved learning experiences—will gain powerful competitive advantages. They’ll have workforces that don’t just use AI but think with it, cultures that don’t just tolerate change but expect it, and organizations that don’t just survive disruption but drive it.
[Source Photo: Pexels]
Original article @ Fast Company.
September 26, 2025
Stop Outsourcing Your Brain

KEY POINTS
- Every shortcut you take with AI is growth you never earn.
- Mastery isn’t a moment of genius; it’s built through deliberate, repeated practice.
- AI makes the skilled unstoppable—and the unskilled overconfident.
Build apps without coding experience! Create McKinsey-quality presentations in minutes! Write novels that sell! Trade stocks like a Wall Street veteran!
The promises are familiar to anyone who spends any time online. And really, there is just one promise underpinning all the others. This is the most seductive promise of the artificial intelligence revolution: that generative AI can turn novices into experts.
Sound too good to be true? Well, it probably is.
The truth is that AI doesn’t create expertise—it amplifies it. Give an expert access to effective generative AI tools and they become superhuman in their domain. Give a novice the same tools and they become … a dangerously confident novice.
For example, a novice might give ChatGPT the prompt: “Write an article about innovation.” In response, the AI will produce something fluent: a text with perfect grammar, logical flow, all the right buzzwords. But it’s empty calories. There will be no original insight, no authentic voice, no understanding of what the writer’s specific audience needs.
An experienced writer will take a different approach. For example, they might start by using AI to research opposing viewpoints they may have missed. They will prompt for structural alternatives to the outline they have prepared or will ask the model to find holes in their argument. They spot immediately when the AI makes mistakes or hallucinates, because they are already experts in their subject matter, and they know how to correct such errors quickly.
In the hands of the expert, AI becomes a tool to improve the quality of writing while also allowing the writer to produce high-quality work more rapidly. But in the hands of the novice, AI becomes a tool that helps generate mediocre slop very quickly and in almost boundless quantities.
To extract real value from AI, expertise isn’t optional—it’s essential. So the question becomes: What do we need to do to become experts?
The Boring Truth About Expertise
Here’s a fundamental and highly disappointing truth about expertise: it takes time and effort to develop. There is no magic prompt that will allow you to shortcut the process of becoming an expert. If you want expertise, you have to do the work.
In Buddhism, the term bhavana is often used to refer to the idea of cultivation of a particular mindset or state through sustained practice. It’s how meditation practices work; you don’t sit down for 30 minutes one day and then stand back up transformed. You sit every day, over and over again, always trying to bring your mind back to the object of attention, and, over a period of decades, that unglamorous daily practice cultivates certain qualities of mind and of being.
This is exactly how expertise works. You don’t become a great cook or artist or scientist or writer as the result of a single flash of epiphany. You gain the expertise by learning over time, by practicing, by failing and succeeding.
The Danger of AI
AI makes it tempting to skip over the cultivation and jump straight to the results. Why struggle through writing badly when AI can produce something clean immediately? Why fumble through analysis when AI can generate frameworks instantly?
These shortcuts are seductive, but pursuing them can be devastating for your real development. Every time you let AI write instead of wrestling with the words yourself, you miss the chance to develop your voice. Every time you copy-paste AI’s analysis instead of thinking through the problem, you forfeit the opportunity to build your own powers of judgment.
Now, this doesn’t mean that we shouldn’t use AI. But it does mean that we should use AI with discernment and intention. We should understand that using AI often means missing an opportunity to develop expertise, and we should wisely choose when it would be better for us to go through the struggle so that we learn. We should certainly never default to using AI thoughtlessly, as the result will be a thinning out of our capacity to think.
Building Expertise
We build expertise not through grand gestures but through mindful repetition.
- Choose one domain and go deep. The temptation with AI is to become a generalist in everything, writing code one day, designing graphics the next. Resist this. Pick the field in which you want true mastery, and stick to it. Without mastering a specific domain, you’re a tourist everywhere.
- Practice without AI regularly. You need sessions where it’s just you and the work. Write drafts in a blank document. Solve problems with pen and paper. Cook without checking recipes. This isn’t luddism; it’s building the fundamental skills that make AI useful rather than necessary.
- Work at your edge. Comfort is the enemy of expertise. Find problems slightly beyond your current ability, work with constraints that force new solutions. The struggle is where expertise is formed.
- Establish rituals. Expertise grows through consistency. Create practices and rituals: Write your morning pages before checking your email; do your code reviews before lunch; conduct analysis sessions every Friday afternoon. The regularity matters more than the duration. Fifteen minutes daily beats three hours sporadically across the week.

The Joy Is in the Struggle
AI promises to make things easy. But what if the struggle is the point?
Humans are designed for effort. We need challenge, we need something to push against, to work towards. Watch a child finally tie their shoes after fifty failed attempts. That triumphant “I did it!” isn’t about the tied shoe; it’s about the earned victory.
The quest for expertise is about more than satisfaction. It’s also about self-discovery and self-development. When we struggle, we learn who we really are. Each struggle is a mirror, revealing strengths and weaknesses, tearing down our image and showing us the truth. And in the struggle we grow.
In fact, it may even be the case that growth is impossible without struggle.
And there is a joy in this. There is a deep joy in committing to something, to putting the work in, to staying true and constant to our commitment, through the good times and the bad.
Why allow AI to steal this joy from us?
[Photo: Juliana Valkovskaya/Adobe Stock]
Original article @ Psychology Today.
September 16, 2025
There isn’t an AI bubble—there are three

When even Sam Altman thinks there is an AI bubble, there most likely is an AI bubble. But it’s even worse than that. There isn’t just one AI bubble: there are three.
First, AI is almost certainly in what economists call an asset bubble or a speculative bubble. As the name suggests, this is when asset prices soar well above their fundamental value. A classic example of this kind of bubble is the Dutch “tulip mania” of the 17th century, when speculators drove up the price of tulip bulbs to astronomical heights, convinced that there would always be someone willing to pay more than they had.
As I write, Nvidia is trading at 50 times earnings, Tesla at an astounding 200 times, despite falling revenues, while the rest of the Magnificent 7 (Google, Amazon, Apple, Microsoft, and Meta) are enjoying significant boosts thanks to the bets they are taking on an AI-led future. The chances of this not being a bubble are between slim and none—and while Slim hasn’t quite left town, he’s booked his ticket and is packing his bags.
Second, AI is also arguably in what we might call an infrastructure bubble, with huge amounts being invested in infrastructure without any certainty that it will be used at full capacity in the future. This happened multiple times in the late 1800s, as railroad investors built thousands of miles of unneeded track to serve future demand that never materialized. More recently, it happened in the late ’90s with the rollout of huge amounts of fiber-optic cable in anticipation of internet traffic demand that didn’t turn up until decades later.
Companies are pouring billions into GPUs, power systems, and cooling infrastructure, betting that demand will eventually justify the capacity. McKinsey analysts talk of a $7 trillion “race to scale data centers” for AI, and just eight projects in 2025 already represent commitments of over $1 trillion in AI infrastructure investment. Will this be like the railroad booms and busts of the late 1800s? It is impossible to say with any kind of certainty, but it is not unreasonable to think so.
Third, AI is certainly in a hype bubble, which is where the promise claimed for a new technology exceeds reality, and the discussion around that technology becomes increasingly detached from likely future outcomes. Remember the hype around NFTs? That was a classic hype bubble. And AI has been in a similar moment for a while. All kinds of media—social, print, and web—are filled with AI-related content, while AI boosterism has been the mood music of the corporate world for the last few years. Meanwhile, a recent MIT study reported that 95% of AI pilot projects fail to generate any returns at all.
But what does all this mean for your organization? What should you be doing to respond?
DO THE BUBBLES MATTER?
Bubbles are fine things in bottles of champagne, but in business contexts they are generally viewed as something to be avoided. So, when we see that AI is likely in three bubbles simultaneously, the immediate instinct may be to swerve urgently away from AI.
I recommend resisting that instinct. Two of the three bubbles are largely irrelevant for most organizations and should simply be ignored.
The speculative bubble is the result of a modern version of tulip mania—investors bidding up the price of equities on the hope of future performance. The overheated valuations and crazy multiples are only problems for organizations that are involved in or directly exposed to the financial speculation—and most organizations are not. A market crash may cause broader pain for the economy, but that is a potential environmental issue that all businesses will need to navigate. It should have little direct effect on a carefully planned AI implementation strategy.
And as for the infrastructure bubble, well, if it turns out we really are building too much, the problem will be one of overcapacity, not overvaluation. For most organizations this is not only irrelevant, it may also lead to positive outcomes, because overcapacity will mean falling prices for those who want to use that infrastructure.
This leaves the hype bubble, and this is where things get interesting. The hype bubble does have an important lesson for most organizations, but it isn’t the one we might think—even if 95% of AI pilot projects fail, the issue here isn’t that AI can’t deliver value, but that many companies are approaching the technology in the wrong way.
DOTCOM DEJA VU
I’ve seen this before. I lived through the dotcom boom (and bust) of the late ’90s and early 2000s. I saw Pets.com burn through $300 million before imploding, I saw the NASDAQ crash by 78%, and I read the articles by pundits who authoritatively declared that the internet was a fad.
Yet during that same meltdown, Amazon was methodically building fulfillment centers and refining its recommendation algorithms. Google was quietly perfecting search. PayPal was solving payment friction. And thousands of companies were developing their first e-commerce capabilities, with greater or lesser degrees of success.
The point is simple: a thing can be hyped beyond its actual capabilities while still being important (and to be fair to Sam Altman, he also makes this point in the piece quoted above). Just because AI is in a hype bubble does not mean that AI is “fake news” or that there isn’t huge value to be extracted from it. The hype bubble simply means that some people are overexcited about AI—it doesn’t mean that there isn’t something to be legitimately excited about.
I made this argument in the dotcom era and I make it again now. What happened then will happen with AI. When valuations correct—and they will—the same pattern will emerge: companies that focus on solving real problems with available technology will extract value before, during, and after the crash.
In sum, companies with systematic approaches to extracting value from the technology will thrive. What becomes crucial, then, is your approach to capturing that value. So, how do you actually achieve that goal?
THE VALUE CREATION PLAYBOOK
The companies capturing real value follow three pillars of systematic implementation:
Problem-First Architecture starts by mapping organizational friction points. Where do humans waste time on repetitive work? Where do information bottlenecks slow decisions? What processes consistently produce errors? Only after identifying these problems do successful companies consider AI solutions.
Portfolio Balance means mixing time horizons and risk levels. Quick wins (one to three months) might include off-the-shelf tools for document processing. Strategic bets (three to 12 months) could involve custom solutions for core business processes. Moonshots (more than 12 months) explore new business models. A retailer might implement an inventory chatbot this quarter while developing predictive analytics for next quarter and testing autonomous purchasing agents for next year.
Holistic Integration connects AI initiatives to each other and to business strategy. Successful companies break down silos between IT, operations, and business units. They create feedback loops between projects. A manufacturing company’s quality control AI feeds data to predictive maintenance AI, which informs supply chain AI. Each system makes the others smarter, creating compound value that isolated pilots never achieve.
This is how you build value regardless of bubbles: systematically, purposefully, and starting today.
BENEFITING FROM BUBBLES
Far from being a threat, the AI bubble might be the best thing that could happen to pragmatic adopters. Consider what speculative excess delivers: billions in venture capital funding R&D you’d never justify to your board; the world’s brightest minds abandoning stable careers to join AI startups, working on tools that you’ll eventually be able to use; infrastructure being built at a scale no rational actor would attempt, driving down future costs through overcapacity.
While investors bet on which companies will dominate AI, you can cherry-pick proven tools at competitive prices. While speculators debate valuations, you will be implementing solutions with clear ROI. When the correction comes, you’ll also be able to benefit from fire-sale prices on enterprise tools, seasoned talent seeking stability, and battle-tested technologies that survived the shakeout.
The dotcom bubble gave us broadband infrastructure and trained web developers. The AI bubble will leave behind GPU clusters and ML engineers. The smartest response isn’t to avoid the bubble or try to time investments in it perfectly. It is to let others take the capital risk while you harvest the operational benefits. The bubble isn’t your enemy. If you play your cards strategically, it can be a major benefactor.
A VALUABLE DISTRACTION
Perhaps the greatest gift of the bubble discourse is the distraction it provides. While commentators debate whether Nvidia is overvalued and conferences overflow with “AI bubble” panels, something interesting happens: the noise creates perfect cover for serious operators to build lasting value.
This psychological dynamic creates genuine competitive advantage. The bubble debate gives skeptics intellectual permission to wait—after all, why pursue that interesting AI project if the whole AI thing will inevitably crash? Meanwhile, companies quietly pursuing systematic AI implementation face less competition for talent, less pressure on timelines, and less scrutiny of their initiatives. The louder the bubble talk, the more space opens for those willing to take a methodical approach to building value.
[Source Photos: Freepik]
Original article @ Fast Company.
The post There isn’t an AI bubble—there are three appeared first on Faisal Hoque.
Can We Thrive in the Age of AI?

KEY POINTS
AI is already taking jobs: more than 10,000 job cuts this year can be attributed to generative AI.
But those who adapt can thrive. Build relationships, read emotions, ask better questions.
Think collaboration, not competition; don’t fight AI, partner with it.
“AI,” the “godfather of AI,” Geoffrey Hinton, recently warned, “will make a few people much richer and most people poorer.”
Why? Well, says Hinton, it’s in part because AI is going to take away a lot of jobs from a lot of people. And he is far from alone in thinking this. For example, a few months ago, Dario Amodei, the CEO of Anthropic, warned that AI could lead to a 50% reduction in white-collar jobs; Sam Altman, the CEO of OpenAI, has also repeatedly voiced this concern.
The problem is often framed in terms of the future—about what AI could do to jobs down the line. But the future has a habit of coming faster than we think.
For Annabel Beales, a UK copywriter who was replaced by AI, the future is already here. Amodei’s words weren’t a prediction; they were a description. And it’s not just copywriters who are at risk. Voice acting as a career could plausibly be annihilated by the use of AI to clone voices—and far from being a sci-fi possibility, this is already happening at scale.
Such stories are canaries in the coal mine, and the birds are singing ever louder. A recent report by Challenger, Gray & Christmas suggested that more than 10,000 job cuts this year can be attributed to generative AI. And perhaps the most vivid example is in the birthplace of AI, the tech industry, where giants like Microsoft, Meta, Amazon, and Oracle have all announced significant AI-driven job cuts this year.
So, what will become of our jobs and our careers and our livelihoods?
The fact is, no one knows for sure. So what follows isn’t a definitive answer but an attempt to think through the uncertainty: Can we thrive in the age of AI?
AI Isn’t Good at Everything
Here’s one basic truth that seems important to me: Generative AI is good at some things and not so good at others.
Generative AI excels at things like pattern recognition, data analysis, and generating content that follows predictable structures. So AI assistants like Claude, ChatGPT, Gemini, and Copilot can write decent formulaic emails, analyze spreadsheets, and create passable marketing copy. But they’re not so good at emotional nuance, personalization, or true innovation, and they also struggle with real-world complexity and patterns they’ve never seen before.
The upshot is that AI isn’t replacing humans or knowledge work per se. Rather, it’s more useful to see AI as unbundling knowledge work.
There are some pieces that AI can do better, and it is already taking those pieces over—routine, easily automated work like data entry, standard customer interactions, invoice processing, and scheduling.
But there are pieces that AI cannot do well, and here humans are more important than ever. For example, AI is good at routine customer service—but only humans can react with the empathy and nuance that turns someone making a complaint into a devoted and enthusiastic customer. And it is only humans who can judge when a situation calls for the routine customer service playbook and when a deviation from the norm is necessary.
To simplify: The AI revolution means that the routine parts of knowledge work get automated while the complex, nuanced parts are best left to humans. So to prepare for the AI age, we need to start developing the uniquely human skills that AI cannot replicate.
The Skills That Matter
So what uniquely human skills should we develop? Here are four that I think will be enormously valuable in an AI world:
Emotional Intelligence: AI can mimic emotion, but it cannot feel. It can simulate the behavior of humanity, but it cannot be human. And this is where opportunity lies. We need to develop our ability to connect, to build trust, to read unspoken dynamics, to attend to the heart as well as the head. The age of AI makes such qualities even more important. And only humans have them. Practice: Put your phone away during conversations and really listen.

Relationship-Building: In a world of deepfakes and AI-generated everything, authentic human relationships become more valuable, not less. AI can send connection requests and schedule meetings, but it can’t build the genuine bonds that make someone choose you for their team or recommend you when opportunity knocks. Practice: Have one meaningful conversation daily with no agenda beyond connection.

Asking Better Questions: AI can generate endless answers, but it doesn’t tell you which questions are worth asking. It doesn’t know what matters to your specific situation, your values, or your goals. The skill we need isn’t just prompt engineering; it’s developing the judgment to know what matters, what assumptions need challenging, and what questions nobody else is asking. Practice: Before using AI for any task, spend two minutes writing down what you really need to know, not just what’s easy to ask.

Learn How to Learn: The learn-work-retire model is dead. The pace of technological change means specific technical skills have limited half-lives. Thriving in this world requires being a life-long learner—not just learning something but acquiring the skill of learning itself. Practice: Learn one new AI tool monthly, especially if it makes you uncomfortable. Discomfort is where growth lives.

A Partnership Mindset
To prepare for the AI world, we need to move beyond competition and start thinking in terms of collaboration; it’s not AI or me, it’s AI and me. The professionals thriving now use AI for research but apply critical thinking and human judgment to its outputs.
The goal, you see, isn’t to “win”; it’s to flourish together. The aim must be to find a combination of machine efficiency and human wisdom that helps us transcend our limitations.
What precise combination is this? I don’t know for sure, and I don’t think anyone else does, either. But I do know that we urgently need to start thinking about the answers.
[Photo: Best/Adobe Stock]
Original article @ Psychology Today.
The post Can We Thrive in the Age of AI? appeared first on Faisal Hoque.
September 15, 2025
AI: the key to human-centered business

by Faisal Hoque, Pranay Sanklecha, Paul Scade
Fears that AI will dehumanize work overlook its potential to make organizations more human. Here’s how organizations can harness the power of human-centered AI.

The prevailing view of AI is that, while it promises to deliver enormous efficiency gains, it also threatens to dehumanize work and commerce. While these concerns deserve to be taken seriously, the counterintuitive truth is that AI systems can make organizations more intensely and accessibly human. Rather than stripping away the personal touch, AI can amplify the very qualities that make human interaction valuable: empathy, understanding, and the ability to connect across differences.

It’s not AI vs humanity – one enhances the other
We cannot afford to be complacent about AI. The technology is too powerful, and it is undeniable that it does threaten human participation in the workforce. However, AI can also be used to enhance and elevate, rather than replace, human qualities. And it can do so in a way that can deliver significant business advantages. This nexus of human and machine capabilities should be the focal point of attention for forward-thinking CEOs.
This isn’t a consolation prize for leaders who can’t bring themselves to squeeze every last drop of humanity out of their businesses. On the contrary, this kind of human-centered approach is essential for delivering value in a world in which humans consume services, buy goods, and generally drive economic activity. AI can help push humanity and the best features of humans to the forefront of business operations, thus creating competitive advantages that are absent from purely efficiency-focused approaches.

In this context, there are three areas where AI can help organizations bridge cultural gaps and transcend operational constraints by focusing on amplifying human qualities. The following three brief cases illustrate how AI can:
Scale empathy in customer interactions
Dissolve knowledge silos within organizations
Support service delivery across linguistic and cultural boundaries

The pattern that emerges from these examples is clear: AI is at its most powerful not when it replaces humans but when it amplifies human connection.
1] Scaling empathy by bridging interpersonal divides
Many organizations build their service models to optimize cost efficiency and maximize throughput. Hitting these targets often means using strategies that consumers have come to dread: offshoring contact centers or replacing humans entirely with automated systems. Over time, this kind of approach reshapes both organizational culture and customer expectations, making empathy feel like a luxury rather than a standard.

The consequences of these attitude shifts are very real: according to TCN’s 2024 survey, nearly two-thirds (63%) of Americans say they’re likely to abandon a brand after a single poor service experience – a nearly 50% increase over the past four years. At the same time, consumer expectations for empathy and responsiveness are rising.

In most narratives about “the rise of the machines,” AI is the villain in these kinds of situations, responsible for accelerating the move away from empathy and connection. Yet the truth is that, when AI is implemented thoughtfully, it can help bridge this gap by supporting warmer and more personalized customer experiences.
Here are two live examples of how AI can boost rather than dilute feelings of empathy and connection.
AI can be unexpectedly effective at scaling empathy even in high-stakes settings like healthcare. The shift towards a “digital front door” for healthcare encounters in the US presents physicians with an enormous challenge: tens of thousands of patient messages arriving via Electronic Health Record inboxes every day. Many require responses that not only contain medically accurate information but that are also emotionally nuanced. A recent study from NYU found that AI-generated responses to patient messages were rated as more empathetic than those written by physicians, scoring higher on warmth, tone, and relational language. While not always as clinically precise, the AI replies were more likely to convey positivity and build connections. This suggests a powerful new role for generative tools. Instead of impersonal templated responses or terse replies from overburdened healthcare providers, AI can deliver personalized responses, relieving cognitive load on doctors while reinforcing a culture of compassion.
AI-powered contact center platforms like Genesys provide agents with on-screen hints about customer tone, journey stage, and emotional context as a call unfolds, then suggest phrasing for responses. On the surface, this is a technical solution to improve efficiency and global staffing flexibility. But its deeper value lies in its ability to help humans tailor responses to provide personalized customer engagement, thus scaling the emotional intelligence embedded in their customer interactions.

2] Scaling individual capabilities by bridging organizational gaps
Many organizations struggle with inefficiencies that are rooted in interdepartmental skill gaps and knowledge silos. When your engineering team can’t articulate the project’s business case and your product managers don’t understand the technical constraints you’re facing, projects stall, sales are lost, and innovation opportunities go to waste. Obstacles like these typically emerge when different parts of an enterprise develop their own languages, priorities, and ways of working.
The larger an organization gets, the harder it is to keep everyone connected, informed, and on the same page. But AI offers a whole suite of new tools that can help overcome these barriers. AI can now act as a cognitive exoskeleton that augments employee knowledge by delivering contextually relevant information and personalized training as needed. The ability to pull information across boundaries and then instantly repackage and recontextualize it to meet individual needs creates a two-way bridge across silos.
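To make the pattern concrete, here is a deliberately minimal sketch of the retrieval-augmented approach described above: documents from different silos are ranked against an employee’s question, and the best matches are packaged as grounding context for a language model. Everything in it (the toy document store, the word-overlap scoring, and the stubbed model call) is a hypothetical illustration, not a reference to any particular product or to the systems described below.

```python
# Minimal, illustrative sketch of retrieval across organizational silos.
# The silo contents, scoring function, and model stub are all hypothetical.
from collections import Counter

# Toy "silos": in practice these would be separate systems (wiki, CRM, drive).
SILOS = {
    "engineering": ["The API gateway rate-limits requests to 100 per minute."],
    "sales": ["Enterprise customers get a dedicated support channel."],
    "hr": ["New hires complete security training in their first week."],
}

def score(query: str, doc: str) -> int:
    """Crude relevance score: number of words the query and document share."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum((q & d).values())

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents from every silo against the query; return the top k."""
    docs = [doc for silo in SILOS.values() for doc in silo]
    return sorted(docs, key=lambda doc: score(query, doc), reverse=True)[:k]

def answer(query: str) -> str:
    """Assemble a prompt that grounds the model in cross-silo context.

    The final call to a language model is left as a stub, since the
    provider and API are deployment-specific.
    """
    context = "\n".join(retrieve(query))
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    return prompt  # in production: return llm.generate(prompt)

if __name__ == "__main__":
    print(answer("What is the rate limit on the API gateway?"))
```

A real deployment would swap the word-overlap scoring for semantic embeddings and wire the stub to whichever model the organization has approved; the architectural point is simply that retrieval crosses silo boundaries before any text is generated.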
Here are two real-world examples of how AI is already transforming the way organizations operate.
The author team’s US defense contractor partner, CACI, an $8bn Fortune 500 company, has developed and implemented a generative AI hub that audits each new planning document for cultural fit and organizational alignment. By centralizing data at the whole-enterprise level, and then using this composite construct as a reference point for each document, local blind spots, misunderstandings, and divergent threads can be proactively identified and corrected before a project goes live.

Cornerstone’s learning platform connects employees to bite-sized lessons that can be drawn from any internal file across the enterprise. This turns previously static knowledge into personalized, on-demand learning. Rather than relying on formal training alone, the platform embeds contextual support directly into daily workflows. When combined with real-time labor market intelligence, these kinds of AI tools can also help organizations rapidly identify skill gaps, support cross-functional mobility, and adapt workforce strategies.
3] Scaling market reach by bridging cultural gaps
For companies expanding into new markets, access is often limited not by infrastructure or cost but by barriers to communication. Cultural norms, language differences, and digital fluency can become obstacles to the smooth delivery of services. The challenge for businesses is finding ways to overcome these barriers without duplicating capabilities for each cultural group. AI now offers a range of tools for navigating this kind of cultural complexity at speed and scale.
Here’s a powerful example of how AI can help overcome barriers to market entry while connecting humans with services that would not otherwise be available to them.
Language diversity and low literacy levels in emerging markets are classic examples of barriers to market entry. Nigeria alone has more than 500 native languages and more than 100 million potential customers who do not speak English, the official language. India has more than 1,500 languages and hundreds of millions who speak neither English nor Hindi. Serving these populations with traditional tools would require separate teams, scripts, and platforms for each language – making broad service access operationally untenable. But by leveraging AI, companies can bridge these cultural gaps by scaling the ways customers can access their services.

In India, companies are beginning to adapt chatbot and voice systems to accommodate linguistic diversity. In Africa, telecom leaders like Orange are partnering with companies like Meta and OpenAI to overcome similar problems. This trend reflects a growing recognition that long-term market growth depends on embedding cultural specificity at the infrastructure level. Multimodal AI models trained on low-resource languages allow service providers to connect directly with people in underserved groups who can neither read nor type fluently in dominant national languages. Voice bots that read forms aloud, answer in local idioms, and hand off to human reps only for edge cases will slash call center costs and raise customer intimacy.
Three pillars of AI deployment
CEOs wanting to make the most of this cooperative link between AI and a human-centric approach can start with three basic steps which, if followed thoroughly, will set the organization on the path to success.
Designate a cross-functional GenAI knowledge broker. Nominate one tech lead and one business lead. Give them a 30-day brief to map your richest data silos and prototype a retrieval-augmented chatbot capable of sharing and contextualizing information across existing organizational boundaries.

Insert AI coaching into one frontline workflow. Deploy real-time sentiment cues and autogenerated call summaries for agents; track handle time, transfers, and engagement scores.

Publish a one-page ‘AI Empathy Charter’ prioritizing the use of AI in contexts that promote human connection and empathy. Develop principles for ensuring transparency, cultural inclusivity, and data privacy.

Original article @ IMD.
The post AI: the key to human-centered business appeared first on Faisal Hoque.
September 11, 2025
Finding Inspiration in Hard Times

KEY POINTS
Struggling makes sense in a world on fire; your pain reflects reality, not weakness.
Meet fear and grief with curiosity, not commentary. Feel instead of pushing away.
Meaning doesn’t erase suffering but situates it in a larger story worth living for.
The news is relentless at the moment.
In just the last few days, there have been headlines about price rises and job losses, about Israel bombing Qatar, the French government crumbling, Russian drones in Poland, and a new political assassination in the U.S.
Technology, which was supposed to make our lives easier, seems instead to be magnifying our problems. Serious people warn of the risk of AI taking over the world and making human beings extinct—and still we develop it unchecked. Social media is alive with howls of rage and polarization. Our phones buzz and ping without reprieve, external manifestations of the chaos inside.
We are exhausted and we are frightened. And we face our troubles alone, more alone than we have ever been before, as community fractures and loneliness rises.
When Struggling Makes Sense
There is a taboo in our culture against struggling. Being unhappy is perhaps the last mortal sin.
And this taboo is actually one of the most exhausting parts of the hard times we live in right now. It’s the source of the voice that tells us we shouldn’t be finding things hard. It’s the source of the internal critic who tells us that we should always be grateful, that we should be calm, that we need to stay positive and upbeat and use a growth mindset to turn exploding bombs into a learning opportunity.
I’ve got nothing against gratitude or a growth mindset. As a matter of fact, I practice both daily and have written about them a fair amount. But there is also a place for grief and anger.
The truth is that life is hard right now.
In a world turned upside down, a world in which black is sold as white and good is insulted as evil, suffering is not a character flaw. Exactly the opposite, in fact.
Suffering is evidence that there is still something pure and true inside us. It tells us that our eyes can still see and our hearts can still feel.
In a world like ours, suffering is evidence that we remain alive to love and beauty and hope.
Staying with What Hurts
In Buddhism, there’s a story about the two arrows that pierce human beings. We can think of the first arrow as the suffering that life throws at us directly. It consists of things like physical pain, grief when we lose someone we love, anger when we see injustice, and so on. In our context, the first arrow is the pain and anger and fear and sorrow we feel right now about how the world is.
The first arrow is inevitable. The second arrow isn’t. The second arrow consists of the suffering we cause ourselves by how we react to the pain of the first arrow. Instead of simply feeling angry, we add the pain of thinking that we shouldn’t feel angry. Instead of only the suffering of sorrow, we add the pain of resisting it, of thinking we are bad or weak for feeling it.
Our usual reflex, says the Buddhist teacher Pema Chodron, is to avoid the first arrow. We distract ourselves. We numb ourselves. We try to talk ourselves into feeling better.
But we all know how that goes: badly. The more we push the pain away and distract ourselves from it, the more we eventually suffer.
So instead, suggests Chodron, we might try leaning into the pain. What does this mean?
It means having the courage to simply experience our suffering without pushing it away.
One way of doing this is to try to meet our experience with curiosity instead of commentary. If fear arises, we notice it: This is fear; I notice my breath is shorter. If grief swells, we give it space: This is grief, heavy in my throat.
And this kind of attention also reduces the suffering of the second arrow. Pain is unavoidable, but the layers of resistance, denial, and shame can be softened.
The Power of Pain
When we stay with our pain, something begins to happen.
Pain, you see, isn’t just a weight. It’s also information.
Our feelings—as philosophers and psychologists have long recognized—aren’t just natural. They’re also useful and even necessary for a well-lived life.
This is true of “negative” feelings just as much as positive ones; for example, anger is hard to feel, but sometimes it is the right response, and it can motivate fighting against injustice.
When we stay with our grief and our anger, when we face them with honesty and tenderness, they ripen into a path to meaning. They tell us what matters, and they light the way forward.
And that is why it matters that we stay with the pain. To stay with the pain means to stay with what matters. It means that we do not abandon our values; instead, we have the courage to suffer for them.
This is not abstract theory. I write from hard-won experience. As I’ve written about before, this is exactly what happened to me when my teenage son was diagnosed with cancer.
From pain came purpose.
The Gift of Meaning
Human beings have incredible, unimaginable strength. And the key to unlocking it is meaning.
Viktor Frankl gives us the most breathtaking proof of this when he writes of his experience in the concentration camps. The people who survived, he says, the people who remained whole, were those who found meaning even in those surroundings.
A person to love, a duty to fulfill, a vision of the future—just a single thread of meaning gives us the strength we need to endure almost anything.
Meaning doesn’t erase or deny suffering. In fact, it may even magnify it. But meaning situates that suffering in a larger story. It roots us in what we love and reminds us why we fight.
Meaning is love refusing to give up. It is a candle lit in the darkness, a fist waved at the heavens. It is the strength that allows us to say: this matters—and so do we.
Here are three small, ordinary practices that can help you access the power of meaning in your daily life:
Return to your body. Take three slow breaths, unclench your jaw, and place both feet firmly on the floor. Remind yourself: I am here. I can meet this moment.

Name your “why.” When you feel overwhelmed, pause to recall what matters most—a person, a value, a cause. Write it down or speak it aloud.

Take one deliberate step. Choose a small action aligned with your values: check in on someone vulnerable, offer time or resources, or protect an hour for genuine rest.

The world may still feel heavy. But meaning gives us the strength to bear its weight—and the energy to change it.
[Photo: SANDJAYA/Adobe Stock]
Original article @ Psychology Today.
The post Finding Inspiration in Hard Times appeared first on Faisal Hoque.