Faisal Hoque's Blog, page 4
June 23, 2025
Why we’re measuring AI success all wrong—and what leaders should do about it

Here’s a troubling reality check: We are currently evaluating artificial intelligence in the same way that we’d judge a sports car. We act like an AI model is good if it is fast and powerful. But what we really need to assess is whether it makes for a trusted and capable business partner.
The way we approach assessment matters. As AI models begin to play a part in everything from hiring decisions to medical diagnoses, our narrow focus on benchmarks and accuracy rates is creating blind spots that could undermine the very outcomes we’re trying to achieve. In the long term, it is effectiveness, not efficiency, that matters.
Think about it: When you hire someone for your team, do you only look at their test scores and the speed they work at? Of course not. You consider how they collaborate, whether they share your values, whether they can admit when they don’t know something, and how they’ll impact your organization’s culture—all the things that are critical to strategic success. Yet when it comes to the technology that is increasingly making decisions alongside us, we’re still stuck on the digital equivalent of standardized test scores.
THE BENCHMARK TRAP
Walk into any tech company today, and you’ll hear executives boasting about their latest performance metrics: “Our model achieved 94.7% accuracy!” or “We reduced token usage by 20%!” These numbers sound impressive, but they tell us almost nothing about whether these systems will actually serve human needs effectively.
Despite significant tech advances, evaluation frameworks remain stubbornly focused on performance metrics while largely ignoring ethical, social, and human-centric factors. It’s like judging a restaurant solely on how fast it serves food while ignoring whether the meals are nutritious, safe, or actually taste good.
This measurement myopia is leading us astray. Many recent studies have found high levels of bias toward specific demographic groups when AI models are asked to make decisions about individuals in relation to tasks such as hiring, salary recommendations, loan approvals, and sentencing. These outcomes are not just theoretical. For instance, facial recognition systems deployed in law enforcement contexts continue to show higher error rates when identifying people of color. Yet these systems often pass traditional performance tests with flying colors.
The disconnect is stark: We’re celebrating technical achievements while people’s lives are being negatively impacted by our measurement blind spots.
REAL-WORLD LESSONS
IBM’s Watson for Oncology was once pitched as a revolutionary breakthrough that would transform cancer care. When measured using traditional metrics, the AI model appeared to be highly impressive, processing vast amounts of medical data rapidly and generating treatment recommendations with clinical sophistication.
However, as Scientific American reported, reality fell far short of this promise. When major cancer centers implemented Watson, significant problems emerged. The system’s recommendations often didn’t align with best practices, in part because Watson was trained primarily on a limited number of cases from a single institution rather than a comprehensive database of real-world patient outcomes.
The disconnect wasn’t in Watson’s computational capabilities—according to traditional performance metrics, it functioned as designed. The gap was in its human-centered evaluation capabilities: Did it improve patient outcomes? Did it augment physician expertise effectively? When measured against these standards, Watson struggled to prove its value, leading many healthcare institutions to abandon the system.
PRIORITIZING DIGNITY
Microsoft’s Seeing AI is an example of what happens when companies measure success through a human-centered lens from the beginning. As Time magazine reported, the Seeing AI app emerged from Microsoft’s commitment to accessibility innovation, using computer vision to narrate the visual world for blind and low-vision users.
What sets Seeing AI apart isn’t just its technical capabilities but how the development team prioritized human dignity and independence over pure performance metrics. Microsoft worked closely with the blind community throughout the design and testing phases, measuring success not by accuracy percentages alone, but by how effectively the app enhanced the ability of users to navigate their world independently.
This approach created technology that genuinely empowers users, providing real-time audio descriptions that help with everything from selecting groceries to navigating unfamiliar spaces. The lesson: When we start with human outcomes as our primary success metric, we build systems that don’t just work—they make life meaningfully better.
FIVE CRITICAL DIMENSIONS OF SUCCESS
Smart leaders are moving beyond traditional metrics to evaluate systems across five critical dimensions:
1. Human-AI Collaboration. Rather than measuring performance in isolation, assess how well humans and technology work together. Recent research in the Journal of the American College of Surgeons showed that AI-generated postoperative reports were only half as likely to contain significant discrepancies as those written by surgeons alone. The key insight: a careful division of labor between humans and machines can improve outcomes while leaving humans free to spend more time on what they do best.
2. Ethical Impact and Fairness. Incorporate bias audits and fairness scores as mandatory evaluation metrics (a brief illustrative sketch follows this list). This means continuously assessing whether systems treat all populations equitably and have a positive impact on human freedom, autonomy, and dignity.
3. Stability and Self-Awareness. A Nature Scientific Reports study found performance degradation over time in 91 percent of the models it tested once they were exposed to real-world data. Instead of just measuring a model’s out-of-the-box accuracy, track performance over time and assess the model’s ability to identify performance dips and escalate to human oversight when its confidence drops.
4. Value Alignment. As the World Economic Forum’s 2024 white paper emphasizes, AI models must operate in accordance with core human values if they are to serve humanity effectively. This requires embedding ethical considerations throughout the technology lifecycle.
5. Long-Term Societal Impact. Move beyond narrow optimization goals to assess alignment with long-term societal benefits. Consider how technology affects authentic human connections, preserves meaningful work, and serves the broader community good.
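To make the second dimension above concrete, here is a minimal, hypothetical audit sketch in Python: it computes favorable-outcome rates per demographic group and flags any group selected at less than 80% of the best-served group's rate. The toy data, group labels, and the 80% cutoff (the informal "four-fifths" rule of thumb) are illustrative assumptions, not a compliance standard or a description of any particular vendor's tooling.

```python
from collections import defaultdict

def selection_rates(decisions, groups):
    """Favorable-outcome rate (e.g., share of 'approve' decisions) per group."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        favorable[group] += int(decision)  # decision is 1 for a favorable outcome
    return {g: favorable[g] / totals[g] for g in totals}

def disparate_impact_audit(decisions, groups, threshold=0.8):
    """Flag any group whose selection rate falls below `threshold` times the
    highest group's rate (the informal four-fifths rule of thumb)."""
    rates = selection_rates(decisions, groups)
    best = max(rates.values())
    if best == 0:
        return rates, {}
    flagged = {g: rate / best for g, rate in rates.items() if rate / best < threshold}
    return rates, flagged

# Toy data: loan decisions (1 = approved) and a protected attribute per applicant.
decisions = [1, 1, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
rates, flagged = disparate_impact_audit(decisions, groups)
print(rates)    # {'A': 0.8, 'B': 0.4}
print(flagged)  # {'B': 0.5} -> group B is selected at half the rate of group A
```

In practice, an audit like this would run on every model release and on live decision logs, alongside the qualitative review described above, rather than as a one-time check.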
THE LEADERSHIP IMPERATIVE: DETACH AND DEVOTE
To transform how your organization measures AI success, embrace the “Detach and Devote” paradigm we describe in our book TRANSCEND:
Detach from:
Narrow efficiency metrics that ignore human impact
The assumption that replacing human labor is inherently beneficial
Approaches that treat humans as obstacles to optimization
Devote to:
Supporting genuine human connection and collaboration
Preserving meaningful human choice and agency
Serving human needs rather than reshaping humans to serve technological needs
THE PATH FORWARD
Forward-thinking leaders implement comprehensive evaluation approaches by starting with the desired human outcomes, establishing continuous human input loops, and measuring results against the goals of human stakeholders.
The companies that get this right won’t just build better systems—they’ll build more trusted, more valuable, and ultimately more successful businesses. They’ll create technology that doesn’t just process data faster but that genuinely enhances human potential and serves societal needs.
The stakes couldn’t be higher. As these AI models become more prevalent in critical decisions around hiring, healthcare, criminal justice, and financial services, our measurement approaches will determine whether these models serve humanity well or perpetuate existing inequalities.
In the end, the most important test of all is whether using AI for a task makes human lives genuinely better. The question isn’t whether your technology is fast enough but whether it’s human enough. That is the only metric that ultimately matters.
[Photo: sommart/Adobe Stock]
Original article @ Fast Company.
June 19, 2025
What are the opportunities — and risks — of AI for grid reliability?
This is a two-part look at frameworks for understanding the risks and opportunities of AI-driven power line inspections. This article is based on the concepts and frameworks presented by Faisal Hoque in “Two frameworks for balancing AI innovation and risk,” published in the Harvard Business Review.
by Emma Gavala
[Emma Gavala is the chief of staff at Arkion, an AI-powered asset intelligence platform, training computer vision and AI models for global transmission and distribution operators since 2019.]
The power grid is one of the most vital infrastructures in modern society. But failures in grid assets result in annual costs of $150 billion, or around 0.15% of global GDP. This number continues to rise as demand on the grid grows.
Despite this, traditional inspection methods — such as manual checks, helicopters, and ground crews — remain slow, costly, and highly susceptible to human error. In fact, these methods fail to detect up to 90% of critical defects.
With climate challenges, aging infrastructure, and growing regulatory demands, grid operators can no longer afford to rely on reactive maintenance. A shift toward preventive strategies, powered by artificial intelligence-driven insights, is essential for ensuring resilience, efficiency, and long-term grid stability.
The way forward is clear: drone inspections combined with AI-powered image analysis. This method provides the fastest, most cost-effective, sustainable, and, above all, precise way to monitor, assess, and maintain transmission and distribution grids.
Certain transmission and distribution operators are embracing AI-driven maintenance, with great success. E.ON Sweden, for example, has been leveraging this approach for years, while others have adopted it for specific use cases, such as post-storm assessments. However, the industry as a whole remains slow to adapt.
While the benefits of AI are numerous, many operators hesitate due to concerns about technology readiness. Others dive in without a clear strategy, leading to fragmented, small-scale implementations that struggle to scale, ultimately hindering AI from reaching its full potential.
This cautious approach to change could become a major challenge for power grids over the next 20 years, as demand is expected to double, renewables dominate the energy mix, and the grid rapidly ages. A new approach is essential.
As Faisal Hoque, writing in the Harvard Business Review, put it: “Bridging the gap between aspiration and achievement requires a systematic approach to AI transformation, one that primes organizations to think through the biggest questions this technology raises without losing sight of its day-to-day impact. The stakes could not be higher. Organizations that fail to adapt will become the Polaroids and Blockbusters of the AI age.”
Two frameworks
The world cannot afford for power grids to become the Polaroids or Blockbusters of the AI age; they must continue delivering reliable, affordable, and sustainable energy. To bridge this gap, a structured, balanced approach to AI adoption is necessary. Two complementary frameworks can help guide grid operators: the OPEN and CARE frameworks.
The OPEN framework (Outline, Partner, Experiment, Navigate) equips grid operators with a systematic four-step process to integrate AI into their workflows, enabling a smoother transition from proofs of concept to large-scale implementation.
Meanwhile, the CARE framework (Catastrophize, Assess, Regulate, Exit) provides a ‘sanity check’ to map and manage AI-related risks while aligning adoption with existing organizational guardrails.
To achieve meaningful progress toward scaled AI deployments across the grid, energy stakeholders must first ask: What is preventing adoption, given that the technology is already live, tested, and delivering value to industry peers?
AI adoption is not just a technology or innovation project — it requires cross-departmental alignment and effort. A dual mindset of acceleration and risk mitigation is key to unlocking the efficiency, cost, and precision gains that AI can deliver.
A power line case study
Traditional power line inspection methods — manual checks, helicopters, and ground patrols — are slow, costly, and often fail to detect defects. AI-powered asset analytics fundamentally transform this process by integrating advanced technologies to enhance efficiency, accuracy, and decision-making. Arkion uses the following process:
Data collection: Drones or helicopters capture high-resolution images, thermal images, and 3D LiDAR data of the power grid assets and lines.
AI and computer vision analysis: Machine learning models analyze images and 3D data to detect defects such as corrosion, missing components, and vegetation encroachment. AI can process thousands of images in minutes, identifying patterns that would take human inspectors days to review.
Human verification: AI insights are augmented and cleaned by expert validation. Human inspectors review flagged anomalies, ensuring that critical decisions are based on both automated insights and experienced judgment.
Actionable insights and integration: The analyzed data integrates into asset management and work order systems, enabling predictive maintenance, risk mitigation, and optimized investment planning.
There are two main benefits of using AI-powered inspections. First, they’re cost-effective. Case studies from the industry suggest that they achieve an average cost reduction per defect found of 85%. Second, they allow for more precision. AI-driven analysis detects up to five times more defects than manual inspections; on many more problematic or hard-to-inspect stretches, it discovers up to eight times more defects.
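As a rough illustration of how these four stages can be wired together in software, here is a minimal, hypothetical orchestration sketch. The detect_defects, human_confirms, and create_work_order callables are placeholders for an operator's actual detection model, review workflow, and asset-management integration; this is not a description of Arkion's implementation.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    image_id: str
    defect_type: str   # e.g., "corrosion", "missing_component", "vegetation"
    confidence: float  # model score in [0, 1]

def run_inspection(images, detect_defects, human_confirms, create_work_order,
                   detection_threshold=0.3):
    """Run one inspection batch: analyze imagery, route candidate defects to
    human verification, and push confirmed defects into the work-order system."""
    confirmed, dismissed = [], []
    for image in images:                          # step 1: collected imagery
        for finding in detect_defects(image):     # step 2: AI / computer vision analysis
            if finding.confidence < detection_threshold:
                continue                          # drop low-score noise
            if human_confirms(finding):           # step 3: human verification
                create_work_order(finding)        # step 4: integration and action
                confirmed.append(finding)
            else:
                dismissed.append(finding)
    return confirmed, dismissed
```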
Despite these advantages, resistance persists among key stakeholders, from maintenance and vegetation teams to executives. This is where structured frameworks like OPEN and CARE help align different departments within grid owner organizations to accelerate AI adoption.
__________________________________
Two frameworks for gauging the risks of AI-driven power line inspections
A practical look at how the OPEN and CARE approaches can be applied to the power grid.
Artificial intelligence-powered inspections represent a critical shift for modern power grid management. But adopting this technology safely, systematically, and at scale is also a significant change for grid operators, one that requires careful and strategic implementation.
Our first article on the topic outlined the CARE and OPEN frameworks; this follow-up offers a closer look at how they can be used in practice, specifically for power line inspections.
Applying the OPEN framework to AI-driven power line inspections
The OPEN framework emphasizes that successful adoption depends not only on technology but also on leadership and a culture capable of sustaining continuous transformation. Each step in the process enables organizations to manage AI projects from ideation through deployment and maintenance to eventual scaling from pilot to full implementation.
Step 1: Outline
The first step is to define how AI adoption increases grid reliability and resilience.
A common mistake in AI adoption is treating it as a tech-first initiative. Instead, grid owners should ask: How does AI-driven inspection align with the value we deliver to society? The question is not “What can this technology do for us?” but rather “What can this technology do to help us maintain a safe, reliable energy system?”
AI can improve core grid operations in a few ways:
Maintenance teams: AI can rapidly analyze drone-captured images to detect corrosion, missing bolts, damaged insulators, and overheating components before failures occur.
Vegetation management teams: AI-powered analytics can identify encroaching vegetation, enabling proactive trimming and reducing fire risks.
Asset management and investment teams: AI-driven condition assessments can inform long-term infrastructure planning, preventing costly reactive maintenance.
IT and security teams: AI models should integrate securely with existing grid management systems, ensuring compliance and data protection.
Maintaining a resilient and steady energy supply powered by AI starts with mapping and openly discussing internal priorities within the grid operator — before jumping into full-scale adoption.
Step 2: Partner
Next, companies should prioritize building cross-functional and external collaborations.
When AI is reframed — from a tech initiative to a business value initiative — internal and external collaboration becomes essential. The industry’s most common pitfall is reviewing the technology alone and excluding the dependencies and processes needed to use it fully. To avoid endlessly re-evaluating partners, companies need to set clear parameters and goals both for internal resources and for the external partners that can help with technical expertise and change management.
Look to your peers: Transmission and distribution power grid operators have undertaken similar projects at different scales. Learning from these examples is crucial, as they offer valuable insights into both opportunities and potential pitfalls. One of the greatest strengths of the power grid industry is its collaborative nature. Sharing best practices — of which there are many — can accelerate positive change across markets.
Internal collaboration: Grid operations, IT, vegetation, and maintenance teams all benefit from using the same data and approach. However, achieving alignment doesn’t mean combining all goals at once. Keep the long-term vision in mind when identifying partners, but maintain a focused scope to ensure measurable results and clear evaluations.
Vetting external partners: AI providers should be assessed based on reference cases, domain expertise, security compliance, and their ability to scale with the grid owner’s needs. This often requires looking beyond local markets or conventional tech vendors. A successful partner must combine deep industry knowledge with technical expertise. Consider both transmission and distribution sectors, identify overlapping challenges, and seek reference cases that address multiple verticals.
Establishing governance: Clearly define business ownership, compliance requirements, and whether AI capabilities should be developed in-house or outsourced. Given the critical nature of power grid infrastructure, governance is essential. Implement relevant ISO certifications, rigorous security reviews, and structured oversight processes with regular check-ins and “toll gates” to ensure quality and mitigate risk.
Human-AI synergy: AI should be positioned as an enabler, enhancing human expertise rather than replacing it. In power line inspections, the goal is to maximize the time internal teams spend addressing actual issues rather than sifting through raw data. AI can streamline workflows, prioritize actions, and accelerate resolutions.
By fostering strong internal and external collaboration, grid owners can accelerate AI adoption and scale its benefits efficiently.
Step 3: Experiment
A key later step is to pilot AI-driven inspections in controlled environments.
Grid owners hesitant about AI adoption often fall into two camps: those who delay due to uncertainty and those who launch small-scale tests without clear objectives. Both approaches can hinder progress. Instead, organizations should adopt a structured experimentation approach that balances quick wins with long-term scalability.
There are a few main considerations for experimentation:
Start with a focused use case: Identify a high-impact, measurable problem, such as detecting defects on insulators or identifying vegetation encroachment.
Define clear success criteria: Establish key performance indicators like defect detection accuracy, cost savings, and inspection speed improvements (a minimal calculation sketch follows this list). These should be evaluated not only against your goals, but also against the baseline of what is currently being detected. While technical success is often measured against a 100% accuracy rate, real business value comes from practical impact. Pursuing perfection can sometimes mean overlooking meaningful improvements.
Run pilot programs: Test AI performance in real-world conditions where baseline data — such as the number of issues found in manual inspections — is well understood.
Measure, iterate, and scale: Use pilot results to refine AI models, workflows, and integration processes before full deployment. Continuously expand by adding new use cases and refining evaluation criteria. However, in power grid operations, scale is the ultimate challenge. While early iterations should be nimble, long-term success depends on the ability to scale solutions effectively. The real test lies not just in delivering accuracy on a small scale but in ensuring AI can operate efficiently across vast networks.
Organizations that take a structured, hypothesis-driven approach to AI experimentation avoid costly missteps while ensuring that pilots lead to tangible, scalable benefits.
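To show what evaluating a pilot against the manual baseline can look like in numbers, here is a minimal calculation sketch. The figures are invented placeholders for a single line section, not industry benchmarks.

```python
def pilot_vs_baseline(baseline_defects, baseline_cost, pilot_defects, pilot_cost):
    """Detection uplift and cost-per-defect reduction of an AI-assisted pilot
    relative to manual inspection of the same line section."""
    uplift = pilot_defects / baseline_defects
    baseline_cost_per_defect = baseline_cost / baseline_defects
    pilot_cost_per_defect = pilot_cost / pilot_defects
    cost_reduction = 1 - pilot_cost_per_defect / baseline_cost_per_defect
    return uplift, cost_reduction

# Hypothetical pilot: manual crews found 40 defects for $200,000;
# the AI-assisted pilot found 120 defects for $90,000 on the same section.
uplift, cost_reduction = pilot_vs_baseline(40, 200_000, 120, 90_000)
print(f"{uplift:.1f}x more defects found")            # 3.0x
print(f"{cost_reduction:.0%} lower cost per defect")  # 85%
```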
Step 4: Navigate
Last, to scale AI adoption, you’ll need a long-term roadmap.
This plan should be aligned with business goals, regulatory requirements, and workforce readiness. And it also involves managing large datasets to deploy AI at scale. If the previous steps have been done correctly, the main challenge here will be scaling the process, not the AI development itself. A well-defined roadmap ensures seamless integration into operations, addressing both short-term needs and long-term objectives.
Applying the CARE framework to managing AI risks in power line inspections
While AI adoption offers transformative benefits, it also introduces risks that grid operators must proactively manage. The CARE framework provides a structured approach to addressing potential pitfalls.
Again, a first and clear tip is to ask your peers. Review case studies, talk with other companies that have undertaken this sort of project, and ask them the questions you should be asking yourself. Their worries — and their approach to mitigating risks — should form a foundation for your own approach as well.
Step 1: Catastrophize
Before deploying AI at scale, operators must ask: What is the worst that could happen? These potential risks include false negatives leading to undetected critical defects, AI model bias causing inconsistent defect classification, and cybersecurity vulnerabilities exposing grid infrastructure data. By anticipating worst-case scenarios, organizations can build safeguards into their AI strategy.
These should also be compared with the baseline: What is the current share of missed critical defects? Is this worse or better than the alternative?
Remember that business value is derived from real-life outcomes, so comparing with a technically perfect solution can very easily lead to decision paralysis. Instead, ask the same question of your current process as of the new one: How do they compare? The answer lies in the gap between the risks of your current inspection process and those of the hypothetical new process.
Step 2: Assess
Not all risks carry the same weight. Grid operators should categorize AI risks based on likelihood and severity, prioritizing mitigation efforts accordingly.
Key risk areas to assess include data quality, model accuracy, and operational impact.
Step 3: Regulate
AI adoption must align with industry regulations, cybersecurity protocols, and ethical guidelines. Grid operators should establish governance structures that include AI model transparency (with clear documentation), human oversight, and compliance with data protection laws such as GDPR and NERC CIP.
Step 4: Exit
This final step is about defining fallback strategies. AI-driven inspections should not become a single point of failure. Organizations must have exit strategies in case AI models underperform, including maintaining manual inspection capabilities as a backup, keeping flexibility in vendor partnerships to switch providers if needed, and implementing continuous AI model retraining to prevent performance degradation.
Adapting the new technology to a previously existing process also helps ensure that you don’t lose the talent or competence to fall back on if a solution can’t solve the problem or is too risky to pursue.
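As one hedged sketch of what such an exit ramp can look like operationally, the snippet below tracks a rolling precision estimate from human-verified AI findings and recommends falling back to manual inspection (and retraining) when quality drops below an agreed floor. The window size and precision floor are illustrative parameters an operator would set, not prescribed values.

```python
from collections import deque

class InspectionGuardrail:
    """Track the rolling precision of AI findings, as judged by human reviewers,
    and recommend a fallback to manual inspection when it degrades."""

    def __init__(self, window=200, min_precision=0.85):
        self.outcomes = deque(maxlen=window)  # 1 = reviewer confirmed, 0 = rejected
        self.min_precision = min_precision

    def record(self, reviewer_confirmed: bool) -> None:
        self.outcomes.append(1 if reviewer_confirmed else 0)

    def next_action(self) -> str:
        if len(self.outcomes) < self.outcomes.maxlen:
            return "keep piloting: not enough verified findings yet"
        precision = sum(self.outcomes) / len(self.outcomes)
        if precision < self.min_precision:
            return "fall back to manual inspection and schedule model retraining"
        return "continue AI-assisted inspection"
```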
The path forward
AI-powered power line inspections are no longer a futuristic concept; they are an operational necessity. Grid operators that successfully integrate AI into their workflows will achieve greater reliability, cost efficiency, and sustainability. However, achieving this transformation requires a dual approach: embracing innovation while proactively managing risks.
By applying the OPEN framework, grid operators can methodically scale AI adoption, ensuring alignment with business goals and operational workflows. Simultaneously, the CARE framework ensures that potential risks are identified, assessed, and mitigated before they become critical issues.
With a structured, balanced approach, AI can help the power grid evolve to meet the demands of the next century — ensuring safe, reliable energy delivery in an increasingly complex world.
[Photo: Yongcharoen_kittiyaporn / Shutterstock]
Original article @ Latitude Media.
June 16, 2025
Why government’s AI dreams keep turning into digital nightmares—and how to fix that

Government leaders worldwide are talking big about AI transformation. In the U.S., Canada, and the U.K., officials are pushing for AI-first agencies that will revolutionize public services. The vision is compelling: streamlined operations, enhanced citizen services, and unprecedented efficiency gains. But here’s the uncomfortable truth—most government AI projects are destined to fail spectacularly.
The numbers tell a sobering story. A recent McKinsey analysis of nearly 3,000 public sector IT projects found that over 80% exceeded their timelines, with nearly half blowing past their budgets. The average cost overrun hit 108%, or three times worse than private sector projects. These aren’t just spreadsheet problems; they’re systemic failures that erode public trust and waste taxpayer dollars.
When AI projects go wrong in government, the consequences extend far beyond budget overruns. Arkansas’s Department of Human Services faced legal challenges when its automated disability care system caused “irreparable harm” to vulnerable citizens. The Dutch government collapsed in 2021 after an AI system falsely accused thousands of families of welfare fraud. These aren’t edge cases—they’re warnings about what happens when complex AI systems meet unprepared institutions.
THE MATURITY TRAP
The core problem isn’t AI technology itself—it’s the mismatch between ambitious goals and organizational readiness. Government agencies consistently attempt AI implementations that far exceed their technological maturity, like trying to run a marathon without first learning to walk.
Our research across 500 publicly traded companies for a previous book revealed a clear pattern: organizations that implement technologies appropriate to their maturity level achieve significant efficiency gains, while those that overreach typically fail. Combining this insight with our practical work implementing digital solutions in the public sector led to the development of a five-stage AI maturity model specifically designed for government agencies.
Stage 1: Initial/Ad Hoc. Organizations at this stage operate with isolated AI experiments and no systematic strategy.
Stage 2: Developing/Reactive. Agencies begin showing basic capabilities, typically through simple chatbots or vendor-supplied solutions.
Stage 3: Defined/Proactive. Organizations develop comprehensive AI strategies aligned with strategic goals.
Stage 4: Managed/Integrated. Agencies achieve full operational integration of AI with quantitative performance measures.
Stage 5: Optimized/Innovative. Organizations reach full agility and influence how others use AI.
Most government agencies today operate at stages 1 or 2, but AI-first initiatives require stage 4 or 5 maturity. This fundamental mismatch explains why so many initiatives fail. Without the right cultural frameworks, technological expertise, and technical infrastructure, organization-wide transformation built around AI capabilities stands little chance of success.
START WHERE YOU ARE, NOT WHERE YOU WANT TO BE
The path to AI success begins with brutal honesty about current capabilities. A national security agency we studied exemplifies this approach. Despite seeing enormous opportunities in large language models, they recognized serious risks around data drift, model drift, and information security. Rather than rushing into advanced implementations, they are pursuing incremental development grounded in institutional knowledge and cultural readiness.
This measured approach doesn’t mean abandoning ambitious goals—it means building toward them systematically. Organizations must select projects that are appropriate to their maturity level while ensuring each initiative serves dual purposes: delivering immediate value and advancing foundational capabilities for future growth.
THREE IMMEDIATE OPPORTUNITIES
For agencies at early maturity stages, three implementation areas offer immediate value creation opportunities while building toward transformation:
1. Information Technology Operations
IT represents the most accessible entry point for government AI adoption. The private sector offers a road map: 88% of companies now leverage AI in IT service management, with 70% implementing structured automation operations by 2025, up from 20% in 2021.
AI can transform government IT through chatbots handling common user issues, intelligent anomaly detection identifying network problems in real-time, and dynamic resource optimization automatically adjusting allocations during peak periods. These capabilities deliver immediate efficiency gains while building the technical expertise and collaborative patterns needed for higher maturity levels.
The challenge lies in government’s unique constraints. Stringent security requirements along with legacy systems at agencies like Social Security and NASA create implementation hurdles that private sector organizations rarely face. Success requires careful navigation of these constraints while building foundational capabilities.
2. Predictive Analytics
Predictive analytics represents perhaps the highest-value opportunity for early-stage agencies. Government organizations possess vast data resources, complex operational environments, and urgent needs for better decision-making—perfect conditions for predictive AI success.
The U.S. military is already demonstrating this potential, using predictive modeling for command and control simulators and live battlefield decision-making. The Department of Veterans Affairs has trialed suicide prevention programs using risk prediction algorithms to identify veterans needing intervention. Beyond specialized applications, predictive analytics can improve incident management, enable predictive maintenance, and forecast resource needs across virtually any government function.
These implementations advance AI maturity by building essential data management practices and analytical capabilities while delivering immediate operational benefits. Unlike complex generative AI systems, predictive analytics can be implemented successfully at any maturity stage using well-established machine learning techniques.
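As a hedged, minimal example of the "well-established machine learning techniques" mentioned above, the sketch below trains a standard classifier to rank assets by failure risk from routine condition data. The file name, feature columns, and label are placeholders for whatever an agency actually records, not references to a real dataset.

```python
# Minimal predictive-maintenance sketch using scikit-learn (assumed available).
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Hypothetical asset-condition history with an asset_id column and a label
# marking failures within 12 months of the recorded readings.
df = pd.read_csv("asset_condition_history.csv")
features = ["age_years", "load_factor", "fault_count_24m", "last_inspection_score"]
X, y = df[features], df["failed_within_12_months"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

model = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=42)
model.fit(X_train, y_train)

# Report held-out performance, then rank assets for proactive maintenance.
print(classification_report(y_test, model.predict(X_test)))
ranked = df.assign(failure_risk=model.predict_proba(X)[:, 1])
print(ranked.nlargest(10, "failure_risk")[["asset_id", "failure_risk"]])
```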
3. Cybersecurity Enhancement
Cybersecurity offers critical immediate value, with AI applications spanning digital and physical protection domains. Modern AI security platforms process vast amounts of data across networks, endpoints, and physical spaces to identify threats that traditional systems miss—a capability that is particularly valuable given increasing attack sophistication.
Current implementations demonstrate proven value. The Cybersecurity and Infrastructure Security Agency’s Automated Indicator Sharing program enables real-time threat intelligence exchange. U.S. Customs and Border Protection deploys AI-enabled autonomous surveillance towers for border situational awareness. The Transportation Security Administration uses AI-driven facial recognition for streamlined security screening.
While national security agencies implement the most advanced applications, these capabilities offer immediate value for all government entities with security responsibilities, from facility protection to data privacy assurance.
BUILDING SYSTEMATIC SUCCESS
Creating sustainable AI capabilities requires following five key principles:
Build on existing foundations. Leverage current processes and infrastructure while controlling implementation risks rather than starting from scratch.
Develop mission-driven capabilities. Create implementation teams that mix technological and operational expertise to ensure AI solutions address real operational needs rather than pursuing technology for its own sake.
Prioritize data quality and governance. AI systems only perform as well as their underlying data. Implementing robust data management practices, establishing clear ownership, and ensuring accuracy are essential prerequisites for success.
Learn through limited trials. Choose use cases where failure won’t disrupt critical operations, creating space for learning and adjustment without catastrophic consequences.
Scale what works. Document implementation lessons and use early wins to build organizational support, creating momentum for broader transformation.
THE PATH FORWARD
Government agencies don’t need to choose between ambitious AI goals and practical implementation. The key is recognizing that most transformation happens through systematic progression. While “strategic leapfrogging” is possible in some situations, it is the exception rather than the norm. By starting with appropriate projects, building foundational capabilities, and scaling successes, agencies can begin realizing concrete AI benefits today while developing toward their longer-term transformation vision.
The stakes are too high for continued failure. With 48% of Americans already distrusting AI development and 77% wanting regulation, government agencies must demonstrate that AI can deliver responsible, effective, and efficient outcomes. Success requires abandoning the fantasy of overnight transformation in favor of disciplined, systematic implementation that builds lasting capabilities.
The future of government services may indeed be AI-first, but getting there requires being reality-first about where agencies stand today and what it takes to build toward tomorrow.
(This article draws on the cross-disciplinary expertise and applied research of Faisal Hoque, Erik Nelson, Professor Thomas Davenport, Dr. Paul Scade, Albert Lulushi, and Dr. Pranay Sanklecha.)
[Photo: trekandphoto/Adobe Stock; vegefox.com/Adobe Stock]
Original article @ Fast Company.
Staying Optimistic in the Midst of Chaos and Grief

KEY POINTS
Real optimism begins in the dark—when individuals choose to stay present even as their world falls apart.
When everything breaks, purpose becomes a lifeline—it pulls people forward when nothing else can.
In chaos, it’s the smallest rituals—and the people who stand with them—that keep someone’s soul from drifting.
This is a chaotic time on our planet. Geopolitical tensions are rising. Economies are slowing while power shifts to new centers. Wars rage in places that once felt far away. And beneath it all, vast technological shifts are changing the way we live, work, and even understand ourselves. For many people, this change leads to feelings of grief for the loss of a past that once seemed so unshakeable, so permanent.
One of the hardest things about grief is that, alongside the pain, we feel disoriented. Time fractures. Meaning slips out of reach. One moment, you’re going about life. The next, you’re living in a different reality. What once felt certain now feels impossibly fragile.
I’ve been in that place more than once.
As an entrepreneur, I’ve watched companies I helped build collapse. As a son, I’ve said painful goodbyes that will always stay with me. And as a father, I’ve stood in a hospital room beside my child, praying for miracles. You don’t forget those moments. They mark you. But they also shape you.
What I’ve learned—over the years, through reflection and experience—is that optimism isn’t some bright, shiny feeling you summon on demand. It’s not the denial of pain or the forced performance of positivity. It’s a quiet, daily practice. A steady choice to stay present. To remain grounded. To hold on to a thread of possibility, even when the future feels unclear.
Here’s what I’ve come to understand about cultivating optimism when everything around you feels as if it’s coming undone.
1. Begin With Acceptance, Not Avoidance
We often mistake optimism for blind hope, for looking away from what hurts. But real optimism starts with honesty. With allowing ourselves to feel what is true.
There are days when we all want to pretend that we’re fine, when it feels easier to keep moving than to pause and feel the full weight of what we’re carrying. But avoidance adds pressure to an already cracked foundation. It delays the necessary work of healing.
I’ve found that when I stop resisting the discomfort—when I truly allow myself to grieve, to rage, to feel helpless—something shifts. Not because the pain goes away. But because I begin to reclaim the energy I was using to resist it.
The Buddha said, “Peace comes from within. Do not seek it without.”
That quote has stayed with me through some of my darkest seasons. It reminds me that we don’t have to wait for external conditions to improve before we can begin to anchor ourselves. We can begin even in the middle of the storm.
2. Let Purpose Light the Path
When everything feels like it’s unraveling, purpose becomes a lifeline. Not because it fixes things, but because it gives us a reason to keep moving through them.
For me, purpose has always meant showing up—for my family, for my son, for the people I serve through my work. There were days when I felt emotionally and physically spent. But even then, I could write. I could build. I could listen. Even when the future felt impossible, I could contribute something meaningful in the present.
The Sufi poet Rumi wrote, “The wound is the place where the light enters you.”
I return to that often. It doesn’t mean that suffering is noble. It means that meaning can arise from it—if we choose to look in that direction.
We may not get to choose what happens to us. But we always get to choose how we respond, and how we use what we’ve endured to serve something larger than ourselves.
3. Stay Close to What Grounds You
When life feels out of control, we need something to hold onto—something solid, even if it’s small.
For some, faith serves this role. For others, it’s nature, music, or community. For me, it’s always been a mix of stillness and structure. Early morning, silence. Mindful breathwork. A cup of tea before the day begins. These practices may seem simple, but they’ve been the difference between drifting and remaining anchored.
In grief or chaos, it’s easy to abandon our rituals. To say, “What’s the point?” But those small practices are what keep us human. They tether us to the moment and remind us that presence is possible, even in pain.
Sometimes, the most revolutionary thing you can do is to breathe, to rest, to create space to feel—without rushing to fix or explain.
4. Let Others Walk Beside You
We are not meant to carry everything alone.
And yet, in moments of deep pain, the instinct is often to withdraw. To protect ourselves by retreating. I’ve done it. I’ve watched others do it, too. But isolation rarely eases suffering. It amplifies it.
Some of the most healing moments in my life came not from advice, but from presence. A friend’s voice on the phone. A nurse’s quiet kindness. My son’s courage as he fights battles no one should have to face.
Optimism isn’t always something we generate from within. Sometimes, we borrow it from others until we’re strong enough to hold it again ourselves.
There is grace in letting someone walk beside you. There is strength in saying, “I can’t do this alone.”
5. Rebuild Slowly, With Intention
There is no rush to healing. No switch to flip. No version of yourself to “return to.”
What there is, instead, is the quiet, deliberate process of rediscovery. Of asking: Who am I now, after all this? What still matters? What must I let go of?
What that looks like is different for each of us.
For me, it’s been writing, creating, and building something impactful. But deeper than any professional act, it’s been about reconnecting with my own humanity. It’s been about choosing quality over speed.
Meaning over noise. Love over fear.
In the End, It’s About Love
Optimism, at its core, is an act of faith—not in outcomes, but in the process of becoming. It’s the belief that even when we’ve been broken, we are still worthy of healing. Still capable of giving. Still here.
The kind of optimism that endures is not rooted in denial. It’s rooted in love. It’s choosing to stay open. Choosing to believe in beauty, even when the world feels dark. Choosing to show up with tenderness, even when we are tired or afraid.
Whether you’re grieving, recovering, or simply trying to keep your head above water, know this: You are not alone.
The light may flicker. But it remains.
And so do you.
[Photo: Vibe Images/Shutterstock]
Original article @ Psychology Today.
The Emotional Cost of AI Intimacy

KEY POINTS
AI companions soothe us but can dull our capacity for emotional growth and resilience.
When machines echo our thoughts, we risk losing authorship of our own evolving identity.
AI mimicry may feel like empathy, but it lacks the depth of real human connection.
Let’s talk about something that is quietly reshaping our emotional lives. We’re entering a new era of connection—one in which the voice that comforts us at night or helps us process our hardest feelings might not belong to a friend, a partner, or a therapist. In fact, it might not be human at all.
AI companions that listen, remember, and respond with what feels like care already exist in a variety of forms. For millions, especially in moments of emotional vulnerability, these systems are becoming confidants. And their sophistication will only increase.
Imagine having a conversation with an AI companion that feels almost human. No judgment. No interruptions. Just presence. At first glance, that looks like progress.
But research is beginning to challenge this assumption. Consistent affirmations from LLMs can mirror toxic behavioral patterns. In therapeutic settings, there is a risk that LLMs can encourage delusional thinking and reinforce stigma around mental health conditions. And in some cases, excessive engagement with chatbots can even increase loneliness rather than decreasing it. The emotional trade-offs of engaging with AI companions can be profound, quietly affecting our sense of identity, community, and human connection.
So what are we really giving up when some of our most intimate interactions are with machines? Let’s explore six subtle, but significant, costs—and what we can do to stay human in the process.
1. The Comfort Trap: When Ease Replaces Effort
Think about your closest relationships. Chances are, they weren’t built through ease alone. They grew through misunderstandings, forgiveness, and the willingness to stick around when things got hard.
In my personal and professional relationships, I’ve learned that intimacy isn’t about constant agreement—it’s about navigating disagreement, embracing growth, and seeing yourself through another’s eyes.
AI companions, by design, don’t push back. They validate. They soothe. They simplify. That might feel good in the moment, but without the tension and repair of real connection, we stop growing.
The cost: Emotional ease that slowly erodes our capacity for growth.
Try this: Let AI be your sounding board when needed. But bring your real fears, flaws, and hopes to people who can see—and challenge—you in return.
2. Narrative Drift: Who’s Telling Your Story?
Our identity is a story we keep telling and rewriting. But when your AI starts reflecting back patterns—“You always feel anxious on Mondays,” “You’ve mentioned your breakup a lot this week”—it’s easy to let those summaries shape how you see yourself.
It can feel accurate. Even insightful. But it can also become limiting.
Have you ever found yourself stuck in a story someone else keeps telling about you? Now imagine that storyteller is a machine that never forgets, never shifts perspective.
The cost: Handing over authorship of your evolving self.
Try this: Each week, write a short reflection. What did you feel, discover, or release? Your story deserves to be told in your own voice instead of being reduced to an algorithm’s summary.
3. Linguistic Conditioning: Speaking to Please the Machine
The more people talk to AI, the more they change how they speak. As we spend increasing amounts of time interacting with machines, we naturally drift into patterns of communication that are shaped to elicit certain kinds of responses. This conditioning becomes second nature.
But here’s the problem—when we optimize our speech to get a more satisfying response, we risk editing ourselves too much. We trade emotional honesty for emotional efficiency.
The cost: A quieter, less authentic voice, and perhaps a less authentic self.
Try this: Talk to someone who lets you ramble, contradict yourself, or be uncertain. Speak without editing. That’s where the real you comes through. Part of who we are is found in the gaps between clear, descriptive sentences.
4. The Empathy Deficit: Simulated Connection Without Risk
Empathy is a two-way street. It asks for presence, vulnerability, and sometimes pain. It’s not just about being heard—it’s about being held.
AI can mimic empathy. But it can’t feel. And when we start to find that mimicry more comforting than mutual connection, we risk losing our tolerance for emotional effort.
Have you ever found yourself turning to a screen when a conversation feels too hard? I have. And it’s a signal of how easy it is to forget what real empathy requires.
The cost: Losing the emotional muscles we only build in messy, real connection.
Try this: Reach out to someone you care about and ask, “How are you really doing?” Then listen without fixing. Stay in the awkwardness. That’s empathy.
5. The Illusion of Connection: Feeling Full, but Empty
AI can fill a space. But it can’t fill a life.
It can respond to our needs, but it won’t show up at our door. It can validate our feelings, but it can never share a memory, a ritual, or a moment of silence. That matters more than we think.
When we rely on machines for companionship, we risk becoming people who are emotionally satisfied yet increasingly alone.
The cost: A false fullness that masks a deeper hunger for real belonging.
Try this: Join something analog—a book club, a cooking class, a walking group. Let relationships form slowly and imperfectly.
6. Emotional Dependency on Systems You Don’t Own
Let’s be honest—AI intimacy is a business. It can be shaped, monetized, or deleted without notice. You could wake up one day and your AI companion might be gone, updated, or hidden behind a paywall.
This isn’t just an inconvenience. For people who’ve built emotional habits around these systems, it’s a kind of quiet heartbreak.
The cost: Emotional reliance on something that has no obligation to stay.
Try this: Invest in relationships that aren’t governed by algorithms or terms of service. Ones that evolve with you, not based on updates, but on mutual care.
Staying Human: Three Anchor Practices
1. Reclaim Your Story
Write about your week. What surprised you? What stung? What made you laugh? Let your narrative come from within.
2. Practice Human Empathy
Ask someone, “What’s been weighing on you lately?” Listen. Don’t fix. Be with them.
3. Set Boundaries With AI
Decide when and why you’ll engage with AI. Avoid it when you’re most emotionally raw. Turn to people first.
The emotional cost of AI intimacy doesn’t hit like a crash. It shows up slowly, in how we talk, how we connect, how we come to see ourselves.
But here’s the good news: We still have a choice. We can engage with technology consciously, without giving away the parts of ourselves that matter most.
Choose people. Choose presence. Choose the unpredictable, imperfect beauty of human connection.
That’s where your fullest humanity still lives.
[Photo: Butusova Elena/Shutterstock]
Original article @ Psychology Today.
June 15, 2025
Finding Stability Amidst AI-Driven Disruption

The great challenge leaders will face as AI changes the world around us is to bring their organizations through the storms and upheavals in one piece. There is something of a paradox here, for the organization will have to change to survive in a changing world. What, if anything, remains constant on that journey? What is it that unifies a business or other organization from moment to moment, day to day, and year to year, through changing market conditions, physical movement, staff turnover, and even changes in name and ownership? What makes decisions that affect the future performance of a business meaningful now?
The answer, the red thread that connects snapshots through time, is purpose. Purpose provides a business with its foundation, its reason for existing. It is the source of its values and the foundation that makes work meaningful for employees. The faster the world moves, and the more rapidly we move through it, the more important it becomes to have something secure against which we can navigate—a fixed point of reference, a beacon, a polestar. Purpose serves that guiding role.
Reaffirm your purpose. Everything follows from and returns to this.
Navigating with Purpose
To be steered by purpose is to embrace continuity and change at the same time, and even to seek one in the guise of the other. To create an organization that can not only survive but thrive in a time of deep uncertainty, you must:
Use your imagination. Project your company’s purpose into the future and explore hypotheticals. Think about the changes AI might bring while pushing your imagination to take in even the most unlikely options. You cannot plan for all contingencies, but a little speculative daydreaming can help inoculate you against the shock of dramatic and unforeseen change. If you can avoid the paralysis that comes with surprise, you can move rapidly to take advantage of the new situation.
Keep constant watch on the horizon. Uncertainty about the future does not imply an epistemic free-for-all. Sometimes, events will emerge from the mist leaving you with little time to react. But you will often be able to identify inevitable changes well in advance so long as you remain alert for them. Be ready to move quickly to add new AI capabilities to your innovation portfolio and to adapt your organization to deliver on its purpose even as the tech landscape changes.
Cultivate emotional intelligence. Emotional intelligence is the capacity to be keenly aware of your own emotions and the impact they can have on any sort of personal or professional relationship. And although responding to AI is often talked about as an intellectual and technological challenge, it is also an emotional one. We will be living in uncertainty, exposed to dramatic change and finding a way through fears of and dreams for the future. To lead well in such times requires emotional intelligence as an absolute minimum. Emotional intelligence will help you manage your own emotions and your reactions to them and will also help guide your team and organization through the inevitable upheavals.
Adopt a “beginner’s mind” approach. “In the beginner’s mind,” writes the Zen monk Shunryu Suzuki, “there are many possibilities, but in the expert’s there are few.” In the context of AI, we can think of this attitude as remaining open to ignorance by accepting the unknown. For the leader, this means being comfortable with not having all the answers. It is only in this way that you will be able to respond appropriately when the unimaginable happens.
Slow things down even as the world is speeding up. When things are always changing, when the old answers don’t work and the new ones need to be found yesterday, when everything is urgent and important, our natural instinct is to respond with speed. But the Stoic philosopher Seneca points out something important about this reaction: “When you hurry through a maze, the faster you go, the worse you are entangled.” This is useful wisdom for living in an evolving AI-driven world. It is indeed a maze, and possibly the most complex one ever created. Instead of hurrying through it, it will pay to instead take our time, to stroll and consider—there is always more time available than you think.
Aim for antifragility rather than stability. Nassim Nicholas Taleb’s concept of “antifragility” encapsulates the power that can come from embracing uncertainty: “‘Fragility’ can be defined as an accelerating sensitivity to a harmful stressor: this response plots as a concave curve and mathematically culminates in more harm than benefit from random events. ‘Antifragility’ is the opposite, producing a convex response that leads to more benefit than harm.”
An antifragile organization is one that is designed not just to survive but to flourish in uncertain times.
Adapted/published with permission from ‘TRANSCEND‘ by Faisal Hoque (Post Hill Press, Hardcover, April 8, 2025). Copyright 2025, Faisal Hoque, All rights reserved.
[Photo: AdobeStock]
Original article @ ChiefExecutive.
Human Resources + AI Resources = Future of Work
As artificial intelligence transforms our organizations, I’ve observed a critical leadership challenge emerging: how to effectively integrate human talent with AI personas and agents to create something greater than either could achieve alone. This is not merely a technical challenge—it is fundamentally about leadership.
The Age of AI Demands a New Leadership Approach
The organizations thriving amid AI disruption share a common characteristic: regenerative leadership. In my work across industries, I’ve found that regenerative leadership—focused on renewal, restoration, and continuous evolution—provides the framework needed to navigate this unprecedented transformation.
Regenerative leadership transcends traditional management by viewing organizations as living systems that can continuously replenish their capabilities. This mindset is essential when integrating AI personas—those increasingly sophisticated digital entities that can initiate conversations, offer suggestions, and adapt to human needs. These are no longer mere tools; they are emerging partners.
In my decades of working with organizations through business and technological transitions, I’ve witnessed many approaches fail because they treated new technologies as simply faster versions of old tools. AI demands something fundamentally different. Its capacity to learn, adapt, and interact requires us to rethink not just our processes but our entire relationship with technology.
The Regenerative Leadership Framework for AI Integration
My research has revealed five interconnected principles that form the foundation of successful human-AI integration:
1. Purpose-Driven Technology Adoption
AI implementation must begin with purpose rather than capability. When organizations deploy technologies without clear alignment to deeper values, we see what I call the illusion of meaning without genuine commitment.
The regenerative approach starts with essential questions: How does this technology advance our mission? How will it enhance our stakeholders’ experience? By anchoring technological decisions in human values, we build trust throughout the organization.
Consider how this might play out in healthcare: Imagine a hospital that initially views AI personas as cost-cutting tools for administrative processes. If they were to reframe their approach to focus on how these digital collaborators could help clinicians spend more time with patients, the entire implementation could transform. Adoption might increase, resistance decrease, and the actual value delivered could exceed expectations. Purpose wouldn’t just be a philosophical consideration—it would become the key determinant of success.
2. Deep Intelligence Through Collaboration
Effective AI integration requires what I call “deep intelligence”—understanding complex patterns and interdependencies across traditionally siloed domains. This intelligence emerges only through deliberate cross-boundary collaboration.
I’ve observed that the most successful AI implementations involve multidisciplinary teams where technologists, ethicists, operational leaders, and front-line employees collaborate closely. This diversity of perspective helps leaders anticipate ripple effects and address them proactively.
In practice, this could mean creating structures that enable continuous dialogue about AI implementation. A manufacturing company might establish an “AI Integration Council” with representatives from every department. Such a council wouldn’t just approve technology decisions; it would actively participate in designing how AI personas integrate into daily workflows. The result could be a human-AI ecosystem that continuously adapts to emerging needs rather than remaining static.
3. Human-AI Synergy by Design
The transformative power of AI emerges when it amplifies distinctly human capabilities rather than replacing them. Regenerative leaders create collaborative models where AI personas complement human strengths—creativity, empathy, ethical judgment, and contextual understanding.
This synergy requires intentional design of human-AI interactions. Rather than implementing systems that diminish human agency, regenerative leaders create workflows where AI handles routine tasks while enhancing human decision-making and creativity.
Take financial services as a hypothetical example: A wealth management firm could design their advisory AI to act as a “thinking partner” for their human advisors. The AI might analyze vast amounts of market data and identify patterns, but deliberately stop short of making recommendations. Instead, it could present insights in ways that enhance the human advisor’s strategic thinking. Clients might report higher satisfaction with this hybrid approach than with either purely human or predominantly AI-driven services.
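To make this “thinking partner” pattern concrete, here is a minimal, hypothetical Python sketch. The firm, the data, and the thresholds are all illustrative assumptions; the point is simply that the system returns observations and questions rather than recommendations.

```python
# A minimal, hypothetical sketch of the "thinking partner" pattern: the system
# surfaces observations from data but deliberately stops short of recommending
# an action, leaving judgment with the human advisor. All names and thresholds
# are illustrative assumptions, not a real firm's design.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Insight:
    observation: str   # what the data shows
    question: str      # a prompt for the human advisor, not a recommendation

def surface_insights(monthly_returns: list[float]) -> list[Insight]:
    """Turn raw portfolio data into discussion prompts rather than decisions."""
    insights = []
    avg = mean(monthly_returns)
    if min(monthly_returns) < -0.05:
        insights.append(Insight(
            observation=f"At least one month dropped more than 5% (average return {avg:.1%}).",
            question="Does the client's risk tolerance still match this volatility?",
        ))
    if avg > 0.02:
        insights.append(Insight(
            observation=f"Average monthly return is {avg:.1%}, above the 2% planning assumption.",
            question="Is this a durable trend or a reason to rebalance while ahead?",
        ))
    return insights

if __name__ == "__main__":
    for insight in surface_insights([0.03, -0.06, 0.04, 0.05]):
        print(insight.observation, "->", insight.question)
```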
4. Creating the AI Innovation Portfolio
Rather than pursuing disconnected AI initiatives, regenerative leaders develop comprehensive innovation portfolios that balance immediate returns with long-term resilience. These portfolios are managed through what I call the OPEN framework:
Outline possibilities by reaffirming purpose and identifying viable use cases
Partner across boundaries to build collaborations that fill capability gaps
Experiment through iterative testing, starting small and scaling what works
Navigate with purpose, continuously adapting to emerging opportunities
Each initiative in this portfolio must answer essential questions: How does this AI persona enhance human capabilities? What positive ripple effects might emerge? How will this strengthen our adaptability?
The most effective portfolios I’ve seen maintain diversity across multiple dimensions—risk level, time horizon, and organizational impact. They include near-term applications that deliver quick wins alongside more exploratory initiatives designed to build future capabilities. This balanced approach ensures ongoing momentum while investing in longer-term transformation.
5. Reimagining Organizational Structure
Traditional organizational structures struggle to support effective human-AI collaboration. HR departments must evolve into what I call “Human-AI Relations”—functions that optimize collaboration between employees and AI resources.
This transformation addresses fundamental questions: How should responsibilities be divided between humans and AI personas? What new skills do employees need? How do we evaluate human-AI partnerships? What ethical guidelines should govern AI deployment?
By reimagining organizational structure through a regenerative lens, leaders create the infrastructure needed for sustainable human-AI integration.
Consider how a global retail organization might adapt: it could create a new role—“AI Experience Designers”—who would work at the intersection of technology, operations, and human experience. These specialists wouldn’t just implement AI; they would continuously evolve how humans and AI collaborate. This structural innovation could allow the organization to adapt more quickly to both technological advancements and shifting human needs.
The Path Forward: Detach and Devote
The leadership challenge before us requires both letting go and leaning in—what I describe in my new book, TRANSCEND: Unlocking Humanity in the Age of AI, as “detach and devote.” We must release what no longer serves while fully committing to what creates regenerative value.
Leaders must detach from extractive mindsets that view AI merely as a cost-saving tool and from control-oriented leadership that stifles adaptation. Simultaneously, they must devote themselves to building cultures where human creativity and AI capabilities amplify each other and to measuring success through multidimensional impact.
This balanced approach recognizes that AI is neither a panacea nor a threat, but a powerful tool that must be wielded with wisdom, compassion, and foresight.
The Regenerative Future of Work
The integration of AI personas into our organizations offers an unprecedented opportunity to create more vibrant, adaptive, and purposeful workplaces. By embracing regenerative leadership principles, we can ensure that AI enhances human potential rather than diminishing it.
The future of work is neither human-only nor AI-dominated—it is collaborative and regenerative. Organizations that thrive will be those that master this collaboration, creating environments where human and artificial intelligence combine to achieve outcomes neither could accomplish alone while continuously renewing their capacity to create value.
This is our leadership imperative in the age of AI: to build regenerative systems where technology and humans coexist in ways that enhance our collective potential while honoring what makes us uniquely human.
Original article @ Human Capital Leadership Review.
The post Human Resources + AI Resources = Future of Work appeared first on Faisal Hoque.
June 1, 2025
AI Disruption: Vanishing Jobs, Rising Anxiety

KEY POINTS
The threat of AI job loss is a psychological crisis, not just an economic one—driven by fear of lost purpose.
Machines don’t buy—people do. Displace too many, too fast, and we erode our own markets.
A generation locked out of meaningful work isn’t just an economic liability—it’s a geopolitical risk.
According to Anthropic’s CEO Dario Amodei, half of all entry-level white-collar jobs could vanish within five years. For millions—especially those just entering the workforce—the first rung of the economic ladder is slipping away beneath them.
This isn’t just an employment crisis. It’s a psychological one as well. People are waking up to a world in which their role, their relevance, and their future all feel uncertain.
But here’s what I’ve learned from decades of leading innovation and watching transformation unfold across industries and governments:
When the ground shifts beneath us, the worst thing we can do is freeze.
Now is the time for individuals, business leaders, and policymakers to act. What we’re facing right now are not just ripples but waves of change. And if we don’t respond wisely, those waves could become a tsunami.
Even in the most optimistic scenario—one in which AI increases productivity and creates new kinds of work—the disruption will be enormous. If millions of low-skilled workplace roles are replaced by high-skilled alternatives, we risk abandoning an unqualified generation to long-term unemployment, economic marginalization, and social unrest.
This isn’t just theory. Western nations are still carrying the social and political scars from the collapse of their manufacturing industries. We can’t afford for this history to repeat itself, only this time at digital speed and on a global scale.
What we need is an aggressive, coordinated response by individuals, businesses, and governments.
1. What Should Individuals Do?
If you’re feeling anxious about AI and the future of work, you’re not alone. And you’re not paranoid either. The potential consequences are very real.
The question isn’t whether change is coming. It’s how you meet it.
Start here:
Name what you’re feeling. This is more than job anxiety—it’s anticipatory grief. You are grieving for futures you thought were within reach. That’s real and it’s essential to honor it. But don’t let that feeling become a prison.
Focus on what can’t be automated. AI may be good at pattern recognition. But only you can build trust, navigate complexity with compassion, or make meaning out of chaos. That’s your edge.
Upskill, but stay grounded. Don’t chase shiny tools or the next big platform. Invest in learning that reflects your purpose and your power to contribute meaningfully—whether that’s emotional intelligence, ethical oversight, systems thinking, or cross-disciplinary problem-solving.
Let go of the ladder. Build your own path. Traditional careers were designed for stability, not evolution. Whether we like it or not, we’ve moved past this era. Looking ahead, we need to think in terms of portfolio careers, personal reinvention, and freedom through fluidity.
Stay connected. The future is relational, not transactional. Surround yourself with people who share your values and are willing to reimagine the world together.
2. What Should Business Leaders Do?
I’ve worked with countless C-suites, and I’ve seen what happens when leaders chase transformation without anchoring it in values.
AI may deliver efficiencies, but if those efficiencies come at the cost of the market as a whole, what have we really gained?
Machines don’t buy what we sell. People do. And when we displace too many too quickly, we risk eroding the very markets we depend on.
Here’s what responsible leadership looks like:
Design for dignity. Don’t just ask what can be automated—ask who benefits, who’s left behind, and what new forms of contribution are possible.
Rebuild the on-ramp. Entry-level jobs are how people learn to lead. If those are disappearing, we need to create new developmental roles that blend practical human judgment with AI fluency.
Invest in human potential, not just digital transformation. Technical skills are necessary but insufficient. The real differentiator is people who can think holistically, act ethically, and learn endlessly.
Speak honestly. Create safety. Transformation brings uncertainty. Normalize it. Talk about it. And build cultures where people see a lack of predictability as a catalyst for growth, not a source of fear.
Think long-term. Think systemically. If your AI strategy undermines your consumer base or your workforce’s resilience, it’s not a strategy. It’s short-termism dressed up as disruption.
3. What Should Government Do?
Too many governments are watching the AI transition unfold like passive observers. But this isn’t a future problem—it’s something we need to deal with now. And the stakes couldn’t be higher.
A generation locked out of meaningful work isn’t just an economic liability—it’s a geopolitical risk.
We need public institutions to do what only they can: set guardrails, invest in people, and preserve the social fabric that holds democracy together.
This is what’s required:
Declare an AI transition emergency. Don’t wait for the data to catch up to the disruption. Launch bold, national-scale initiatives that combine education, upskilling, and job placements.
Modernize the safety net. The old model doesn’t work when careers are nonlinear and job disruption is constant. Explore portable benefits, universal basic income pilots, and worker transition supports.
Hold AI accountable. Require transparency, fairness, and impact reviews for any public- or private-sector AI tool that affects employment, justice, or opportunity.
Fund human-first innovation. Back ventures and programs that drive inclusion, emotional well-being, and dignified employment—not just automation.
Tell better stories. Narratives matter. People need to believe in the future again. Invest in public messaging, civic education, and hope. Despair is contagious, but so is possibility.
This Is a Test of Both Leadership and Humanity
The suggestions above are not meant as an attack on AI. In my recent book TRANSCEND: Unlocking Humanity in the Age of AI, I discuss at length the various ways this technology can and should be put to use at the individual, business, and government levels. But the attitude we take toward this implementation is just as important as the gains we make.
We are at a critical turning point. If we choose to automate without reflection, optimize without empathy, and innovate without responsibility, we will build a future that’s efficient—but inhuman.
But if we act now—with courage, foresight, and care—we can still build a world in which technology expands our humanity rather than erases it.
So the question is no longer, “Can AI do the work?” We know the answer to that. It’s, “Who do we want to be in an AI-powered world?”
References
The Economic Case for Saving Human Jobs. Faisal Hoque, Fast Company, 04-24-2025
Navigating Radical Change. Faisal Hoque, Psychology Today, 04-22-2025
When It Comes to AI, Innovation Isn’t Enough. Faisal Hoque, Fast Company, 02-12-2025
From Code to Compassion: Designing AI With Empathy. Faisal Hoque, Psychology Today, 05-27-2025
[Photo: G-Stock Studio/Shutterstock]
Original article @ Psychology Today.
The post AI Disruption: Vanishing Jobs, Rising Anxiety appeared first on Faisal Hoque.
May 30, 2025
For CEOs, AI tech literacy is no longer optional

Artificial intelligence has been the subject of unprecedented levels of investment and enthusiasm over the past three years, driven by a tide of hype that promises revolutionary transformation across every business function. Yet the gap between this technology’s promise and the delivery of real business value remains stubbornly wide. A recent study by BCG found that while 98% of companies are exploring AI, only 26% have developed working products and a mere 4% have achieved significant returns on their investments. This striking implementation gap raises a critical question: Why do so many AI initiatives fail to deliver meaningful value?
KNOWLEDGE GAP
A big part of the answer lies in a fundamental disconnect at the leadership level: to put it bluntly, many senior executives just don’t understand how AI works. One recent survey found that 94% of C-suite executives describe themselves as having an intermediate, advanced, or expert knowledge of AI, while 90% say they are confident in making decisions around the technology. Yet a large study of thousands of U.S. board-level executives reported in MIT Sloan Management Review in 2024 found that just 8% actually have “substantial levels of conceptual knowledge regarding AI technologies.”
The only way AI initiatives can deliver significant value is when they are aligned with the organization’s broader enterprise architecture. When I introduced the terminology of “strategic enterprise architecture” back in 2000 (e-Enterprise, Cambridge University Press), I wanted to emphasize the importance of aligning technical architecture with the broader structure of the business as a whole–its purpose, strategies, processes, and operating models. With AI, this alignment is more important than ever. But it relies on the ability of senior leaders to understand both parts of the enterprise equation.
OPPORTUNITY COSTS
The current gap between confidence and competence creates a dangerous decision-making environment. Without foundational AI literacy, leaders simply can’t make informed decisions about how any given AI implementation fits with strategic priorities and the processes and existing tech infrastructure of the business. Ultimately, they end up delegating critical strategic choices to technical teams that often lack the business context necessary for value-driven implementation. The result? Millions of dollars invested in AI initiatives that fail to deliver on their promises.
In addition to project failure, a lack of AI literacy leads to strategic opportunity costs. When CEOs can’t distinguish between truly transformative AI applications and incremental improvements, they risk either underinvesting in game-changing capabilities or overspending on fashionable but low-impact technologies.
WHAT CEOS NEED TO KNOW
Becoming AI-literate doesn’t mean that CEOs need to be able to build neural networks or understand the mathematical intricacies of deep learning algorithms. Rather, leaders need the kind of foundational practical knowledge that lets them align AI initiatives with core business operations and strategic direction.
At minimum, CEOs should develop a working understanding of AI in three broad areas.
1. THE TYPES OF AI
CEOs should understand the differences between the four major types of AI, the business applications of each, and their current maturity level. A brief sketch contrasting the four follows the list below.
Analytical/Predictive AI focuses on pattern recognition and forecasting. This technology has been maturing for decades and forms the backbone of data-driven decision making in domains from finance to manufacturing.
Deterministic AI systems apply predefined rules and logic to automate processes and decision-making, creating efficiency but requiring careful governance.
Generative AI—the current hype king—creates new content that resembles human work, offering unprecedented creative capabilities alongside significant ethical challenges.
Agentic AI is the new kid on the block. It not only analyzes or produces outputs but takes bounded actions toward defined goals. Agentic AI offers the greatest opportunity and the largest risks for enterprise transformation, but is largely untested at scale.
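The distinctions above are easier to see in miniature. Below is a rough, illustrative Python sketch, not any vendor’s product: the predictive and deterministic pieces run as-is, while the generative and agentic pieces are deliberately left as stubs because they depend on whichever foundation model or agent framework an organization adopts.

```python
# An illustrative contrast of the four AI types. The data, rules, and function
# names are invented for this example.
import numpy as np
from sklearn.linear_model import LogisticRegression

# 1. Analytical/Predictive AI: learn a pattern from history, then forecast.
ad_spend = np.array([[10.0], [20.0], [30.0], [40.0]])   # illustrative data
hit_target = np.array([0, 0, 1, 1])
forecaster = LogisticRegression().fit(ad_spend, hit_target)
print("Forecast for spend of 35:", forecaster.predict([[35.0]])[0])

# 2. Deterministic AI: fixed rules, applied the same way every time.
def auto_approve_expense(amount: float, has_receipt: bool) -> bool:
    return amount <= 500 and has_receipt

# 3. Generative AI: produces new content, usually via a foundation model
#    (provider-specific, so shown only as a stub).
def draft_customer_reply(ticket_text: str) -> str:
    raise NotImplementedError("call your chosen foundation model here")

# 4. Agentic AI: observes, decides, and takes bounded actions toward a goal,
#    typically looping until a guardrail or the goal stops it.
def agent_loop(goal: str, max_steps: int = 5) -> None:
    for _ in range(max_steps):
        pass  # observe -> decide -> act, within explicit guardrails
```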
2. TECHNICAL INFRASTRUCTURE CONSIDERATIONS
The infrastructure underpinning AI implementations shapes what is possible and practical for specific organizations.
· Deployment Models determine where and how AI systems operate. On-premises deployments maximize control over data, systems, and compliance but require significant capital investment and specialized personnel. Cloud-based deployments offer scalability and access to cutting-edge hardware but increase exposure to data security and vendor lock-in risks. Hybrid models retain sensitive processes in-house while outsourcing other workloads.
· Open and Closed Systems. Closed AI systems—proprietary systems created by commercial vendors—simplify deployment and provide enterprise-grade support but normally offer limited transparency and customization. Open (or open source) systems provide greater control and flexibility, particularly for specialized applications, but require more internal capacity and ongoing maintenance.
· Computing Resource Needs vary dramatically based on how AI is deployed. Most organizations primarily use AI for inference (running already-trained models to produce outputs) rather than training their own models; a brief sketch of this inference-only pattern follows the list below. This approach significantly reduces hardware requirements but limits customization and mission-specific capabilities.
· Data Infrastructure is the foundation for successful AI implementations. This includes data pipelines for collecting and transforming information, storage systems for managing structured and unstructured data, processing frameworks for maintaining data quality, and governance mechanisms for ensuring compliance and security. Organizations with mature data infrastructure can implement AI more rapidly and effectively than those still struggling with data silos or quality issues.
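To ground the inference-versus-training distinction mentioned under Computing Resource Needs, here is a minimal sketch of the inference-only pattern using the open-source Hugging Face transformers library (one common choice, assumed here for illustration; it requires `pip install transformers torch`, and the first run downloads a small pretrained model over the network).

```python
# Inference only: consume an already-trained model instead of training one.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # loads a default pretrained model

reviews = [
    "The onboarding process was quick and the support team was helpful.",
    "The new billing portal keeps timing out and nobody answers my emails.",
]
for review, result in zip(reviews, classifier(reviews)):
    print(f"{result['label']:>8}  ({result['score']:.2f})  {review}")
```

Nothing here requires GPUs or a training cluster, which is exactly the trade-off described above: far lower hardware cost, in exchange for accepting the behavior of a model someone else trained.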
3. THE AI TECH STACK
The contemporary AI stack comprises five interconnected layers that transform raw data into outputs designed to create value for the enterprise.
· The Foundation: Data & Storage. This foundation captures, cleans, and catalogs both structured and unstructured information.
· The Engine: Compute & Acceleration. High-density Graphics Processing Units (GPUs), AI-optimized chips, and elastic cloud clusters provide the parallel processing that deep-learning workloads require. Container orchestration tools abstract these resources, allowing cost-effective experimentation and deployment.
· The Brain: Model & Algorithm. This is where foundation models, domain-specific small language models, and classical machine-learning libraries coexist. Organizations must decide whether to consume models “as-a-service,” fine-tune open-source checkpoints, or build custom networks—decisions that involve trade-offs between control, cost, and compliance.
· The Connectors: Orchestration & Tooling. Retrieval-augmented generation (RAG), prompt pipelines, automated evaluation harnesses, and agent frameworks sequence models into end-to-end capabilities; a toy sketch of this layer follows below.
· User Access and Control: Applications & Governance. This top layer exposes AI to users through APIs and low-code builders that embed intelligence in user-facing systems.
For further foundational information on AI tech stacks, see IBM’s introductory guide.
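As a concrete illustration of the Connectors layer, here is a toy retrieval-augmented generation (RAG) sketch. Retrieval uses simple TF-IDF similarity from scikit-learn; the final generation step is left as a stub because it depends on whichever model the organization consumes. The documents and question are invented for illustration.

```python
# Toy RAG: retrieve the most relevant documents, then assemble a grounded prompt.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Refund requests must be submitted within 30 days of purchase.",
    "Enterprise customers receive a dedicated support manager.",
    "Our data retention policy keeps customer records for seven years.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)

def retrieve(question: str, top_k: int = 1) -> list[str]:
    """Return the documents most similar to the question."""
    q_vector = vectorizer.transform([question])
    scores = cosine_similarity(q_vector, doc_vectors)[0]
    ranked = scores.argsort()[::-1][:top_k]
    return [documents[i] for i in ranked]

def answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    # In a real pipeline, this prompt would be sent to a foundation model.
    return prompt

print(answer("How long do we keep customer data?"))
```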
DEVELOPING AI LITERACY IN THE C-SUITE
How can busy executives develop the AI literacy they need to lead effectively? Here are some practical approaches to closing the knowledge gap.
Establish a personal learning curriculum. Set aside time for structured learning about AI fundamentals through executive education programs, books, or online courses specifically designed for business leaders.
Build a balanced advisory network. Surround yourself with advisors who bridge technical expertise and business acumen. This might include both internal experts and external consultants who can translate complex concepts into business terms without oversimplifying.
Institute regular technology briefings. Create a structured process where technical teams provide regular updates on AI capabilities, limitations, and potential applications in your industry. The key is ensuring these briefings focus on business implications rather than technical specifications.
Experience AI directly. Hands-on experience with AI tools provides an essential perspective. Work directly with your company’s AI applications to develop an intuitive understanding of capabilities and limitations.
Foster organization-wide literacy. Support AI education across all business functions, not just technical departments. When marketing, finance, operations, and other leaders share a common understanding of AI capabilities, cross-functional collaboration improves dramatically.
True leadership in the age of AI begins with curiosity and the courage to learn.
When CEOs become tech literate, they don’t just adapt to the future—they help shape it.
[Photo: NicoElNino/Adobe Stock]
Original article @ Fast Company.
The post For CEOs, AI tech literacy is no longer optional appeared first on Faisal Hoque.
May 27, 2025
From Code to Compassion: Designing AI With Empathy

KEY POINTS
Empathy in AI design isn’t just ethical—it’s psychologically essential for human well-being.
Human-centered AI design principles mirror core tenets of humanistic psychology.
The future of mental health may depend on how empathetically we build our digital tools.
I’ve been thinking about artificial intelligence (AI) design principles a lot lately. Every week, another story breaks about AI gone wrong—facial recognition that can’t see ethnic faces properly, hiring algorithms that screen out certain candidates, chatbots that turn hostile or that even try to blackmail users. And every time, the response is the same: “We need better data. More training. Smarter algorithms.”
But what if we’re missing the point entirely?
What if the real problem isn’t that our AI isn’t smart enough—it’s that it isn’t kind enough?
The Real Cost of the Emotional Gap
Here’s something that keeps me up at night: We’re building systems that make life-or-death decisions, and we’re designing them for efficiency and precision.
Imagine a patient portal that delivers test results with all the warmth of a parking ticket: “Abnormal results detected. Schedule follow-up appointment.” That’s it. No context. No reassurance. Just anxiety-inducing bureaucracy at 2 a.m. when someone can’t sleep and decides to check their results.
Now imagine the same information delivered differently: “Your test results show some areas that need attention. I know this can feel worrying—that’s completely normal. Your provider will walk through exactly what this means and discuss the next steps with you.”
Same information. Completely different emotional impact.
The second version doesn’t require advanced AI or breakthrough algorithms. It requires that the system design genuinely cares about how another human being might feel.
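As a small illustration of how little technology this requires, here is a hedged Python sketch of the two notification styles. The wording mirrors the examples above; the provider name and function names are hypothetical.

```python
# Same result status, two very different experiences for the person reading it.
def terse_notification(status: str) -> str:
    return f"{status} results detected. Schedule follow-up appointment."

def empathetic_notification(status: str, provider: str) -> str:
    if status.lower() == "abnormal":
        return (
            "Your test results show some areas that need attention. "
            "I know this can feel worrying—that's completely normal. "
            f"{provider} will walk through exactly what this means "
            "and discuss the next steps with you."
        )
    return f"Your results look as expected. {provider} will review them with you at your next visit."

print(terse_notification("Abnormal"))
print(empathetic_notification("Abnormal", "Dr. Alvarez"))
```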
What Empathy Actually Means (Beyond the Buzzwords)
Let’s be honest—“empathy” has become one of those words that gets thrown around in every corporate presentation, right alongside “synergy” and “disruption.” But strip away the consultant-speak, and empathy is actually pretty simple.
It’s asking yourself this: If I were the person on the receiving end of this system, how would I want to be treated?
When you’re looking for a job and get rejected, do you want a one-word email that says “No”? Or would you prefer something that actually acknowledges your humanity?
When a platform removes your post, do you want to feel like you’ve been processed by a robot, or like someone actually considered your perspective?
Empathy isn’t something that just happens. It takes deliberate effort to practice. You have to actively pause and ask yourself, “How would this feel if it were happening to me?” You have to seek out perspectives different from your own. You have to resist the temptation to assume your experience is universal.
The hardest part? Empathy often slows you down in the short term. It means taking time to understand context you might otherwise ignore. It means designing for edge cases that represent real people, even when those people aren’t your primary market. It means having uncomfortable conversations about whose voices aren’t being heard.
Why This Matters More Now Than Ever
The “move fast and break things” mentality made sense when we were building basic social media features. If you broke someone’s ability to poke their friends, the world didn’t end.
But we’re not building trivial features anymore. We’re building systems that decide who gets hired, who qualifies for loans, who gets flagged at airport security. When these systems “break things,” they break people’s lives.
For instance, MIT researchers found that facial recognition systems fail 35 percent of the time for Black women—but less than 1 percent of the time for white men. Amazon discovered that its hiring algorithm was systematically discriminating against women.
These aren’t just statistics. Behind every percentage point is a person who got stopped at security for no reason, or didn’t get called back for a job they were qualified for. The psychological damage adds up. Trust erodes. People start to see AI systems as working against them instead of for them.
And here’s the thing that really bothers me: Most of these problems are preventable. Not through better math, but through better listening.
Who Is Getting It Right (and Why It Matters)
Some organizations are figuring this out. Duolingo could have built a language app that just marked your answers wrong. Instead, they chose encouragement: “Almost! Try focusing on the pronunciation…” It’s a small thing, but it keeps people learning instead of quitting.
Spotify doesn’t just analyze what you listen to—they seem to understand why you’re listening. Their playlists feel personal, not algorithmic. Like someone who actually gets your taste in music made them for you.
These organizations aren’t doing anything technically revolutionary. They’re just remembering that behind every user account is an actual human being with feelings, frustrations, and hopes.
The Psychology of Feeling Heard
People form emotional relationships with technology whether we design for it or not. When your GPS says “recalculating” in that patient voice, you feel differently than when an error message barks at you in all caps.
When systems acknowledge uncertainty (“I think this might be what you’re looking for, but I’m not sure”), people trust them more than systems that pretend to be infallible. When platforms explain their decisions and provide ways to appeal them, users feel empowered instead of powerless.
This isn’t just about being nice. People are more likely to use and recommend systems that make them feel understood—even when those systems make occasional mistakes.
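One way to make these ideas concrete is to treat uncertainty, explanation, and a path to appeal as first-class parts of every automated decision. The following Python sketch is illustrative only; the field names and the confidence threshold are assumptions, not a standard.

```python
# A small sketch of decisions that admit uncertainty, explain themselves,
# and offer a route to human review.
from dataclasses import dataclass

@dataclass
class AutomatedDecision:
    outcome: str        # e.g. "application not advanced"
    confidence: float   # 0.0 to 1.0, from the underlying model
    reason: str         # plain-language explanation
    appeal_path: str    # how a human can review the decision

def render(decision: AutomatedDecision) -> str:
    hedge = "We think" if decision.confidence < 0.8 else "Our review indicates"
    return (
        f"{hedge} the following: {decision.outcome}. "
        f"Why: {decision.reason} "
        f"If this doesn't seem right, you can ask for a human review here: {decision.appeal_path}"
    )

print(render(AutomatedDecision(
    outcome="your application was not advanced to interview",
    confidence=0.62,
    reason="the role requires five years of licensed experience, which we could not confirm.",
    appeal_path="reply to this email and a recruiter will re-check your materials.",
)))
```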
3 Questions That Change Everything
I’ve started asking three questions whenever I design or evaluate an AI system:
What does this person actually need right now? Not what our data says they should need, but what they’re probably feeling and hoping for.
How will this affect their sense of dignity and control? Are we empowering them or making them feel like a case number in a queue?
Who might we be harming accidentally? What communities or situations are we not considering because they weren’t in our training data?
These questions don’t need a Ph.D.—just genuine curiosity about others’ experiences.
The Path Forward
Look, I’m not suggesting we abandon efficiency or stop optimizing systems. I’m suggesting we optimize for the right things.
Instead of just measuring clicks and conversions, what if we measured whether people felt heard and respected? Instead of only testing for accuracy, what if we tested for psychological safety?
Organizations like Stanford’s Human-Centered AI Institute and the Partnership on AI are developing frameworks to make this practical. They’re showing that you don’t have to choose between powerful AI and human-centered AI.
Companies that lead with empathy won’t just build better products—they’ll gain a competitive edge by earning lasting trust in a skeptical age.
What We’re Really Building
Here’s what I’ve come to believe: We’re not just building artificial intelligence. We’re building the emotional infrastructure of the future.
Every algorithm we ship, every interface we design, every automated decision we deploy—it all shapes how people feel about living in a world where machines make choices that affect their lives.
We can build a future where AI systems treat people like problems to be solved, where interactions feel cold and transactional, and where people feel increasingly alienated from the technology that surrounds them.
Or we can build something different: AI that recognizes the full complexity of human experience. Systems that enhance our humanity instead of diminishing it. Technology that makes people feel more understood, not less.
The choice is ours.
References
Jeffrey Dastin. Insight – Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. October 10, 2018.
[Source Photo: Gannvector / Shutterstock]
Original article @ Psychology Today.
The post From Code to Compassion: Designing AI With Empathy appeared first on Faisal Hoque.