Faisal Hoque's Blog, page 3
September 11, 2025
Finding Inspiration in Hard Times
What hurts can also heal—if we dare to face it fully.

KEY POINTS
Struggling makes sense in a world on fire; your pain reflects reality, not weakness.
Meet fear and grief with curiosity, not commentary. Feel instead of pushing away.
Meaning doesn’t erase suffering but situates it in a larger story worth living for.
The news is relentless at the moment.
In just the last few days, there have been headlines about price rises and job losses, about Israel bombing Qatar, the French government crumbling, Russian drones in Poland, and a new political assassination in the U.S.
Technology, which was supposed to make our lives easier, seems instead to be magnifying our problems. Serious people warn of the risk of AI taking over the world and making human beings extinct—and still we develop it unchecked. Social media is alive with howls of rage and polarization. Our phones buzz and ping without reprieve, external manifestations of the chaos inside.
We are exhausted and we are frightened. And we face our troubles alone, more alone than we have ever been before, as community fractures and loneliness rises.
When Struggling Makes Sense

There is a taboo in our culture against struggling. Being unhappy is perhaps the last mortal sin.
And this taboo is actually one of the most exhausting parts of the hard times we live in right now. It’s the source of the voice that tells us we shouldn’t be finding things hard. It’s the source of the internal critic who tells us that we should always be grateful, that we should be calm, that we need to stay positive and upbeat and use a growth mindset to turn exploding bombs into a learning opportunity.
I’ve got nothing against gratitude or a growth mindset. As a matter of fact, I practice both daily and have written about them a fair amount. But there is also a place for grief and anger.
The truth is that life is hard right now.
In a world turned upside down, a world in which black is sold as white and good is insulted as evil, suffering is not a character flaw. Exactly the opposite, in fact.
Suffering is evidence that there is still something pure and true inside us. It tells us that our eyes can still see and our hearts can still feel.
In a world like ours, suffering is evidence that we remain alive to love and beauty and hope.
Staying with What Hurts

In Buddhism, there’s a story about the two arrows that pierce human beings. We can think of the first arrow as the suffering that life throws directly at us. It consists of things like physical pain, grief when we lose someone we love, anger when we see injustice, and so on. In our context, the first arrow is the pain and anger and fear and sorrow we feel right now about how the world is.
The first arrow is inevitable. The second arrow isn’t. The second arrow consists of the suffering we cause ourselves by how we react to the pain of the first arrow. Instead of simply feeling angry, we add the pain of thinking that we shouldn’t feel angry. Instead of only the suffering of sorrow, we add the pain of resisting it, of thinking we are bad or weak for feeling it.
Our usual reflex, says the Buddhist teacher Pema Chodron, is to avoid the first arrow. We distract ourselves. We numb ourselves. We try to talk ourselves into feeling better.
But we all know how that goes: badly. The more we push it away, the more we distract ourselves, the more we will eventually suffer.
So instead, suggests Chodron, we might try leaning into the pain. What does this mean?
It means having the courage to simply experience our suffering without pushing it away.
One way of doing this is to try to meet our experience with curiosity instead of commentary. If fear arises, we notice it: This is fear; I notice my breath is shorter. If grief swells, we give it space: This is grief, heavy in my throat.
And this kind of attention also reduces the suffering of the second arrow. Pain is unavoidable, but the layers of resistance, denial, and shame can be softened.
The Power of Pain

When we stay with our pain, something begins to happen.
Pain, you see, isn’t just a weight. It’s also information.
Our feelings—as philosophers and psychologists have long recognized—aren’t just natural. They’re also useful and even necessary for a well-lived life.
This is true of “negative” feelings just as much as positive ones; for example, anger is hard to feel, but sometimes it is the right response, and it can motivate fighting against injustice.
When we stay with our grief and our anger, when we face it with honesty and tenderness, it ripens into a path to meaning. It tells us what matters and it lights the path forward.
And that is why it matters that we stay with the pain. To stay with the pain means to stay with what matters. It means that we do not abandon our values; instead, we have the courage to suffer for them.
This is not abstract theory. I write from hard-won experience. As I’ve written about before, this is exactly what happened to me when my teenage son was diagnosed with cancer.
From pain came purpose.
The Gift of Meaning

Human beings have incredible, unimaginable strength. And the key to unlocking it is meaning.
Viktor Frankl gives us the most breathtaking proof of this when he writes of his experience in the concentration camps. The people who survived, he says, the people who remained whole, were those who found meaning in even those surroundings.
A person to love, a duty to fulfill, a vision of the future—just a single thread of meaning gives us the strength we need to endure almost anything.
Meaning doesn’t erase or deny suffering. In fact, it may even magnify it. But meaning situates that suffering in a larger story. It roots us in what we love and reminds us why we fight.
Meaning is love refusing to give up. It is a candle lit in the darkness, a fist waved at the heavens. It is the strength that allows us to say: this matters—and so do we.
Here are three small, ordinary practices that can help you access the power of meaning in your daily life:
Return to your body. Take three slow breaths, unclench your jaw, and place both feet firmly on the floor. Remind yourself: I am here. I can meet this moment.
Name your “why.” When you feel overwhelmed, pause to recall what matters most—a person, a value, a cause. Write it down or speak it aloud.
Take one deliberate step. Choose a small action aligned with your values: check in on someone vulnerable, offer time or resources, or protect an hour for genuine rest.

The world may still feel heavy. But meaning gives us the strength to bear its weight—and the energy to change it.
[Photo: SANDJAYA/Adobe Stock]
Original article @ Psychology Today.
September 6, 2025
How AI is creating a crisis of business sameness
To gain competitive advantage, authenticity is as important as AI.

As I type, Microsoft Copilot suggests ways to continue, restructure, or even rewrite this very sentence. On the surface, it feels like a small thing, no more remarkable than Gmail finishing an email or Google predicting a search—but small things can have outsize influence.
Just as the steady drip of water on rock can carve out a new channel over time, so predictive text has already reshaped how we write. Research from Harvard has shown that predictive text systems do not just make texting easier—they change the content of those texts, reducing lexical diversity and making our writing more predictable.
This flattening effect is beginning to extend beyond language. Filmmakers have been worried for some time now about the rise of “algorithm movies”—movies whose form and content are dictated by what recommendation algorithms tell companies about viewer preferences, instead of by the creative imagination of writers and directors. And if executives aren’t careful, we can soon expect the emergence of “algorithm business”—strategy, operations, and culture flattened out by the rise of LLMs and the race to adopt AI.
AI MODELS AS CONSENSUS MACHINES

Large language models have become the invisible architects of business strategy. For an increasing number of executives, these AI systems have become default first advisers, strategists, and thought partners. And, as we have already seen with language and movies, this kind of progression can measurably narrow the range of ideas available to us.
Social media is the canary in the coal mine here. Anyone with a LinkedIn account knows that posts from different individuals often sound very similar and that the same ideas are recirculated again and again. Taken in isolation, this could be chalked up to the homogenizing effect of social media algorithms. But the phenomenon is not limited to posts that might be driven by the demands of a recommendation algorithm. Pitches are beginning to sound identical, marketing copy is becoming strangely generic, and if the process continues unchecked, we can expect internal documents, analyses, and company strategies to begin to mirror those found in other businesses. In the longer term, we could even see company cultures lose their distinctiveness as differentiating factors blur together.
SMARTER ALONE, NARROWER TOGETHER

Generative AI can massively boost performance and productivity. A recent meta-study found, for example, that humans working with AI were significantly more creative than humans working alone.
However, as both that study and a paper in Nature show, while using LLMs improves the average creativity of an individual, it reduces the collective creative diversity of groups. Individuals find their access to new ideas boosted, but collectively we end up tapping into a narrower range of ideas.
The result is that AI’s promise of supercharged innovation may actually narrow the frontiers of possibility.
COMPETITIVE CONVERGENCE

Almost 30 years ago, Michael Porter introduced the idea of “competitive convergence.” Briefly put, this is a phenomenon that sees companies beginning to resemble their competitors. They chase the same customers in the same ways, their strategies and pricing models become indistinguishable, and their operational processes and supply chains end up looking identical.
This process traps companies in a race toward the middle, where distinctiveness disappears and profits are squeezed. With AI, businesses risk falling victim to an accelerated and intensified version of this process: a Great AI Convergence in which operational playbooks, strategic vision, and culture become increasingly generic as organizations drink from the same conceptual fountain.
AI can optimize efficiency, but it can’t capture the human fingerprints that make a company truly distinctive. Your organization’s war stories, hard-won lessons, contrarian beliefs, and cultural quirks don’t live in any training set. They live in memory, practice, and identity.
And when strategy, messaging, or culture is outsourced to AI, there is a real danger that those differentiating elements will vanish. The risk is that companies will end up losing the authentic, uncommon, and sometimes counterintuitive features that are the vehicle for their uniqueness—the things that make them them.
THE THREE PILLARS OF BUSINESS HOMOGENIZATION

Business homogenization can be broken down into three pillars.
1. Strategic Convergence: When Every Plan Looks the Same

Your competitor asks Claude to analyze market opportunities. You ask ChatGPT. What’s the result?
Well, the effect is subtle rather than dramatic. Because the same models are shaping the same executives, the outputs don’t collapse into outright uniformity so much as drift toward a narrow band of acceptable options. What looks like independent strategic judgment is often just a remix of the same patterns and playbooks. And so, over time, the strategic choices companies make lose their texture and edge.
2. Operational Convergence: The Automation of Averageness

Companies are already acting on the huge potential that AI has in the realm of operations. For example, Shopify and Duolingo now require employees to use AI as the default starting point for all tasks, and one of the major reasons for this is the prospect of the efficiency gains that AI can deliver.
It is absolutely right that companies use AI to transform operations. But when every company uses similar AI tools for operations, we can expect a drift toward similar processes. Customer service chatbots might converge on the optimal patterns for customer interactions, for example—and in this convergence lies both danger and opportunity.
The opportunity is optimized efficiency. The danger is that companies lose what differentiates them and drives their unique value proposition. It is essential that leaders recognize this danger so they can begin to think intentionally about authenticity as a potential edge in operations. For instance, it might be worth sacrificing a little customer-handling speed for a chatbot that delivers quirky and engaging responses that reflect the company’s authentic culture and character.
3. Cultural Convergence: When Companies Lose Their Souls

Perhaps the most insidious risk is cultural convergence. When AI drafts your company communications, writes your value statements, and shapes your employee handbooks, it imports the average corporate culture encoded in its training data. The quirks, the specific language, the unique ways of thinking that define your organization—all get smoothed into statistical averages.
Over time, the effect will not only dilute external brand perception but also diminish the sense of belonging employees feel. When people can no longer recognize their company’s voice in its own communications, engagement erodes in ways that spreadsheets won’t immediately capture.
FROM ARTIFICIAL INTELLIGENCE TO AUTHENTIC INTELLIGENCE

If AI accelerates sameness, then competitive advantage comes from protecting and amplifying what makes you different. Here’s how:
Audit your uniqueness. Identify the knowledge, stories, and perspectives your company holds that no AI model can access. What do you know that others don’t?
Create proprietary datasets. Feed AI your unique data—customer insights, field notes, experiments, failures—instead of relying on the generic pool of information available to everyone.
Establish “AI-free zones.” Deliberately protect areas where human judgment and lived experience matter most—strategy off-sites, cultural rituals, moments of customer intimacy.
Use adversarial prompting. Don’t just ask AI for answers. Ask it for the contrarian view, the blind spot, the uncomfortable perspective. A minimal sketch of this follows below.
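To make adversarial prompting concrete, here is a minimal sketch assuming an OpenAI-style chat API; the model name, prompts, and the example question are illustrative placeholders, not a recommended setup.

```python
# Adversarial prompting sketch: get the consensus answer first, then force
# the model to argue against it. Assumes the `openai` Python package and an
# OPENAI_API_KEY in the environment; model and prompts are illustrative.
from openai import OpenAI

client = OpenAI()

QUESTION = "What market opportunities should a mid-size logistics firm pursue next year?"

def ask(messages):
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    return response.choices[0].message.content

# First pass: the consensus answer every competitor can also get.
consensus = ask([{"role": "user", "content": QUESTION}])

# Second pass: demand the contrarian view, the blind spots, the uncomfortable risks.
contrarian = ask([
    {"role": "user", "content": QUESTION},
    {"role": "assistant", "content": consensus},
    {"role": "user", "content": "Now argue against your own answer. "
        "What is the contrarian view, and which blind spots does it expose?"},
])

print(contrarian)
```

The specific tool matters less than the habit: never let the first, averaged answer be the final word.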
AUTHENTIC INTELLIGENCE

In a world in which every company has access to the same artificial intelligence, the real competitive advantage isn’t having AI—it’s having something AI can’t replicate. And that can only come from authentic intelligence: the messy, contradictory, beautifully human insights that no model can generate.
AI is the price of admission. Authenticity is how you win.
[Photo: cottonbro studio/Pexels]
Original article @ Fast Company.
August 31, 2025
Caring for Mom: A Son’s Reflections
Watching his mother fade from dementia, a technologist sees a role for AI: to ease burdens, not replace the human touch that makes care meaningful.

KEY POINTS
By 2050, 1 in 6 people worldwide will be over 65. How do you want to care for your parents?
Caring for my mom taught me that grief and love walk hand in hand.
The future of care is balance: AI as helper, humans as the heart.
In every burden of care, there’s a chance to choose presence.
Watching the progression of my mother’s dementia was one of the hardest things I have experienced in life.
Of course, there were moments of grace… the moments when her smile would light up her face and somehow the years would drop away and I could see the mother I remembered; or the moments when her eyes flashed with joy when she recognized her grandson. There were the nurses who sang to her, the aides who held her hand. Yes, in the midst of it all, there was love and beauty and much to be grateful for.
But it was also immensely hard.
Dementia means you lose the person you know while they are still in front of you. You lose them slowly, gradually, and that means the beautiful moments are always layered with a profound pain, because they make the contrast between the past and the present so vivid.
I work and think a lot about technology in general and AI in particular. So naturally, as a response to grief, I often wondered: Could AI have detected her decline sooner? Could it have eased the caregiving burden so there was more time to connect, not just cope?
The Weight of Care

Caregiving is love, and I’m profoundly grateful for being able to give my mother this love—but it’s also exhaustion and grief. This is borne out by research, which shows that major depressive symptoms are more prevalent in caregivers than in the general population.
We were incredibly fortunate to have professional support during this difficult time. But even with that support, much of the burden of my mother’s care fell on our family. There were the burdens you could see from the outside—sleepless nights, unending paperwork, material expenses, a family ecosystem stretched beyond breaking point. But the less tangible burden was the greatest of all: the heartbreak of watching her fade while trying to preserve who she was.
Technology can’t take that burden away. Grief is part of human life, and always will be. But technology can help us carry that burden—it can help us find the space for love in the midst of pain.
Where AI Can Help

AI often feels like a futuristic abstraction. But some of its most useful contributions in elder care are quietly practical.
I think of AI-powered wearables, like those with fall detection features, or smart home sensors, which are already beginning to be used in Japan’s homes and elder care programs. Instead of my constant concern about my mother’s safety when she was alone, an alert system could have given me confidence that I would know immediately if I was needed in an emergency.
AI-powered wearables are also being trialed to help in other ways. In a project funded by Google, wearables detect when people with dementia are agitated, and then play music that helps soothe the patient’s mood.
Less visible, but just as powerful, are the systems that work in the background. These are the AI systems that quietly take over the endless stream of documentation, track complex medication schedules, and even help coordinate the logistics of care. By lifting that administrative weight, they give nurses and aides the time and presence to focus on the human side of care.
What Only Humans Can Do

We are creatures of flesh, blood, and feelings. We are connected to others through love and history. We have memories and character traits.
We cannot, in other words, be reduced to something that a machine can care for.
My mother would have benefited from the types of AI-powered care I have described. But a system of elderly care that is powered by AI alone would be a dystopian horror.
Humans need other humans to care for them.
Humans need empathy and love. A human nurse wasn’t simply a pill-delivery system to my mother. A human nurse was a person who sat with her quietly when she sensed it was needed, a person who could make my mother laugh, a person whose simple human presence infused the room with life.
An AI-powered system could play soothing music, and that has its place. But it couldn’t replicate my mother singing an old Bengali song she used to soothe herself.
An AI-powered robot could cook her favorite foods for her. But that doesn’t have the same meaning as her son bringing her potal bhaja and dal.
I’m a technologist. I’m passionate about the benefits of using AI to transform elder care. But we have to use it with wisdom.
There are essential elements of elder care that only humans can offer—and AI should be used to clear the space for humans to offer it.
Of these elements, the most fundamental is love. We must use AI to help us love better.
A Vision of Balance
It would be a gross dereliction of duty to ignore the enormous potential AI has to improve elder care. And it would be a devastating attack on humanity to leave elder care entirely to AI.
And therefore, as so often, the answer lies in balance—in Aristotle’s golden mean.
AI matters because it can relieve the strain where human energy is most limited: monitoring, administration, logistics. By taking on the ordinary, by automating the routine, AI makes it possible for care to remain human even in the face of growing demand. If we ignore its potential, we condemn families to shoulder impossible burdens and health systems to buckle under the weight.
And humans matter because only humans can love other humans—and love is the basis of all genuine and effective care. So rather than replacing humans with AI, we must use AI to help humans be their best selves.
If I could redesign my mother’s care, I wouldn’t replace nurses and aides with AI-powered devices. Instead, I’d give them tools that eased their exhaustion and freed up more time to be human. I’d give our family relief from constant fear so we could focus on presence and love.
That balance—AI as helper, humans as the heart—feels like the only sustainable path.
Caring Into the Future
By 2050, 1 in 6 people worldwide will be over 65, and without innovation, systems will not be able to cope, leaving elders and families to suffer. Already, we see shortages of trained caregivers, rising costs, and families stretched to breaking point.
It’s a social issue, but it’s also intensely personal. And not just for me—for all of us. Because one day, each of us will rely on other humans in the same way.
So the question is not just: how do you want to care for your parents? It’s also: when the time comes, how do you want to be cared for yourself?
[Photo: Ipopba / Adobe Stock]
Original article @ Psychology Today.
August 27, 2025
To get AI right, you need to build a well-orchestrated AI ecosystem
AI success isn’t just about technology—it’s about a complex network of partnerships.

Most enterprises treat AI implementation as a procurement problem. They evaluate vendors, negotiate contracts, and deploy solutions. But this transactional approach misses a fundamental truth: successful AI implementation isn’t just about buying technology—it’s about orchestrating an ecosystem.
The companies winning with AI understand that implementation requires a web of relationships extending far beyond traditional vendor partnerships. They are building networks that include universities, regulatory bodies, ethicists, suppliers, and even customers. They recognize that in an environment in which AI capabilities evolve monthly, isolated implementation is a recipe for obsolescence.
This article draws on insights from our forthcoming book, Reimagining Government (Faisal Hoque, Erik Nelson, Tom Davenport, Paul Scade, et al.), identifying the key components you will need to bring together to successfully orchestrate a comprehensive AI partner ecosystem.
THE EXPANDING UNIVERSE OF AI PARTNERS

When enterprise leaders think about AI partnerships, they typically start and stop with technology vendors. This narrow view blinds them to the full spectrum of relationships that determine success or failure in AI implementation.
Academic institutions offer capabilities that money alone can’t buy. Universities are where breakthrough AI research happens, often years before commercial availability. Building relationships with labs, research centers, and individual academics can provide access to cutting-edge research, specialized expertise, and talent pipelines that vendors can’t replicate.
Government agencies are partners, not just regulators. Forward-thinking companies will work with agencies to shape AI standards, participate in regulatory sandboxes where they can test implementations and receive guidance, and collaborate on public-private initiatives that define industry practices.
Ethics and oversight partners are becoming essential as AI stakes rise. Third-party ethicists provide a layer of credibility that internal roles can’t match. Audit firms specializing in AI bias detection offer independent validation. Compliance specialists navigate the emerging patchwork of AI regulations. These partners don’t just reduce risk—they become competitive differentiators when customers demand proof of responsible AI use.
Consultants and implementers bridge the gap between AI potential and operational reality. They build custom tools that integrate AI into existing workflows, train teams on new capabilities, and manage the organizational change that AI demands. The best ones will transfer knowledge while implementing systems, building internal capabilities that will endure after they leave.
Supply chain partners determine whether AI creates value or chaos. When your AI-optimized inventory system hands off to a supplier’s manual processes, many of the benefits evaporate. Enterprises should look to coordinate AI decisions across their supply networks, encourage shared model adoption, and ensure that AI-to-AI handoffs work seamlessly.
Customers are perhaps the most overlooked partners in AI implementation. They’re not just users but cocreators, providing the feedback that shapes AI development, the data that improves models, and the trust that makes implementation possible.
STRATEGIC IMPERATIVES FOR PARTNERSHIP DESIGN

Building an AI ecosystem that creates value involves more than just accumulating partners. Relationships and networks need to be designed to amplify capability while maintaining flexibility. Enterprises should focus on:
Interoperability by design. Using proprietary models can lead to the creation of silos within enterprise networks. Selecting open-weight models helps ensure transparency and compatibility among partners.
Alignment across the value chain. A pharmaceutical company implementing AI for drug discovery must ensure that contract research organizations, clinical trial partners, and regulatory consultants all work with compatible systems and standards. This doesn’t mean that all partners must use identical tools, but it does mean establishing common data formats, shared evaluation metrics, and aligned security protocols.
Risk distribution. AI failures can cascade through networks. Smart partnership agreements distribute both opportunity and liability, ensuring that no single partner bears catastrophic risk while maintaining incentives for responsible development. This includes technical risks (system failures), ethical risks (bias, privacy violations), and business risks (market rejection, regulatory penalties).
Translation layers. When government agencies partner with commercial vendors, they often use specialized contractors who serve as a critical middle layer, translating the generally applicable technology to meet agency-specific requirements. This middle layer adapts cloud-native solutions for secure environments, restructures Silicon Valley business models for public sector procurement cycles, and bridges cultural gaps between tech innovation and public service. Private enterprises can adopt this model as well, using specialized partners to translate general-purpose AI products for their specific industry needs. These translation partners can package the technical adaptation skills, business model alignment know-how, and cultural bridging that turns raw AI capability into operational value.

CRITICAL PARTNERSHIP CHALLENGES

Three challenges consistently derail AI partnership ecosystems.
The IP question can become extremely complex in multiparty AI development. When your data trains a vendor’s model that’s customized by a consultant and integrated by a systems implementer, who owns what? Imagine that a financial services firm discovers their AI vendor is using patterns learned from their fraud detection system to improve products sold to competitors. This might be permissible under the vendor’s standard contract, so it is important to think ahead to ensure that explicit boundaries are drawn between vendor improvements and innovations rooted in the client’s operations and data.
Lock-in risks extend beyond technology to psychology. Technical lock-in is a familiar problem: specific vendor systems can become so deeply integrated that switching becomes prohibitively expensive or onerous. But psychological lock-in is just as dangerous. Teams can become comfortable with familiar interfaces, develop relationships with vendor personnel, and resist exploring alternatives even when superior options emerge.
Coordination complexity multiplies with each partner. When an AI system requires inputs from five partners, processes from three more, and delivers outputs to 10 others, coordination becomes a full-time job. Version mismatches, update conflicts, and finger-pointing when problems arise can paralyze initiatives.
BUILDING YOUR PARTNERSHIP STRATEGY

Creating an effective AI ecosystem requires a systematic approach, not just building a sequence of ad hoc relationships.
Map your ecosystem needs across every dimension. Where are your technology gaps? Which expertise is missing? What ethical oversight do you need? How will implementation happen? Don’t just list vendors—map the full spectrum of partnerships required for successful AI implementation. Include the nonobvious: the anthropologist who understands how your customers actually behave, the regulator who’ll evaluate your system, the supplier whose cooperation determines success.
Design for flexibility. AI capabilities change monthly. Build partnerships that can evolve with them, with regular review cycles, clear performance metrics, and graceful exit provisions. Avoid agreements that lock you into specific technologies or approaches. The perfect partner for today’s needs may be obsolete tomorrow.
Create governance structures that acknowledge the complexity of AI partnership networks. Establish steering committees with senior representation from key partners. Define escalation paths before problems arise. Create shared metrics that reflect interconnected outcomes—when success requires five partners working together, individual KPIs create dysfunction.
Plan exits from day one. As we emphasize in our recent book Transcend, knowing how partnerships end is as important as knowing how they begin. Define termination triggers, data ownership post-partnership, and transition procedures. The best partnerships are those either party can leave without destroying value.
The AI revolution will not be won by technological advances alone. The strength of an enterprise’s ecosystem will play a key role in separating the winners from the losers. Companies that can see past traditional vendor relationships to orchestrate comprehensive partnership networks will transform AI from an implementation challenge into a sustainable competitive advantage.
[Photo: SvetaZi/Getty Images]
Original article @ Fast Company.
August 25, 2025
Living in Hope Amid Uncertainty
Hope isn’t wishful thinking. It’s the deliberate practice of navigating uncertainty through presence, purpose, and small acts of courage.

KEY POINTS
Hope emerges not from certainty but from presence, anchoring in this moment when tomorrow feels impossible.
Rejection and loss reveal what we’re made of and redirect us toward who we’re meant to become.
Transformation begins with tiny acts—mopping a floor mindfully, writing one page, making one call.
When I first arrived in America from Bangladesh in 1986, I was a 17-year-old student with just $700, a suitcase, and dreams far bigger than my circumstances. I spoke little English, knew no one, and had no safety net. The weight of uncertainty was heavy, but it also sparked my journey.
At Southern Illinois University, I studied engineering by day and worked as a janitor by night. Those graveyard shifts were humbling—scrubbing marble floors and polishing furniture in offices where decisions shaped others’ lives, I felt the gap between where I stood and where I hoped to be.
My elderly supervisor once told me, “Be one with the floor.” At first I laughed, but soon I understood. Focusing on each sweep of the mop quieted the fear of tomorrow. Mindfulness was not abstract theory but a survival tool. It steadied me against chaos and became a cornerstone of my life: clarity and purpose can be found even in the most ordinary tasks.
When Rejection Tried to Define Me

Rejection came early and often. At the University of Minnesota Duluth, the head of the computer engineering department slammed his desk and declared, “You are not the kind of material who will ever become an engineer.” His words stung but fueled me. While still a student, I built my first commercial software and hardware product. Imperfect as it was, it caught the attention of a local company and led to employment. That moment revealed rejection for what it is: not a verdict on worth but a fleeting opinion.
Years later, rejection struck again in entrepreneurship. In the late ‘90s, my venture capital investors fired me from a company I founded, claiming I wasn’t “growing fast enough.” They even took ownership of my manuscript. The loss was devastating. Yet devastation is not the end; it’s a pivot. I dusted myself off, launched a new venture, and wrote another book. Each painful ending carried the seeds of a new beginning. Rejection didn’t erase the pain, but it taught me persistence and the courage to rebuild.
Life’s Cruelest Surprise

In 2021, just as I began writing about the post-pandemic future, life delivered its harshest blow: Our only son, then a college freshman, was diagnosed with multiple myeloma, a rare blood cancer. The news shattered my sense of stability. Watching someone you love face such a battle tests every fiber of your being.
Yet my son’s calm and courage humbled me. His strength reminded me that resilience is not about denying pain but learning to live with it. His courage became a mirror, showing that even in the darkest moments, hope can endure.
Grief, when met with presence, can ignite purpose. His diagnosis reshaped my perspective and my work. Among other things, I channeled energy into initiatives supporting research and raising awareness of multiple myeloma. What began as a personal challenge became a calling—to turn suffering into service, vulnerability into strength.
Pain can break us, or it can help us build something greater. Heartbreak can become a force for transformation, challenging us to find meaning in suffering and to create something enduring.
What Hope Really Means

Hope is often mistaken for blind optimism, but it is not about ignoring pain or pretending that uncertainty doesn’t exist.
Hope isn’t just a noun. It’s also a verb. It’s not simply something we feel; it’s also something we do.
Hope is a practice, a deliberate choice to step forward even when the path is fogged.
Psychologists describe hope as setting meaningful goals, finding pathways to achieve them, and believing in one’s capacity to act. That framework resonates deeply with my life. When ventures failed, I didn’t rebuild everything at once. I started small: a phone call, a proposal, a page in a new manuscript. Each tiny act restored momentum, showing me that progress matters more than perfection. Small steps are pebbles dropped in a pond, rippling into larger waves of change.
Over time, I came to see uncertainty not as an enemy but as fertile ground. Rejection, loss, and crisis became soil where new growth could take root. Uncertainty is not a barrier; it is the very condition that makes resilience and creativity possible.
Life has taught me that hope is active. It is the will to keep moving, creating, and believing, even when the road is invisible.
Five Practices that Kept Me Going

From janitor to entrepreneur, from rejection to global platforms, from personal loss to advocacy, five practices sustained me:
Anchor in the present. That advice—“be one with the floor”—still guides me. Presence quiets fear.
Reframe rejection. Every no redirected me to a better path. Rejection is not a verdict but a signpost.
Start small. A mountain is climbed step by step. Even the smallest action builds momentum.
Find purpose in adversity. My son’s diagnosis gave my work new meaning. Hope thrives when tied to something larger than yourself.
Lean on others. From my first supervisor to mentors and friends, community has been my safety net. No one navigates uncertainty alone.
Hope as a Lifeline

Hope is not a fleeting emotion; it is courage in motion, wisdom born of setbacks, and faith that something meaningful can emerge from the unknown. In a world grappling with economic instability, health crises, divisions, wars, and climate anxiety, hope is not optional; it is our lifeline.
Hope does not erase hardship but transforms it into possibility, purpose, and a path forward. Hope is the quiet strength that keeps us walking, even when the road is obscured by fog.
So I ask you: Where in your life is uncertainty stirring discomfort right now? A stalled career, a fractured relationship, or the weight of a world in flux?
Instead of rushing for answers, sit with the ambiguity; let it teach you. What’s one unexpected action you can take today to dance with uncertainty? Perhaps sketching an idea on a napkin, whispering your fears to the night sky, or reaching out to someone new?
Sometimes, hope is simply the audacity to embrace chaos and create from it. It is the spark that can carry us through the darkness, lighting the way even when the path is unclear.
[Photo: Kostia / Adobe Stock]
Original article @ Psychology Today.
August 20, 2025
What is ‘self-evolving AI’? And why is it so scary?
As AI systems edge closer to modifying themselves, business leaders face a compressed timeline that could outpace their ability to maintain control.

As a technologist and serial entrepreneur, I’ve witnessed technology transform industries from manufacturing to finance. But I’ve never had to reckon with the possibility of technology that transforms itself. And that’s what we are faced with when it comes to AI—the prospect of self-evolving AI.
What is self-evolving AI? Well, as the name suggests, it’s AI that improves itself—AI systems that optimize their own prompts, tweak the algorithms that drive them, and continually iterate and enhance their capabilities.
Science fiction? Far from it. Researchers recently created the Darwin Gödel Machine, which is “a self-improving system that iteratively modifies its own code.” The possibility is real, it’s close—and it’s mostly ignored by business leaders.
And this is a mistake. Business leaders need to pay close attention to self-evolving AI, because it poses risks that they must address now.
SELF-EVOLVING AI VS. AGI

It’s understandable that business leaders ignore self-evolving AI, because traditionally the issues it raises have been addressed in the context of artificial general intelligence (AGI), something that’s important, but more the province of computer scientists and philosophers.
In order to see that this is a business issue, and a very important one, first we have to clearly distinguish between the two things.
Self-evolving AI refers to systems that autonomously modify their own code, parameters, or learning processes, improving within specific domains without human intervention. Think of an AI optimizing supply chains that refines its algorithms to cut costs, then discovers novel forecasting methods—potentially overnight.
AGI (Artificial General Intelligence) represents systems with humanlike reasoning across all domains, capable of writing a novel or designing a bridge with equal ease. And while AGI remains largely theoretical, self-evolving AI is here now, quietly reshaping industries from healthcare to logistics.
THE FAST TAKE-OFF TRAP

One of the central risks created by self-evolving AI is the risk of AI take-off.
Traditionally, AI take-off refers to the process by which an AI system goes from a certain threshold of capability (often discussed as “human-level”) to being superintelligent and capable enough to control the fate of civilization.
As we said above, we think that the problem of take-off is actually more broadly applicable, and specifically important for business. Why?
The basic point is simple—self-evolving AI means AI systems that improve themselves. And this possibility isn’t restricted to broader AI systems that mimic human intelligence. It applies to virtually all AI systems, even ones with narrow domains, for example AI systems that are designed exclusively for managing production lines or making financial predictions and so on.
Once we recognize the possibility of AI take-off within narrower domains, it becomes easier to see the huge implications that self-improving AI systems have for business. A fast take-off scenario—where AI capabilities explode exponentially within a certain domain or even a certain organization—could render organizations obsolete in weeks, not years.
For example, imagine a company’s AI chatbot evolves from handling basic inquiries to predicting and influencing customer behavior so precisely that it achieves 80%+ conversion rates through perfectly timed, personalized interactions. Competitors using traditional approaches can’t match this psychological insight and rapidly lose customers.
The problem generalizes to every area of business: within months, your competitor’s operational capabilities could dwarf yours. Your five-year strategic plan becomes irrelevant, not because markets shifted, but because their AI evolved capabilities you didn’t anticipate.
WHEN INTERNAL SYSTEMS EVOLVE BEYOND CONTROL

Organizations face equally serious dangers from their own AI systems evolving beyond control mechanisms. For example:
Monitoring Failure: IT teams can’t keep pace with AI self-modifications happening at machine speed. Traditional quarterly reviews become meaningless when systems iterate thousands of times per day.
Compliance Failure: Autonomous changes bypass regulatory approval processes. How do you maintain SOX compliance when your financial AI modifies its own risk assessment algorithms without authorization?
Security Failure: Self-evolving systems introduce vulnerabilities that cybersecurity frameworks weren’t designed to handle. Each modification potentially creates new attack vectors.
Governance Failure: Boards lose meaningful oversight when AI evolves faster than they can meet or understand changes. Directors find themselves governing systems they cannot comprehend.
Strategy Failure: Long-term planning collapses as AI rewrites fundamental business assumptions on weekly cycles. Strategic planning horizons shrink from years to weeks.

Beyond individual organizations, entire market sectors could destabilize. Industries like consulting or financial services—built on information asymmetries—face existential threats if AI capabilities spread rapidly, making their core value propositions obsolete overnight.
CATASTROPHIZING TO PREPARE

In our book TRANSCEND: Unlocking Humanity in the Age of AI, we propose the CARE methodology—Catastrophize, Assess, Regulate, Exit—to systematically anticipate and mitigate AI risks.
Catastrophizing isn’t pessimism; it’s strategic foresight applied to unprecedented technological uncertainty. And our methodology forces leaders to ask uncomfortable questions: What if our AI begins rewriting its own code to optimize performance in ways we don’t understand? What if our AI begins treating cybersecurity, legal compliance, or ethical guidelines as optimization constraints to work around rather than rules to follow? What if it starts pursuing objectives we didn’t explicitly program but that emerge from its learning process?
Here are the key diagnostic questions every CEO should ask to identify organizational vulnerabilities before they become existential threats:
Immediate Assessment: Which AI systems have self-modification capabilities? How quickly can we detect behavioral changes? What monitoring mechanisms track AI evolution in real time?
Operational Readiness: Can governance structures adapt to weekly technological shifts? Do compliance frameworks account for self-modifying systems? How would we shut down an AI system distributed across our infrastructure?
Strategic Positioning: Are we building self-improving AI or static tools? What business model aspects depend on human-level AI limitations that might vanish suddenly?

FOUR CRITICAL ACTIONS FOR BUSINESS LEADERS

Based on my work with organizations implementing advanced AI systems, here are four immediate actions I recommend:
Implement Real-Time AI Monitoring: Build systems tracking AI behavior changes instantly, not quarterly. Embed kill switches and capability limits that can halt runaway systems before irreversible damage. (A minimal sketch of this pattern follows after this list.)
Establish Agile Governance: Traditional oversight fails when AI evolves daily. Develop adaptive governance structures operating at technological speed, ensuring boards stay informed about system capabilities and changes.
Prioritize Ethical Alignment: Embed value-based “constitutions” into AI systems. Test rigorously for biases and misalignment, learning from failures like Amazon’s discriminatory hiring tool.
Scenario-Plan Relentlessly: Prepare for multiple AI evolution scenarios. What’s your response if a competitor’s AI suddenly outpaces yours? How do you maintain operations if your own systems evolve beyond control?
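As one way of picturing the monitoring-and-kill-switch idea, here is a minimal sketch assuming the AI system can be wrapped as a callable and scored on every call; the scoring function, baseline, threshold, and window size are illustrative assumptions, not a production design.

```python
# Sketch of a real-time behavior monitor with a kill switch. The wrapped
# `model` callable, the scoring function, and the drift threshold are all
# illustrative placeholders for whatever a real deployment would use.
import statistics
from typing import Callable

class KillSwitchTripped(RuntimeError):
    """Raised when observed behavior drifts past the approved bound."""

class MonitoredModel:
    def __init__(self, model: Callable[[str], str],
                 score_fn: Callable[[str], float],
                 baseline: float, max_drift: float = 0.15,
                 window: int = 50):
        self.model = model          # the AI system being supervised
        self.score_fn = score_fn    # e.g., a policy-compliance score in [0, 1]
        self.baseline = baseline    # mean score observed during approved testing
        self.max_drift = max_drift  # largest tolerated deviation from baseline
        self.window = window        # sliding window of recent calls
        self.recent: list[float] = []

    def __call__(self, prompt: str) -> str:
        output = self.model(prompt)
        self.recent = (self.recent + [self.score_fn(output)])[-self.window:]
        drift = abs(statistics.mean(self.recent) - self.baseline)
        if drift > self.max_drift:
            # Fail closed: halt the system before changes compound.
            raise KillSwitchTripped(f"behavior drift {drift:.2f} exceeds bound")
        return output
```

The details will differ by stack; the point is that every call is scored as it happens, and the system halts itself the moment behavior leaves the approved envelope instead of waiting for a quarterly review.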
EARLY WARNING SIGNS EVERY EXECUTIVE SHOULD MONITOR

The transition from human-guided improvement to autonomous evolution might be so gradual that organizations miss the moment when they lose effective oversight. Smart business leaders are therefore alert to signs that reveal troubling escalation paths:
AI systems demonstrating unexpected capabilities beyond original specifications
Automated optimization tools modifying their own parameters without human approval
Cross-system integration where AI tools begin communicating autonomously
Performance improvements that accelerate rather than plateau over time

WHY ACTION CAN’T WAIT

As Geoffrey Hinton has warned, unchecked AI development could outstrip human control entirely. Companies beginning preparation now—with robust monitoring systems, adaptive governance structures, and scenario-based strategic planning—will be best positioned to thrive. Those waiting for clearer signals may find themselves reacting to changes they can no longer control.
[Photos: Julien Tromeur/Unsplash; Luke Jones/Unsplash]
Original article @ Fast Company.
August 19, 2025
Why Love Must Guide Us Through the Age of Superintelligence
Once AI can improve itself, humans lose control. But right now, we can still choose what it values and optimizes for. Let’s make that choice love.

KEY POINTS
Once AI starts rewriting its own code, we lose control. Values we set now could save us.
The most critical value to embed is love—care for human dignity.
Act fast: Pick good data, work together globally, share control, and fund safe AI research.
I’ve spent decades building businesses and technologies and watching innovations reshape our world. But nothing has kept me awake at night quite like artificial intelligence (AI), and I’m worried about the trajectory we’re on right now.
What is that trajectory? Well, just a few months ago, researchers built the Darwin Gödel Machine, which is “a self-improving system that iteratively modifies its own code.” This is one of the many canaries in the coalmine, and it tells us where we’re heading: self-evolving AI systems, artificial general intelligence (AGI), and, ultimately, superintelligence that could dwarf human cognitive abilities by several orders of magnitude.
In my recent book TRANSCEND, I explored how AI forces us to confront the deepest questions about human nature and our capacity for growth. And after years of thinking about the challenges raised by AI, I’ve come to believe that while technical solutions are crucial, they’re not enough. We need something deeper: a fundamental commitment to love and human dignity that guides every decision we make about AI development.
The Path to Superintelligence—and Its Dangers

Today’s AI excels at defined tasks—beating us at chess, writing emails, recognizing faces. But researchers are racing toward AI that matches human intelligence across all domains. Once that threshold is crossed, these systems might start improving themselves, rewriting their own code, and thereby becoming exponentially and iteratively more capable.
This is what researchers call “recursive self-improvement,” and it could significantly quicken the journey from human-level to superintelligent AI. Geoffrey Hinton, Nobel Prize winner and the “Godfather of AI,” left Google to warn about risks like these, and when someone of his stature estimates a 10-20 percent chance that advanced AI systems could lead to human extinction within 30 years, we need to listen.
He’s not talking about malicious machines plotting against us. He’s worried about systems that don’t care about us and are intelligent enough to run circles around us.
These machines won’t actively try to attack human welfare. They just won’t value it enough to protect it if it clashes with their objectives.
Why Values Matter More Than Code

I’ve built technology companies for decades, and through that experience I’ve learned that the most sophisticated systems reflect the values of their creators, often in ways we don’t initially recognize.
With AI, this principle takes on existential importance.
AI systems learn by absorbing human-generated content—our books, conversations, social media posts. They become mirrors reflecting our collective values back at us. Microsoft’s chatbot Tay demonstrated this dramatically. Within 24 hours of learning from Twitter, it was expressing racist views—not because Microsoft programmed hatred, but because it learned from human behavior online.
Now imagine that same learning process with systems millions of times more capable. The values embedded during development might become permanently fixed as they evolve beyond our comprehension. This is why love—genuine care for human dignity and welfare—isn’t just morally appealing but strategically essential for our survival.
What Love Actually Means for AI

When I talk about love in the context of AI, I’m not being sentimental. I’m talking about operationalizable principles:
Universal human dignity: AI systems recognizing the inherent worth of every person, regardless of nationality or wealth.
Long-term thinking: Care for future generations, not just immediate optimization.
Inclusive benefit: Ensuring AI serves all humanity, not just those with access.
Humility and restraint: Recognition that power requires responsibility.

These aren’t abstract ideals that sound good and have no practical impact. Rather, they’re meant to be design principles that can guide technical development and international cooperation, as the sketch below illustrates.
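As a hedged illustration of what “operationalizable” can mean in practice, the principles can be written down and enforced at a concrete point in the stack, for instance as a constitution-style system message; the wording, model name, and API shape below are assumptions for the sake of the sketch.

```python
# Sketch: encoding the four principles as a constitution-style system prompt.
# Assumes an OpenAI-style chat API; the wording and model name are
# illustrative, and real value alignment requires far more than a prompt.
from openai import OpenAI

CONSTITUTION = """Weigh every answer against these principles:
1. Universal human dignity: every person has inherent worth.
2. Long-term thinking: weigh effects on future generations, not just today.
3. Inclusive benefit: prefer outcomes that serve all of humanity.
4. Humility and restraint: state uncertainty; defer high-stakes choices to humans."""

client = OpenAI()
reply = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": CONSTITUTION},
        {"role": "user", "content": "Should we automate this layoff decision?"},
    ],
)
print(reply.choices[0].message.content)
```

A system message is the shallowest version of this idea; the deeper versions live in training data curation and alignment research. But it shows that values can be made explicit and enforced somewhere concrete rather than left as aspiration.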
The Limits of Love

Love alone won’t solve all the challenges of AI alignment. We still need rigorous research, regulatory frameworks, and governance structures. We need mechanisms for conflict resolution, because different cultures emphasize different values, and even well-intentioned people disagree about loving behavior.
We need a lot more than love, then—but we also absolutely need love. Love is the foundation that must inform everything else. Without a shared commitment to human dignity, it is impossible to navigate technical, political, and social challenges constructively.
Practical Steps Forward

Good intentions without concrete action are worthless. Here’s what we must do:
Reform AI training and development: We need diverse, international teams developing AI systems. We must curate training datasets emphasizing humanity’s highest values while filtering harmful content. Current practices of training on whatever data is available remind me of early internet companies that prioritized growth over responsibility.
Build global cooperation: Climate change is teaching us what happens when we treat global challenges as someone else’s problem. We can’t afford to repeat these mistakes with AI. Despite the difficulty, we need international standards for AI safety research, shared protocols for testing systems, and mechanisms for addressing risks that transcend borders.
Democratize AI governance: The people most affected by decisions should have a voice in making them. We need public engagement processes helping societies decide what values AI systems should embody, ensuring benefits reach underserved communities, not just wealthy early adopters.
Invest in value alignment research: We’re dramatically underfunding value alignment research compared to capabilities research—racing to make AI more powerful while barely investing in making it beneficial. We need research into embedding stable values in self-modifying systems and better methods for understanding AI behavior.
Model our best values: We are part of AI’s training data. Every digital interaction potentially teaches these systems about human nature. We must promote discourse emphasizing empathy and cooperation while addressing divisions that AI systems are learning to replicate.

The Time-Sensitive Nature of This Choice

Once systems become capable of recursive self-improvement, our ability to influence their development may first diminish and then disappear entirely—they will develop themselves in quicker and more sophisticated ways than we will be able to control.
Recursive self-improvement and superintelligent AI aren’t here yet—but the AI systems being developed now by major companies are potential precursors to those things. We have a window of opportunity to establish the principles that will influence far more advanced systems later.
We may only have years, not decades, to get this right.
Our Defining Moment

The machines will likely surpass us in processing power and intelligence. But we still get to determine what they optimize for, what they care about, and what vision of the future they work toward.
The technical challenges are immense, and solutions aren’t guaranteed. But we increase our chances by ensuring the humans developing these systems are guided by our highest values rather than our worst impulses.
Let’s make sure that the systems are guided by love.
[Photo: Chiew / Adobe Stock]
Original article @ Psychology Today.
August 13, 2025
AI and the death (and rebirth) of middle management
Inside the fall and rise of ‘unbossing.’

Over the past few years, the corporate world has been reshaped by a quiet revolution: the rise of “unbossing.” Companies like Dell, Amazon, Microsoft, and Google have aggressively flattened their organizational structures, stripping away layers of middle management to boost agility and efficiency. According to Gartner, by 2026, 20% of organizations will leverage AI to eliminate more than half of their current middle management roles, fundamentally reshaping their hierarchies. A 2025 Korn Ferry Workforce survey underscores this shift, with 41% of employees reporting that their companies have already reduced managerial layers.
As Dario Amodei, CEO of Anthropic, has warned, AI could lead to a “white-collar massacre” if companies fail to adapt thoughtfully. And indeed, AI does present an unprecedented opportunity to deliver efficiency gains by automating many traditional middle management functions—from coordination and scheduling to data analysis and performance monitoring. Companies that fail to capture these efficiencies risk becoming bloated and uncompetitive in an increasingly lean marketplace.
Yet rushing to gut the middle management layer without careful consideration is equally dangerous. As I explored in a previous article, overzealous workforce reductions can lead to devastating losses of institutional knowledge and the elimination of crucial career development pathways. The challenge isn’t whether to use AI to streamline management—it’s how to do so intelligently.
REFRAMING THE ROLE OF MIDDLE MANAGEMENT

To navigate this transformation effectively, companies must shift their perspective from simply eliminating managerial layers to strategically reimagining their purpose. Middle managers have long served as the backbone of organizations, coordinating teams, overseeing operations, and ensuring accountability. Many of these tasks—scheduling, data analysis, approvals, and audits—are precisely where AI either already excels or will excel once agentic AI is fully implemented. These systems can automate repetitive processes, monitor performance in real time, and provide data-driven insights with a speed and accuracy that humans simply cannot match.
This presents a clear opportunity for efficiency gains. By offloading these routine functions to AI, organizations can reduce costs and accelerate decision-making. However, this automation doesn’t eliminate the need for human management—it transforms it. Notably, the evolved middle management role will increasingly blur traditional boundaries with HR functions, as managers become more deeply involved in talent development, cultural transformation, and employee well-being.
Three key functions will define the future of middle management:
Orchestrators of AI-Human Collaboration: As AI agents become integral to business operations, managers will need to master the art of orchestrating hybrid teams. This involves not only understanding how AI tools function but also knowing how to integrate them seamlessly with human efforts. For example, a manager might use AI to analyze project data and identify bottlenecks, then work with their team to devise the kind of creative solutions that AI cannot generate on its own. This shift requires technical fluency and a strategic mindset to ensure that AI enhances, rather than overshadows, human contributions.
Agents of Change: AI is a disruptive force, upending traditional business models and workflows at an unprecedented pace. Middle managers must become change agents, guiding their organizations through this transformation. This means anticipating disruptions, redesigning processes to incorporate AI, and fostering a culture of adaptability and resilience that motivates their teams to embrace change rather than fear it.
Coaches for a New Era: The rapid integration of AI is reshaping the skills employees need to succeed. Middle managers will play a pivotal role as coaches, helping their teams navigate this new reality and access the resources they need. This will involve mentoring employees through the reskilling process, whether that involves learning how to use AI tools or developing soft skills like critical thinking and emotional intelligence. In a world in which job roles are constantly evolving, this coaching function will be essential for maintaining morale and productivity.
A STRATEGIC ROAD MAP FOR TRANSFORMATION
To successfully integrate AI while redefining middle management, companies must take deliberate, strategic actions. Here are four key steps to guide this process:
1. Reskill Middle Managers for an AI-Driven World: Companies must equip managers with the tools they need to thrive in an AI-augmented workplace. This includes training in AI literacy, change management, and collaborative leadership. For example, programs could teach managers how to use AI-powered analytics to make data-driven decisions or how to lead hybrid teams effectively.
First Step: AI Workflow Analysis. Tomorrow, pick one recurring managerial task—such as status reporting—and break it into sub-steps, tagging each as Automate, Augment, or Human-Only. Capture a before/after flow, choose one AI tool to test, and run a one-week experiment based on that redesign.
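To make the tagging exercise concrete, here is a minimal sketch in Python of what the sub-step breakdown might look like; the task, sub-steps, and tags are illustrative assumptions, not part of any particular company’s workflow.

```python
# A minimal sketch of the one-week workflow experiment described above.
# The task, sub-steps, and tags are hypothetical examples, not prescriptions.

SUB_STEPS = [
    ("Collect status updates from team members", "Automate"),
    ("Summarize progress against milestones", "Augment"),
    ("Flag risks and blockers", "Augment"),
    ("Decide on escalations and trade-offs", "Human-Only"),
]

def summarize(steps):
    """Count sub-steps per tag to show where AI could take over, assist, or stay out."""
    counts = {}
    for _, tag in steps:
        counts[tag] = counts.get(tag, 0) + 1
    return counts

if __name__ == "__main__":
    for step, tag in SUB_STEPS:
        print(f"[{tag:10}] {step}")
    print("Tag totals:", summarize(SUB_STEPS))
```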
2. Foster AI Literacy Across the Organization: AI is not just a tool for tech teams—it’s a transformative force that affects every function, from marketing to HR to operations. To maximize its impact, companies must ensure that employees understand how to leverage AI in their daily work. This could involve workshops on using AI tools for tasks like data analysis or customer engagement, as well as broader education about the strategic implications of AI.
First Step: Create an AI Use Log. Create a shared document with three columns—Task, Tool/Prompt, Result/Risk—and ask each team member to add one real-world example by end of day. By tomorrow night you’ll have a living inventory of use cases that can serve as a starting point for an AI literacy program.
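For teams that prefer a lightweight file over a shared document, here is a minimal sketch of the same three-column log kept as a CSV; the file name and example entry are hypothetical.

```python
# A minimal sketch of the three-column AI use log described above, stored as a
# CSV file so it can sit in a shared drive. File name and sample row are assumptions.
import csv
from pathlib import Path

LOG_FILE = Path("ai_use_log.csv")
COLUMNS = ["Task", "Tool/Prompt", "Result/Risk"]

def add_entry(task: str, tool_prompt: str, result_risk: str) -> None:
    """Append one team member's real-world example to the shared log."""
    new_file = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(COLUMNS)  # write the header once
        writer.writerow([task, tool_prompt, result_risk])

if __name__ == "__main__":
    add_entry(
        "Draft weekly customer update",
        "Chat assistant: 'Summarize these notes into a 150-word email'",
        "Saved ~30 min; risk: tone drifted and needed a human edit",
    )
```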
3. Redefine Hiring and Promotion Criteria: Traditional metrics for managerial success, such as years of experience or the size of a manager’s team, are becoming outdated. Instead, companies should prioritize skills like adaptability, AI fluency, and the ability to lead through ambiguity. For example, when hiring or promoting managers, organizations might assess a candidate’s ability to integrate AI tools into workflows or their track record of leading change initiatives.
First Step: Adapt Your Interview Questions. Add two questions to your next interview or promotion panel: “Show a process you’ve redesigned with AI. What stayed human?” and “How do you verify outputs and handle errors?” This will bring out real-world fluency, judgment, and accountability without overhauling the whole interview process.
4. Map and Optimize Workflows: To fully harness AI’s potential, companies must conduct a thorough audit of their workflows to identify where AI can add value and where human judgment remains critical. This involves mapping out existing processes, pinpointing inefficiencies, and determining how AI can streamline operations. For instance, a company might use AI to automate routine approvals in its supply chain while relying on managers to negotiate strategic partnerships.
First Step: Set Decision Rights. For any workflow touched by AI, draft a mini-RACI that identifies who approves, who reviews, and which decisions must stay human-in-the-loop. Publish it to the team tomorrow so guardrails are explicit.
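As a rough illustration, a mini-RACI of this kind can be captured in a few lines of structured data and shared with the team; the workflow, decisions, roles, and thresholds below are hypothetical placeholders.

```python
# A minimal sketch of the mini-RACI described above. Decisions, approvers,
# and reviewers are invented examples for illustration only.

DECISION_RIGHTS = {
    "Routine purchase-order approval (below threshold)": {
        "approves": "AI agent",
        "reviews": "Team lead (weekly audit)",
        "human_in_the_loop": False,
    },
    "Vendor contract renewal": {
        "approves": "Department manager",
        "reviews": "Finance partner",
        "human_in_the_loop": True,
    },
}

def publish(rights: dict) -> None:
    """Print the decision-rights table so guardrails are explicit for the team."""
    for decision, r in rights.items():
        guardrail = (
            "MUST stay human-in-the-loop" if r["human_in_the_loop"] else "may be automated"
        )
        print(f"- {decision}: approved by {r['approves']}, "
              f"reviewed by {r['reviews']} ({guardrail})")

if __name__ == "__main__":
    publish(DECISION_RIGHTS)
```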
THE POWER OF THOUGHTFUL TRANSFORMATION
The rise of AI represents both an imperative and an opportunity for organizational transformation. Yes, companies should embrace the efficiency gains that come from automating traditional middle management functions—the competitive landscape demands it. But those who approach this transformation thoughtfully, preserving crucial knowledge and career pathways while reimagining the manager’s role, will build organizations that are not just leaner but genuinely smarter.
The choice isn’t between humans and machines—it’s between thoughtful transformation and reckless disruption. Organizations that recognize this distinction and act accordingly won’t just survive the coming changes; they’ll help define what the future of work looks like.
[Image: Ony98/Adobe Stock; Menara Grafis/Adobe Stock; thenikonpro/Adobe Stock]
Original article @ Fast Company.
Algorithms and the Erosion of Humanity
Algorithms mirror flaws and fuel mistrust. Surveillance erodes privacy. Mindful choices, collective action—laws, literacy—can reclaim humanity.
KEY POINTS
Algorithms echo us. To fix them, we must evolve—starting with how we think and act.
Pause before reacting. Mindful choices turn digital fights into real dialogue.
Curate your feeds with care. Ditch outrage, seek voices that spark clarity.
Do you remember the early days of social media? The promise of connection, of democratic empowerment, of barriers crumbling and gates opening? In those heady days, the co-founder of Twitter said that “Twitter was a triumph of humanity, not of tech,” and rather than laughing, everyone clapped.
Today’s reality has turned out a bit differently. Algorithms are fueling mistrust, fracturing society through surveillance and division, and eroding the foundations of authentic human connection. We find ourselves increasingly isolated despite being more “connected” than ever.
This outcome is not an inevitable byproduct of progress. It is a consequence of human choices—and, crucially, a reflection of who we are. And it is possible to reverse course, but it will take conscious effort to reshape both these systems and ourselves.
The Trust Crisis
Media platforms care more about clicks than the truth. MIT research shows false information spreads significantly faster than truth online, and the Facebook Papers showed that anger generated more engagement than understanding or compassion.
The result is declining trust—as the 2025 Edelman Trust Barometer reports, global trust in media sources of all kinds, including social media, is declining. And the result of that is a fractured public square where agreement on a shared reality is elusive.
As I have argued both here and in my recent book Transcend: Unlocking Humanity in the Age of AI, algorithms don’t actually create these problems—they amplify them. Artificial intelligence (AI) systems are trained on human behavior and human culture, learning from what we say, do, and produce. In essence, algorithms hold up a mirror to humanity, reflecting back both our finest qualities and our darkest impulses.
When we see division, mistrust, and outrage dominating our feeds, we’re not just witnessing technological failure—we’re confronting our own nature. The algorithm didn’t invent our tendency to pay more attention to threats than to good news, or our inclination to seek information that confirms what we already believe. It simply learned these patterns from us and then amplified them at an unprecedented scale.
This mirror effect is both sobering and empowering. It’s sobering because it forces us to acknowledge that the problem isn’t just “out there” in the technology—it’s also within us. But it’s empowering because it means we have agency. If algorithms reflect what we are, then by changing ourselves, we can change them.
The Fracturing of Society
When we can’t straightforwardly believe what is reported, we no longer have a common foundation of facts that we can agree on. And when we don’t have this common foundation, dialogue becomes dispute, and conversation turns into conflict. Political discourse on platforms like X often spirals into polarized shouting matches, as algorithms amplify divisive voices while marginalizing moderate ones.
The assault on our social fabric extends beyond information manipulation to the erosion of privacy. Research shows 81 percent of Americans believe the risks of data collection outweigh its benefits, yet we continue to feed these systems with every click, scroll, and pause. This isn’t just an individual concern—when nothing remains truly private, authentic social relationships become impossible.
The constant threat of surveillance creates a chilling effect on genuine expression. We self-censor in conversations, knowing our words might be captured and shared. We become performative rather than vulnerable, guarded rather than open. When people know they’re being watched—by algorithms, by potential viral exposure—they start policing their own behavior and others’, weakening the diverse viewpoints essential for healthy democracy.
The recent Coldplay concert incident exposes this cruel reality: flawed judgment and questionable behavior, transformed into viral spectacle. Algorithms don’t distinguish between newsworthy events and personal humiliation; they amplify whatever maximizes clicks, leaving individuals defenseless against viral shaming. Without spaces for true privacy, we lose the foundation that allows deep human connection to exist.
Reclaiming Our Humanity
We have been complicit in creating this world of misinformation, mistrust, division, and surveillance. But in this very fact lies the possibility of salvation—what we have helped create, we can also help change.
Like everything that is worth doing in life, it cannot be done alone. We will need a mixture of individual awareness and collective action if we are to push back against algorithmic dystopia.
Collectively, we need robust privacy laws, investment in ethical AI, and widespread digital literacy programs. A striking example of governmental action here is Finland’s long-running media literacy education program, which aims to foster critical thinking in the consumption of media, a skill desperately needed in a world awash with misinformation.
Or to take another example, Denmark has proposed legislation that would give individuals copyright over their own faces, establishing that using someone’s image without consent—even in public spaces—would constitute copyright infringement. This landmark proposal recognizes biometric data as personal property, and would give citizens legal recourse against unauthorized viral exposure.
We also need to change at the individual level, and this requires more than good intentions—it demands specific practices that rewire our relationship with digital stimuli.
Start with your emotional responses. Before sharing or reacting to content, pause and ask: “What am I feeling right now? Anger? Fear? Moral outrage?” These emotions aren’t wrong, but they’re often the very feelings algorithms exploit to drive engagement. By recognizing them, you begin to reclaim choice in your responses.
Practice what mindfulness traditions call “beginner’s mind” when encountering opposing viewpoints. Instead of immediately judging or dismissing, approach disagreement with curiosity: “What might I not understand about this perspective?” This single shift can transform algorithmic conflict into genuine dialogue.
Curate your digital environment with intention. The information we consume shapes how we think, feel, and lead. Be deliberate. Unfollow sources that thrive on outrage and division. Replace them with voices that challenge you constructively and expand your perspective. This isn’t about escaping discomfort—it’s about cultivating clarity in a world designed for distraction.
The Choice Is Ours
What we’re witnessing isn’t technological inevitability—it’s the consequence of countless human choices, and it’s still possible for us to change the outcome by changing our choices. The age of algorithms doesn’t have to erode our humanity.
We can and we must take back control. This requires dual action: reshaping algorithms at the collective level through political and legal intervention, while simultaneously rewiring our own responses at the individual level through mindful practices.
And when our children and grandchildren want to know what we did, the mirror of technology will show them what choices we made.
Let’s hope we make the right ones.
[Photo: HadK / Adobe Stock]
Original article @ Psychology Today.
August 5, 2025
The AI Doppelganger Dilemma
AI clones mimic you, threatening identity. Laws aren’t keeping up; we need a veto and clear consent rules to protect your digital self.
KEY POINTS
AI clones mimic us without consent, disrupting identity and causing stress.
We need laws like Denmark’s to block unauthorized AI copies.
A digital likeness veto and consent rules can protect our digital self and story.
Imagine waking up to a video of yourself going viral online. It’s your face, your voice, your mannerisms. But the words? Nothing you’ve ever said.
This isn’t science fiction. It’s the AI Doppelganger Dilemma, a growing crisis that’s shaking the foundations of identity, trust, and autonomy in our digital age. As artificial intelligence advances, it can now replicate our voices, faces, behaviors, and even thought patterns with eerie precision. These digital “twins” can act in our likeness, speak on our behalf, and make decisions that mimic our style.
As long as these digital twins are acting for us and with our consent, they can be incredible productivity tools. But what happens when an artificial version of you exists that’s not under your control? And what does that mean for who you are?
The Psychological Toll of a Digital Double
We often think of identity as a fixed thing: the unique character made up of our values, quirks, and choices. But psychological research suggests that identity is really more like a story we tell ourselves. We build our sense of self from memories, actions, and the feeling that we are in control of our own narrative. Now imagine that you have an AI clone out there appearing in a commercial and endorsing a product you’d never touch. Or perhaps someone is using your digital twin to push political opinions that you find distasteful.
When a digital version of you acts independently, it can feel like a breach of your personality, as if it’s a violation of your very existence. I’ve spoken with people who have seen deepfakes of themselves online, and they describe the experience as feeling like a gut punch, as if they were watching a stranger wear their skin. This kind of misrepresentation and loss of control can threaten our sense of who we are and may even leave us feeling as if we lack agency in the world. After all, if someone else can take control of your image and narrative, then what is it that makes you you, in the end?
The Law’s Lag Behind Technology
While laws like the Take It Down Act provide protection from non-consensual sexual deepfakes, there are few legal tools to prevent other kinds of impersonation. In the U.S., some states have “right of publicity” laws, designed to protect against an unauthorized use of a person’s likeness, but these were created to protect celebrities, not everyday people. They are woefully outdated for an era in which AI can clone not just your face but your entire persona. The European Union’s General Data Protection Regulation (GDPR) offers stronger protections, but it doesn’t address the creation of synthetic versions of a person that have been created from their public posts or videos.
Take the case of a small-business owner I met last year. A deepfake video showed her “promoting” a shady investment scheme she’d never heard of. By the time she discovered that the video was out there, her clients were confused and her reputation had taken a hit. Her legal recourse? Minimal. All she could do was clean up the mess and hope it didn’t happen again.
Redefining Identity Theft
Traditional identity theft involves using stolen credit cards or personal details to create a “paper” identity. This is different—it’s the theft of your person. It involves losing control over what’s said or done in your name.
This kind of impersonation cuts deep. It’s not just about privacy; it’s about autonomy, the bedrock of mental health. We need to feel we own how the world sees us. When an AI doppelgänger disrupts that, it can leave us feeling powerless.
What We Can Do About It
This isn’t a problem we can ignore. Governments and businesses need to act now to ensure that individuals can maintain control of their digital representations. We need to:
Establish a Legal Right to Refuse: We need a “digital likeness veto,” a legal right to stop companies from building AI models of us, even if they are using public data. If someone wants to simulate you, they should need your sign-off first. Denmark has recently moved in this direction, with a proposed law that would give citizens copyright over their own likeness, providing a powerful protection against deepfakes. Other nations need to follow the same path.
Demand a Clear Consent Code of Conduct: Until the appropriate legislation is in place, responsible organizations must fill the gap. Some companies are starting to act already, experimenting with “digital twin clauses” that spell out how replicas can be used. This is a step in the right direction, but more is needed. Businesses must sign up to a code of conduct that requires explicit permission when using AI to replicate someone’s identity, whether in an ad or research or for the creation of a virtual assistant or entertainment product.
Owning Your Identity in the AI Age
At its core, the AI Doppelganger Dilemma forces us to ask: If a machine can replicate your speech, likeness, and behavior patterns, what’s left that remains truly yours?
In a world in which your digital double could outlive you or act against you, protecting your identity isn’t just a technical issue. It’s about safeguarding your voice, your choices, your story, your self.
We need to fight for the right to define ourselves. If we lose the power to control how we’re seen, we risk losing the essence of who we are.
[Photo: sarinrat / Adobe Stock]
Original article @ Psychology Today.