Faisal Hoque’s Blog

November 30, 2024

Implementing AI: The Paradox of Devotion and Detachment

AI simultaneously demands our deepest devotion and most disciplined detachment.

“We are here to awaken from our illusion of separateness.”

—Thich Nhat Hanh, The Heart of Understanding


In the rush to embrace artificial intelligence (AI), organizations and individuals often oscillate between blind enthusiasm and paralyzing fear.

Rightfully so.

AI represents a profound paradox in human innovation. It is a technology that simultaneously demands our deepest devotion and most disciplined detachment.

To understand AI, we must grasp this essential duality.

Let me share how we’ve integrated the principles of Devotion and Detachment into our AI strategy and implementations, leveraging the OPEN and CARE frameworks outlined in our upcoming book, TRANSCEND.

Defining Devotion and Detachment

Devotion, known as Bhakti (भक्ति) in Sanskrit and Sadhana (সাধনা) in Bengali, embodies a disciplined effort toward growth and infinite possibilities, as described by Rabindranath Tagore. It emphasizes mindfulness and continuous self-improvement, guiding personal, leadership, and innovative journeys.

Detachment, known as Moksha (मोक्ष) in Sanskrit and Mokkho (মোক্ষ) in Bengali, emphasizes emotional resilience and inner tranquility, as seen in both Eastern philosophies and Stoicism. By releasing attachments to outcomes, it fosters clarity, objective decision-making, and sustainable growth in leadership and personal development.

This dual mindset of detachment and devotion offers a powerful framework for navigating the AI revolution, enabling us to implement our framework OPEN (Outline, Partner, Experiment, Navigate) and CARE (Catastrophize, Assess, Regulate, Exit) with greater effectiveness.

The OPEN Framework

The OPEN framework’s first phase, Outline, requires a careful balance of both mindsets. Detachment from preconceptions about AI’s capabilities and limitations allows us to see possibilities we might otherwise miss. Meanwhile, devotion to our core purpose ensures we don’t chase technological possibilities that don’t serve our genuine needs. A healthcare organization implementing AI for patient care, for instance, must detach from traditional methods while remaining devoted to patient outcomes.

In the Partner phase, detachment from the need to control everything internally opens the door to valuable collaborations. Many organizations fail here because they can’t let go of their proprietary mindset. Yet successful AI implementation often requires multiple partnerships – with technology providers, domain experts, and even competitors. Devotion to the partnership’s success, rather than just our own interests, creates the trust necessary for these relationships to flourish.

The Experiment phase particularly benefits from detachment from fear of failure. Organizations often limit their AI experiments because they’re too attached to perfect outcomes. True experimentation requires the courage to fail and learn. However, this detachment must be balanced with devotion to rigorous testing and evaluation. Without this dedication to thorough assessment, experiments become merely playful exercises rather than strategic learning opportunities.

In the Navigate phase, detachment enables the flexibility needed to adapt to AI’s rapidly evolving landscape. Organizations must be willing to pivot when necessary, abandoning approaches that no longer serve their purpose. Simultaneously, devotion to long-term strategic goals prevents drift, ensuring that tactical adjustments don’t compromise overall direction.

The CARE Framework

The CARE framework, focused on risk management, benefits equally from these dual mindsets. In the Catastrophize phase, detachment from optimism bias enables clear-eyed assessment of potential risks. Many organizations fail to identify serious AI risks because they’re too attached to positive outcomes. Yet this detachment must be balanced with devotion to finding solutions rather than just identifying problems. The goal isn’t to paralyze with fear but to prepare effectively.

During the Assess phase, detachment from defensive reactions allows honest evaluation of organizational vulnerabilities. Many organizations struggle here because they’re too attached to their self-image as highly capable and secure. Yet devotion to protecting stakeholders requires this honest self-assessment. Organizations must be willing to acknowledge their weaknesses in order to address them effectively.

The Regulate phase demands detachment from quick fixes and easy solutions. Effective risk management requires sustained effort and often significant organizational change. Here, devotion to building robust systems and processes becomes crucial. Organizations must commit to ongoing monitoring and adjustment of their risk management strategies, even when immediate threats aren’t apparent.

Perhaps most challenging is the Exit phase, where detachment from sunk costs becomes crucial. Organizations must be willing to shut down AI initiatives that prove too risky or unaligned with their values, regardless of previous investment. This requires devotion to organizational values and risk parameters over short-term gains or the desire to prove initial decisions correct.

Practical Implementation

Cultivating these complementary mindsets requires conscious effort and practice. Organizations can start by incorporating both perspectives into their decision-making processes.

For example, when evaluating AI projects, teams should explicitly consider what they need to let go of (detachment) and what they need to maintain focus on (devotion).

Regular reflection exercises can help teams maintain this balance. Questions like “What assumptions are we holding onto that we should release?” help cultivate detachment, while “What core values must we protect as we innovate?” strengthen devotion.
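One way to make these reflection exercises routine is to fold them into a lightweight project-review template. The sketch below is purely illustrative; the structure, field names, and sample entries are my own invention, not a tool prescribed in TRANSCEND:

```python
from dataclasses import dataclass, field

@dataclass
class ProjectReview:
    """A lightweight devotion/detachment check for an AI initiative."""
    project: str
    # Detachment: assumptions, habits, and sunk costs to release.
    release: list[str] = field(default_factory=list)
    # Devotion: core values and outcomes to protect.
    protect: list[str] = field(default_factory=list)

    def is_balanced(self) -> bool:
        # A review should name at least one item on each side.
        return bool(self.release) and bool(self.protect)

# Hypothetical example for a healthcare AI project.
review = ProjectReview(
    project="AI patient-triage assistant",
    release=["Attachment to the legacy intake workflow"],
    protect=["Patient outcomes", "Clinician oversight of every decision"],
)
print(review.is_balanced())  # True
```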

Spotify’s approach to AI development exemplifies this balance. Their detachment from being solely a music streaming platform allowed them to embrace AI’s potential in creating deeply personalized experiences, while their devotion to their mission of connecting creators with audiences has kept them focused on enhancing rather than replacing human creativity. Through their AI-driven discovery features and recommendation algorithms, they’ve maintained the delicate balance between technological innovation and preserving the human elements of music curation and artistic expression. This balanced approach has positioned them to use AI not just for efficiency, but for deepening the connection between artists and listeners in meaningful ways.

The Path Forward

As AI continues to transform our world, the ability to balance detachment and devotion will become increasingly crucial. Organizations that master this balance will be better positioned to implement both the OPEN and CARE frameworks effectively, maximizing AI’s benefits while managing its risks.

By consciously cultivating both detachment and devotion, organizations can navigate the AI revolution with greater confidence and effectiveness, creating sustainable value while maintaining their essential character.

The challenge ahead isn’t just technological – it’s fundamentally human. By cultivating detachment and devotion in equal measure, we can ensure that our journey with AI enhances rather than diminishes our humanity, creating a future that’s both technologically advanced and deeply meaningful.

Adapted from my book TRANSCEND.

Adapted/published with permission from ‘TRANSCEND’ by Faisal Hoque (Post Hill Press, March 25, 2025). Copyright 2024, Faisal Hoque. All rights reserved.

Look for my latest book TRANSCEND available for pre-order now.


November 17, 2024

Unlocking Agility: Thriving in the AI-Driven Business Landscape

Agility has become the new organizational currency in the age of AI.

Agility, by definition, is an organization’s capacity to respond to change and challenges driven by macro- and microeconomic conditions.

In 2006, working with a dozen leading academics and Fortune 100 executives, we defined ‘Organizational Agility’ in our book, Winning The 3-Legged Race, as:

“Agile organizations possess the processes and structures, or what we call ‘intangible assets,’ that give them situational awareness into the macroeconomics and the competitive and operational trends inside and outside their four walls. They also have the management and technology mechanisms needed to act on that knowledge rapidly.

Success requires innovation in services and products. It also requires the continuous improvement of business processes within and across organizational boundaries. These two mandates are mirror images. Innovation of services and products cannot occur without well-defined and aligned processes; nor can business processes be improved without attention to changes in markets, and customer needs.”

BUSINESS AGILITY INDEX

In a follow-on study conducted by my team in 2010, publicly traded U.S. companies were examined across multiple industry groups, using a range of financial measures—including value, performance, growth, margin, capital efficiency, and stock price volatility—to measure the financial effect of business agility against a maturity scale (see Table 8–1, [Source: The Power of Convergence, Faisal Hoque, et al]).

This research confirmed the economic value and financial performance advantages of companies that practice Business Agility. The overall results show that companies with highly mature business agility characteristics—the Business Agility leaders—exhibited superior financial performance:

- A 13 percent to 38 percent performance advantage in capital efficiency and value.
- A 10 percent to 15 percent performance advantage in margin.
- Up to a 5 percent performance advantage in revenue and earnings growth.
- Up to one-third less stock price volatility compared to non-agile publicly traded organizations.

Moreover, the study showed that the agility performance advantage was sustainable; these numbers reflect both the one-year view as well as a five-year view.

We called this the Business Agility Index.
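The study’s actual methodology is not reproduced in this excerpt, but as a purely hypothetical illustration of how a composite index of this kind can be assembled, consider a weighted average of normalized scores on the financial dimensions named above (the weights and example scores below are invented for illustration):

```python
# Hypothetical weights over the dimensions measured in the study; these
# numbers are illustrative only, not the study's actual methodology.
WEIGHTS = {
    "capital_efficiency": 0.25,
    "value": 0.20,
    "margin": 0.20,
    "growth": 0.20,
    "stock_stability": 0.15,  # inverse of stock price volatility
}

def agility_index(scores: dict) -> float:
    """Weighted average of per-dimension scores, each normalized to [0, 1]."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

# Invented example scores for a single company.
example = {
    "capital_efficiency": 0.8,
    "value": 0.7,
    "margin": 0.6,
    "growth": 0.5,
    "stock_stability": 0.9,
}
print(round(agility_index(example), 3))  # 0.695
```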

Much of that theory and research remains relevant today. Indeed, with the advent of AI, “agility” will be the defining advantage for any organization.

Business leaders often use the term “agility” to describe their business plans and strategic initiatives, but it is often little more than a word—and a fervent wish. Thriving under constant market pressure requires business leaders to identify, understand, and respond in real time to change and disruption.

Organizations must find new ways to compete by streamlining business processes to eliminate redundancy and costly exceptions, while creating higher value. Despite the fact that the cost of doing business continues to rise, agile companies are mastering cost containment by increasing their ability to respond and adapt to frequently changing market conditions.

On the other hand, we don’t have to look far for enterprises that have stumbled or disappeared because they lost their agility. Sun Microsystems, once the superstar of Silicon Valley, was sold to software giant Oracle at a near fire-sale price because it had become too fragmented in its products and failed to maintain a compelling position with customers. And Newsweek, one of the more venerable weekly news and analysis magazines of the last century, was sold for $1.00 after having its value proposition eroded by new, Internet-based media.

Could these organizations have reinvented themselves to stave off disaster and collapse? Absolutely. It would have required rethinking their business models, making strategic investments in new technologies and markets, and taking a risk on immature ventures. Precipitating this change is situational awareness, or the ability to recognize change, opportunities, and competitive threats. But it takes more than recognition to act upon these changes. It takes agility and a willingness to respond.

AI FOR BUSINESS AGILITY

Agility has become the new organizational currency in the age of AI because the rapid pace of technological advancement, market shifts, and evolving customer expectations demand that organizations be highly adaptive.

AI enhances organizational agility by enabling rapid, data-driven decision-making, optimizing operations, and allowing organizations to adapt quickly to changing conditions. Here’s how:

- Predictive Analytics: AI provides real-time insights from vast data, allowing organizations to anticipate trends and make informed adjustments swiftly (a minimal forecasting sketch follows below).
- Automation of Routine Tasks: Automating repetitive processes increases efficiency and frees up employees for higher-value work, improving overall responsiveness.
- Customer Personalization: AI tailors experiences to individual customer preferences, boosting satisfaction and enabling organizations to adapt to evolving customer needs.
- Supply Chain Optimization: AI forecasts demand, identifies potential disruptions, and recommends alternatives, making supply chains more flexible and resilient.
- Risk Management and Dynamic Workforce Allocation: AI assesses risks and supports flexible resource allocation, empowering organizations to pivot quickly in response to internal and external changes.

Together, these capabilities make organizations more agile, resilient, and responsive in a competitive, fast-paced market.
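To make the first of these capabilities concrete, here is a minimal forecasting sketch. A simple moving average stands in for the far richer models real predictive-analytics systems use; the demand figures and the 10 percent trigger are invented for illustration:

```python
def moving_average_forecast(history: list, window: int = 3) -> float:
    """Forecast the next period as the mean of the last `window` observations."""
    recent = history[-window:]
    return sum(recent) / len(recent)

# Invented weekly demand figures for a single product line.
demand = [120.0, 132.0, 128.0, 145.0, 151.0]

forecast = moving_average_forecast(demand)
print(f"Next-week forecast: {forecast:.1f}")  # 141.3

# A simple agility trigger: flag a review when the forecast drifts more
# than 10 percent from the trailing average.
baseline = sum(demand) / len(demand)
if abs(forecast - baseline) / baseline > 0.10:
    print("Signal: demand shift detected; review the capacity plan.")
```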

THE JOURNEY TOWARD AGILITY

Achieving agility is not exclusive to organizations of a certain size by revenue or industry sector; it is available to all organizations. Further, our research findings specify the behaviors and constructs that drive agility.

Those behaviors and constructs constitute a repeatable management practice that can be implemented using the various management frameworks we define in our recent books, REINVENT and TRANSCEND.

Organizations discover profitable opportunities—new market spaces or gaps in existing market spaces—by considering:

- Signals regarding product/service, customer, technology, socio-economic, and cultural trends.
- Competitors’ current and future strategic positions.
- The organization’s internal competencies.
- The competencies it might gain through access to partners.

An organization’s initial position in the product market must then be regularly augmented so that it continues to offer a value proposition beyond those provided by competitors. Business and technology together play a critical role in establishing a strategic position or in sustaining it once established; failing to understand these roles across each of an organization’s product markets can—and often does—lead to inappropriate levels of business and technology investment.

An organization that has the ability to successfully negotiate its path through the ever-shifting competitive landscape has developed the talent to continuously transform itself as opportunities and threats appear. This organization maintains three characteristics:

- Ongoing assessment of activities, eliminating those that don’t serve the core business strategy.
- Continual refinement of activities for greater efficiency and productivity.
- Redirection of resources to new products, processes, and business models.

Building an agile organization is not easy. Change must become a part of an enterprise’s fabric. Bringing in a persuasive leader might help the transformation, but it is the change management skills they bring to the table that will help make the journey a successful one.

More crucial to that success, however, is the need to incorporate new ways of doing business into the company’s management systems and business processes.

This means new organizational structures, the creation and sharing of new types of information, and establishment of new decision-making processes.

From my book TRANSCEND.

Adapted/published with permission from ‘TRANSCEND’ by Faisal Hoque (Post Hill Press, March 25, 2025). Copyright 2024, Faisal Hoque. All rights reserved.

Look for my latest book TRANSCEND available for pre-order now.


October 29, 2024

AI: Legislators Have NO Time to Waste

The consequences of allowing ourselves to be distracted from the most important challenge humanity may ever face could be fatal.

The U.S. presidential election in November will have an enormous impact on the trajectory our country takes over the next four years. The two candidates have staked out often starkly different positions on some of the most important issues of our time. Yet on one key question the silence has been deafening. Over the next decade, the development and application of Artificial Intelligence will change the way we live and work. It may even change the way we think and vote. But the question of how we should regulate this technology barely registers as a blip on the nation’s political radar.

This is not to say that the candidates don’t care about Artificial Intelligence. Kamala Harris served as the current administration’s AI czar, while Donald Trump has promised to repeal the federal government’s current policy framework. But few voters would be able to tell you where either party stands on even the biggest-picture issues.

We cannot afford to remain complacent about this paradigm-breaking technology.

The impact of AI is already with us. Large language models, such as GPT-4, Claude, and Gemini, have already begun to replace human workers in many industries. Machine learning algorithms are powering new advances in our healthcare system at the same time as they risk entrenching old biases. And while it is unlikely that AI policy will play any role in the outcome of the election, AI itself is already being used by bad actors as a tool to try to sway the results.

Legislation to regulate the development and application of AI is needed and it is needed now. Unfortunately, with partisan divisions feeding the current congressional gridlock, there is little appetite to expend political capital on an issue that does not yet matter to voters. But we cannot afford for this shortsightedness to prevail. The stakes are simply too high.

Over the last year, I have been working on a book about AI – about its enormous potential and its equally enormous dangers, and about how we can harness the former and manage the latter (Transcend: Unlocking Humanity in the Age of AI will be published by Post Hill Press in early 2025). As part of this process, I have been sitting down with distinguished thinkers in the fields of science, philosophy, and politics to hear their views on how we should be tackling the defining issue of our time. I recently met with United States Senator Richard Blumenthal to talk about how we can effectively regulate a technology that we don’t yet fully understand. 

Senator Blumenthal is co-author of the Bipartisan Framework for U.S. AI Act, also known as the Blumenthal-Hawley Framework. Created with Senator Josh Hawley, the Bipartisan Framework has five pillars:

Establish a Licensing Regime Administered by an Independent Oversight Body

Ensure Legal Accountability for Harms

Defend National Security and International Competition

Promote Transparency

Protect Consumers and Kids

(Source: “Blumenthal & Hawley Announce Bipartisan Framework on Artificial Intelligence Legislation,” U.S. Senator Richard Blumenthal, senate.gov)

As Senator Blumenthal put it when discussing the framework with me, “AI has enormous potential for doing good, whether in medical research and treatment, exploring space, or simply doing the rapid calculations that are necessary for all kinds of other benefits to people around the globe. But there are also perils. Some seem like science fiction, but they are very real. The idea that AI models could be smarter than human beings and could, in effect, program themselves to be out of control, is a frightening prospect.” Senator Hawley frames the need for action in similarly stark terms: “Congress must act on AI regulation … Our American families, workers, and national security are on the line.”

Any serious attempt at regulation has to address both the utopian and dystopian possibilities that the development of AI will open up. Given the extreme nature of some of the risks, which could potentially include human extinction, Senator Blumenthal argues that it is necessary for regulation to pay close attention to the dangers: “we should be very risk-oriented behind what we do.” The regulation of the pharmaceutical industry offers a model here. The United States imposes strict regulations on the development and manufacturing of pharmaceuticals, ensuring that the entire process is subject to government oversight. And yet America’s leading role in the global pharmaceuticals industry shows that this can be done without damaging market competitiveness and vital progress in the sector. 

As with pharmaceuticals, Senator Blumenthal believes the risks surrounding AI are simply too grave to permit self-regulation by either businesses or individuals. “I think the government has to support and sustain an oversight mechanism, an entity that provides this kind of set of safeguards or regulations,” he explains. This regulatory entity would have the authority to oversee AI technologies, ensuring that they are developed and deployed responsibly. However, there are important challenges to overcome in crafting the necessary legislation. I spoke to the senator just a few days after the Supreme Court’s decision to overturn the Chevron deference doctrine, which allowed administrative agencies considerable latitude to interpret ambiguous statutes. Blumenthal acknowledges these difficulties but remains steadfast in his belief that creating an effective regulatory body is not only possible but essential. “This body, whatever it’s going to be called, has to feel its way and establish first that it has the power to regulate, and second, what that will include.”

Regulation is essential, then, and it must be risk-oriented. But we must be careful that we don’t throw out the baby with the bathwater. The only way to reduce the risk of AI to zero through legislation would be to introduce a blanket ban on all development in the field. Taking this approach is neither desirable nor possible given the certainty of international competition. Instead, the need to protect against potential harms must be balanced with the imperative to foster innovation. “We don’t want to stifle innovation,” says Blumenthal. “That’s why the regimented regulation has to be risk-based. If it doesn’t pose a risk, we don’t need a regulator.”

One of the key risks of AI is that as it develops it will render vast numbers of workplace roles redundant, contributing to mass unemployment, with all its attendant economic, psychological, and social consequences.

Senator Blumenthal argues that government has a responsibility to manage this risk. Most obviously, he says, “we have to do what we’ve done in the past, which is to provide support for training programs and skills to enable workers to do jobs that are in greater supply.” We cannot and should not attempt to stop the AI revolution from making work more efficient, but we must ensure that the frameworks and support networks are in place to enable workers to retrain and upskill so that they can move from declining career roles to those that will flourish in an AI-powered future. 

Generals, they say, are always fighting the last war. Similarly, governments tend to focus their efforts on regulating the last revolution and responding to the last disaster. That is not an option with AI.

As I suggest in the book, and as Senator Blumenthal also argues, here as in few other areas, it will be essential for governments to get ahead of the curve by developing regulatory tools that are not just appropriate for the last breakthrough but for the next one as well. 

How we deal with the looming problem of AI-driven misinformation and disinformation will be an important test case. Senator Blumenthal endorses watermarking as a way to verify AI outputs. As I have discussed elsewhere, when partnered with the immutable ledgers of blockchain technology, this could indeed be a powerful way of providing a point of information stability in the shifting sands of a future in which AI can put words in peoples’ mouths in a heartbeat. But to make this kind of solution work, we need to move quickly. It is essential that we have the necessary technical and regulatory frameworks in place before we are overwhelmed by rapidly developing AI capabilities.
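A minimal sketch of the idea follows, assuming a hash-based content fingerprint appended to a blockchain-style, append-only ledger. Real watermarking and ledger systems are far more sophisticated; every class and method name here is illustrative:

```python
import hashlib
import json
import time

def _block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

class WatermarkLedger:
    """Append-only chain of content fingerprints; tampering breaks the chain."""

    def __init__(self):
        self.chain = [{"prev": "0" * 64, "fingerprint": "genesis", "ts": 0}]

    def register(self, content: str) -> str:
        # Each block commits to the previous block's hash, so rewriting
        # history would invalidate every later entry.
        block = {
            "prev": _block_hash(self.chain[-1]),
            "fingerprint": hashlib.sha256(content.encode()).hexdigest(),
            "ts": time.time(),
        }
        self.chain.append(block)
        return block["fingerprint"]

    def verify(self, content: str) -> bool:
        # Content checks out only if its exact fingerprint was registered.
        fp = hashlib.sha256(content.encode()).hexdigest()
        return any(b["fingerprint"] == fp for b in self.chain[1:])

ledger = WatermarkLedger()
ledger.register("Official statement, as published.")
print(ledger.verify("Official statement, as published."))   # True
print(ledger.verify("Official statement, subtly altered."))  # False
```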

At the moment, regulations are lagging behind the technology, and we cannot afford for this trend to continue. 

Senator Blumenthal’s suggestion that we think about AI regulation in terms of the models we use for the pharmaceutical industry provides a valuable analogy. But in my view, it doesn’t take us far enough. There are at least two key differences between AI development and the pharmaceutical industry. The first is that, unlike the drug development process, the barriers to entry are minimal when it comes to adapting existing AI models for new purposes.

For many people with entry-level software development skills, this is something that can be done at home. Second, while the manufacturing and distribution of pharmaceuticals requires significant infrastructure, computer code can be replicated endlessly and transmitted anywhere on the planet in a fraction of a second. The possibility of problematic AI being created and leaking out into the wild is simply much higher than is the case for new and dangerous drugs.

As Mustafa Suleyman, co-founder of DeepMind and CEO of Microsoft AI, argues in his book The Coming Wave: Technology, Power, and the Twenty-First Century’s Greatest Dilemma, the day is not far away when individuals will be able to use AI to create potentially devastating biological organisms at low cost and in the privacy of their own homes.

The direct and indirect dangers that arise from possibilities like this go beyond anything posed by pharmaceuticals, and this difference needs to be accounted for in our regulatory processes.

It is also important to consider that our current frameworks for pharmaceutical regulation are not even close to being flawless. There is a very large and flourishing black market for drugs in the United States, which shows the limits of the government’s ability to control manufacturing, while the prescription drug crisis shows that even where regulation does reach, current laws are not always able to contain possible harms.

Given the potential for AI to generate extinction-level outcomes, it may be necessary to think in terms of the regulatory frameworks surrounding nuclear weapons and nuclear energy rather than those that apply in the drug industry.

Whatever approach we take, “We need to do something quickly,” as Senator Blumenthal says. “And so far, we haven’t been doing it.” With the first land war in Europe for more than a generation, concerns about the rise of China, and political turmoil in many Western nations, it is easy to be distracted by immediate dangers that appear more pressing. But we cannot afford to shunt AI to the back of the line. The consequences of allowing ourselves to be distracted from the most important challenge humanity may ever face could be fatal.

From my book TRANSCEND.

Adapted/published with permission from ‘TRANSCEND’ by Faisal Hoque (Post Hill Press, March 25, 2025). Copyright 2024, Faisal Hoque. All rights reserved.

Look for my latest book TRANSCEND available for pre-order now.


October 10, 2024

To Understand AI We Must Understand Ourselves

The core value of humanity is our ability to make choices about what matters to us.

AI will have all manner of effects over the course of its development.

Faced with this, it is important to ask a simple question: should we care about all of those effects?

In short, the answer is no.

But how do we get there?

Let’s imagine that one of the effects of AI is that autocomplete functionality increases the usage of the letter “s” in English-language text and chat messaging by 8%. Let’s stipulate that this change is completely irrelevant to the interests of human beings or any other living beings or any systems that affect sentient beings. Let’s further stipulate that the fact that the letter “s” appears 8% more frequently also has no effect on anything relevant to the interests of sentient beings.

Now, from the perspective of human beings, the only reasonable response to The Great S Revolution is: So what? Who cares? The frequency of the use of this letter just doesn’t matter. We simply end up residing in a slightly more sibilant environment.

This thought experiment shows something that is almost banal but that is nonetheless profoundly important in figuring out how to respond to AI. AI will have a bunch of effects, and not all of them will matter to human beings. And even among the effects that do matter, there will be differences in how much they matter. Any wise response to AI must take both these things into account.

To Understand AI We Must Understand Ourselves

Put simply, if we want to understand what matters about AI, we first need to gain some clarity about what matters for humans and about how much different things matter. Or, to put it another way, we need to start trying to understand what is valuable to human beings and about human beings. If we don’t do this, it is almost inevitable that we will waste a great deal of time, energy, and resources on actions and policies that do little or nothing to unlock human potential. Worse, we will likely fail to protect what does matter about humanity and to humanity.

To start ourselves down this essential road, we need to engage with even more fundamental philosophical questions:

- What is a human being?
- What does it mean to be human?
- What is the proper aim of human life?
- What should human beings be striving towards?

Our answers to these questions will have an enormous effect on what we think we need to do about AI.

The ancient Greek philosopher Aristotle famously defined man as a rational animal. Let’s say, then, that this is the “essence” of human beings and that what is special about human beings is our capacity for rational thought and our ability to act based on rationality rather than impulse. If this is what we believe, then when we think about how to respond to AI, we will place enormous importance on what we need to do to make sure that human beings continue to exercise and cultivate those capacities. For instance, we may argue that it is necessary to keep AI companions out of educational settings, because it is important to make sure that children develop the capacities for rational thought that are the essence of humanity rather than always and immediately turning to an AI assistant for answers.

On the other hand, let’s say that instead of thinking of human beings as rational animals, we believe that the secret sauce of being human is our ability to feel. Further, let’s say we narrow this down to our ability to feel love. If this is what (we believe) is the most special and important thing about human beings, if this is the feature of human beings that we need to protect, then we may not care at all about AI assistants helping children with mathematical calculations. On the humans-as-lovers model, the ability to maneuver symbols according to certain conventions would be seen as irrelevant to the value of humanity. We would instead see the opportunities and threats of AI through the lens of what it is about AI that can help and harm human beings in developing and exercising their capacity to love.

In order to respond well to AI:

- we need to understand what to respond to,
- we need to understand what effects of AI will be relevant, and
- we need to understand their relative importance.

This requires understanding more than AI. It requires understanding human beings and the things that matter to human beings. And to understand this, we need to go back one step further and ask: What does it mean to be human?

We often talk about AI as if it is “Coming soon to a reality near you!” Just around the corner is tech that will improve the accuracy of cancer diagnoses, power self-driving cars, control bionic limbs, and provide the backbone for smart, adaptive work environments. In a couple of years, we can expect AI to be writing readable books, planning corporate strategies, and providing all sorts of direct assistance to individuals. But it’s not just a coming attraction. The truth is, it’s already here. AI hasn’t up-ended society just yet or fundamentally altered our experience of what it is to be human, but it is hard at work in our daily lives.

Most of us have been carrying some basic AI functionality around in our pockets for years now. Facial recognition technology to unlock our phones, the autocomplete tool in messaging apps, and the algorithms that suggest the next song or video on a streaming platform all rely on some form of machine learning. For the most part, the way AI has moved into our lives so far seems pretty innocuous. It simply enhances or extends existing human capabilities, letting us do things a little faster or a little more effectively. These changes may raise some ethical issues here and there, but there’s nothing particularly dramatic that we need to worry about. No one is really concerned about losing their humanity to Google Maps or Netflix’s algorithm, right? It’s the big picture changes that are the stuff we should worry about: the conscious machines enslaving us or the intelligent nanobots reducing the planet to a fine paste as they replicate endlessly.

But if we think a little harder about the apparently trivial ways in which AI intersects with our lives, it quickly becomes clear that there are bigger issues at play. Take autocomplete, one of the most ubiquitous and apparently innocuous functions of all. Who could possibly object to their phone suggesting the most likely word to use next in a sentence, saving the user dozens of taps on their screen for every message they send? Not us.

We like the convenience as much as the next person. But we do want to point out something important here. Autocomplete makes life a little bit easier. And that’s great. But in making things easier, it also creates a motivating force that feeds into our decision-making processes. It is easier to use this functionality than to not use it, and this convenience gives us a reason to accept the word offered by the algorithm rather than expressing ourselves in a more detailed, more nuanced, or less statistically commonplace way.

Sometimes we’ll be laser focused on using a certain phrase, so we’ll ignore the suggestion that comes up. If we’re really punctilious about our prose, we might have several goes at typing out a word even if the spelling is challenging. But often … very often … we’ll accept the easy option rather than spending ten or twenty times as long to get our preferred word just right. Without making any conscious decision, and without anything being forced on us, we find our use of language constrained, variety and precision sacrificed on the altar of convenience.
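The mechanism is easy to see in miniature. The sketch below implements the crudest possible next-word suggestion, a bigram frequency table; production autocomplete uses far more sophisticated models, but the convenience dynamic is the same, since the suggestion is always the statistically commonplace choice:

```python
from collections import Counter, defaultdict
from typing import Optional

def build_bigrams(corpus: str) -> dict:
    """Map each word to a Counter of the words that follow it."""
    table = defaultdict(Counter)
    words = corpus.lower().split()
    for current, following in zip(words, words[1:]):
        table[current][following] += 1
    return table

def suggest(table: dict, word: str) -> Optional[str]:
    """Return the most common next word, if one has been seen."""
    followers = table.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

# A toy corpus; real keyboards learn from vastly larger text samples.
corpus = "thank you so much thank you for everything thank you so kindly"
table = build_bigrams(corpus)
print(suggest(table, "thank"))  # you
print(suggest(table, "you"))    # so
```

Accepting the top suggestion is always one tap; typing anything rarer costs many. That asymmetry, scaled across billions of messages, is the constraint described above.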

And while this is only a minor instance, it points the way to how scaled up interactions with AI systems could lead to much more significant constraints. Once we begin thinking down this path, we quickly find ourselves confronted with questions about the value of freedom versus efficiency, about what counts as help versus what counts as control, about which uses of AI are truly enhancements of human beings and which will ultimately end up harming us.

As soon as we start thinking in any serious way about AI in even its most trivial forms, we immediately become embroiled in some very deep questions – philosophical questions – about human beings, about values, and about why we make the choices we do. We will argue that these philosophical questions are both foundational and deeply practical. They are foundational because our philosophical views regarding the nature and value of humanity are the basis from which we will think about how to respond to AI. And they are practical simply because it is necessary to think about them when deciding how to respond well to the emergence of AI.

We will not argue that there is something essential to humanity or something unique about it that needs to be protected or enhanced – that may be the case, but it is not a line we think useful to pursue. Instead, we will make a much more basic claim. We will claim that the core value of humanity is our ability to make choices about what matters to us. When AI enhances this ability, we should actively pursue it; but when AI detracts from this ability, we should flee as fast and as far as we can.

Most of the capabilities AI is likely to develop will fall into a more neutral middle ground. Sometimes, these capacities won’t matter at all to the question of what it means to be human in the age of AI. And sometimes humans will make them matter even though there is nothing intrinsic to them which demands that they must. Ultimately, it will turn out that understanding the importance of AI – and understanding its potential for good and ill – will depend first and foremost on improving our understanding of ourselves.

From my book TRANSCEND.

Adapted/published with permission from ‘TRANSCEND’ by Faisal Hoque (Post Hill Press, March 25, 2025). Copyright 2024, Faisal Hoque. All rights reserved.

Look for my latest book TRANSCEND available for pre-order now.


March 24, 2024

Paradox of Devotion and Detachment

"In the balance of devotion and detachment, let me be open to infinite possibilities."

March 20, 2024

3 Questions To Guide Cost-Effective Technology Modernization

Leaders need to differentiate between modernization that keeps systems up to date and modernization that impacts the entire business model.

March 19, 2024

Practicing Detachment: A Pathway to Fostering Sustainable Growth

Detachment is not about disconnecting from the world; it's about finding inner peace amidst chaos and embracing change with grace.

January 27, 2024

Future Depends on Cross-boundary Collaboration with AI

Working with enabling technologies, such as AI-generated personas, can power meaningful, sustained innovation.

November 17, 2023

Journey of Gratitude: A Personal Tale

I remain a firm believer that it is through knowledge sharing that we may provide the greatest clarity on how to improve our collective future.

November 9, 2023

AI for Cancer Treatment: Better Outcomes and Lower Costs

Ground-breaking developments in immunotherapy benefit greatly from the introduction of AI, while diagnosis becomes more precise and cost-effective.