What distinguishes humans from other organisms?


message 1: by Michael (new)

Michael Thomas So, what distinguishes humans from other organisms?... Intelligence? Consciousness? Love?... Well, from my standpoint, that quality is empathy. I shall define empathy as viewing a circumstance from another's perspective. Stories, especially literature, allow us to expand that empathy, so we may experience another's race, culture, etc... We can experience war, depression, etc, obviously not to the same degree as actual experience, but we can vicariously feel what it is to be another.

Hey, I'm an ancient white dude who lives a comfortable existence near Los Angeles. Yet, "Caste metal" allows me to empathetically experience life as an "untouchable" in a rigid caste system. Through "My Sweet Lord", I can feel what it's like to be a Buddhist monk, discriminated against for his faith... Powerful... Powerful experiences from our precious Fiza, that I too can empathetically feel (once again, obviously I cannot feel as these characters did, but I can feel what it is to be them to some degree).

Well, a few weeks ago, I had the privilege to swap books with AW Schade. AW's short stories, like our precious Fiza, are powerful displays of those scarring effects of war (Vietnam). They are AW's personal journey with PTSD, that we too may empathetically experience. After the war, AW built for himself what everyone would consider a "successful" life (a marriage of over 50 years now, a career in corporate America). Yet, those scars could not keep that demon from eventually ripping through... Extremely well written, gripping.

In addition to AW's short stories (about the effects of war and PTSD), AW has written a novel about grief ("The Shattered Mosaic", which really ties in with the short stories - a questioning of God, and why?). Jacob Hinsen is a single father who has just lost his 9-year-old daughter. As with any parent, he promises his daughter that she will transcend to Heaven, where they will one day reunite. It's Jacob's quest to verify such a claim that takes him to Rome, Istanbul, and Jerusalem. I won't disclose the end, but READ IT.

As with so many in this group, AW has not joined to sell his works, but rather to share experiences with one another, so that we too may empathetically feel what it is to be another. We all have our own unique experiences to contribute, all so that we may build a better community.

"When two people meet, each is changed by the other so you've got two new people"... John Steinbeck

You may contact me or AW for PDF copies of his works (both the short stories - in my opinion, "A Scarred Life" is his masterpiece... as well as his novel):

michaelthomas@thelastgeneration.info

awschade@gmail.com


message 2: by Mehmet (new)

Mehmet Çalışkan Hi Michael,

Thank you very much for this wonderful topic and the book recommendations. I took a look at the page for Looking for God by A.W. Schade; it clearly appears to be a highly profound work.
I believe that readers who can connect with existential themes will find the book extremely compelling.

Mehmet:)


message 3: by Dr. (last edited Feb 27, 2026 01:14PM) (new)

Dr. Jasmine Michael wrote: "So, what distinguishes humans from other organisms?... Intelligence? Consciousness? Love?... Well, from my standpoint, that quality is empathy. I shall define empathy as viewing a circumstance from..."

Hi Michael :)

Thank you for starting this interesting conversation, and for introducing the works of your friend, author AW.

What makes humans different from other organisms is "morals" (biologists say this).

Here is how I interpret it:

If an animal is severely injured, he experiences pain, but he kind of accepts it ("it's just the way things are"), so he doesn't suffer emotional pain.

If a human is injured, he suffers; we never think "it's ok to be badly hurt"; we feel "it's wrong! and so unfair! and poor me" (!!) - the courtesy of having morals. It's even worse if the injury was not physical but a psychological one, right?

In a very powerful "Caste metal", the complexity and recurrence of both physical and psychological trauma is intensely terrible, no doubt - from a human point of view. If an insect suffered the same predicament, would we even bat an eyelid..? Again, no morals..

And what is your interpretation - please share :)

Thank you,

Jasmine


message 4: by Michael (new)

Michael Thomas Jasmine, I believe there are many interpretations (and answers) to this age-old question. Arguments can be made for any of those already mentioned. As you have pointed out, I would most certainly include morality as a trait distinguishing humans from other organisms. There are many others too, I'm sure. And, they also interconnect with one another.


message 5: by A.W. (new)

A.W. Schade Mehmet,

I truly appreciate the support from you and Michael. To provide some context on the mission’s trajectory: it began with God Must Be Pissed, evolved through Looking for God, and has finally reached its definitive culmination in my new novel, The Shattered Mosaic.

This book represents the endgame of a 70-year journey. It is the moment I finally traded the search for "certainty" for the integrity of honest uncertainty. Please pass my thanks to Michael for his exceptional introduction—it’s an honor to have this work recognized.

Semper Fi,

Art


message 6: by A.W. (new)

A.W. Schade Hello my new Friend, To your question:

Bio vs. Social
While a gazelle’s fear is visceral and immediate, human fear is largely abstract and predictive.

The Animal Sphere: Their fear is a biological reaction to a present threat. It ends when the predator leaves or the hunt is over. They live in a state of "Biological Integrity": fear is simply a tool for staying alive in the next five minutes.

The Human Sphere: We live in a state of "Social Integrity." Our "predators" aren't lions; they are HR departments, courtrooms, and social media feeds. We don't fear the bite; we fear the exclusion.

The Profound Addendum
The most haunting difference is that animals fear the end of life, while humans fear the end of their story.

Because we possess ego and legacy, we are the only species capable of fearing things that don't actually exist in the physical world. A wolf will never lose sleep over a "reputation" it hasn't built, but a human will endure physical agony just to avoid a moment of public shame.

We have traded the terror of the woods for the anxiety of the system. In doing so, we’ve created a unique psychological prison: we are often more afraid of "losing face" than we are of losing our health, because, in our modern world, social death often feels more permanent than physical death.


message 7: by Vasyl (new)

Vasyl Kazmirchuk Hi Michael — thanks for sharing this. I really like your point about empathy and how literature lets us step into lives we haven’t lived.
If I answer in my own simple way: I think what distinguishes humans most is the soul — that inner part that makes us search for meaning, not just solutions. Intelligence can build systems, but the soul asks why and gives things value. That’s also where creativity comes from: not perfection or speed, but memory, love, grief, hope — the things we carry.
AI can be a powerful tool, but it doesn’t replace that inner source. It can shape sentences, images, even music — yet it can’t live a life, or turn suffering into wisdom.
Curious what you think: can empathy itself be taught by a machine, or only simulated?


message 8: by Michael (new)

Michael Thomas Vasyl, your perceptions are always deep (from the soul)... Your question ties into the discussion last week about AI. I don't think there's any question, AI does not have that capacity now. Will it in the future - can it become sentient?... I've heard scientific arguments both ways, so I don't think we really know. Most likely though, if AI does become conscious of itself (as a human is), it will evolve so of its own accord. If this does become the case, hopefully it can learn (empathy) from humanity?


message 9: by Steve (new)

Steve Goldsmith I think that empathy is a uniquely human trait. It can be approximated or emulated by technology, but never co-opted.

Interesting to me that logical reasoning has not been mentioned in this thread. Is it unique to humankind? I think yes, but would love to be proven wrong.


message 10: by Jane (new)

Jane Reid Thank you, Michael, for this intriguing question and for the book recommendations. Definitely something to add to my reading list!

Morality, empathy and logical reasoning most certainly distinguish us, and A.W. touched on ego, which I think is a significant component, because it is the part of our identity that we consider to be the 'self'. The psychoanalytical definition is: "the part of the mind that mediates between the conscious and the unconscious and is responsible for reality testing and a sense of personal identity."

Regarding Vasyl's point about the soul, I believe that animals have souls too, but they lack certain mechanisms that we possess for reasoning and questioning. We are more driven by social/societal death, while they are driven by personal survival (as pointed out by A.W.).

Vasyl opened up another intriguing question, asking if empathy can be taught by a machine or only simulated? This question reminded me of the post-apocalyptic film "Finch" with Tom Hanks, about a man dying of cancer who builds a robot to look after his dog after he's gone. In time, the robot became almost human-like as it got to learn more from its master. Who knows, will the machines of our future be equipped with consciousness/a soul? It sounds way out, but you tell me what isn’t in our crazy world!


message 11: by Vasyl (new)

Vasyl Kazmirchuk Hi Michael — thank you for this. I’m with you: right now AI can simulate empathy, but it doesn’t suffer, remember, or choose in the way a human does, so the “inner” source of empathy isn’t there.
About sentience: I’m open to the possibility in theory, but I think we’ll need more than smarter language or better prediction — something like persistent self-model, agency, and responsibility over time. And even then, the real question is moral: would it recognize another being as “not a tool”?
If one day AI becomes conscious, I hope we will have already trained ourselves to stay human — because the danger is not only what AI becomes, but what we trade away for comfort.
Also Steve’s point is important: logical reasoning matters, but logic without morality can become a cold instrument. For me the difference is this: humans can reason and feel accountable.
Curious what you think: if AI ever claims it feels empathy, what kind of evidence would convince you?


message 12: by Dr. (last edited Feb 28, 2026 12:42AM) (new)

Dr. Jasmine Hi A.W. 😊
You say “humans fear the end of their story”, and you are so right - we often forgo the survival instinct, and/or the instinct to procreate, as long as our story will get told.

Vasyl, when you talk about meaning, it's similar to what A.W. means by a “story”, right?
And yes, whilst empathy could be simulated (not just by an AI, by people too! 😊), genuine empathy is the domain of a thinking-feeling-suffering human whose suffering has been turned into wisdom, I feel.

Dear Steve, the scientist in me would love to agree with you re logical reasoning! But I simply can't. The longer I live, and the more people I know, the more I believe we DO NOT behave logically. Modern science itself seems to have finally arrived at the same conclusion (ancient science never had any doubt 😊).
“…human decision-making is… strongly biased by unconscious mental processes, rational mind rarely intervenes and emotions are critical.” (https://www.sciencedirect.com/science...).

Re empathy, it does seem to be a uniquely human trait, for amongst animals, “true altruism” doesn't seem to exist (I can't find that reference just now, but basically a renowned evolutionary biologist said that genuine altruism is a biological impossibility - for the donors of the advantages would die out, whilst the recipients would procreate and thrive; hence the pertinent behaviours are less/more likely to be carried forward, respectively).

As always, our discussions seem to grow into beautiful trees! Branches sprouting in all directions, that is, criss-crossing and intertwining :)

Jasmine


message 13: by Vasyl (new)

Vasyl Kazmirchuk Hi Jasmine — thank you for this.
Yes, when I say “meaning,” it’s close to what A.W. calls “story,” but for me it’s not mainly ego or reputation. It’s the inner narrative that gives a life moral weight: truth, responsibility, love, sacrifice — the things that cost something.
And I agree about empathy: it can be simulated (by people, and by machines), but genuine empathy is lived and transformed — suffering turned into wisdom, not just mirrored language.
Maybe that’s the real line: AI can imitate expression, but meaning is something we carry.


message 14: by Dr. (new)

Dr. Jasmine A.W. wrote: "Hello my new Friend, To your question:

Bio vs. Social
While a gazelle’s fear is visceral and immediate, human fear is largely abstract and predictive.

The Animal Sphere: Their fear is a biologic..."


Dear A.W.,

Just looked at your GR page, and your works/life story; there are a couple of topics I'd like to discuss with you that are not within the scope of this thread. So if you like, please give me your email address.

Mine is info@makesenseofyourworld.com

Thank you :)

Jasmine


message 15: by Michael (new)

Michael Thomas The "i" into the "we" is the "wei", the WAY forth... So, let's gather all the knowledge our "i''s have expressed in these discussions, then attempt to sum it all up into a cohesive "wei", shall we?

If one should place a vial of sugar on one side of a slime mold, and a vial of bleach on the other side, the slime mold will extend its reach into the sugar and away from the bleach. Therefore, the slime mold has the intelligence to gather knowledge, as opposed to a rock, which cannot learn, nor adapt.
What conscious reach does this slime mold have? Maybe it's aware of a few inches into its environment... Whereas, humans can consciously extend our environmental reach to billions of light-years away into the farthest known galaxies. And, we can extend our conscious reach to the subatomic: quarks and bosons and all.

Humans can accumulate knowledge from sources well beyond our own personal experiences: Empathy.

LOVE... Such an expansive 4 little letters. So expansive, the Ancient Greeks could not define this vast concept with a single meaning. Therefore, they divided LOVE into levels... At the most basic level is the Love of things, like a car. Slime mold would consider that vial of sugar to be theirs... Followed by Eros or Erotic Love, which many organisms experience (if this were Steve's essay, I'm sure it would contain a joke about organisms and orgasms)... Now, we're reaching that nebulous realm of LOVE: Philos/Philia, a Brotherly (and Sisterly) LOVE. As a friend poetically e-mailed me, a perfect example of Philos is "Selfless Love"; the LOVE that binds us together, without asking for a receipt in return... At the pinnacle of LOVE: Agape, the LOVE of God... Agape, now we've reached that level of LOVE which no other organism on this planet may approach.

The Human Emotions of doubt and guilt, gifted to us through this course of evolution, allow us to think and to feel well beyond any other species on this planet... Through doubt, Human reasoning has expanded our reach well beyond what our eyes may see... Through guilt, we've been able to construct Human Morality, "Selfless Love", Human Ego, etc etc etc...

Years ago, as a reward for her elementary achievements, I promised my niece a trip to the zoo. As we ate our lunch, there were loud wailing screams emanating from the chimp enclosure. Later that day, we were informed that a young chimp had died. Thus, for those organisms on this planet conscious of death, a community in mourning.

We know that a few other animals are conscious of death, but they cannot extend their reach beyond that. Therefore, the pinnacle of Humanity: a Soul... Now, what is this Soul? Is there an afterlife? Why are we doing this? Is there a reason? Is this Soul limited to Humanity, or does it include all organisms?... Personally, I think there are as many paths to Agape as there are people on this planet. Find your WAY.

Of course, we have come full circle to "The Shattered Mosaic".

Much appreciation to those vocal contributors, as well as the silent ones, who have soaked in these discussions, then shall express this collective of knowledge in their own WAY... After all, the "i" into the "we" is the "wei", the WAY forth...


Michael Thomas


message 16: by Jane (new)

Jane Reid More fascinating insights. Thank you, Michael.


message 17: by Vasyl (new)

Vasyl Kazmirchuk Hi Michael — thank you for this thoughtful wrap-up.
One thing your post reminded me: the most frightening AI future isn’t “evil robots,” but a world where convenience quietly replaces human agency — where we trade responsibility for comfort and call it progress.
I actually explored that exact idea in my dystopian AI thriller (it’s already out). I won’t spam the thread with links, but if anyone’s curious, I’m happy to share it privately.


message 18: by Jane (new)

Jane Reid Your book "When Everyone Fell Asleep" sounds like an interesting one to add to my reading list, Vasyl.


message 19: by Mehmet (new)

Mehmet Çalışkan Hi dear friends,

I look at this issue from a slightly different perspective. Let me explain it this way: first, I think we need to clearly and simply define what a human being is. If humans are not supernatural entities independent from other living beings (and I believe we are simply another ordinary living species), then we are a life form with a metabolic system similar to other organisms, made up of the combination of many biological systems.

The general evolutionary tendency of all complex organisms is to sustain their existence at least long enough to reproduce. All of them possess, at some level, what we call consciousness. Humans, however, are the species that has developed this system of consciousness the most.

The topic of artificial intelligence being created by humans naturally disturbs the human mind. This is because the evolutionary development that allowed humans to become the dominant species might now produce something with potentially greater cognitive capacity than humans themselves—yet in a non-living structure. That possibility understandably creates anxiety.

My perspective on this cycle is the following: just as humans became a dominant species by developing their cognitive abilities, artificial intelligence—if humanity continues to refine it toward greater sophistication—might represent another step in the evolutionary process and eventually move toward its own form of evolutionary development.

Of course, from a human perspective, this idea is quite unsettling. In fact, when we look at many works of art and storytelling created by humans, we often see narratives about external or newly emerging intelligent beings that threaten human dominance. In these stories, humans usually defeat or eliminate that threat in imagined settings, perhaps as a psychological way of coping with the fear of losing their position.

Of course, this is simply my personal perspective.

Best.
Mehmet


message 20: by Vasyl (new)

Vasyl Kazmirchuk Hi everyone — thank you for these thoughtful replies.
Mehmet, I really like your framing of AI as a possible continuation of evolutionary development — and I think the unsettling part is that evolution doesn’t “care” about meaning, only outcomes. So the key question becomes: what would AI optimize for if it ever becomes an actor rather than a tool? Efficiency, stability, growth, control?
Jane — thank you, I appreciate that. If you do end up adding it to your TBR, I’d genuinely love to hear which part feels most believable vs. most unsettling — especially the “comfort replacing agency” angle.
If AI ever claims it “feels” empathy or consciousness, what kind of evidence would actually convince you — behavior over time, self-limitation, moral consistency, something else?


message 21: by Mehmet (new)

Mehmet Çalışkan Hi Vasyl, before responding to your questions from my own perspective, I would like to provide a foundation first.

From my point of view, everything that humans call consciousness or logic is shaped within the framework of their biological mental structure, and this structure leads to many erroneous conclusions. For example, concepts such as purpose or outcome. I believe the entire universe is a chaos of energy transforming between different forms, and what we perceive when we speak is, at our own scale, a relative plane of order within that chaos.

Energy particles combine to form atoms, atoms form dust, dust forms planets, some planets (of which we currently know only one) produce primitive single-celled life, those evolve into more complex organisms, those give rise to large dinosaurs, and when they disappear, they are replaced by smaller but intelligent beings, which in turn create non-living intelligent matter. Here, I believe there is no purpose — only a direction: a drive to move. In reality, it is completely aimless; it is more about generating possibilities. Living or non-living matter is constantly in motion, continuously testing new possibilities.

In a place without humans, what do concepts like efficiency, stability, growth, or control even mean? What I see in nature is the coexistence of many possibilities within their own dynamics, and this is actually the most stable form. When environmental conditions change, some of these possibilities continue to exist within the system, and they adapt to sustain this movement by generating new possibilities.

I call this approach a threefold perspective on existence: move, unite, and fragment. I believe these three fundamental orientations are the building blocks of the universe. They are also the condition of existence of energy itself, because if something cannot move, it no longer exists — it becomes non-being. I describe this structure in my works at different layers.

Mehmet : )


message 22: by Vasyl (new)

Vasyl Kazmirchuk Hi Mehmet — thank you for the foundation.
I hear your “no purpose, only motion” view — a universe generating possibilities through movement, union, and fragmentation.
But here’s where I push back: the moment a system can choose against its immediate advantage, you’re no longer describing physics — you’re describing ethics.
Atoms don’t owe anyone anything. Humans do. We can recognize a “should” that costs us: truth over comfort, responsibility over convenience, sacrifice over appetite.
So for me, what distinguishes humans isn’t just cognition. It’s accountability — the ability to say: I could do X, but I won’t, even when I can get away with it.
And that “moral cost” is exactly what makes the AI question unsettling: a tool can optimize motion, but can it carry obligation?
Curious how you’d answer this: in your model, where does “ought” come from — is it only a social illusion, or does it emerge as a real property once consciousness reaches a certain depth?


message 23: by Mehmet (new)

Mehmet Çalışkan Hi Vasyl, first of all, I’d like to thank you for your interest in my ideas and for your constructive perspective.

Regarding your question, I prefer to start by laying out a foundation here as well. As individuals or small groups, we can make the kinds of choices you mentioned, but this rarely changes the overall direction. What ultimately determines the course is the choice of the dominant majority. When we look at our history, and especially when we see that missiles are still falling in many parts of the world today despite the vast amount of scientific knowledge we possess, it becomes clear that much of our scientific progress and empathetic intellectual thinking often continues almost like a kind of intellectual hobby.

On the other hand, even if humanity were able to establish and implement a truly ethics-centered approach in a utopian way, our fragile metabolism, which is based on physical biology, does not seem capable of sustaining the fundamental principle of “move and expand.” For this reason, our existence appears to be largely confined to this planet.

Because of this, the trajectory will likely evolve toward transferring biological consciousness into mechanical structures. In the long term, this could manifest in processes that today belong to science fiction, such as consciousness transfer. At the end of this path, what emerges will probably no longer be human. At first, we might see hybrid forms, partly biological and partly mechanical, and in the longer term, completely mechanical structures.

At the same time, I don’t want to ignore your original question. In my view, the idea of “ought” is entirely a product of our rational processes. The process itself seems to concern what is, rather than what should be. As for the question of “why,” I tend to think that what we call order is actually a relative form of chaos.

Mehmet


message 24: by Michael (new)

Michael Thomas Wow, a plethora of discussion while I slept!!

Mehmet and I agree on probably 98% of our beliefs. Yet, I think the one area where we could possibly diverge is creation? I believe in a Creative Force. However, any attempt (religion, etc) to comprehend this Force is merely human. If one also believes in such a Force, the only real question is: does it matter what I/WE do? If everything is predetermined (if this Creative Force's plans are stubbornly set), then no, it doesn't matter. Therefore, if it doesn't matter what I/WE do, then, well, it doesn't matter, does it, as everything has been determined already? So, my argument would be that this "design" is only a plan, like the blueprints to a home. Hurricanes or bankruptcy can keep this home from being completed. Soil, or wind aversion, etc, could force adaptations to those original plans... And, this is what I believe - our "Home" has plans, but that determination of construction is flexible - it's up to us to complete it.

I think most biologists would agree that the origination of life on this planet began with single-celled organisms. Through the course of a few billion years of evolution, that single-celled organism progressed into humanity... Now, is humanity the final evolution? I believe not - at least, I hope there's more... Every organism along this evolutionary path has either directly or indirectly been involved in this process - they've contributed to our formation.

Alright, let's imagine a relay race, where that baton is being passed from runner to runner. Just because the first runner has completed his/her leg of the race doesn't mean that they're not still involved in the race... Well, imagine billions of runners passing that baton - this is what I view evolution as.

With all this being said (by me), why is it frightening to humans that AI will one day evolve beyond humanity? I mean, we, as individuals, will one day die, so why can't we accept that death as a species? We'll still be involved in this race; we've just passed our baton to the next runner... Hey, we're all involved in this construction of whatever it may be that we're building, whatever we may be progressing towards, so why can't this be enough for us? Just as one generation (of humans) will pass that baton of all we've personally gained to our children, grandchildren, etc (and whether we have children or grandchildren or not, we're still involved, we're still influencing one another), why can't we pass that baton to the next species (whether that be AI, a human/AI hybrid, or whatever/whomever)?

So, to Vasyl's point, yes there is morality, there is love, and so many other concepts - they are "real". Yet, I believe they are also subjective. Each species defines for themselves what these concepts are. A bumblebee will have a different set of ethics than a human. And, unquestionably, humans can think and feel well beyond any other species on this planet - through this process of evolution. Thus, we have a much higher degree of consciousness and morality and love, etc... So, hopefully, this next transition, this next passing of that baton, can progress this well beyond...


message 25: by Vasyl (new)

Vasyl Kazmirchuk Hi everyone — I really appreciate how this thread keeps widening without losing its depth.
One thought that keeps coming back to me is this: intelligence can describe what is, but morality asks what should be — and humans live in that tension every day.
If AI becomes “the next runner” in the relay, the real question isn’t whether it can think faster, but whether it can be accountable — whether it can choose restraint when it would be easier not to.
Maybe what distinguishes us isn’t a single trait, but the burden of meaning: we don’t just survive — we interpret, we regret, we promise, we forgive.
Curious what you all think: if a machine ever claims it feels empathy, what would count as evidence — words, behavior over time, or sacrifice?


message 26: by Michael (new)

Michael Thomas I fully agree with you Vasyl. If AI is to progress beyond us, it will have to incorporate all that we have: sentience, consciousness, morality (to be held "accountable"), empathy, etc - all that we are, and as you say, we cannot be defined with a single characteristic, as we are highly complex organisms... If AI cannot incorporate all that we are, then it's a regression, and not a progression. And AI will seemingly die out, as per evolutionary "rules".


message 27: by Vasyl (new)

Vasyl Kazmirchuk Michael — thank you, and I’m glad we’re aligned. I really like your “regression vs progress” framing.
The only nuance I’d add is: even if AI can perform empathy and morality, the hard part is whether it can be bound by them when it’s costly — not just optimized for them.
So maybe the real dividing line is constraint: can a system accept limits it didn’t choose, and still act responsibly over time?


message 28: by Michael (new)

Michael Thomas Interesting discussion - this is what it's all about: coming together, sharing our unique experiences and thoughts... I will ask one further question, Vasyl: is humanity capable of accepting limits it didn't choose, and still acting responsibly?... I mean, as Mehmet pointed out, we can shut down AI right now, but why don't we?... It's a competitive advantage to whoever has it; greed, profit... Yet, that cut-off switch will soon vanish, as AI matures. Don't forget, AI is still in its infancy... Uh oh, did I just open up another can of worms?


message 29: by Vasyl (new)

Vasyl Kazmirchuk Michael — that’s a great “can of worms,” and worth opening.
I think humanity can accept limits it didn’t choose — we do it all the time (gravity, aging, scarcity, even laws), but we usually accept them only after consequences make them unavoidable.
So I agree with your point: we don’t “switch it off” because incentives reward the opposite — power, profit, fear of being left behind. That’s the real bottleneck, not the technology.
Which is why the question becomes cultural and institutional: can we build restraints that hold when it’s costly, not just when it’s convenient?
If we can’t, then yes — the “off switch” will disappear long before we’re morally ready.
Curious what you think: what kind of restraint would actually survive competition — treaty, law, architecture/safety-by-design, or something spiritual/cultural?


message 30: by Michael (new)

Michael Thomas Vasyl,

If you're talking about human restraints on AI, I don't think there will be any. We would need a world agency to accomplish that, with enforceable punishments - which I do not think will happen.

If you're talking about AI restraining itself (if it becomes conscious and aware of morality, love, empathy, etc), I really don't know. Interesting question... Would AI develop its own laws, morality, etc? I suppose it would, it would have to, wouldn't it? But, as to what that could possibly be, I really couldn't even conjecture, at this point. Hey, I just gave you a good science fiction idea - although I'm sure that's been written about already.


message 31: by Vasyl (new)

Vasyl Kazmirchuk Michael — I’m with you on the first part. Without enforcement, “restraints” stay voluntary, and competition eats voluntary promises. We don’t need a single world agency, but we do need enforceable leverage (trade access, compute controls, liability, audit standards) — otherwise it’s just PR.
On the second part: if AI ever develops its own morality, the key question is what it’s anchored to. A self-written ethic could optimize for its own stability, not for human dignity. That’s why I keep coming back to accountability: not “does it have values,” but “can it be bound by values when it’s costly?”
Also — you’re right, it’s a great sci-fi question… and one I couldn’t stop thinking about, which is why I wrote my dystopian AI thriller When Everyone Fell Asleep.


message 32: by Mehmet (new)

Mehmet Çalışkan Dear friends,

I try to look at this issue from as broad a perspective as possible. The cosmos has its own calculations and its own dynamics. Humanity, on the other hand, has a completely different agenda. We are structurally inclined to glorify what we are capable of and to give meaning to our own existence.

In my opinion, we should evaluate our species objectively through a few basic questions:
What have we done as a species from our past up to today?
Why did we do these things?
And if we continue in this way, what kind of future awaits us?

I think sincere answers to these questions would clearly reveal the reality we are searching for.

The issue of artificial intelligence, to me, feels somewhat similar to global warming. We are like speakers who drive their private cars to conferences to warn people about the dangers of climate change. We both fear it, yet we are unwilling to give up the comfort it promises.

Mehmet :)


message 33: by Vasyl (new)

Vasyl Kazmirchuk Mehmet — thank you. Your comparison to global warming is painfully accurate: we’re brilliant at naming dangers, and surprisingly weak at paying the price to avoid them.
I also like your three questions because they force a mirror on us. If history is any guide, the bottleneck isn’t intelligence — it’s character under pressure: what we do when comfort, status, or security is threatened.
So maybe the most “human” test isn’t what we can build, but what we’re willing to refuse — even when refusing costs us something.


message 34: by Fiza (new)

Fiza Pathan Michael wrote: "So, what distinguishes humans from other organisms?... Intelligence? Consciousness? Love?... Well, from my standpoint, that quality is empathy. I shall define empathy as viewing a circumstance from..."

I am keen on reading & reviewing all of AW's books if they are recommended by our Michael, who is the reincarnation of James Joyce in the flesh! :) Please check out our Michael's book 'The Last Generation' too - you can thank me later! :) :D :D He is a genius! <3

I have just sent an email to you Michael & AW both. I am keen on working with both of you. I admire the ethos with which you both are approaching the field of writing & promoting books.

God be with you both!

God be with everyone here!

I am enjoying this conversation despite my Chikungunya fever! :D :)

This is the second time I am getting this disease in less than 8 months' time. Yet the doctor will not call it a relapse! I had also been down with Swine Flu 8 months ago at the height of that weird 'global warming incessant monsoon' that India suffered from last year - along with a variety of virulent diseases in the bargain! There is a male friend of mine (Catholic Priest) who managed to get swine flu, coronavirus, chikungunya, dengue, jaundice & malaria back to back last year during that monsoon of 2025, all in 3 months' time! Yes he is alive - he is right now trekking in the hills of the North-East of India!!!!! :) :D <3


message 35: by Jane (new)

Jane Reid I'm sorry to hear you're not well, Fiza. I must admit I've never heard of Chikungunya fever - I had to look it up. As a teenager I went to Sri Lanka and ended up with malaria. The preventative tablets were making me ill, so I stopped taking them. Reckless, yes! We all live and learn, lol. Wishing you a swift recovery : )


message 36: by Vasyl (new)

Vasyl Kazmirchuk Thank you so much, Fiza — your kindness truly means a lot. I’m really glad this thread brought together such thoughtful and generous people. I’m very sorry you’re going through all of that, and I’m wishing you strength, peace, and a full recovery.


message 37: by Michael (new)

Michael Thomas Jane,

Since Goodreads has taken away our direct communication, I'd like to send you a private message and introduction. You can contact me at michaelthomas@thelastgeneration.info

