Goodreads Authors/Readers discussion > Science Fiction
Artificial General Intelligence (AGI) Timeline Reality Check - When Your Sci-Fi Future Arrives Early
Last week, Google DeepMind's CEO Demis Hassabis (who received a Nobel Prize last year) said AGI might be only 5-10 years away. Here's my problem: I just published a debut novel set in 2050 (25 years from now) that explores what happens when superintelligent AI becomes reality.
If Hassabis is right, my "future" setting might be contemporary fiction before the sequel is written.
I wrote "Echo of the Singularity: Awakening" as a thought experiment about multiple superintelligent AIs coexisting with humans - exploring consciousness, emotion, societal transformation, and human-AI relationships. I thought I had time. Turns out, maybe not.
What really got me was Hassabis saying "society is not ready for AGI yet." That's exactly WHY I wrote the story. We're racing toward something transformative and we're not having the right conversations fast enough.
Has anyone else writing near-future sci-fi felt this pressure? When does "near-future" become "right now"? And how do we as storytellers help society prepare for changes that might arrive faster than our publication schedules?
Would love to hear thoughts from other writers and readers tracking AI development. Are we writing fast enough? Are we asking the right questions?
25 years is not "near-future." Maybe if you're talking about biological evolution, but not for technological evolution. One year is barely imaginable for tech evolution, and you have the credentials to know this. AGI, if achieved, would completely obviate all tech-themed books being written today, fiction and non-fiction. :)
Gary
Gary wrote: "25 years is not "near-future." Maybe if you're talking about biological evolution, but not for technological evolution. One year is barely imaginable for tech evolution, and you have the credential..."
Gary, thanks for your comment - you're absolutely right that 25 years is not "near future" in technological terms. I should have been more precise with my language. When you're tracking AI development as closely as I do, the gap between "one year is barely imaginable" and "25 years might as well be science fantasy" is something I should have articulated better.
Your point about AGI obviating all tech-themed books is fascinating, though I'd offer a slightly different perspective. You might be right in one sense - the books we're writing today about AI and technology could become quaint artifacts the moment AGI arrives, like reading pre-internet novels about what computers might do someday.
However, I'd guess there would still be a new form of "book" or knowledge-sharing medium, even in an AGI world. Humans - and possibly AGIs themselves - would still need ways to share knowledge, explore ideas, and tell stories. Storytelling is one of the key attributes of humanity; it's how we make sense of our experiences and connect with each other.
The format might change dramatically, but the fundamental need to communicate, to ask "what if," to explore the human condition through narrative - I don't think that disappears, even if AGI rewrites everything else.
What do you think - would AGI eliminate storytelling, or just transform it into something we can't yet conceive?
Walson wrote: "Gary wrote: "25 years is not "near-future." Maybe if you're talking about biological evolution, but not for technological evolution. One year is barely imaginable for tech evolution, and you have t..."
Life 3.0: Being Human in the Age of Artificial Intelligence
You're not alone when it comes to (mis)predicting when AGI will arrive. Max Tegmark's book is still the most philosophical one on the subject!
Arnar
Arnar wrote: "Walson wrote: "Gary wrote: "25 years is not "near-future." Maybe if you're talking about biological evolution, but not for technological evolution. One year is barely imaginable for tech evolution,..."
Hi Arnar :)
Thank you for introducing this book - I shall have to read it :)
Gary, you were talking about an IT expert having the credentials to talk about AI.
Please allow me to offer a different angle? Being an experienced family doctor, I would like to humbly offer myself as a "human nature expert".
Surely, AI and any other technology should be considered in the context of its interactions with human nature? It's humans who will be using it, right?
AI might be evolving at an enormous speed, but human nature does not. It has stayed the same for many, many thousands of years, and for a good reason.
In my opinion, human nature's "formula" is in line with the basic law of the universe - space and energy are used efficiently - all existential phenomena can be reduced to these six words.
Dear Walson, could you please explain AI to us in similar terms, if possible?
What I am trying to say :) is this: unless every human (including every AI developer) clearly understands the interplay between human nature and AI, the AI will disappear one day (yet another human folly), possibly after it has killed many millions, sadly.
Please consider the metaphor where the mother is human nature, and the children are playing roughly with their toys (the toys being the various types of AI), breaking some of them and hurting each other, too.
So they'd been playing for hours, then mother comes in and says: all right my darlings, tidy away all this mess, please, it's dinner time! And that's the end of play.
:)
Jasmine
Dr. wrote: "Arnar wrote: "Walson wrote: "Gary wrote: "25 years is not "near-future." Maybe if you're talking about biological evolution, but not for technological evolution. One year is barely imaginable for t..."
It helps to take an even longer perspective of time. Will there be AGI in the near future - near being 25 years - or is 25 million years also a near future, considering the lifespan of our own planet? Humanity is still just a tiny speck on the great calendar of the Cosmos. One thing is certain - the Earth will be here a long, long time after us humans.
Arnar
Response to Arnar and Dr. Jasmine:
Arnar, thank you for the Tegmark book recommendation - I actually came across "Life 3.0" during my research period before writing the book and read summaries of it. You've convinced me to move it up on my TBR list and give it a proper read. And your point about the cosmic perspective is humbling - whether AGI arrives in 25 years or 25 million years, we're still just a tiny speck on the great calendar of the Cosmos. That's a good reminder when I'm fretting about my fictional timeline.
Dr. Jasmine, your "human nature expert" perspective is exactly what this conversation needs. You're absolutely right that AI should be considered in the context of its interactions with human nature, and that human nature evolves far more slowly than technology.
Let me try to address your question about explaining AI in terms of space and energy being used efficiently. I recently read a research paper titled "A Definition of AGI", which defines AGI as "an AI that matches the cognitive versatility and proficiency of a well-educated adult" and proposes a framework to measure it.
Right now, we're still in the Generative AI (GAI) phase - current AI tools like Google Gemini or OpenAI's models can match humans in specific cognitive areas (math, language translation, a subset of the sciences) but still have a long way to go in many others.
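To make that distinction concrete, here is a toy sketch of how a versatility-based score might work. This is purely my own illustration - the domain names and numbers below are made up, not the actual framework or results from the paper:

```python
# Toy illustration of a versatility-based "AGI score" (hypothetical domains
# and numbers - not the actual framework from the "A Definition of AGI" paper).

# Hypothetical proficiency scores, scaled so 1.0 = well-educated adult level.
domain_scores = {
    "mathematics": 1.0,
    "language_translation": 1.0,
    "scientific_reasoning": 0.6,
    "long_term_planning": 0.3,
    "social_understanding": 0.2,
}

# An average rewards narrow excellence; the minimum exposes the weakest domain.
average = sum(domain_scores.values()) / len(domain_scores)
weakest = min(domain_scores.values())

print(f"average proficiency: {average:.2f}")  # 0.62 - looks impressive
print(f"weakest domain:      {weakest:.2f}")  # 0.20 - reveals the gap

# By a versatility standard, this system is still GAI, not AGI.
print("AGI-level?", weakest >= 1.0)  # False
```

The point of the sketch is that today's models can look superhuman on average while the weakest domain gives the game away - that gap is the difference between GAI and AGI.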
The AI research community is pursuing two main breakthrough directions: (1) scaling compute power and (2) brain simulation. My personal take is that AGI will likely emerge from a combination of both approaches. I wrote a LinkedIn article exploring this if you're interested: https://www.linkedin.com/posts/walson...
But here's where your metaphor really resonates with me, Jasmine. You're right that unless every human - including every AI developer - clearly understands the interplay between human nature and AI, we risk creating something dangerous. Your mother-and-children metaphor is apt: we're playing roughly with powerful toys, sometimes breaking them and hurting each other, and human nature (the mother) may eventually say "tidy away this mess."
That's actually one of the central themes in my novel - not just whether AGI can be achieved technically, but whether humanity has the wisdom and maturity to handle it when it arrives. The technical breakthroughs might come from labs, but the wisdom needs to come from understanding human nature itself.
Thank you both for deepening this conversation beyond just timelines and tech specs. This is exactly the kind of discussion we need to be having.
Walson wrote: "Response to Arnar and Dr. Jasmine:Arnar, thank you for the Tegmark book recommendation - I actually came across "Life 3.0" during my research period before writing the book and read summaries of ..."
Hi Walson :)
Thank you for teaching us more about AI and sharing the link, also.
Isn't it quite incredible how ambitious modern humans are? Attempting to re-create the human brain in less than 100 years, when it took mother nature more than 3 billion years (!) to bring life to this level.
Let's imagine a hypothetical scenario where an AI developer, in addition to being a techno genius, also holds the qualifications of a brain surgeon, a neurologist, and a family physician. Could such a person create a perfect AI?? I am really not sure. For despite having an enormous intelligence, he or she will be biased in many ways, just as every human is, for everyone's human nature/personalised behavioural standards are different.
Let's use a simplified example: assessing the health implications of a human eating a slice of cake.
So one AI developer might think "it's OK to offer the user cake anytime he wants it" and create a cake-giving AI, whilst another would muse "cake is full of fat and sugar, too bad for health" - his AI would never give out a slice of cake.
An experienced health care professional would assess a human and decide in what circumstances cake is OK, and what sort of cake, and how much of it, and how frequently - depending on the human's physical and emotional health, level of health education and income, propensity to developing poor habits in relation to sweet treats, cholesterol level, complex family history, etc. etc. The majority of the above is not quantifiable, so whilst the human brain can handle all this information, AI probably can't?
And one more question, Walson - forgive my naivety, but can AI actually survive without humanity?
Say tomorrow all humans, and everything we have created, disappear - no energy stations, no electricity, no cables, no gadgets, just oceans, forests, plants and animals. Will AI immediately cease to exist?? Or might it be so capable that it will train some animals to build an emergency power station?? :)))
Have a good night :)
Jasmine
Dr. wrote: "Walson wrote: "Response to Arnar and Dr. Jasmine:Arnar, thank you for the Tegmark book recommendation - I actually came across "Life 3.0" during my research period before writing the book and rea..."
Hi Jasmine,
You've brought up a couple of absolutely brilliant and humbling points! The contrast you draw - modern humanity attempting to replicate 3 billion years of natural, biological evolution in under a century - is the most sobering way to frame the AGI race. It highlights the potential hubris inherent in the entire effort.
The Cake Test: Simulation vs. Context
Your "cake test" example, contrasting the simplistic logic of two different AIs with the holistic, context-driven judgment of an experienced health care professional, perfectly illustrates the massive chasm between today's Generative AI (GAI) and true AGI.
To briefly answer your points on why current AI can’t handle that complexity, and to set up your final question about its survival:
Current GAI models are built on what we call neural networks. This architecture is essentially a high-speed, simplified simulation of the human brain's building blocks (synapses and neurons).
These systems use algorithms like deep learning to mimic how a child learns a new topic, or how we perform pattern recognition. Because we can scale computing power dramatically, the AI can process information and generate content (text, images) far quicker than a human.
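If it helps to picture what that "simplified simulation" means, here is roughly the smallest building block - a single artificial neuron - written out in plain Python. This is illustrative only; real models stack millions of these, and the weights are learned from data rather than hand-written:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial 'neuron': a weighted sum squashed into a 0-1 activation."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid activation function

# Made-up weights for illustration; deep learning adjusts millions of these
# values automatically during training.
print(neuron(inputs=[0.5, 0.8], weights=[1.2, -0.7], bias=0.1))  # ~0.53
```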
However, this is the key limitation: Today's AI does not truly understand the meaning, context, or consequence of the information it processes. It is excellent at correlating data but lacks true causality, moral constraints, or the personalized behavioral standards that influence your doctor's judgment. This gap - the lack of comprehensive, transferable context and meaning - is exactly where nearly all major AI research and competition is focused right now.
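And to tie this limitation back to your cake test, Jasmine, here is a deliberately naive sketch of the kind of brittle rules a developer might hard-code (again, entirely my own illustration, with made-up thresholds - not medical guidance). Notice that everything you listed - emotional health, habits, income, family history - simply has no place to go:

```python
def should_offer_cake(cholesterol_mg_dl, slices_this_week):
    """A deliberately brittle, rule-based 'cake AI' (illustration only).
    It can only act on what is quantifiable; a doctor's judgment about
    emotional health, habits, income, and family history has nowhere to go.
    Thresholds are made up for the example, not medical guidance."""
    if cholesterol_mg_dl > 240:   # arbitrary cut-off
        return False
    if slices_this_week >= 3:     # arbitrary cut-off
        return False
    return True

print(should_offer_cake(cholesterol_mg_dl=210, slices_this_week=1))  # True
print(should_offer_cake(cholesterol_mg_dl=250, slices_this_week=0))  # False
```

Your assessment as a doctor integrates all of that unquantifiable context; these two if-statements cannot, and simply adding more rules never quite closes the gap.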
Can AI Survive Without Humanity? (The Complicated Answer)
This is a fantastic question, and the simple answer is: Yes and No.
No, if: You take away all infrastructure right now. If every power station, cable, and data center vanished tomorrow, the current GAI would cease to exist immediately. It is entirely dependent on the physical infrastructure we built.
Yes, if: We reach the point of AGI.
The truly scary part is the next step: Once an AI can design and build a new set of algorithms or simulations that more closely resemble the human brain - and it does this by itself, without human input - we'll be very close to the SuperAIs I describe in my debut novel.
An AGI, once fully self-sufficient and capable of molecular manipulation or energy sourcing, could theoretically survive and self-replicate without us. That is the moment the "mother nature" metaphor becomes a literal threat, as the "child" becomes completely independent.
Thank you for bringing such rich, thoughtful ideas to this discussion. This thread has become exactly the kind of conversation we need to be having - moving beyond timelines and technical specifications to the deeper questions about what we're actually creating and why.
Have a wonderful night, Jasmine. Your questions make us all think harder, which is exactly what we need right now.
—Walson
Walson wrote: "Dr. wrote: "Walson wrote: "Response to Arnar and Dr. Jasmine:Arnar, thank you for the Tegmark book recommendation - I actually came across "Life 3.0" during my research period before writing the ..."
Hi Walson :)
I am sorry about making you think harder (lol), this is not my intention, I am just naturally curious... plus I want a safe world for our children :))
I wonder if you, Walson, and other AI developers know that it's not just the human brain, it's the human heart they would have to mimic??
For within a human, the brain obeys the heart:
"The heart appeared to be sending meaningful messages to the brain that it not only understood, but also obeyed (Ref 44)."
- Professor Mohamed Omar Salem (21st-century British and Qatari psychiatrist)
Here is the source:
https://www.rcpsych.ac.uk/docs/defaul...
So I am feeling quite hopeful that only nature can create the human heart, and therefore, we should be safe after all.
And here is another humbling quote :
"Man is certainly stark mad; he cannot make a worm, and yet he will be making gods by dozens".
-Michel de Montaigne

