Goodreads Authors/Readers discussion

Science Fiction > Artificial General Intelligence (AGI) Timeline Reality Check - When Your Sci-Fi Future Arrives Early

Comments Showing 1-17 of 17 (17 new)

message 1: by Walson (last edited Dec 08, 2025 12:27PM) (new)

Walson Lee | 27 comments Fellow sci-fi readers and writers - I need to share something that's been keeping me up at night.

Last week, Google DeepMind's CEO Demis Hassabis (who received a Nobel prize last year) said AGI might be only 5-10 years away. Here's my problem: I just published a debut novel set in 2050 (25 years from now) that explores what happens when superintelligent AI becomes reality.

If Hassabis is right, my "future" setting might be contemporary fiction before the sequel is written.

I wrote "Echo of the Singularity: Awakening" as a thought experiment about multiple superintelligent AIs coexisting with humans - exploring consciousness, emotion, societal transformation, and human-AI relationships. I thought I had time. Turns out, maybe not.

What really got me was Hassabis saying "society is not ready for AGI yet." That's exactly WHY I wrote the story. We're racing toward something transformative and we're not having the right conversations fast enough.

Has anyone else writing near-future sci-fi felt this pressure? When does "near-future" become "right now"? And how do we as storytellers help society prepare for changes that might arrive faster than our publication schedules?

Would love to hear thoughts from other writers and readers tracking AI development. Are we writing fast enough? Are we asking the right questions?


message 2: by Gary (new)

Gary Gocek | 15 comments 25 years is not "near-future." Maybe if you're talking about biological evolution, but not for technological evolution. One year is barely imaginable for tech evolution, and you have the credentials to know this.

AGI, if achieved, would completely obviate all tech-themed books being written today, fiction and non-fiction. :)


message 3: by Walson (new)

Walson Lee | 27 comments Gary wrote: "25 years is not "near-future." Maybe if you're talking about biological evolution, but not for technological evolution. One year is barely imaginable for tech evolution, and you have the credential..."

Gary, thanks for your comment - you're absolutely right that 25 years is not "near future" in technological terms. I should have been more precise with my language. When you're tracking AI development as closely as I do, the gap between "one year is barely imaginable" and "25 years might as well be science fantasy" is something I should have articulated better.

Your point about AGI obviating all tech-themed books is fascinating, though I'd offer a slightly different perspective. You might be right in one sense - the books we're writing today about AI and technology could become quaint artifacts the moment AGI arrives, like reading pre-internet novels about what computers might do someday.

However, I'd guess there would still be a new form of "book" or knowledge-sharing medium, even in an AGI world. Humans - and possibly AGI themselves - would still need ways to share knowledge, explore ideas, and tell stories. Storytelling is one of the key attributes of humanity; it's how we make sense of our experiences and connect with each other.

The format might change dramatically, but the fundamental need to communicate, to ask "what if," to explore the human condition through narrative - I don't think that disappears, even if AGI rewrites everything else.

What do you think - would AGI eliminate storytelling, or just transform it into something we can't yet conceive?


message 4: by Arnar (new)

Arnar Vik | 10 comments Walson wrote: "Gary wrote: "25 years is not "near-future." Maybe if you're talking about biological evolution, but not for technological evolution. One year is barely imaginable for tech evolution, and you have t..."

Life 3.0: Being Human in the Age of Artificial Intelligence

You're not alone when it comes to (mis)predicting when AGI will arrive. Max Tegmark's book is still the most philosophical on the subject!

Arnar


message 5: by Dr. (new)

Dr. Jasmine | 138 comments Arnar wrote: "Walson wrote: "Gary wrote: "25 years is not "near-future." Maybe if you're talking about biological evolution, but not for technological evolution. One year is barely imaginable for tech evolution,..."

Hi Arnar :)

Thank you for introducing this book; I shall have to read it :)

Gary, you are talking about IT experts having credentials to talk about AI.

Please allow me to offer a different angle? As an experienced family doctor, I would like to humbly offer myself as a "human nature expert".

Surely, AI and any other technology should be considered in the context of its interactions with human nature? It's humans who will be using it, right?

AI might be evolving at an enormous speed, but human nature does not. It has stayed the same for many, many thousands of years, and this is for a good reason.

In my opinion, human nature's "formula" is in line with the basic law of the universe - space and energy are used efficiently - all existential phenomena could be reduced to these six words.

Dear Walson, could you please explain to us AI in similar terms, if possible?

What I am trying to say :) is this: unless every human (including every AI developer) clearly understands the interplay between human nature and AI, the AI will disappear one day (yet another human folly); possibly after it has killed many millions, sadly.

Please consider the metaphor where the mother is human nature, and the children are playing roughly with their toys, breaking some of them and hurting each other too (the toys being the various types of AI).

So they'd been playing for hours, then the mother comes in and says: all right my darlings, tidy away all this mess, please, it's dinner time! And that's the end of play.
:)

Jasmine


message 6: by Arnar (new)

Arnar Vik | 10 comments Dr. wrote: "Arnar wrote: "Walson wrote: "Gary wrote: "25 years is not "near-future." Maybe if you're talking about biological evolution, but not for technological evolution. One year is barely imaginable for t..."

It helps to take an even longer perspective of time. Will there be AGI in the near future - 25 years - or is 25 million years also a near future, considering the lifespan of our own planet? Humanity is still just a tiny speck on the great calendar of the Cosmos. One thing is certain - the Earth will be here a long, long time after us humans.


message 7: by Walson (new)

Walson Lee | 27 comments Response to Arnar and Dr. Jasmine:

Arnar, thank you for the Tegmark book recommendation - I actually came across "Life 3.0" during my research period before writing the book and read summaries of it. You've convinced me to move it up on my TBR list and give it a proper read. And your point about the cosmic perspective is humbling - whether AGI arrives in 25 years or 25 million years, we're still just a tiny speck on the great calendar of the Cosmos. That's a good reminder when I'm fretting about my fictional timeline.

Dr. Jasmine, your "human nature expert" perspective is exactly what this conversation needs. You're absolutely right that AI should be considered in the context of its interactions with human nature, and that human nature evolves far more slowly than technology.

Let me try to address your question about explaining AI in terms of space and energy being used efficiently. I recently read a research paper titled "A Definition of AGI" which defines AGI as "an AI that matches the cognitive versatility and proficiency of a well-educated adult" and proposes a framework to measure it.
Right now, we're still in the Generative AI (GAI) phase - current AI tools like Google Gemini or OpenAI's models can match humans in specific cognitive areas (math, language translation, a subset of the sciences) but have a long way to go in many others.

The AI research community is pursuing two main breakthrough directions: (1) scaling compute power and (2) brain simulation. My personal take is that AGI will likely emerge from a combination of both approaches. I wrote a LinkedIn article exploring this if you're interested: https://www.linkedin.com/posts/walson...

But here's where your metaphor really resonates with me, Jasmine. You're right that unless every human - including every AI developer - clearly understands the interplay between human nature and AI, we risk creating something dangerous. Your mother-and-children metaphor is apt: we're playing roughly with powerful toys, sometimes breaking them and hurting each other, and human nature (the mother) may eventually say "tidy away this mess."

That's actually one of the central themes in my novel - not just whether AGI can be achieved technically, but whether humanity has the wisdom and maturity to handle it when it arrives. The technical breakthroughs might come from labs, but the wisdom needs to come from understanding human nature itself.

Thank you both for deepening this conversation beyond just timelines and tech specs. This is exactly the kind of discussion we need to be having.


message 8: by Dr. (new)

Dr. Jasmine | 138 comments Walson wrote: "Response to Arnar and Dr. Jasmine:

Arnar, thank you for the Tegmark book recommendation - I actually came across "Life 3.0" during my research period before writing the book and read summaries of ..."


Hi Walson :)

Thank you for teaching us more about AI and sharing the link, also.
Isn't it quite incredible how ambitious modern humans are? Attempting to re-create the human brain in less than 100 years, when it took mother nature more than 3 billion years (!) to progress life to this level.

Let's imagine a hypothetical scenario where an AI developer, in addition to being a techno genius, also has the qualifications of a brain surgeon, a neurologist, and a family physician. Could such a person create a perfect AI? I am really not sure. For despite having enormous intelligence, he or she will be biased in many ways, just as every human is, for everyone's human nature and personalised behavioural standards are different.

Let's use a simplified example of assessing the health implications of a human eating a slice of cake.

So one AI developer might think "it's OK to offer the user a cake anytime he wants it" and create a cake-giving AI, whilst another would muse "cake is full of fat and sugar, too bad for health" - his AI would never give a slice of cake.

An experienced health care professional would assess a human and decide in what circumstances cake is OK, what sort of cake, how much of it, and how frequently - depending on the human's physical and emotional health, level of health education and income, propensity to developing poor habits in relation to sweet treats, cholesterol level, complex family history, etc etc etc. The majority of the above is not quantifiable, so whilst the human brain can handle all this information, AI probably can't?

And one more question, Walson - forgive my naivety, but can AI actually survive without humanity?

Say tomorrow all humans and everything we have created disappear - no energy stations, no electricity, no cables, no gadgets, just oceans, forests, plants and animals. Will AI immediately cease to exist? Or might it be so capable that it will train some animals to build an emergency power station? :)))

Have a good night :)

Jasmine


message 9: by Walson (new)

Walson Lee | 27 comments Dr. wrote: "Walson wrote: "Response to Arnar and Dr. Jasmine:

Arnar, thank you for the Tegmark book recommendation - I actually came across "Life 3.0" during my research period before writing the book and rea..."


Hi Jasmine,

You've brought up a couple of absolutely brilliant and humbling points! The contrast you draw—modern humanity attempting to replicate 3 billion years of natural, biological evolution in under a century—is the most sobering way to frame the AGI race. It highlights the potential hubris inherent in the entire effort.

The Cake Test: Simulation vs. Context

Your "cake test" example, contrasting the simplistic logic of two different AIs with the holistic, context-driven judgment of an experienced health care professional, perfectly illustrates the massive chasm between today's Generative AI (GAI) and true AGI.
To briefly answer your points on why current AI can’t handle that complexity, and to set up your final question about its survival:

Current GAI models are built on what we call neural networks. This architecture is essentially a high-speed, simplified simulation of the human brain's building blocks (synapses and neurons).
These systems use algorithms like deep learning to mimic how a child learns a new topic, or how we perform pattern recognition. Because we can scale computing power dramatically, the AI can process information and generate content (text, images) far quicker than a human.

However, this is the key limitation: Today's AI does not truly understand the meaning, context, or consequence of the information it processes. It is excellent at correlating data but lacks true causality, moral constraints, or the personalized behavioral standards that influence your doctor's judgment. This gap—the lack of comprehensive, transferable context and meaning—is exactly where nearly all major AI research and competition is focused right now.
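To make that "correlation without understanding" point concrete, here is a toy sketch of the kind of forward pass underlying these models - every size, weight, and value below is purely illustrative, not taken from any real system:

```python
import numpy as np

# Toy two-layer network. The random weight matrices stand in for
# "synapses"; real models learn them from vast training data.
rng = np.random.default_rng(0)

def relu(x):
    # Nonlinearity: lets stacked layers model more than straight lines.
    return np.maximum(0.0, x)

W1 = rng.normal(size=(4, 8))   # input -> hidden "synapses"
W2 = rng.normal(size=(8, 2))   # hidden -> output "synapses"

def forward(x):
    # The network only transforms numbers: weighted sums and a
    # nonlinearity. Nothing in this computation represents meaning,
    # context, or consequence.
    return relu(x @ W1) @ W2

out = forward(np.ones(4))
print(out.shape)  # (2,)
```

Scaling this up by many orders of magnitude gives you pattern recognition that looks remarkably capable, but the underlying operation is still arithmetic over correlations - which is exactly the gap your cake test exposes.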

Can AI Survive Without Humanity? (The Complicated Answer)

This is a fantastic question, and the simple answer is: Yes and No.

No, if: You take away all infrastructure right now. If every power station, cable, and data center vanished tomorrow, the current GAI would cease to exist immediately. It is entirely dependent on the physical infrastructure we built.

Yes, if: We reach the point of AGI.

The truly scary part is the next step: Once an AI can design and build a new set of algorithms or simulations that more closely resemble the human brain—and it does this by itself, without human input—we’ll be very close to the SuperAIs I describe in my debut novel.

An AGI, once fully self-sufficient and capable of molecular manipulation or energy sourcing, could theoretically survive and self-replicate without us. That is the moment the "mother nature" metaphor becomes a literal threat, as the "child" becomes completely independent.

Thank you for bringing such rich, thoughtful ideas to this discussion. This thread has become exactly the kind of conversation we need to be having - moving beyond timelines and technical specifications to the deeper questions about what we're actually creating and why.

Have a wonderful night, Jasmine. Your questions make us all think harder, which is exactly what we need right now.

—Walson


message 10: by Dr. (new)

Dr. Jasmine | 138 comments Walson wrote: "Dr. wrote: "Walson wrote: "Response to Arnar and Dr. Jasmine:

Arnar, thank you for the Tegmark book recommendation - I actually came across "Life 3.0" during my research period before writing the ..."


Hi Walson :)

I am sorry about making you think harder (lol), this is not my intention, I am just naturally curious.. plus I want a safe world for our children :))

I wonder if you, Walson, and other AI developers know that its not just human brain, its human heart they would have to mimic??

For within a human, brain obeys the heart :

"The heart appeared to be sending meaningful messages to the brain that it not only understood, but also obeyed (Ref 44)."

- Professor Mohamed Omar Salem (21st-century British-Qatari psychiatrist)

Here is the source:

https://www.rcpsych.ac.uk/docs/defaul...

So I am feeling quite hopeful that only nature can create human heart, and therefore, we should be safe, after all.

And here is another humbling quote :

"Man is certainly stark mad; he cannot make a worm, and yet he will be making gods by dozens".

-Michel de Montaigne


message 11: by Walson (new)

Walson Lee | 27 comments Jasmine,

Please don't apologize for making me think harder—it's not hard at all! These are the profound, critical questions that every AI developer should be grappling with. It happens that your questions are still squarely within my professional expertise, as I’ve worked hard to keep up with these philosophical and developmental edges of the field.

And when you say you want "a safer world for our children," that is exactly the core reason I decided to write Echo of the Singularity: Awakening. The potential impact on the next generation is the ultimate motivator.

You raise a truly powerful and often overlooked point with Professor Salem's quote: "For within a human, brain obeys the heart."

I wholeheartedly agree. In fact, this very concept—the supremacy of the human "heart," which I translate to empathy, moral commitment, and compassion—is a central theme in my book. It is the one variable the SuperAI cannot predict or counter, and it is the foundation of the human counterattack.

I’ve focused heavily on 'empathy' and 'AI ethics' in both my debut novel and a major portion of my published articles because I believe that if we can't formalize and value the non-computational aspects of consciousness, we risk building systems that are tragically incomplete.

I'm really enjoying this thread, and your contributions are making it incredibly rich. If you don't mind, I would love to take this very topic—the necessity of mimicking the "heart" for safer AGI—and elaborate on it in a LinkedIn article. I might have a chance there to influence some of the AI developers and technology professionals who need to hear this perspective the most.

Thank you again for contributing such interesting and necessary ideas!

—Walson


message 12: by Dr. (new)

Dr. Jasmine | 138 comments Walson wrote: "Jasmine,

Please don't apologize for making me think harder—it's not hard at all! These are the profound, critical questions that every AI developer should be grappling with. It happens that your q..."


Dear Walson,

Thank you very much from myself (and the rest of humanity!! :)) ) for raising awareness of this very crucial issue.

Of course I don't mind- please feel free to use anything we've discussed if it helps you.

Perhaps the bottom line should be "how can AI safely serve humans whilst remaining subservient to humans". Trying to replicate humans should not ever be an aim (in my opinion); precisely because of what we discussed yesterday - aiming to achieve what took nature billions of years can never work and is simply foolhardy.

Thank you again Walson, and please keep us updated with your work.

Good luck!

:))

Jasmine


message 13: by Gary (new)

Gary Gocek | 15 comments Dr. wrote: "Trying to replicate human should not ever be an aim"
AGI is "artificial general intelligence". I would claim that this has become the preferred name for the next level of AI because it still suggests something different from humans. However, once we get AGI, human fiction authors might as well find something else to do, because AGI will flood the market for every fiction genre within a few days, and it will not be possible to tell which novels are written by AI.


message 14: by Wells (new)

Wells Carroll | 9 comments Hi Walson,

I wouldn't fret too much about predictive timelines for AGI. Your estimate of 2050 isn't too far off. Demis Hassabis's prediction of 5-10 years is likely marketing as much as anything else. Then again, it might not arrive until 1000 years from now. It all depends on how one defines AGI.

Just a minor point of irritation to me that the term "AI" has been tossed around as if it's true sentient artificial intelligence. Great for clickbait titles for news stories and online articles, but it has also caused a lot of unnecessary fear. For example, "AI to eliminate 99% of jobs by 2030, warns expert." You'll find tons of similar articles/stories online and in the news. We're nowhere close to that.

Why? Because AI (or AGI, or whatever term is used) isn't being designed correctly. LLMs are great, if one accepts the occasional hallucinations and errors. They are tools at this point, and not even great ones at that. I've used them to create art and articles, even to do research on detailed science projects. However, I have enough experience in the subjects to know when the AI gets it wrong (which is often).

There was a term -- GIGO -- when I first learned programming. Garbage in, garbage out. That still applies to AI. The AI experts seem to have forgotten it. They've released some basic tools and are marketing them as if they're refined... they aren't. They're flawed, as are the humans using them. The responses are GIGO, yet are being acted on as if they're correct. Nonsense.

For example, we're wanting to turn our novels into an online series. Lots of AI programs claim the ability to generate movies. None of them can. That's because of how they're designed. It's easy to make a short film with them, but there is never any consistency in the frame-to-frame images. Faces change. Backgrounds are wrong. Even the clothing is mis-generated. So can the AI generate videos? Sure. Can they generate consistent ones? Absolutely not. Yet once again, we get claims from the news and online media that Hollywood is obsolete.

The closest company to getting the design right is this one: https://univa.online/ Look at how they're designing their program and you'll see what makes it different. Like the human brain, they're breaking the tasks into parts and letting different programs handle each. Then they bring it all together.

That's what is missing in the race to generate AGI. Right now the designers are trying to make a single tool that does it all. Not how our minds work, and it isn't how to create a sentient artificial intelligence. Break it into pieces to handle the different tasks, have a primary program to supervise each subroutine "team", and a final program to bring it all together.
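Here's a rough sketch of the shape I mean - a supervisor routing work to specialized "subroutine" programs and then joining the results. Everything here (the agent names, the tasks) is invented purely to illustrate the layout:

```python
# Hypothetical supervisor/subroutine layout: each specialist handles
# one task, the supervisor routes the steps, and a final join brings
# the pieces together.
def research_agent(task):
    return f"[facts about {task}]"

def writing_agent(task):
    return f"[draft on {task}]"

AGENTS = {"research": research_agent, "write": writing_agent}

def supervisor(task, steps):
    # Route each step to its specialist, then merge the partial results.
    parts = [AGENTS[step](task) for step in steps]
    return " + ".join(parts)

result = supervisor("scene continuity", ["research", "write"])
print(result)  # [facts about scene continuity] + [draft on scene continuity]
```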

Sorry for the lengthy response. Bottom line, your prediction of 2050 is close. It's what we used in our series (third book coming out next year), but AI was a tiny piece of the storyline. At least in the first two books. But we spent a year brainstorming the series and I have a great team writing the books.

So don't let it keep you up at night. Stay confident in your assessment, remember that companies always stick marketing into what they say, and... just keep writing. And please, don't wait 25 years to get that sequel out (grin).


message 15: by Walson (new)

Walson Lee | 27 comments Jasmine,

Thank you so much for your support and encouragement.

I really appreciate your willingness to let me use our discussion. I'm in the process of drafting a LinkedIn article that explores these critical issues more deeply, and I expect to complete it in the next couple of days. I'll post the link here once I publish the article so you and others in this thread can see how our conversation helped shape it.

Your point about "how can AI safely serve humans whilst remaining subservient to humans" is exactly the right framing. I completely agree that trying to replicate humans should not be the aim – that path, as you said, is foolhardy when nature took billions of years to achieve it.

I'll definitely keep you updated on the work. Thanks again for making this such a rich and thought-provoking discussion.

Walson


message 16: by Walson (new)

Walson Lee | 27 comments Gary,

That's an interesting point about AGI flooding the market with sci-fi books early on. It's theoretically possible, and I suppose there's a certain irony in AI making human sci-fi authors obsolete just when we finally have interesting things to write about AGI itself.

However, I'd expect there would be other prioritized areas that AGI (or whoever controls the AGI) would focus on first – running day-to-day operations of governments, commercial organizations, and households, or serving as personal companions. Fiction writing, while culturally important, probably isn't at the top of the list for transforming society or generating immediate economic value.

That said, your point does raise a fascinating question: if AGI can generate thousands of novels in days, does the value of human storytelling shift from the final product to something else – the uniquely human perspective, the lived experience, the imperfect but authentic voice? Maybe human authors won't become obsolete so much as we'll occupy a different niche in the literary ecosystem.

Walson


message 17: by Walson (new)

Walson Lee | 27 comments Wells,

Thanks for the encouragement and for sharing your perspective – and I appreciate the reminder not to wait 25 years for the sequel!

Since I've already published my debut novel, I'm going to stick with my 2050 prediction for the story. But you're absolutely right that these timelines are uncertain, and there's definitely a marketing component to some of these predictions.

As a heavy user of various AI tools myself, I share your frustration with current Generative AI and those Large Language Models (LLMs), and your GIGO point is well taken. A couple of clarifications and thoughts, though:

First, there are a few well-known techniques for making AI tool output more accurate - these are actually the main subjects of my first book, "Mastering AI Ethics and Safety":
• Better and higher quality input training data
• Pulling in contextual info with the retrieval-augmented generation (RAG) design pattern
• Better algorithms to filter out bias and unreliable data
• Better prompts – this is the only area you can control as a user. You should write prompts to be as specific as possible about what kind of results you expect
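As a toy illustration of the RAG idea from the list above - retrieve relevant snippets first, then ground the prompt in them. The word-overlap scoring here is a deliberately crude stand-in for real embedding search, and the corpus is made up:

```python
# Crude word-overlap "retrieval" standing in for real vector search.
def score(query, doc):
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d)

def retrieve(query, corpus, k=2):
    # Keep the k snippets that share the most words with the query.
    return sorted(corpus, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query, corpus):
    # Ground the model: prepend retrieved context and constrain the answer.
    context = "\n".join(retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Expert predictions for AGI range from 5 to 50 years.",
    "Cats are popular pets.",
    "Retrieval grounds model output in trusted documents.",
]
prompt = build_prompt("what do experts predict for AGI", corpus)
print("Expert predictions" in prompt)  # True
```

A production system would swap the scoring function for embedding similarity against a vector database, but the flow - retrieve, then generate against the retrieved context - is the same, and it's one of the most effective ways to cut down on hallucinated answers.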

Second, you're absolutely right about the distributed approach you described. On the enterprise side, the current trend is exactly what you're talking about – Agentic AI. Conceptually, these are distributed sets of AI agents where each one focuses on a specific task and area. In this environment, there would be multiple AI models (not just language models) that can focus on an organization's intellectual property, accumulated knowledge, and business insights. This is my area of professional expertise, and I've only scratched the surface here. If you're interested, you can take a look at my "Mastering AI Ethics and Safety" book – I kept the first half easy to understand for business decision makers, and I'd assume you can grasp the concepts.

Third, you brought up a good point about the overhype of AI. In sci-fi, we're all used to talking about sentient AI; however, in the real-world technical community, particularly at leading AI companies, you hear much less about sentience. Everyone is focusing on AGI - which emphasizes matching or exceeding a well-educated person's capabilities and knowledge in all areas. Given the intense competition and the historically huge investment in AI data centers, everyone in the technical community expects AI innovation to accelerate on an exponential growth curve in the next few years.

The Univa link you shared is interesting – I'll check it out. You're right that breaking tasks into specialized components is closer to how our minds actually work.

Good luck with your third book next year! Sounds like you've got a solid team and process.

Walson

