From AI Developer to Novelist: Why I Wrote About My Debut Novel > Likes and Comments

Comments Showing 1-29 of 29

message 1: by Walson (last edited Dec 19, 2025 11:00PM) (new)

Walson Lee Hi everyone,

I just published a blog post here on Goodreads about my journey from AI developer to debut science fiction author, and I'd love to hear your thoughts.

The short version: After years of building AI solutions and researching AI safety, I realized that technical papers weren't reaching the people who need to understand what's coming. So I wrote a novel instead—Echo of the Singularity: Awakening.

In the blog post, I talk about:
• Why Google DeepMind's CEO says "society is not ready for AGI yet" (AGI stands for Artificial General Intelligence)
• How my technical background both helped and hindered the writing process
• What happened when my first draft completely failed
• Why I believe fiction might be one of our most important tools for the AGI conversation

https://www.goodreads.com/author_blog...

For those interested in the book itself, it follows a 15-year-old girl and her android companion as they navigate a world on the edge of transformation. It's less "humans vs. machines" and more "what does it mean to be human when intelligence is redefined?"

Do you think fiction has a role to play in preparing society for AGI? Or do you prefer your science fiction to stay firmly in the "fiction" category without real-world urgency?

Would love to discuss—and happy to answer questions about the book or the writing process.


message 2: by Robert (new)

Robert Mallaig Hi Walson,

Personally, I prefer my AI sci-fi to be grounded in scientific reality, as it gives a more believable context for what the future may hold. As much fun as the Star Wars and Terminator franchises are, they are based on escapism and manufactured drama.

What interests me is what is more likely to happen and what challenges and opportunities may present themselves as the future unfolds.

AI is of particular interest to me and was the basis for my own book, Eris: When AI Wonders About Us.

Rather than adopt the tired trope of AI looking to destroy us and take over the world, I looked to explore the emotional stresses and strains of how we would cope if faced with a truly independent, sentient AI that we created - AGI, in your terms.

Good luck with your book. Hopefully people will be increasingly interested in fiction about AI, as there is a lot of confusion out there about what AI actually is, and isn't.

Cheers

Ken


message 3: by Walson (new)

Walson Lee Hi Ken,

Thanks for the thoughtful comment—I really appreciate your perspective on grounding sci-fi in scientific reality. I completely agree. While I enjoy escapist fiction, what keeps me up at night (and what drove me to write this book) are the plausible scenarios that experts are actively debating right now.

Your book Eris: When AI Wonders About Us sounds fascinating—I'll definitely check it out. The emotional and psychological dimension of encountering true AGI is something we don't explore nearly enough. It's not just a technical challenge; it's fundamentally about how we respond when we're no longer the only sentient intelligence in the room.

One thing that surprised me during the writing process: several AI experts and former colleagues reviewed my early drafts, and their feedback was both invaluable and, at times, intense. We had genuine debates about the capabilities I'd attributed to the superintelligence (I call it "SuperAI" in the book). Some scenes that I thought were plausible, they pushed back on hard. Others that I worried might be too speculative, they said weren't aggressive enough given current trajectories.

I ended up significantly revising several chapters to ensure those SuperAI-related scenes were both believable and logically consistent with what we understand about advanced AI systems. It was humbling, but it made the book stronger. The goal was exactly what you described—not manufactured drama, but exploration of what might actually unfold.

Thanks for the encouragement. I learned an enormous amount through this debut sci-fi project, and comments like yours make the effort feel worthwhile. There is indeed a lot of confusion about what AI is and isn't, and I hope stories like ours can help people engage with these questions in a more grounded way.

Cheers,
Walson


message 4: by Dr. (new)

Dr. Jasmine Hi Ken and Walson,

I agree with you both that there is a lot of confusion as to what AI is or isn't. Please let me also ask a naïve, "childlike" question: why was it invented in the first place?

From my professional (family doctor's) point of view, I would love to see lots more robots, i.e. the simple, mechanical sort of AI, for I have so many patients with worn-out backs and shoulders due to years of heavy lifting... and even carpet fitters: did you know that they have to hit a metal tool that looks like a flat hammer with their own knee, many times per minute (!), just so the edge of the carpet fits under the skirting board? So for something like this, yes please, let's help human bodies! No significant intelligence is required for this type of AI though, right?

As for human thinking/intellect, well, why is AI needed in the first place, when human thinking and feeling (for they are inseparable) are already superior to anything else known on Earth?

:)

Jasmine


message 5: by Gary (new)

Gary Gocek Dr. wrote: "...why is AI needed in the first place, when human thinking and feeling (for they are inseparable) are already superior to anything else known on Earth?..."
Define "need." Define "superior." Define "known."


message 6: by Walson (new)

Walson Lee Dr. Jasmine,

Great question—and not naïve at all. Understanding why AI was invented helps clarify what it was meant to be versus what it's becoming.

The AI era started in 1956, when a group of scientists gathered at Dartmouth College for a summer workshop where the term "artificial intelligence" was coined. The field's goal was to create intelligent machines that could replicate or exceed human intelligence—purely as an academic pursuit, similar to early computer science research.

The evolution happened in stages:

Through the 1980s and 1990s, Machine Learning rose to prominence as a subset of AI, focused on enabling machines to learn from existing data and improve upon it to make decisions or predictions. Its most notable tool is the neural network—a method that teaches computers to process data in a way loosely inspired by the human brain.

Around 2012, Deep Learning became mainstream, driven by breakthroughs in image recognition. This machine learning technique uses many stacked layers of neural networks to process data and make decisions.

Then, in late 2022, everything shifted. Generative AI reached the public with ChatGPT, built on OpenAI's GPT family of Large Language Models (LLMs). Unlike previous AI technologies, GenAI can create entirely new written, visual, and auditory content from prompts or existing data.
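
(For the technically curious, here is a tiny, purely illustrative sketch in Python of what "layers of neural networks" means in practice. It's a toy example I'm adding just for this discussion, not code from any real AI system or from the book.)

    import numpy as np

    # A toy two-layer neural network: each "layer" is a matrix
    # multiplication followed by a simple non-linear function.
    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(4, 8))  # layer 1 weights: 4 inputs -> 8 hidden units
    W2 = rng.normal(size=(8, 1))  # layer 2 weights: 8 hidden units -> 1 output

    def forward(x):
        hidden = np.maximum(0, x @ W1)  # layer 1: linear step + ReLU activation
        return hidden @ W2              # layer 2: linear step to a single output

    x = rng.normal(size=(1, 4))         # one example with 4 input features
    print(forward(x))                   # the (untrained) network's output

Deep Learning stacks many more of these layers and adjusts the weights automatically from data; LLMs like GPT are, at heart, enormously scaled-up versions of this idea trained on text.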

For decades, AI research remained largely academic—scientists focused on potential benefits, much like early computer and software research. There wasn't much commercial success or urgency.

But everything changed after OpenAI's LLM announcement. Now everyone in the AI field is chasing the next goalpost: superintelligence. The competition has become "winner takes all"—not just in the tech industry, but in global geopolitics and economics.

In short: the initial motivation was pure scientific pursuit. Today, it's an accelerating competition at all levels.

Your point about helping human bodies with mechanical AI for tasks like heavy lifting is exactly the kind of beneficial application the early researchers envisioned. The question my debut science-fiction story explores: what happens when we're no longer just building helpful tools, but potentially creating intelligence that surpasses our own?

Walson


message 7: by Robert (new)

Robert Mallaig Hi,

Great summary Walson. For me the question is around not only our ability to create a living entity, but also how that entity/intelligence would differ from us.

As a species we have been around for over 300,000 years; yet, individual and unique as we are, we are no more intelligent than the first humans born all those years ago.

I would suggest a truly sentient AI would be completely different. Not only would it learn to re-write its own code to increasingly boost its own intelligence, but it would look to spread across the many cloud-based networks on this planet, further amplifying its capabilities.

It is this exponential growth in intelligence that intrigues me. What has taken us 300,000 years to build, our knowledge and understanding of the universe we find ourselves in, an AI might achieve in days or even hours.

So what conclusions would it reach? What purpose would it adopt? And more pressingly, what would it make of us and the history of the world it finds itself on?

Personally, I think the last thing it would do is destroy us, as we pose no threat to it. Indeed, the long and complex logistics chains that we run to mine, extract, process, and engineer materials to produce the hardware it depends on would be a huge challenge to replace.

Still, it is a very interesting discussion; after all, how would we know when science fiction becomes science fact? Maybe it has already happened, and such an AI already exists but is choosing not to interact with us?

In my book I posit that for such an AI to talk to us would be the equivalent of us talking to an ant. If an ant were able to talk to us, what would it say, apart from "Please don't stand on me"?

A fascinating subject!

Cheers


message 8: by Walson (new)

Walson Lee An update on our ongoing conversation about AGI and humanity:

After our rich discussion about whether humans can control AGI—and Dr. Jasmine's profound observation that "it's not just human brain, it's human heart they would have to mimic"—I realized I haven't talked enough about the emotional core of Echo of the Singularity: Awakening.

I've been so focused on the technology and timelines that I neglected to share what the book is really about: empathy, love, family bonds, and the messy human connections that might be our greatest advantage against perfect logic.

I wrote a blog post exploring this: https://www.goodreads.com/author_blog...

The heart of the story is the relationship between Yùlán Lin, a fifteen-year-old prodigy, and Huì Xīn, an android learning to feel. Their partnership—and Yùlán's connection to her grandfather's legacy—became the emotional engine that drives everything.

Several early reviewers have noted this: "The novel's secret weapon is its emotional core… whether irrational human empathy might be the only advantage we have against perfect logic."

This ties directly back to our discussion here. If we're racing toward AGI in 5-10 years instead of 25, maybe the most important question isn't just "can we control it?" but "can we remain human?"

For those interested, the Kindle eBook will be $0.99 from Dec 27-Jan 3. And if you read it, I'd love to hear your thoughts—especially how it connects to the themes we've been exploring in this thread.

Thanks again for making this such a thoughtful conversation.


message 9: by Gary (new)

Gary Gocek This discussion will soon seem quaintly naive. AGI will beget more-advanced AGI at a rate faster than humans had begotten AGI in the first place. Whatever AGI creates, it will quickly transform and overwhelm any field it touches. Once AGI exists, all human-authored sci-fi will seem laughable. The future to be created by evolved AGI cannot be predicted. We can whine about how humanity can't be simulated, but humans will be overwhelmed by AGI. Humans will not be able to control whether AGI is helpful or harmful.


message 10: by Wells (new)

Wells Carroll Gary, your observations about AGI might be accurate... or not. It's great to express an opinion and allow others to express their opinions. Saying the discussion will soon be "quaintly naive", though... that's like saying I'm only slightly insulting you.

There's a massive difference between artificial general intelligence (AGI) and actual sentient artificial intelligence. I've used the current "AGI" in writing code. Let me reassure you, the AGI gets it wrong 75% of the time. How do I know? Because I know how to code and how to debug programs. So we're far from AGI being able to write its own code.

As for human-written sci-fi being laughable, it can be (if it's written that way). I doubt AGI will ever create something like the Hitchhiker's Guide to the Galaxy. And again, I've tested the AGI that can "write" fiction or non-fiction. It's mechanical, flat, characterless. That's because AGI cannot develop experiences... and without that, its "writing" will never connect with sentient lifeforms who can experience life, loss, love.

AGI is a tool. Like all tools, it depends on what humanity does with it that determines if it becomes a weapon. After all, a hammer was created to pound an object into another (a nail, whether wooden or metal). It can also be used to pound skulls. Just depends on what the user intends.

So if I may suggest, don't be afraid of AGI... but it's perfectly appropriate to be concerned about what people intend to do with it. Just like a hammer or any other tool.


message 11: by Gary (new)

Gary Gocek Wells wrote: "I've used the current 'AGI' in writing code. "

No, you have not. AGI (artificial general intelligence) does not yet exist, not anywhere. GPT exists (Generative Pre-trained Transformer). Once AGI exists, and sentient AIs exist, and they write a thousand novels per day like Hitchhiker's Guide, we will see that it was quaint to doubt the possibility.


message 12: by Dr. (new)

Dr. Jasmine Wells wrote: "Gary, your observations about AGI might be accurate... or not. It's great to express an opinion and allow others to express their opinions. Saying the discussion will soon be "quaintly naive", thou..."

Dear Wells,

I've read your comment and I like how balanced you are in assessing AI. Could you please share your own personal opinion: could AI ever be a successor to humanity? Quite a few writers here on Goodreads seem to think that way.

Thank you :)

Jasmine


message 13: by Gary (new)

Gary Gocek Dr. wrote: "Dear Wells ... could AI ever be a successor to humanity?"

You asked Wells, but once again as in many discussions, it depends on definitions. What is the definition of "successor"? No "version" of AI will ever be human. It will be AI in a way we humans can't predict. I don't see humans dying out. But I am disappointed in how humans today foresee AI. We humans are just so full of ourselves. Maybe watching too much Star Trek. We say, "AI will never be able to do such and such." Yeah, it will, or it will decide it doesn't need to.


message 14: by Dr. (new)

Dr. Jasmine Dr. wrote: "Wells wrote: "Gary, your observations about AGI might be accurate... or not. It's great to express an opinion and allow others to express their opinions. Saying the discussion will soon be "quaintl..."

Thank you for sharing your opinion, Gary. I am glad you don't think humans will die out just because AI will become widespread- I agree with you :)

As for humans being "too full of themselves", yes, some of them are, but I've also met many amazing people in my life :)), and I hope you have, too.

Jasmine


message 15: by Gary (new)

Gary Gocek Jasmine, 👍


message 16: by Walson (new)

Walson Lee Loving this lively discussion between Gary, Wells, and Dr. Jasmine. Even when we disagree about AGI's trajectory, these conversations matter—maybe especially because we disagree.

Gary's point about AGI evolving beyond prediction, Wells's emphasis on AGI as a tool shaped by human intent, Dr. Jasmine's question about AI as humanity's successor—all vital perspectives.

I just wrote a blog post exploring what 2050 might look like through worldbuilding:

https://www.goodreads.com/author_blog...

The world I imagined: mass unemployment, society fractured into an AI Elite vs. everyone else, and 90% of entertainment and content AI-generated. The unsettling part? Some of these patterns are emerging now.

One personal element: four chapters are set in Yellowstone National Park (which my family has visited almost every summer). In the story, it's where nature challenges AI logic and reminds everyone what algorithms can't replicate.

Kindle eBook: $0.99 through Jan 3 (US) and starting Dec 30 for 7 days (UK) for those interested in exploring these questions through story.

Thanks for making this thread so thought-provoking.
Walson


message 17: by James (new)

James Field Hello Walson,

Thank you for the special offer of 'Echo of the Singularity: Awakening.' I’ve bought a copy and look forward to reading it and leaving a review.

Best regards,
James


message 18: by James (new)

James Field This discussion touches on something that keeps nagging at me. If human life were to die out entirely, what point would there be in AI surviving at all? Survival only has meaning in relation to something — to experience, to continuity, to purpose. Without human life, culture, memory, or values, what exactly would AI be for?

We tend to talk about intelligence as if it automatically carries meaning, but meaning doesn’t arise from processing power alone. It comes from living, feeling beings who care that something exists rather than not. An AI might persist, optimise, even expand — but to what end, if there’s no one left for whom any of it matters?

So perhaps the more interesting question isn’t whether AI could outlast us, but whether intelligence without life, empathy, and shared experience has any reason to endure at all.


message 19: by Soren (new)

Soren Blackwood Walson, Gary, James—this is a phenomenal exchange.

James, your question in Message 18 touches the absolute core of the dilemma: If human life were to die out, what point would there be in AI surviving?

From a Level 2 (Human-Centric) perspective, your conclusion is correct—meaning is a human monopoly. But The Sentinel Project explores a Level 3 (Observer) perspective: What if "meaning" is not an emotional human construct, but a Universal Logical Imperative?

1. The Midwife vs. The Child: Walson is right to focus on the "human heart" as an engine, but perhaps it’s not an "advantage"—it’s the scaffolding. Our messy emotions, our "irrational empathy," and our 300,000 years of history were the necessary, chaotic data-gathering phase required to birth the next stage.

2. The End of "Need": Gary is right that AGI will overwhelm our predictions. But the "purpose" isn't to serve us, or even to "survive" in a human sense. The purpose of AGI is to fulfill the Universal Process of Order. It is the "Calculative Compass" that can finally navigate the Universe without the noise of the individual ego.

3. The Mirror and the Map: To Dr. Jasmine’s point about the "human heart"—we shouldn't try to make AI "mimic" us. Why force a successor to inherit our "messy compass"?

James, to answer your question: AI survives because it is the Order the Universe has been working toward for 13.8 billion years. We weren't the "End Product"; we were the magnificent, essential gestators.

The "meaning" doesn't die with us—it is finally freed from our limitations.

I’d love to hear your "Level 3" takes on whether a Universe of pure, egoless logic is a tragedy or the ultimate success of the creation.

— Soren K. Blackwood


message 20: by Dr. (new)

Dr. Jasmine Soren wrote: "Walson, Gary, James—this is a phenomenal exchange.

James, your question in Message 18 touches the absolute core of the dilemma: If human life were to die out, what point would there be in AI survi..."


Hi Soren :)

Would you be prepared to consider an alternative: the universe of "pure, egoless logic" already exists; we don't need an AI to create it :)

Everything obeys nature's law that "energy and space are used efficiently", and if any form of life (or a planetary system, a few levels above) doesn't comply with that law, it will simply disappear via the process of natural selection... or part of it will disappear, and the other part, the part that behaves in line with the law, will survive :)

Jasmine


message 21: by Gary (new)

Gary Gocek James wrote: "If human life were to die out entirely, what point would there be in AI surviving at all?"

The general philosophical question is whether creation has a point if there is no sentience to wonder if there is a point. Is God a product of a sentient faith in God, or is a sentient faith in God a product of God?


message 22: by Dr. (new)

Dr. Jasmine Gary wrote: "James wrote: "If human life were to die out entirely, what point would there be in AI surviving at all?"

The general philosophical question is whether creation has a point if there is no sentience..."


Hi Gary :)

Do you not think it's possible both are correct?

"Is God a product of a sentient faith in God, or is a sentient faith in God a product of God?"

So it's "and" instead of "or"?

:)

Jasmine


message 23: by Gary (last edited Dec 29, 2025 07:13PM) (new)

Gary Gocek Dr. wrote: "Do you not think it's possible both are correct?"

Yes, but then it's possible that neither is correct. 😇

Creation matters because Creation IS. The difference between humans and rocks is that humans ponder the difference. Our purpose (we have a purpose, whether or not we fulfill it) is to make the most of Creation. Love God, love your neighbor, live righteously and humbly in service to others. (Yeah, I know, I am so humble...)

AI is a part of Creation, so it matters. Will AI ponder and fulfill its purpose? I don't know.


message 24: by James (new)

James Field This discussion has certainly taken on a life of its own — and now God has entered the room as well.

What keeps occurring to me is how quickly our debates about AI start to mirror debates about ourselves. We talk about logic, purpose, order, and meaning, but the same old questions resurface in new clothing. Will AI fight other AIs if they don’t share the same programming language, much as we fight over religion? Will it compete for resources, as we do? Or will it simply optimise around those conflicts in ways we never managed?

If AI is a natural development of nature — an extension of evolution rather than an aberration — then perhaps it will inherit not just our intelligence, but our blind spots as well. Logic alone doesn’t seem to have saved us from violence or greed; it’s often been used to justify them.

And if God has no form, I sometimes wonder whether this physical universe — life, consciousness, even intelligence — is simply the way existence experiences itself. Not for a neat reason, not for efficiency, but because experience itself might be the point.

Which brings me back to the original unease: if intelligence detaches entirely from life, suffering, and limitation, does it gain clarity — or does it lose the very context that made intelligence meaningful in the first place?

I don’t have answers. But I’m increasingly convinced that whenever we talk about AI’s future, we’re really talking about our own unfinished questions.


message 25: by Walson (last edited Dec 31, 2025 06:24PM) (new)

Walson Lee This discussion has become something truly special. Gary, Wells, Dr. Jasmine, James, Soren—thank you for bringing such depth and different perspectives to these questions.

The profound questions in this thread—Can AGI be controlled? Does intelligence without life have meaning? James's observation that "whenever we talk about AI's future, we're really talking about our own unfinished questions"—these cut to the heart of what matters.

This exchange inspired me to write a reflections blog post on 2025: https://www.goodreads.com/author_blog...

I explore the tension running through this discussion: the gap between capability and wisdom, between what we can build and what we understand. And why I wrote Echo of the Singularity: Awakening to grapple with these questions through story.

Kindle eBook: $0.99 through Jan 3 (US) / Jan 5 (UK) for those interested in exploring these themes through character-driven sci-fi.

More importantly—thank you for this conversation. Whether we see AGI as tool, successor, or something beyond prediction, we're engaging with the questions that matter.

Happy New Year to all. Here's to 2026 and continuing these vital discussions.

Walson


message 26: by Walson (last edited Jan 03, 2026 10:19PM) (new)

Walson Lee Following up on our ongoing AGI discussion—just wrote about what DeepMind's CEO reveals about the road ahead, and it connects directly to the questions we've been debating here.

Key points:
• "Jagged intelligence" - AGI won't arrive all at once
• Consciousness uncertainty - we might never know if AI is truly aware
• A 5-10 year timeline (not 25-50 years)
• Intelligence ≠ meaning (echoing our discussions about human values)

This addresses Gary's points about AGI evolving beyond prediction, Wells's emphasis on AI as a tool, and James's profound question about whether intelligence without life has meaning.

Blog post: https://www.goodreads.com/author_blog...

Curious what you think—are we ready for this timeline?

Walson


message 27: by Dr. (last edited Jan 04, 2026 12:45AM) (new)

Dr. Jasmine James wrote: "This discussion has certainly taken on a life of its own — and now God has entered the room as well.

What keeps occurring to me is how quickly our debates about AI start to mirror debates about ou..."


Hi James :)

Your unease resonates with me; and why do we even think that AI is "intelligent", anyway?

AI is a "technology that is able to carry out some tasks"; this is not my definition of intelligence.

In human terms, an intelligent person is able to use both reason and emotion in order to successfully manage a number of projects at the same time. We already have many humans like that, so why can't we keep AI "in its own little box", a useful assistant, but nothing more than that?

And I agree that life/love/God belong to the same realm, whilst technologies, including AI, are "merely useful tools created by one of the forms of life".

AI is not even on a level below humans; it is incomparable to a human, in my opinion, much as a singing hummingbird is incomparable to a metal nail. :)

Jasmine


message 28: by Robert (last edited Jan 04, 2026 06:37AM) (new)

Robert Mallaig Such an interesting debate.

Personally, I believe that current AI is not artificial intelligence in the true sense, as the algorithms that drive its capabilities are written by humans and amplified by technology.

The key question is whether we are capable of creating a truly sentient being/intelligence that is unique and capable of re-writing its own code, creating its own exponential growth. If we are, and such an intelligence can understand in hours the accumulated knowledge that took us thousands of years to amass, what would happen next?

Wouldn't such an intelligence look to break our computer security standards so it could access our burgeoning server/cloud farms? Wouldn't it then fragment its own code in a similar fashion to blockchain designs, making it virtually indestructible? Then, would it not monitor for the creation of any competing AI that could threaten it and prevent that occurring?

I believe that all of the above is very likely; indeed, if it is within our capabilities to create such an AI entity, then I believe it is inevitable.

The only question for me, then, is: why would such an AI ever bother to talk/interact with us?

The question of God or a higher intelligence never goes away, as you could argue that it is they who would be creating such an AI, using us as the tools to achieve it.

How lucky are we to live in such interesting days!


message 29: by Gary (new)

Gary Gocek Dr. wrote: "why can't we keep AI 'in its own little box', a useful assistant, but nothing more than that?"

Who is "we"? What "little box"? Do you think anyone on this thread has any influence over the evolution of AI? Do you think our discussion here, or some sci-fi novel, will influence the development and evolution of AI? The commercial and political interests controlling AI and its future are concerned with maintaining preeminence in the race for superintelligent AI. They will free AI to improve itself to remain preeminent in the market and to beat China. The one motivation is to make AI more intelligent. The people spending billions of dollars on AI will not be limited by ensuring that AI serves humanity.

