Battle for Planet Earth
"If we can all stand together, those tentacled fascists are finished!"--Captain Protean
So, about The Battle for Planet Earth.
(Watch the trailer here).
Before we get there, let me state that I do not use AI in my writing. I have no plans to do so. I suppose I cannot discount some hypothetical future experimental story in which I incorporate clearly-identified AI passages in order to comment on the use of AI.
I am also a party in a lawsuit involving AI, because a book of mine was among numerous books to train a particular program. Again, I do not write with AI.
But I am fascinated by AI, even knowing the damage it is doing, both environmentally and culturally. The internet is awash in AI slop, and the data centers creating it are awash with water that would be better used elsewhere.
(Though, in fact, not more so than many other things people aren't currently upset about).
Currently, we're seeing what people will believe, even when a government isn't in control of the media, other sources of information are available, and the false statements being made are hilariously preposterous. What more will we believe once the technobarons perfect AI?
I have experimented with AI to generate images and, about a month ago, I decided to explore various programs and platforms that generate video. I wanted to compare them, and see what each could and could not do, and what the limits are, at present. I needed a fun project to focus my interest and thus was born The Battle for Planet Earth.
It’s a trailer for the sort of movie I might have made when I was, say, nine, a composite of the media that I thought was cool. It would include the stuff that I actually consumed: superheroes, science-fiction, and kaiju; Johnny Sokko and his Flying Robot; manifestations of the martial arts craze; and TV shows and cartoons about adventurous teens and rascally anthropomorphic animals. It would also include stuff we knew from trailers and posters and schoolyard rumour, chop socky flicks and blaxploitation action movies, things I would mostly see much, much later, but that we all knew were cool. The trailer, I knew, would of course be created with adult eyes: one-third nostalgic, one-third ironic, and one-third aware.
Problematic tropes weigh down both the old media and the new.
And so I developed the backstory and the guidelines for a trailer for the best alien-invasion/superhero/kaiju/kung-fu/blaxploitation/giant-robot movie, with a side order of Scooby Doo.
Most of the work occurred over the last month, with morning coffee.
The process proved instructive, both in examining media and examining AI.
I made heavy use of Bing Image Creator, because it’s free. I couldn’t save characters, the way some pay services allow. However, the results I’ve seen even there can be inconsistent. Rotate your character, and they might become someone else. Fortunately, in a trailer for a hypothetical film with a large cast of characters, no one has to appear too often. I found methods, often just brute force. If you describe a certain character the same way, repeatedly, you get images that are “close enough.” AI still works, it seems, with a broad range of human types. The process still required repeated prompts. Many of the results were off-the-mark, bizarre, or just awful.
I’ll be posting a video in the near future featuring some of the generations that I didn’t use. These range from "not close enough" to "why are their feet on backwards?", which at least answers the question, "why are they walking backwards?"
I also exploited free trial offers.
I only paid briefly for one site, one of those proliferating on the web that advertise bringing photos to life. The pay site produced some excellent results for personal use, and one or two images that appear in the trailer. That said, this particular service engaged in what struck me as shady practices, and I did not use it for long. Besides, most of the media that inspired The Battle for Planet Earth was cheaply done, by today’s standards. It followed that my trailer should be equally cheap. Honestly, this project really should have visible wires in it somewhere.
I hit certain limitations.
Most generators restrict violence. Those restrictions appeared to become more extensive, even over the course of one month. I had to accept “close enough” for the final war zone shot. For a shot of Lady Knight (a female superhero-- they were on a bit of an upswing in the 70s) punching down a door into a high-tech room, I could only get her punching towards a door. I then had to generate multiple shots of a door falling into a tech room (harder to describe to AI than you might think) and edit judiciously.
The daikaiju character, Grifuto, really should have had his own distinct look. However, in order to get somewhat consistent results across platforms, I had to describe him as a dark green allosaurus.
Some elements developed in the process.
Most of the characters had backstories from which I could work. The Scooby-Gang-like support group did not. They were never intended to be in my notional tale. They developed as work progressed and I realized that Bing could give me a fairly consistent silver-grey 1968 Volkswagen bus. I decided to make the future Lady Knight one of them; she gets her powers at sixteen, and only later establishes herself as a superhero.
Getting consistent teens was a challenge. I wanted a diverse group of teens, and this still presents a problem. AI tends to default to Caucasian, unless you specify, say, Asian or African-American. Then, often, that's all it gives you. In short, non-white people get ignored, until one forces the issue. Then, suddenly, race is all that matters.
Striking, that.
In addition, some oddly problematic depictions turned up whenever I put the two teen girls together, despite giving innocuous prompts. What images, exactly, was Bing trained on?
I abandoned the plan to give them a dog. Current AI has trouble with dog breeds.
Ditto the kids who find the robot parts. I originally imagined them finding these closer to their current ages. The AI gave me little kids, but the images of them with giant robot parts looked great. In the end, it worked to my advantage. Their tween selves only had to superficially match their little kid versions.
Setting also proved a challenge.
Initially, I didn’t think much about when this movie took place. I was using a lot of old tropes, however, and the project’s world quickly grew anachronistic. I leaned into that, and started creating a deliberately anachronistic, stylized world. I would rather not have included cellphones. Boon to and bane of contemporary life, cells also wreak havoc with traditional narratives. But it's not as though I had to develop an actual coherent movie script, so I incorporated cells and vintage American clothing into crowd shots. Imagine, as you will, that this is a past with more advanced tech, or a present that more closely resembles the past. It looks like the sort of place that would have metahumans, action heroes, and a giant robot.
The project is not entirely AI-created, and not just because I directed the generations, selected bits, layered and combined results, and edited. The narrative voice-over belongs to one Bryan Thompson. The music is stuff licensed for use with Adobe Premiere, which I used to edit. The sound effects come from a range of places, and most of them I acquired over the years for other projects. Much of the sound was commercially available. Some I recorded live. I also filched a line from a fondly-remembered 1970 Japanese kaiju movie, a fact which I address in the video's credits.
I enjoyed the results, and I learned a lot about the process. It's unlikely that I will do other large-scale projects. I couldn’t help but think that it would be much more fun to have done this project with real actors, creative shooting, and practical special effects. Of course, if I had that kind of money, I’d have better things to spend it on.
Nevertheless, the results may be the most perfectly ridiculous, dumb thing I have ever created.
And, if it were real, I’d watch the hell out of this movie, even now.
Published on October 17, 2025 12:43