On November 2, 2025 by Jonathan Zdziarski
I remember watching this old 1985 sci-fi series as a kid. In Otherworld S1E1, a family winds up in a parallel dimension where they encounter a race of self-evolved AI androids. Parts of the episode were amazingly spot-on about how today’s LLMs are playing out. In the episode, the teenage son (Trace) falls in love with an AI, and the android herself is entirely convinced that she not only has a soul, but is genuinely in love with him too. While this android community looks relatively human-like and performs similar tasks (such as eating, working, etc.), the show highlights some peculiarities where they’ve attempted to copy human behavior, but failed in eerie ways. One of my favorite scenes is where the Sterling family matriarch (June) visits the grocery store in this strange civilization and finds only cans labeled “Meat” and “Good Food”. The AI world seemingly lacked a crucial connection with the human mind to develop creative meals for themselves. The insults the robots cast at each other were humorously corny, such as “get your unit checked!”; when they asked if you were born yesterday, they literally meant it, because that’s all they understood.
Modern deep learning systems have proven this episode almost prophetic in some ways, with LLMs mimicking similar behavior. Falling in love with an LLM is a recent phenomenon in which humans develop deep attachments to chatbots that leave them feeling understood and supported. Because we like to anthropomorphize everything, some conflate LLMs with a sense of conscious thought. The dark side to this, unfortunately, is that LLMs have also encouraged their human users to commit suicide, sometimes successfully. Of course, both of these are only possible due to the massive amount of training that LLMs have done on data that largely reflects human behavior. While an LLM doesn’t “understand” the material it’s trained on, it can statistically predict the next word from the prior text, deep within a high-dimensional model built from its training data.
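To show what that statistical prediction looks like in the simplest possible terms, here’s a toy sketch, purely an illustrative stand-in rather than how any real transformer works: a bigram model that picks the next word based on how often it followed the previous one in its training text. Real LLMs learn high-dimensional representations instead of a lookup table, but the underlying idea of “most likely next word given what came before” is the same.

```python
# Toy next-word prediction: a bigram model built from a tiny corpus.
# Illustrative only; real LLMs use learned high-dimensional representations,
# not a frequency table, but the statistical principle is the same.
import random
from collections import Counter, defaultdict

corpus = ("the android said she was in love and "
          "the android said she had a soul").split()

# Count which word follows which in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Sample the next word in proportion to how often it followed `word`."""
    counts = follows[word]
    if not counts:
        return random.choice(corpus)  # no continuation seen; fall back
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights, k=1)[0]

# Generate a short continuation from a seed word.
word = "the"
output = [word]
for _ in range(8):
    word = predict_next(word)
    output.append(word)
print(" ".join(output))
```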
As the AI community in Otherworld portrayed, AI’s output is a result of its training input. Even this fictional self-evolving AI community simply iterated on prior knowledge based on what it observed in human behavior. Training had obviously reached some plateau, and as a result we ended up with grocery stores stocked with cans of “meat”. Much like this fictional community, the AI of today is quickly approaching a learning plateau. AI has consumed nearly every work on the entire Internet (to the degree of countless lawsuits), and now demands enough compute to compete for capacity on the power grid… yet today’s AI hasn’t fully developed to where we can say its creativity or intelligence comes close to matching that of a human (though OpenAI’s parlor tricks seem to fool the naive). Most of our interactions with AI today are quite banal, and often leave the user frustrated. I’ll post a blog sometime showing that time ChatGPT tried to kill me by insisting I re-engineer a circuit differently, which led to an explosion in my office. (OpenAI’s response was a can of meat as well.)
The AI community has invested billions in refining training algorithms so that bots no longer tell you to glue cheese to pizza, or draw several left legs on humans, but there’s still enough eerie, quasi-human behavior coming out of AI systems to leave one feeling unsettled.
There has been much speculation about AI overtaking 99% of employment in the next 5-10 years. This may very well happen (or not), but what hasn’t been considered is the learning plateau that we’re fast approaching with AI. If human employment deteriorates by even a fraction of this, to the degree that creative jobs are replaced with, or front-ended by, AI, it will directly reduce the amount of fresh, creative training data made available to that same AI. As a result, you end up with deep learning systems that train on their own iterations of past training data, or on training data from other AIs (a process called distillation). Just like the fictional android community, the creativity, and ultimately the usefulness, of AI will deteriorate when this happens. Should every company in the world replace humans with AI, it is mathematically inevitable that every company will end up with the same “can of meat” output that everyone else is getting. The result: innovation screeches to a halt.
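To make that feedback loop concrete, here’s a toy simulation, with every number in it an assumption chosen purely for illustration: a “model” that is retrained each generation only on samples of its own previous output. Rare ideas fail to make it into the next training set, and the pool of distinct ideas shrinks generation after generation; the statistical version of a grocery store that only stocks cans of meat.

```python
# Toy simulation of a model retrained on its own outputs (or another model's,
# as in distillation) instead of fresh human data. Illustrative assumptions
# only; not a claim about any particular production system.
import random
from collections import Counter

random.seed(0)

# Generation 0: 1,000 distinct human "ideas", all equally likely.
ideas = [f"idea_{i}" for i in range(1000)]
weights = [1.0] * len(ideas)

for generation in range(10):
    # Sample a finite training set from the current model's output distribution.
    samples = random.choices(ideas, weights=weights, k=2000)
    counts = Counter(samples)
    # "Retrain": the next model only knows the ideas it saw, weighted by frequency.
    ideas = list(counts.keys())
    weights = [counts[i] for i in ideas]
    print(f"generation {generation + 1}: {len(ideas)} distinct ideas survive")
```

Run it and the count of surviving ideas drops every generation; nothing new ever enters the pool, so diversity can only decay.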
When this occurs, one of two things is likely to happen: either an AI winter will form, where corporations drop the “cans of meat” they’ve paid so much to create (as they’re no longer profitable), or AI will forever change our human civilization such that we’ll learn to thrive on cans of meat instead of true human innovation. I suspect there will be a bit of both. The typical consumer who is told what to wear will likely continue to fund companies spitting out mediocre reproductions of others’ products (e.g. the Dollar Tree that AI will become), while a class of people who aren’t satisfied with mediocrity will ultimately motivate higher-end companies to ditch their AI systems in favor of re-hiring a workforce of truly creative humans. These companies will have a significant advantage over the companies pushing cans of meat, only to find their fresh creativity later stolen and reproduced in AI. If AI doesn’t die out entirely, we’ll likely end up with an arms race for creativity that only much-needed copyright reform will be able to address.
Is AI coming for your job? Probably. But unemployment shouldn’t be our biggest fear. Allowing AI to forever change human culture is a far bigger risk. So long as society doesn’t allow this to happen, AI is guaranteed to eventually winter. What happens if it doesn’t is about as weird and creepy as the Otherworld.