Everyone’s Wrong About AI and Jobs. An Ex-Meta Director Explains Why.

Georg Zoeller spent 20+ years inside Big Tech — Meta, EA, BioWare. Now he co-runs the AI Literacy & Transformation Institute and spends his days watching companies, governments, and creatives misread the same threat. In this conversation, he names it plainly.

“If you cannot explain technology to people in simple words without using tech bubble language — you’re probably a charlatan.”
— Georg Zoeller, Co-Founder, AI Literacy & Transformation Institute

About Georg Zoeller

Georg Zoeller is a veteran of over two decades in games, enterprise tech, and AI-driven product development. His career includes senior engineering and business leadership roles at Meta, EA (Electronic Arts), and BioWare — organizations that collectively represent some of the largest creative and data operations in the world.

He is the co-founder of the AI Literacy and Transformation Institute, where he works with companies, governments, and educational institutions to cut through the hype and build foundational AI understanding at the organizational level. His work starts with a deceptively simple question: “Can you explain what generative AI actually is?” Most organizations can’t. That’s where he begins.


Episode Overview

Most conversations about AI and jobs fall into one of two failure modes: blind optimism (“AI is just a tool, creatives are safe”) or paralyzing fear (“AI will take everything”). Georg Zoeller isn’t interested in either.

In this conversation, I sat down with the ex-Meta Business Engineering Director for a 67-minute, receipts-first breakdown of what’s happening to creative work, why the hype is intentional and who benefits from it, and what the “machine that prints machines” means for every writer, artist, voice actor, and knowledge worker currently wondering whether their job exists in five years.

Georg’s framework isn’t what you’d call reassuring, but it’s honest. He names the companies profiting from creative commoditization, explains why agents can’t do what the hype says they can, and closes with something rare in this genre of conversation: practical advice that doesn’t require you to become an engineer.


Inside This Conversation


[00:53] The First Question Georg Asks Every Organization
When the AI Literacy Institute walks into a company, government, or school, they start with one question: “Can you explain what generative AI is?” The answers are almost always wrong. Georg explains why an inability to define the technology is itself the crisis — and why you can’t transform through something you can’t explain.

[02:00] The Half-Life Problem: Why Everything You Learned About AI Is Already Outdated
Two years ago: five AI models, all in the cloud. Today: over a million. The pace of change means that traditional “learn it once, apply it” approaches have collapsed. Georg introduces the concept of the short half-life of AI knowledge — and what it takes to stay oriented in a landscape that rewrites itself quarterly.

“You can’t just go and learn something and then move and apply it — the half-life of what you’ve learned is very short.” — Georg Zoeller

[03:59] The Tech Bro Hype Problem: Words Without Substance
“Democratizing content.” “Efficiencies.” “Disruption.” Georg and the hosts dissect the language of AI hype — why it’s deliberately vague, who benefits from the fog, and how to identify a charlatan in the room before they waste your time or your budget.

[05:22] Georg’s Conference Story: When Crypto Met AI (And Neither Made Sense)
Georg describes walking into an event positioned as an AI summit that turned out to be a crypto feeder event. The room was full of people “running around saying words they clearly did not understand — intentionally.” This section is a masterclass in recognizing manufactured hype in real time.

“If you cannot explain technology to people in simple words without using tech bubble language — you’re probably a charlatan.” — Georg Zoeller

[15:26] “A Machine That Prints Machines”: The Clearest Definition of Generative AI You’ll Hear
This is the moment the episode pivots from diagnosis to framework. Georg offers a definition of generative AI that doesn’t require a computer science degree — and explains why understanding it as a general purpose technology (not just “ChatGPT”) is the only foundation worth building on.

“What we have built is a machine that prints machines — a machine that can take data, find its patterns, and build a software machine that creates the same data. That is vastly more powerful than explaining ChatGPT, because you understand that this is a general pattern.” — Georg Zoeller

[24:49] The Productivity Injection: What “More Content for Less” Actually Means
AI is flooding the supply side of creative work. Georg explains what a massive productivity injection does to a market built on scarcity of creative output — and why the people writing checks to fund AI creative tools understand exactly where this leads, even if they won’t say it out loud.

[27:56] The Primary Beneficiary of AI in Entertainment Is Meta. Here’s Why.
This is the sharpest receipt in the episode. Georg names the company that gains the most from AI commoditizing creative work — and it’s not an AI lab. When production costs collapse, the primary business problem shifts from making content to selling attention. That’s Meta’s entire business model.

“The primary beneficiary of pushing AI into entertainment is — drumroll — Meta. Because when your primary business problem isn’t production anymore… it shifts to demand generation and advertisement.” — Georg Zoeller

[29:27] “AI Takes the Role of the Creator”
Georg is direct about what AI actually does to creative labor: it doesn’t assist creators, it abstracts them. It is trained on their work, learns the rules of their craft, and then performs those rules at scale without them. This section names the mechanism behind creative commoditization without softening it.

“AI drives content creation, drives down the value of the creator — because it takes the role of the creator. It’s trained on the creator’s work. It’s a mathematical function that abstracts the rules of creation.” — Georg Zoeller

[39:37] Which Jobs Are Actually Vulnerable: The “Number of Steps” Framework
This is the most practically useful section for anyone anxious about their career. Georg introduces a simple, non-hype diagnostic: how many distinct steps does your job require, and how reliably can current AI execute each one? Jobs with fewer steps and abundant training data are most exposed. He uses voice acting as the clearest example.

“What makes your role vulnerable is: how many steps does it have? Voice actors — it’s one step. Speak the sentence based on the trained model. That means you’re highly vulnerable.” — Georg Zoeller

[40:04] Why AI Agents Cannot Do What the Hype Promises (Yet)
The agent hype is running about three years ahead of the technology. Georg explains compounding error — why a five-step agent workflow running at 80% accuracy per step doesn’t deliver 80% reliability overall. The success rates multiply: 0.8⁵ ≈ 33%, so the workflow fails roughly two times out of three. This is the math behind why the “agents will do everything” narrative is currently fiction.

“Agents cannot work if the underlying technology is 80%. At 80%, five steps multiplied into each other result in a 100% failure chance — compounding error.” — Georg Zoeller
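The arithmetic behind that quote is worth seeing on the page. Here is a minimal sketch of the compounding-error math Georg describes — the function name is ours; only the 80%-per-step, five-step example comes from the episode:

```python
def chain_reliability(per_step_accuracy: float, steps: int) -> float:
    """End-to-end success probability when each step must succeed
    independently: the per-step rates multiply."""
    return per_step_accuracy ** steps

# Georg's example: five steps, each 80% reliable.
p = chain_reliability(0.80, 5)
print(f"5 steps @ 80% per step: {p:.1%} end-to-end success")  # ~32.8%

# The effect bites even at much higher per-step accuracy:
# a 20-step workflow at 95% per step succeeds only about a third of the time.
print(f"20 steps @ 95% per step: {chain_reliability(0.95, 20):.1%}")
```

The takeaway: reliability decays exponentially with the number of chained steps, which is why long agent workflows need per-step accuracy far above what today’s models deliver.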

[41:19] The Data Availability Problem: Why Being “Out There” Makes You a Target
Public-facing creatives — YouTubers, artists, authors, musicians — have one thing in common: their work is online and scrapeable. Georg explains why data availability directly maps to AI vulnerability, and why being visible and prolific online is now a double-edged sword.

[50:46] The Platform Problem: Someone Else Controls Your Customer Relationship
The hardest truth in the episode. Every creative working on YouTube, Spotify, Amazon, or Instagram is building on land they don’t own. Georg explains how platform consolidation works against creators structurally — regardless of AI — and why AI accelerates the power shift rather than reversing it.

“The number one challenge you’re facing as a creative is platform. Someone else controls your ability to reach your customers.” — Georg Zoeller

[52:58] “They Compete in the Same Business”: Platform Capture, Explained
Georg names the end state: platform companies don’t just distribute your work — they eventually compete with you in the same market, using your own data to do it. Understanding this is the precondition for any strategy that actually protects a creative career long-term.

“These platform companies own everything. And they can decide who wins and who loses — and they compete in the same business.” — Georg Zoeller

[56:37] What Creatives Should Actually Do: Collective Action and Critical Thinking
The episode closes with Georg’s practical advice — not a tech roadmap, but a human one. Educate yourself on both the technology and the digital economy. Understand the platforms you depend on. And recognize that individual adaptation has limits: collective action matters. This section is where the receipts become a call to do something with them.


The Slop World Take

Georg Zoeller doesn’t do comfort; he does accurate. And what’s accurate right now is that most creatives are operating on a misunderstanding of the technology, a misread of the platforms they depend on, and an underestimation of how intentional the hype actually is.

The good news: the framework he offers is learnable in under an hour. You’re already watching.
The bad news: most of the people making decisions about your industry haven’t.


More on Georg Zoeller

Georg Zoeller is the co-founder of the AI Literacy and Transformation Institute, an organization focused on closing the gap between AI hype and AI understanding across businesses, governments, and creative industries.

His career spans over two decades of senior roles in some of the most data-intensive, creative-technology organizations in the world — including as Business Engineering Director at Meta and in leadership roles at EA and BioWare, where he worked on games played by hundreds of millions of people globally.

He brings that insider perspective to a frank assessment of where generative AI is headed — and who the winners are designed to be.


Don’t Miss the Next Episode

Slop World drops new episodes every week — AI news, tech accountability, and the mess in between.

📺 Subscribe on YouTube

🍎 Listen on Apple Podcasts

🎧 Listen on Spotify