Hello from Shanghai! One of my main preoccupations this year has been AI’s labor impacts. It’s been a bit whiplashy—I went from intense concern to feeling slightly more sanguine by the time everyone started freaking out about the “new grad jobs crisis.” I'm planning some deeper investigations into these questions for when I’m back (e.g. what does an “AI-native firm” actually look like in practice?), but in the interim, some more plane-ride aphorisms since you all liked the last :)
I first became anxious about AI and labor impacts in February when my younger sister told me how hard it was for her friends to find jobs. Consulting and Big Tech SWE roles were no longer guaranteed; new grads were falling into master’s programs instead, delaying their adulthood until the job market improved. If Stanford CS majors can’t get hired, what about everyone else?
I asked economist friends whether to fret. Probably cyclical trends, they said with a shrug. The “new grad job crisis” is just a correction for Covid-era overhiring. But if Excel jockeys are automated faster than professional services firms grow, junior hiring won’t bounce back to previous rates. As a friend at McKinsey put it to me: We’re blocked on deals, not slides.
Substack laid off half its customer support team when the tech recession hit. Soon after, we started using Decagon, an AI chatbot trained on past tickets and resources, to handle the vast majority of requests. The remaining agents shifted their focus to higher-priority, higher-complexity problems. When the economy stabilized, hiring resumed. But we no longer needed to scale the support team linearly with ticket volume; we simply needed fewer humans than before.
You don’t need mass unemployment to inspire mass fear—merely its shadow is enough. In the Hollywood and port strikes last year, the vague prospect of automation was enough to spur workers to organize. In both cases, a critical worldwide industry was brought to a halt.
What if the same thing happened with teachers? Drivers? Doctors? More?
Then again, most American industries aren’t organized as ports.
Covid was a natural experiment in what happens when everyone’s sent home with a check and nothing to do. Weed, sports gambling, riots, conspiracy. Our culture has been built on the structure and meaning of work. It’ll take more than UBI to cure this kind of rot.
I don’t think policymakers would tolerate job loss past 15%. At that point, they’d step in to start slowing shit down.
If there’s anything American voters care about, it’s keeping their jobs. We’ve already seen the backlash against immigrant and offshored labor. If non-white people are intolerably alien, what about getting outcompeted by machines?
Most AI backlash is economic anxiety under a veneer of social justice. Alfalfa farming consumes 19 times the water that data centers do; there’s no sound environmental reason to boycott Claude but not GPS. When people say “AI is a moral stain,” they really mean: I am scared that I won’t be able to pay my bills.
To be fair, the labs are definitely trying to automate everyone’s jobs.
I roll my eyes when people demand we build AI to “augment and not replace” us. This is a platitude, wishful thinking; it is not a reality most workers can choose. If the tech is good and cheap enough to replace us, it will. Economic incentives are a hell of a drug.
Carl Benedikt Frey: “There is no iron law that postulates that technology must benefit the many at the expense of the few.”
Liberal democracy teeters on the tie between labor and growth.
Fortunately for humans, AI capabilities look pretty jagged so far.
Moravec’s paradox: “It is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.”
Only in tech would we measure a person’s value by high school math medals.
Does a calculator count as superintelligence?
Most jobs involve complex bundles of tasks. Thus, the speed and scope of automation matter: if AI can do a whole job at once, the job will be eliminated. If it automates only one task at a time, the job will simply evolve around it. Consider a Ship of Theseus: if a job has all its tasks replaced over time, is it still the same job?
The whole is more and less than the sum of its parts.
“Dystopia is when robots take half your jobs. Utopia is when robots take half your job.”
We still have radiologists, but not lamplighters.
In my last week of work as a product manager, I realized I didn’t have a single task to document and offboard. I wasn’t hired to write PRDs, lead standups, or run user interviews—each could be competently done by someone else. My role was relational, not task-based. Someone had to be the fall guy; someone had to herd the cats.
Bureaucrats have always dreamed of simplification. If only people could be compressed into tidy units, processed as input-output flows. If only there were a standard number—IQ, SATs, civil service exams—that could quantify a human’s economic potential; if only every employee were fungible with every other.
The map is not the territory. The org chart is not the org. Systems are much more unruly than they appear.
Another common argument says that AI capabilities are fast but diffusion is slow. Supposedly, regulations, backlash, and laziness get in the way of adoption; most people are change-averse decels who won’t admit when a robot does better.
But it didn’t take students long to start ChatGPTing all their homework. If AI could write my emails for me, I’d certainly let it.
“Diffusion lag” reflects a lack of product-market fit. Even AI optimists are still hitting practical roadblocks. That’s why detailed case studies are so much fun: physics, code security, running a restaurant at a small independent hotel.
James C. Scott defined mētis as “the kind of knowledge that can be acquired only by long practice at similar but rarely identical tasks, which requires constant adaptation to changing circumstances. Half the battle is knowing which rules of thumb to apply in which order and when to throw the book away and improvise.”
The real world is all edge cases, all the time.
Increasingly, jobs will look less like doing tasks ourselves and more like teaching AIs to do them for us. How can we transfer context to the machine? Can they adopt the values and instincts we’ve evolved over millennia? When you pair with a model, will it remember what it sees? Can you teach taste? Creativity? Learning to learn? This is the great pedagogical project of our time.
From AI 2027’s forecast for January 2027: “Copious amounts of synthetic data are produced, evaluated, and filtered for quality before being fed to Agent-2. On top of this, they pay billions of dollars for human laborers to record themselves solving long-horizon tasks.”
Our friendly hotel purveyor describes one such long-horizon task: “To replicate [chef] Hagai’s context, you’d need entire recipes, or maybe video of him preparing the foods; Toast sales data, or maybe video of the dining room; our hours; his calendar, featuring private events; communications among staff about what’s getting used for what; the CSVs for Baldor; the paper receipts for quick runs to Loeb’s; and maybe surveillance footage to capture exceptions.”
“There are no new ideas in AI, only new datasets.”
What makes a domain automatable? Training data, deployment ease, clear criteria for quality and reward. If the eval exists, the model can do it.
What makes a lab decide to master a domain? Enterprise demand, marketing splash, whether it’ll make potential hires say holy shit. (Coding, Studio Ghibli, high school math.)
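Concretely, an “eval” here is just a dataset of tasks plus a programmatic grader: if an attempt can be scored automatically, there’s a reward signal to train against. A minimal sketch in Python, where the task list, the model stub, and the exact-match grading are all hypothetical stand-ins rather than any lab’s actual harness:

```python
# A toy eval: tasks with checkable answers, plus an exact-match grader.
# Everything here (the tasks, the model stub) is a hypothetical stand-in.

def model(prompt: str) -> str:
    """Stand-in for a real model API call."""
    return "4" if "2 + 2" in prompt else "?"

TASKS = [
    {"prompt": "What is 2 + 2? Reply with just the number.", "answer": "4"},
    {"prompt": "What is 7 * 6? Reply with just the number.", "answer": "42"},
]

def run_eval(tasks: list[dict]) -> float:
    """Return the fraction of tasks the model answers exactly right."""
    passed = sum(model(t["prompt"]).strip() == t["answer"] for t in tasks)
    return passed / len(tasks)

if __name__ == "__main__":
    print(f"pass rate: {run_eval(TASKS):.0%}")  # 50% for this stub
```

The loop is trivial; the hard part is domains like the hotel kitchen above, where no answer field exists to match against.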
We all know the perils of teaching to the test.
Stuffing AI into human-shaped jobs still seems like fitting square pegs into round holes.
We’ve got to get the humans out of machine-shaped jobs.
No one’s destiny is locked in at 18. Societies should make a more serious bet on lifelong learning and continuing education.
Progress always comes with pain.
Both human and machine intelligence seem infinite to me.
Thanks for reading,
Jasmine