
🌻 infinite cornucopia (ft. mills baker)

literacy crisis, humans vs. LLMs, parenting after AGI

Dear reader,

Today’s podcast features the brilliant and singular Mills Baker. Formally, he’s the Head of Design at Substack, where we met, and also a fallibilist, New Orleanian, and OG blogger extraordinaire. We ended up rambling and interrupting each other for over two hours straight (70 minutes above our allotted time slot), and could’ve gone two hours more. Among other things, we discuss:

  • Is the “literacy crisis” real?

  • The case for reading novels

  • Why humans are not just fleshy LLMs

  • Parenting for a post-AGI world

  • Why Mills didn’t become a full-time writer

  • Girardian scapegoating

Watch/listen by clicking above or in the Substack app, or add it to Spotify, Apple Podcasts, or Pocket Casts. A full transcript and list of links are below.



Episode transcript

This transcript has been lightly edited for length & clarity.

Literacy in crisis

Jasmine Sun (00:00)

Today on the podcast I have Mills Baker, a friend and a mentor whose title Head of Design at Substack always seems exceptionally insufficient. Mills has written some of my favorite blog posts of all time, and he’s also saved me from blowing myself or others up at work multiple times.

I'm very excited to chat with him as a follow-up to last week's essay on AI, conversational interfaces, and oral vs. literate culture. Welcome!

My first question for you is, is text dead?

Mills Baker (00:37)

Is text dead? I don't think so. I wonder if this meme is occurring because we’ve never been more of a text supremacist culture than right now.

You can tell how influential the internet has been because of how frequently you encounter people who rapidly build arguments about nearly everything they encounter in the world. If they encounter a piece of music, they're immediately setting it against some cultural context or unfolding meta-narrative. People have never been more lawyerly or legalistic, and I associate that very much with text as a form. I think of oral cultures as being less indexed on building arguments and much more “vibe-oriented.” But everybody is constantly thinking in terms of posts, what they could argue, what kinds of comments it would attract, and how to position themselves. Maybe one of the worst things happening to the world is the degree to which we've gone hard in that direction.

So at this moment of huge text supremacy, people start writing these things like, Is text dead? Is reading dead? And it's because we see evidence that people can't read books anymore—but all anybody does is read!

When I was growing up, there was this popular bumper sticker that said “Kill your television.” People would bemoan that Americans watch five hours of TV a day and we need to get back to reading. I wonder if I could find those people in 1986 or whatever and say: I've got fantastic news. By 2025, almost every American will wake up and before they even look at their families, they're going to start reading text. They're going to be reading text all day on dozens of different platforms. It's all text, all the time. Are you excited?

They’d probably go, that sounds great. I bet everybody's really well informed. And I'd say that yeah, people have a lot of information. And they'd say, well, is the world better? And I would say no, it's dramatically worse. Everybody is fighting and anxious all the time about how they're going to be textually perceived, textually analyzed, and textually condemned. They have a chorus of commenters on their shoulders when they go through their day.

So I'm interested in whether this is a book-specific critique versus text generally. I don't feel like we're that oral a culture. I'd rather us be more oral.

Jasmine Sun (03:05)

I've heard both versions. There's the argument that people don't read long books anymore. But in addition to that, when people scroll on their phone now, they mostly scroll through shortform video and pictures.

My argument would be that text culture peaked in the mid-2010s, where we all used Twitter and Facebook, which were mostly text-based at the time. Over the last decade, we got into such big Twitter arguments and everyone was getting canceled for tiny little wording mishaps that we're returning to a less text-focused culture. The platforms most people scroll in the morning are not text-based platforms, right?

Mills Baker (03:53)

Yeah, TikTok, right? Although a large percentage of TikTok is still somehow entangled with text. It's somebody in front of an article.

Jasmine Sun (04:02)

I think that's a very small portion of TikTok.

Mills Baker (04:20)

Text is interesting because you can be talking about literal text, in the sense of alphanumeric characters. And then there's also the vibe of text.

An example I might use is that when you go to a loud bar or club, your opportunity to socially transact with other people through language is extremely constrained, right? Like you're not going to drop bon mots. You're not going to be a wit. You're not going to successfully persuade someone that you're cool based on your verbal dexterity.

Jasmine Sun (04:32)

You underestimate the kinds of things I do in loud bars.

Mills Baker (04:44)

Well, you and I are the kind of people who are yelling at bars. But when I was young, I remember going to bars and feeling like this is a bad environment for me because a lot of people are getting by on their look, their energy, how they make eye contact, their facial expressions, and even small things like their gestures, their ability to touch other people at appropriate moments, which I've never had and never will have.

I can't do it, I can't give it, I'm a total brain in a vat. So I think of myself as being extremely texty, even when it's not a question of written language. I don't use a lot of video platforms, but I still feel like when I see video culture, it's like that. It's all verbal. It's all linguistic.

When I think of oral culture, I think about New Orleans. And in New Orleans, whether you're popular largely depends on whether you can tell good stories and be funny, not as a function of how tightly coupled your arguments and facts are, or how well you've iterated from evidence to a devastating conclusion. All these lawyerly types of maneuvers are a little bit—well, I always felt like a huge dork in New Orleans and I still do.

There's a Cajun expression “Let's pass a good time.” What that means is spend time together in a way that’s loose and fluid and feels good. It doesn't have a lot to do with making progress on an issue or successfully exchanging information.

What I’m trying to say is that there are textier and less texty social cultures, and I still feel like the internet is a place for argument and positioning in basically textual ways.

But if the lament is there was a period where people read a lot and now they're not reading as much, it's hard for me to feel like that's a tragedy because I didn't think that chapter of the internet was all that productive or sane.

I went to Bard for three years. People were very, very smart there and argued very, very well. And most of them produced a lot of unhappiness through argument. They didn't arrive at anything that was personally or socially productive. They talked or wrote themselves into terrible little traps and corners.

Maybe it's because I'm mentally ill, but I think of text as being very dangerous just like all other media. I don't think it's naturally better than video. I no longer think that those bars are worse than an intellectual salon. The intellectual salons of 20th century Europe gave rise to unbelievable atrocities and horrors through well-reasoned text.

I read that piece on people not reading books. The first thought I had is: I'm old as hell, and before all of this stuff, people also didn't read books.

Jasmine Sun (07:44)

But book-reading is empirically declining.

Mills Baker (07:44)

I believe that. But reading has always been for a very small part of the population—real reading.

People used to get forced through books. That happened a lot. I watched my peers in high school, for example, get fucking frog-marched through long books, and they would in fact read them and through repetition become capable of writing a certain kind of essay.

But even the ones that were good were bad, right? Because it wasn't arising out of any kind of authentic engagement. They weren't living with these texts. Their brains weren't populated by these people. They were doing a thing that you had to do at the time, the way that maybe my kids will grow up having to be video performers and maybe they will or won't be natural video performers. You know what I mean? People can fake it.

Jasmine Sun (08:42)

Interesting. Maybe this is a generational difference, but your description of social media today as being “it's oral, but it's text culture”—I don't think that's true. The portion of Instagram and TikTok where it's someone explaining an idea or argument is really small.

There was an article written early in the US TikTok boom about how everything is “vibes” now. So many Instagram Reels and TikToks only have music. There's no text; it's either a montage or these Vine-style comedy videos of something physically funny happening.

Mills Baker (09:15)

I love those. Blessed content.

Jasmine Sun (09:30)

Aesthetics have become the content. For a lot of Gen Z, I think people got tired of the Twitter arena and arguing with each other and the unhappiness. So people reverted to vibes and aesthetics; that is a more oral culture thing.

You can argue that it's worse because now—well, maybe this was always true—when people choose political parties or nation-states to support, they're also doing so off vibes and aesthetics. The arguments have disappeared.

Mills Baker (10:04)

I'll give the strongest form of a position that isn't very deeply considered: I think that's fine, because a median person's sense of vibes and aesthetics is as reliable as their capacity to parse arguments and differentiate valid from invalid evidence.

I work at Substack; I'm drowning in text all the time. I've been drowning in text since the 90s and probably since the 80s. I come from a text-oriented household. So my main familiarity with human fallibility is in reasoning about texts and evidence and arguments. I don't think most of us are very good at it. It's not a skill that I think is broadly distributed. I don't consider myself good at it. As you know, I have tremendous attention span problems and like everybody, fit everything to what I want to feel.

Jasmine Sun (10:37)

Actually, I feel like the problem that both of us have is that we are so good at reasoning that we can reason ourselves into anything.

Mills Baker (11:06)

Truly. Somewhere on this shelf, there's a book by a man named Leszek Kolakowski who has this argument he calls the “law of the infinite cornucopia,” which is that there are an infinite number of rational arguments you can adduce for any position.

People really don’t like this. One of my hottest takes is that everybody is an objectivist, even though most educated people know they're not supposed to be. But disagree with them about virtually any one of their beliefs, and they're like, no, there's right and there's wrong and the evidence only leads in one direction. And you're like, so which part of you is a pluralist? Which part of you is not an objectivist?

In any event, I think that's true. I've seen that failure mode so often that I'm not worried at all if people are getting more vibes- and aesthetic-based. It's a reverse revenge of the nerds. I won't say revenge of the jocks, but for a long time, people who have been able to argue have had outsized influence in our culture.

Because a major field of my study was 20th century Russia and Eastern Europe, I'm quite sold on the idea that smart people using reason can do tremendous fucking damage, and can never become aware of the real human cost of their damage, because they have an infinite cornucopia to pull from for why it's okay in this case to substitute the will of a small group for a large group, why it's okay to force through an accelerationist historical process that they scientifically understand. So I'm no great lover of intellectuals as a general class. Perhaps I'm a self-hating intellectual, but for a long time, we've been a little bit too dominant.

I used to work at Home Depot and the leaders there were not intellectuals. I remember feeling enraged by how much of what they did was based on personality or charisma or vibe, or dynamics that arise from masculine behavior that I am really not good with. In retrospect, I think those guys were alright. They were doing fine. I don't think that it should have been run by people like me. I've seen very smart startups do incredibly insane things. If anything, I feel some degree of equanimity about it.

You also mentioned the montage thing. I do think my generation in the 80s was poisoned by montages. Some of the happiest moments of my adolescence were being drunk with other drunk teens, smoking cigarettes, and driving around in a car listening to music. Why were those moments so happy? It's because they conformed to the mimetic structure we'd seen in movies. What it felt like when you were doing that was, I am living a sublime moment. Well, why did that feel like a sublime moment? It's because we don't know what sublimity is. We're fucking teenagers. We only know what we've seen represented as the sublime. When it's Winona Ryder and Johnny Depp doing it on the big screen, you go, well, that must be what living's all about. And I've grown up my whole life thinking that montages are the highest possible moments.

There's a great reality to life, which is that it cannot be composed of dense beauty. Life has lots of interregnums. Life has all kinds of boring stuff. Aristotle makes a joke about this, that drama is about removing the stuffing from life. So if you grow up in a mimetic culture that indexes highly on visual media, there's no stuffing, right? And all you're left with are beautiful moments. That's a risk of non-textual culture: you'll think it's all about beautiful moments and you'll be incapable of making good decisions. You'll always be seeking ecstasy. This is a real disease that young people have, always wanting to be outside of themselves in a moment of joy and mimetic conformity to a montage.

So there are risks, and I don't think anybody's gonna have it any better or worse. But I don't just flatly think oral-visual culture is bad. I'm not like, I wish everybody were reading Twitter beefs again. That wasn't all that hot either, you know.

Jasmine Sun (15:13)

There's an article in The New Atlantis where the author says that Twitter beefs are the worst parts of both textual and oral culture combined. Oral culture was highly agonistic; it had very short memory spans and was all about mnemonics and heavy figures, which Twitter is. But Twitter also preserves everything forever and allows you to get into these lengthy semantic debates.

In comparison, the Instagram micro-aesthetics and the montages feel like a purer form of oral culture. Same with the rise of live video that's not preserved forever: you say shit and then it goes away. That feels like a purer form.

I want to push back on one more thing that surprises me: that you don’t care if we're regressing to oral culture. I certainly buy that people overrate text-based reasoning as a way to achieve truth, and that you can use text-based reasoning to justify crazy things. This was one of my big lessons from competitive debate: I can literally out-debate you on whatever position you give me, and the truth does not matter.

At the same time, an argument I liked when reading Orality and Literacy was that text-based cultures gave rise to science and falsifiability—the ability to separate a text or idea from the person who spoke it, because in oral culture it's always attached to the context. And once you have print, you can contemplate an idea on its own, which allows people to become contrarian, to critique, to not group-think. It produces all of these great scientific values like falsifiability and error correction.

You love David Deutsch and Karl Popper. How do those philosophies function in a society without text-based culture?

Mills Baker (17:16)

The element of text that makes it work for that is that it is exportable and transmissible. I'm not convinced that this would be substantially different in a culture that has videos.

If you said we were going back to an ancient oral culture, I think there would be a lot of loss. What I imagine actually happening is a thermostatic adjustment back to a mix of oral and non-oral cultures, textual and non-textual cultures, all coexisting and contending probably in roughly self-sorting proportions based on who likes what.

You and I are doing a video podcast, but we're probably going to stay pretty texty until we're dead.

Jasmine Sun (18:03)

We both talk and think in paragraphs. I was talking with a friend about this last week. He was working on a piece of writing, and he was like, I'm finding prose challenging because I think in speeches and presentations. Whereas even when I talk, it sounds like a paragraph.

Mills Baker (18:20)

Yeah, I don't think the infection's going away.

The important thing about text is that when someone writes a text, let's say about a scientific theory, it can circulate around and other people can analyze it and talk about it. It's an artifact that you can organize information reactivity around, and you can build trees of responses, and those can get iterated on, and all of that. And I think that's still true for video.

Imagine a world where a scientist comes up with a theory of everything that unifies relativity and quantum mechanics, but presents it as a video. And the video goes out into the world. I don't think you'd see major problems with people's ability to critique it, build on it, use it to devise experimental regimes and so on, just because it was a video. I think there are efficiency questions, but the real hallmark of literate culture as it existed from Gutenberg on is really transmissibility.

When you were talking about oral cultures and everything disappearing, I thought of Homer and his poems, which endured for a long period of time. But the reason you and I know them is that they were turned into text. Because videos can be sent and shared and posted and translated and turned into transcripts, I think we're moving towards an era where medium is less important in general. The computational costs of translating one to another are getting pretty low.

Jasmine Sun (19:50)

Yeah, totally. This is my interest in the intersection of oral vs. literate culture and the AI stuff. AI is non-deterministic, right? The more that we use AI as an intermediary between the original text and our understanding of it, the more that there is no longer a single reproducible artifact that everyone receives in the same form. That’s definitely on the decline.

I can imagine a world where we each have a browser extension that understands my reading level and proficiency and favorite format, so when you load up The New York Times, instead of getting the same 1000-word news article, maybe one person gets the article as a talking head TikTok, one person gets a slide deck, and another person gets a 3000-word news article. AI is doing the automatic translation. But what happens as it's translating between formats is that we don't have a shared text anymore.

Mills Baker (20:47)

Yes, that's right. I think this is totally true, and I wonder whether it'll be a problem. It reminds me of something I was just reading. C.S. Lewis was writing about George MacDonald, who gets referred to as the father of modern fantasy. Lewis talks about how MacDonald is not, in fact, a good writer at a prose level. He says that MacDonald is not writing novels. He's writing myths.

There's a thing about myths, which is that if you love a myth, you almost never love a specific version of a myth. It's very rare that someone goes, I really find the myth of Orestes captivating, but only in Stevenson's telling.

A myth is very platonic: there is something that everybody refers to that has no specific instantiation of particular consequence. The way I used to talk about Platonism in school is chairs. When I say “chair,” you know what a chair is, even if you and I don't have the same chairs. All kinds of things are at the margins of what we agree is a chair, yet we understand what a chair is, and that informs the so-called platonic chair.

Because I am a transcendentalist and believe in the reality of the Platonic realm, it's almost like what you're describing with AI is pushing things in that direction, right? Like you'll say, hey Mills, did you catch what Sam Altman said? And I'll say yeah. But what I'm talking about and what you're talking about is not any specific instantiation.

This already happens to an extent. We take such different things from text and build interpretations differently as we read. Two people can read the same text and completely disagree on the emphasis. So there's a germ of this today. It'll probably get more dramatic, and I guess it could get to a breaking point. But what you're describing is extremely real. I'm not nervous about it, but I'm interested in it. I'm excited for everything to be really, really personalized.

One thing I should mention is the decline of reading books. I'm not concerned, but I think reading novels in particular is one of the best things you can do to increase your chances of happiness on earth. So it's a shame if more people aren't getting the opportunity to learn what a novel can teach you, and what kinds of errors novels can help you avoid—particularly errors of personal development and personal reaction. I think of novels as having been the great gift of my life.

On the other hand, I tend to trust that people find the things they need to get by. So if my belief is true that novels are enormously helpful for people, they'll find their way to them at some point in some form.

If I had to grade how concerned I am about all of this, I'd say very little concern about general book-reading, slight concern about less novel-reading, and very little concern about AI translation—with one last wrinkle: AIs do tend to genericize. I don't know how inescapable that is. If all the translational steps that occur are also genericizing what's happening, you'll see convergence on known patterns in a way that could be bad for anything particular or interesting. But that also happens in our minds. I hear things and bring them back to the patterns I know. I knew a high school teacher who used to say, “You don't know what you like, you like what you know.”

Jasmine Sun (25:03)

I do see the great benefits of AI translation. In undergrad, I was interested in what technology could actually do to improve education, which is, frankly, not that much. But one application I liked was an edtech startup called Newsela. They would take news articles or history textbooks and adjust them to everyone's reading level. So a group of students who are at dramatically different reading levels could actually have a shared conversation about history or social science, which seems really good. It seems important that someone who is ESL can engage in conversation with someone at a higher level instead of segregating everyone out into little groups and never having them speak to each other.

The case for reading novels

Jasmine Sun (26:00)

I’m curious: Can you make the case for the novel?

Mills Baker (26:07)

Good novels successfully bring you inside the minds of others that can be quite different from your own, and allow you to perceive the sameness within those differences and also the degree to which you yourself might be different. This touches on that post I wrote about art and objectivity. Novels are like virtualization machines for the inside of psychology.

In my case, a novel persuaded me that it was a dead end to try to be right all the time. That's not an argument that I actually know how to make apart from living it. To quote The Big Lebowski, it's frequently the case that someone isn't wrong, but they are an asshole. When I was young, that wouldn't have made sense to me. I thought that if you were right, you were pursuing truth, so the costs were what they were.

The novel gave me a sense of the primacy of interpersonal psychology and relations in human affairs. I don't think I could have gotten that without Kundera's novels that are full of people who believe themselves to be animated by ideas and values, but who are living out patterns, archetypes, and psychological tendencies that are self-destructive or destructive to others. Because going back to the law of the infinite cornucopia, they have every reason for what they're doing. Yet they’re not being reasonable at all. It shows you how illusory a sense of reason can be.

The novel, as Kundera describes it, goes into this terrain where moral judgment is often suspended. One of the most common experiences people have with novels is identifying with a bad character—a character who does things they would never do, that they consider completely immoral, and yet loving the character. As people often say, “I couldn't help but root for them.”

This is a bit of a scandal to them, because so much of our culture, particularly in America, says that good people are good, bad people are bad; bad people should be destroyed and there's nothing good about them, and good people should be elevated and everything is good about them. Novels problematize this directly, because most good novels, if they feature a real enough and broad enough portrait of psychology, hit you with the fact that nobody's really quite good and nobody's really quite bad.

Bad people can be very good and very relatable, and if you really understand someone, you often love them. That, as an axiom for me, has been hugely important. When I'm angry at someone, I probably don't understand them. And if I did understand them, I would probably—like when I read these crazy fucking novels with anti-heroes—love that person. I even want them to be successful in their horrible intentions, because I now see the fullness of their human presence or their being.

So novels are like little VR machines for experiencing different psychologies. They're so varied and have so many different characters that they can take you all over the place.

I do think it's interesting that they're text-based. I'm sure movies can do this too, but there's something about text, at least for me, where it just seems like the code of my brain. What I read in text gets implemented in my brain in this very full way, whereas when I watch movies, there are all kinds of ways for me to partialize or falsify what's happening.

Jasmine Sun (29:45)

Yeah, I saw a tweet that was like, “Reminder that novels invented the internal monologue.” That’s what you get from novels that movies find relatively hard to achieve. You can see people's facial expressions and what they say, but the thing that a novel does is it lets you live inside someone's head. It conveys that people's actions and what you see on the outside is one thing, but the thing going on inside can be totally different. That’s the sort of empathy that you’re describing, and it does seem somewhat unique to the novel.

I’m thinking of the book Rejection by Tony Tulathimutte. Have you read it?

Mills Baker (30:41)

I haven't.

Jasmine Sun (31:00)

At one point we talked about his short story “The Feminist,” which is about an incel. All the short stories in this book are like that: they're about incels and people who are rejected, who experience and live in abject shame, and who often go off the deep end in these deeply embarrassing, disgusting ways.

What I think Tulathimutte does really well—and the reason that his book has been so popular—is that on one hand, it's satirical, it's exaggerated, it's hyperbolic. You read it and know that this is a scum-of-the-earth incel who you would never associate with or become. On the other, there are these tiny moments in the characters’ monologues that you recognize in yourself. And everyone starts to see a bit of that incel in themselves. We all experience many small forms of rejection every day, but suppress them because we don't want to seem like a loser. But when you live in the heads of these eternal losers, you're like, shit, like there is a little bit of eternal loser in all of us.

There's this line from one of his stories in The Paris Review. It's called “Ahegao,” like that face that anime girls make in hentai. It's a really vulgar story. But in addition to this guy's massive sexual dysfunction that is the centerpiece of the plot, one of the most memorable bits for me is where he's having a conversation with someone who he perceives as cooler and more well-adjusted. There's a line that says, “He was active listening at all the wrong times.” As soon as you read it, everyone can remember a conversation where the other person is active listening at all the wrong times, nodding and uh-huh-ing totally wrong, and it just sinks in that this person isn't listening to me at all, they don't care about me at all.

I think that's why the book has been so resonant, and it exemplifies the internal monologue thing that you mentioned.

Mills Baker (32:23)

Incels are such a good category for this. I often feel like society waits for novelists to figure out how to make intelligible the psychology of all the new categories of defective person that conditions of history catalyze. Until you read something like that, you probably think that what's needed for an incel is for someone to sit down and say “It's bad how you're thinking and feeling,” and then they'll just go, “Okay, what should I think and feel?” And you'll send them a couple of blog posts and they'll be on their way.

What you don't know is how certain experiences almost mechanistically push your mind in some directions. What a novel shows is that if I had these experiences, I might walk down those paths myself and arrive at those destinations myself, which is dark and discomfiting, but is also extremely important. I'm not a complete relativist; for example, being an incel in the sense we're talking about, where you are hostile to whole populations of people or orient yourself only towards grievance—I think that's extremely bad and mistaken.

At the same time, if novels are number one on my what-helps-me-in-life list, number two is that I've changed schools quite a bit. I had an experience at one school where I was a big loser, and the amount of bitterness and rage and hostility and othering that this engendered in me was incalculable. I can't tell you how fucked up and loathsome a person I became, and how readily I embraced all of the darkest things that these cultures often have. And then I changed schools again and had the opposite experience.

I think that's a real gift. The best thing that can happen to a person is that they get to really be losers and really be winners, so that they learn how evil you can be from both positions, how good you can be from both positions, and how relatable people in both categories are. And ultimately, hopefully, they see through the whole thing—see through power, desire, get on the other side of it.

But a lot of people never get to be in one or the other. And I think it's a loss in both directions. If you've only won, the world probably looks very confusing and mysterious to you, and your dominant read is probably, God, there are so many bad and stupid people on Earth. And if you've only lost, of course you're full of pain and bitterness and the rage of humiliated peoples. Everybody knows through world history what happens to humiliated peoples. And it's usually not that they dig deep and find a way to be good.

I gotta check that collection out. That sounds incredible.

Jasmine Sun (35:17)

Yeah, there's a lot in there. And there's a lot where the rejected and the rejector are one person. There's a femcel in the book, and the femcel gets rejected but also spends a good amount of the story rejecting other people for being less than her. So you do see the both-sides situation.

If you read it, we should talk about it. I did a book club with friends about it, which was very interesting. It’s polarizing.

Mills Baker (35:37)

Alright, I'll get it right after this.

“Knowing” and its discontents

Jasmine Sun (35:40)

I wanted to ask you about a note that you wrote recently. Very interesting, very mysterious. Frankly, I don't know what it means, so I wanted to ask you about it on this podcast.

Mills Baker (36:06)

I think I wrote a reply to that note where I was like, “I'm not actually high because I quit smoking weed last year, but that is such a high thing to say.”

I'll start at the beginning. We had a Super Bowl party, and my daughter Kizzy decided to amuse some other kids by slapping slime in her hair. It rapidly crusted and became a massive chemical problem-solving ordeal for me and the whole family.

It reminded me of the television show that was on when I was a child, called “You Can't Do That on Television.” It was some kind of bizarre Canadian sketch comedy show for kids. On this show, whenever someone said the phrase “I don't know,” slime would fall from above them onto them. Initially this slime literally contained old food, so it was a disgusting and unpleasant experience for the performers. Eventually the performers complained so much that they updated it to just gelatin.

But when I was growing up, I regularly watched a television show where if you said the phrase “I don't know,” they dropped green slime on you. And I thought, what a strange little cultural practice that is to denote to children that not knowing something is an occasion for literal physical ridicule. Nothing could be worse than not knowing something.

I think a lot about this phenomenon of knowingness. It's a real defect I have, where I'm always acting like I know more than I do. I have a deep sense of shame about being caught not knowing. And I'm always angling and positioning my personality and myself to be knowing. Knowingness is not a good quality; it's not something you should be cultivating. Knowledge is good; knowingness, though, the vibe of never being on your back foot, never being unaware, instantly generating interpretations… I think all of that is related to ego problems and insecurity issues.

Anyway, isn't an LLM just the best example of all time of a technology optimized around never not knowing? When you ask an LLM a question, it doesn't know anything, but it instantly improvises, at florid length and in florid detail, an answer that may or may not be correct. It'll produce things that are hallucinated. It'll get wrong whatever it needs to get wrong to predict the next token successfully. And I just thought, it's almost like LLMs grew up watching “You Can't Do That on Television.” They always act knowing.

I had this thought that LLMs were called into existence by this cultural meme that you should never not know. If you grow up in a given technology era, it incentivizes and punishes certain things. The next era's technology is often invented by people who grew up in the prior era. And the performers of the next era grew up in the prior era.

There's a joke or a comment people often make about generals, which is that they're always fighting the previous war. They spend all their time studying what went well and what went wrong in the previous war, then the new war happens and they're like, we got it. But that shit doesn't work in the new war. And so they eat shit like they always do, and people die.

In a way, we're always fighting the last culture war, or optimizing for the last cultural moment. LLMs seemed like a response to the world of television. There's a show called “It's Always Sunny in Philadelphia,” and they're recording a podcast at one point. Danny DeVito notes when there's not enough talking. He goes, “Dead air!” Because everybody knows dead air is bad in a televisual or audio medium. You've got to always be going. You can never pause. You have to immediately go, “Well, I think that's an interesting question, and one of the things that calls to mind for me…” Knowingness, selling knowingness, being knowing. That's what people look for.

Then, I started thinking, what was the past version of this? Radio is a medium where you tune in to listen to humans talk. And so much of television is watching a human sitting and talking. That's kind of odd. If you were to imagine from nothing what a television medium would be like, you'd assume it would be very beautiful and very visually rich. But it's not. 90 percent of television is someone sitting at a desk talking, and they call those people talking heads. A talking head seems to me like what's been optimized for people who grew up in radio. And an LLM is optimized for people who grew up with television: never dead air, never a lack of understanding, never an inability to generate an image, always capable of throwing a pastiche together, keeping it flowing, keeping it moving.

I don’t want to exaggerate the degree to which I thought about any of this. You could say the LLM is just as good of a response to radio. All of it breaks down on serious examination. That's why it made me think of academics, because when I used to read Derrida, I would often think: this sentence sounds kinda true, but if I ask one follow-up question, the whole thing is exposed as just some French guy riffing.

Jasmine Sun (41:30)

I can even feel myself wanting you to be more knowing right now. Because you hedge every answer as much as you respond to it, and I need to edit out your hedges because it's not good listening for a podcast.

Mills Baker (41:41)

That's right. People don't want the hedging. They just want you to say, let me tell you how it is.

Jasmine Sun (41:50)

Yeah, and I'm always thinking of the product output and what I can clip into two minutes or less. The hedging is not very clippable because I need a question and an answer. If it's question, hedge, answer, hedge, that makes it not fit into the two minutes.

Mills Baker (42:03)

The type that would be called into existence by this element of our medium is the aphorist. I often look forward to the return of the aphorist.

Jasmine Sun (42:16)

Yes, absolutely. The other thing that I do is—I don't know if I'm gonna post this, it’s shameful to admit—when I write Substack posts, I write the aphorism that I want people to quote-restack or screenshot as I'm writing the posts. In the revision process, I’ll check if there are enough quotable bits, because I know that the way that you get distribution as a writer is by getting quoted in little aphoristic snippets. Ideally it's broken out into its own paragraph, and it's short. Of course there are many amazing pieces of writing that don't fit into that length.

So there is this game that you have to play these days—I mean, you don't have to, but I'm not a purist—that I play these days as a “creator,” where I am translating into the future mediums, and that impacts the way that I execute this one.

Mills Baker (43:10)

I used to do that on Tumblr. Then I caught myself doing it and didn't like it because I aspire to be a purist.

In general, my writing has far less hedging because I strip it out. I used to write my whole thing out, then go back and remove everything that I felt wasn't strictly necessary. I felt like it's almost disrespectful to the reader to hedge. For example, if you say “I could be right or wrong”: well, they're humans on earth. They know you can be right or wrong. They don't need you to tell them. If anything, it's almost cocky to say: hey, unlike my normal shit, this following sentence could be right or wrong.

I still do it verbally and I'm not sure why. I think it's because in conversation, I always want to make sure that people I'm talking to know that they can say, “I think that's totally wrong” and I won't be upset about it.

Jasmine Sun (44:06)

I learned this from you: the “I'm really stupid” sandwich. I say, “Well I'm definitely wrong about this and I won't die on this hill, but here’s my opinion. Anyway, it's not that important to me and feel free to tell me I'm wrong.”

Mills Baker (44:18)

It's really good for people like us because in my head, I think of myself as an incredibly meek and obviously insecure person who couldn't possibly be wedded to anything I think or say. But I've learned through research that I come off like one of the proudest and most confident people on earth—completely hard-charging, and really aggressive. So I had to start doing this.

I'm trying to bring my inner self into alignment with how I'm perceived. I want people to know that if they go, “I think that's totally wrong,” I won't be like, “You idiot!” I'll be like, “It certainly fucking could be. I've been wrong a lot.” And that's true. But in writing, I feel less pressure to do that because they can close the tab.

Jasmine Sun (45:06)

LLMs should do this! I mean, it wouldn't work as a product, but it makes me think that they should say “I hallucinate all the time, I don't know anything, and you will never get me to say I don't know. Anyway, here's my answer. But by the way, I'm a next-token predictor, so I don’t actually know.”

Mills Baker (45:20)

I would love that. Maybe someday Claude will do that. I don't feel like OpenAI is going in that direction. But like you said, it's not what people are paying for.

Jasmine Sun (45:34)

I don't think any of them will. Everyone wants the product that you can always call on. With LLMs, I often just want something to respond to me and give me an answer even if it is not a correct or a good answer. This is a very human impulse: I just want it to react. Sometimes when I feel really blocked on writing or blocked on an idea, I’ll ask an LLM to draft it or to tell me what it thinks. Almost always, I'll be like: this is stupid and bad. I'll go write it myself. But just getting a response unblocks me through that process.

Mills Baker (46:12)

Yeah, I do this with dream analysis with Claude all the time. Claude's okay at dream analysis, but just as frequently, I'll come up with my own.

It's funny because I write so much contra LLMs or contra theories of intelligence that LLMs engender that I think people think I'm pretty anti-LLM. And I fucking love them! I have Claude and ChatGPT in my dock on my home screen. I use them all the time, but I just don't agree with the interpretation or the context in which they're placed.

Why LLMs are not like humans

Jasmine Sun (46:55)

Do you want to get into that disagreement?

Mills Baker (46:56)

I really dislike the degree to which people think that the vibe of a smart person is to always reduce the degree to which human beings are special.

Someone said recently, “I'm always trying to tell my kids we're just mammals.” And yes, we are technically mammals. (Mammal being a relatively superficial taxonomic description of certain attributes we have.) But in a much more important way, a much more highly consequential way, to say that we're just mammals is absurd. If you think that aliens are gonna come to Earth and the first thing they're gonna say is, “You guys are just like those squirrels”… In innumerable ways, we are very different from squirrels. In fact, we're different from all existing mammals.

LLMs have created a sense in a lot of people that we are just LLMs. The other day, somebody in my replies said, “The human mind works just like an LLM.” Okay, that's completely absurd. Here's one very important way in which a human mind is not like an LLM: We can create LLMs because we develop theories of knowledge that orient us towards changing the world, and we build technologies like LLMs. So far, LLMs do not build anything or do anything on their own or have any will of their own.

LLMs also don't have qualia. LLMs have no sense of experience. You can ask them and they'll be very candid about it. One of the most important things I remember reading was Schrodinger talking about qualia. Schrodinger makes a big deal about the color yellow. You've seen yellow, I've seen yellow, we both know what that is. Our experience of the color is not in the photons, and it's not in the surfaces that they reflect off of. The surfaces they reflect off of are almost entirely empty latticeworks of vibrating particles. Photons themselves have no mass, have no color, and are invisible. It's not in the wavelength. The distance between two invisible particles cannot create color. This is all empty, vibrating energy and matter. The color yellow exists only in your mind as an experience that comes when a certain wavelength of light hits your eye and gets translated along the optic nerve. LLMs don't have any of these things.

So enduring a world where people say we're just mammals and we're just prediction machines aggravates me. First off, I think there's no way that they arrive at this point of view from any kind of reasoning. They arrive at this point of view from a vibe of what smart people think. The vibe of what smart people think for the last several decades or centuries has been chipping away at human exceptionalism and suggesting that there's nothing special about humans, when in fact, we behave and operate completely differently from any known phenomenon in the entire fucking universe. We have no theories for the origin of life or for how the human mind operates; nobody has any way of tying the human mind to neural impulses or neurons or chemistry or electricity in the brain. We know that these things are coterminous or coexistent, we know that they depend on one another, but there are no good explanations. Yet I have to live in this world where people act like I'm the mystic for acknowledging this.

I think this drives an enormous amount of nihilistic depression in the world. We have to walk around going, “Man, in a certain sense, we're just ChatGPT ourselves. We're just trying to predict what comes next ourselves.” Yes, prediction is a major part of how the human mind works. But there are also obviously enormous other components of the mind's operation. I just mentioned the two that spring to mind: qualia and will. And until I see some explanation for what those things are, I find it exhausting.

That's probably the single biggest departure point for me from LLM discourse. It’s not about how good they are or how many jobs they'll take. I don't know how to speculate about those things. I find LLMs totally awesome and totally ridiculous and totally unreliable. I wouldn't let one babysit my dog, so I'm not extremely worried that they're gonna take over the universe. I don't think that they, on their own, are a path to AGI.

You know that tweet that was going around asking how come LLMs haven't come up with any new science?

There was one reply that seemed really good to me, and it was about loss minimization—the possibility that minimizing one kind of loss might actually make it impossible for them to do qualitative leaps, at least right now.

In any event, I fed both of these tweets to Deep Research and I said: can you do like a 10-page paper on the state of the art, covering what the smartest and most involved people think about why this is true and what it means? And Deep Research itself concluded that current technology isn't going to get us to the next step, and that more fundamental advances are required if we want AI software to do novel scientific discovery. It was like, I think I can be super useful for all kinds of scientific processes. But no, LLMs will not generate a new theory of everything.

I'm constantly on my back foot defending this. People act like I'm some kind of Luddite. But LLMs themselves are quite candid about their limitations.

Jasmine Sun (52:32)

I'm not very good at arguing about AI yet, but I'm actually trying to get better, so this is good practice.

Mills Baker (52:38)

Let's try.

Jasmine Sun (52:42)

My primary disagreement with the very last thing you said is that I don't think you can trust what an LLM says about itself. People use this as a gotcha, but the LLM doesn't think. That's sort of self-contradictory.

Mills Baker (52:48)

Yeah, well, the contradiction is validating for my main argument. So I completely agree.

Jasmine Sun (53:00)

The other thing is that the history of deep learning suggests—and this is the “bitter lesson” paper that everybody loves to talk about—that people keep saying we need a fundamentally new way to do learning and this isn't going to get us anywhere. Then they find out that you actually just need to make the model bigger. And once you make the model bigger, all these things that we previously said that we needed fundamentally new methods to do actually start working.

On some of your qualia stuff, I'm steelmanning an argument that I don't know if I believe. The thing that people say is beautiful about LLMs is that there's so much you can extrapolate from text that is not text, right? The same way that a blind person who has never seen yellow would begin to understand a bunch of things about the color from talking to people and maybe listening to audiobooks: the connotations, what kinds of things are yellow, what kinds of things are not. When you train on enough text datasets, LLMs begin to understand the relationships of things in the physical world without ever having seen them. I know that's different from qualia, but it seems to me that you do not need to experience the world in order to model things beyond the training data.

Mills Baker (54:15)

I totally agree with both of these points. So I don't think LLMs are reliable narrators about LLMs, and I do think it's silly that we use it as a gotcha. I actually think Deep Research did good research.

What I would say about the qualia thing is that I don't make any representations about what's needed. I don't know what's needed because we don't have a theory for how the human mind works.

If you asked me 10 years ago, “Are qualia and will necessary to write well?” I might have said yes. And it turns out I'm wrong. I could be wrong in the future as well: maybe they'll be able to generate novel scientific insights without additional foundational leaps. I also don't make predictions about the future from the past because scaling laws are one of these things that are true until they're not. And I think there's no way to know.

Basically, I'm sold on the Humean opposition to induction. I basically think induction is invalid. Scaling laws tend to be inductively derived; they're never explanatorily derived, right? And because I'm an explanations junkie like Deutsch, induction doesn't mean anything to me. The sun is going to continue to rise for my entire lifetime, but if you bet that the sun will always rise, then some billions of years from now you'd be standing on an earth that gets swallowed by the sun as it turns into a new type of star and then collapses in on itself. And that's the end of the sun rising. Maybe that'll happen on a similar time scale with scaling laws.

But I don't make the claim that LLMs need qualia or will to do anything in particular. The claim I make is they are not like us. They do not teach us how humans work and they do not invalidate human specialness. What I really find risible is the idea that LLMs being good is more evidence that we're actually a non-special phenomenon. I don't think they tell us really anything beyond what you described, which is that very large datasets of text contain embedded all kinds of knowledge that LLMs then can transact upon very effectively merely through token prediction—which is fucking awesome, by the way. It’s a delightful and fruitful thing.

But I don't think it tells us that much about ourselves, and I don't feel like we are LLMs, which is ironic because as Chris pointed out, I am very LLM-like.

Jasmine Sun (56:23)

Haha yeah, I do think that you personally are quite LLM-like because you always have an answer in the sense that we talked about.

I'm doing a background project, which is that I'm trying to AGI-pill myself. By default I share a lot of your instincts, but I have begun trying to talk to people who feel very strongly about AGI and ASI and that we're on this crazy trajectory. I'm trying to take those ideas more seriously to see whether I agree or disagree.

I’m noticing that I don't think that a lot of AGI-pilled people argue that LLMs are like humans. I think most of them make the argument that LLMs are like a different species that can be smarter or more powerful than humans, but that is unlike humans, which I agree with.

I don't think LLMs have will. Even the risks that people talk about with agents are because humans attempt to give the LLM will, and then we specify poorly so something bad happens. But most people don't argue that humans are just like LLMs.

Mills Baker (57:13)

Yeah, agree.

Jasmine Sun (57:18)

I was also going to agree with the part about loss. What I realized about DeepSeek R1 is that it both hallucinates more and it is a better creative writer, and these things feel very related to me. The fact that it gets more stuff wrong also makes it much more creative, makes it create metaphors that I wouldn't have thought of. I think we would need to accept a higher degree of hallucination and crazy shit that doesn't make any sense if we want to start getting scientific advances that are actually novel. I mean, the most extreme humans are often very right about some things and very, very wrong about others.

Finally, I've been trying to list characteristics that make someone very AGI-pilled. One is the foundational belief that humans are just information systems. Almost everyone I know who's very AGI-pilled at one point says, “Because humans are just information systems, right?” And the second I've noticed is that they're all very big induction people. They're big believers in extrapolation and induction. I'll ask, “When did you start believing in AGI?” And they'll say, “10 years ago.” And I'll ask what the catalyst was, and they'll say something like, I learned about deep learning and then extrapolated. But they've also been proven right for the last 10 years, so I don't think I share as strong a distaste for induction as you do. But I agree that these two foundational beliefs are extremely common among the AGI-pilled and not so common among normal people.

Normal people, I actually think, are in your camp, which argues that humans are irreducibly special and can never be like a machine. In fact, humans tend to move the goalposts a lot to prove this. In the book God, Human, Animal, Machine, one of the things the author talks about is how humans used to draw the boundary lines between themselves and animals via intelligence. Intelligence was the main way we would say we're smarter than other beings. We're not like animals because we can make fire and we can think; the dog can't think. But now that we have AI, humans are redrawing the lines and saying it's not about intelligence at all; it's actually that we have qualia, we have will, we can feel. And insofar as other things encroach on humans, we will just draw new lines.

Mills Baker (59:32)

Well look, qualia and will are features that humans have. I don't think that's what makes us special. With respect to LLMs, what makes us special is that we fucking made LLMs.

Jasmine Sun (59:39)

But the AGI-pilled believe LLMs will make more LLMs.

Mills Baker (59:50)

Great! Where they will have learned to do that is from us, from our texts, and from the knowledge that we developed. There's no going back in time and saying that we're not special relative to the natural universe just because LLMs exist, when LLMs would not exist without us.

Jasmine Sun (1:00:06)

But two things can be special, right? It's possible to believe that humans can be special and that LLMs can produce new LLMs, by training on human text and knowledge, and then becoming their own special thing.

Mills Baker (1:00:14)

I absolutely think many things are special. I think dogs are kind of special. I think plants are special. Flowers, you know?

But really this has nothing to do with AI; it has more to do with my particular philosophical and religious preoccupations. I spent a lot of my life with inherited nihilism from a zeitgeist that's obsessed with the idea that humans aren't shit. A culture that believes that when humans think of themselves as special, it's always an ass-showing error. And that the real attitude of an intellectual should always be like, Love is just biochemistry. You don't like art, you just like dopamine and serotonin. That is not how dopamine and serotonin work. That is not what's happening when humans behold something beautiful. There is no such thing as a chemical imbalance. There is no brain scan that can show you what happened.

Not only is chemistry not realer than love, but we, the creatures of love, the creatures largely created through love and capable of love, are the ones who fucking discovered chemistry. It all flows from us. There isn't some naturalistic world that we can be reduced to. The reduction of ourselves to the naturalistic world is something that only we can do. My dog never says, I'm really just chemical scum. Never. Even that maneuver is so human.

I guess I'm in a phase where I really want humans to understand themselves as being more special, more capable, more unique, and more worthy of at least a sense of mystery, if not a sense of admiration, than they tend to think they are.

Parenting for a post-AGI world

Jasmine Sun (1:01:55)

Do you think that AI progress will change the way that you parent Kizzy and Raines?

Mills Baker (1:02:05)

I already use it sometimes. I have Claude write short stories for me all the time. I could be writing those myself; I would feel better if I did. My mother would have written them for me. My father used to make up stories for me. But he was temporally bound; often he was exhausted and he couldn't come up with a story at the end of the day, so no new story for me. Very regularly, I have Claude write. I give it this complicated prompt where I say to write in the style of George MacDonald, write as though you find the world fundamentally good, and write from the point of view of an honor culture attitude towards how humans should act. Then write a story with the following elements, and I'll take things that I know Kizzy is interested in, or currently finds worthy of attention, and I'll have it generate a little short story, and I'll read it to her.

I use AI for all kinds of other things. Kizzy broke her arm a week ago, and she's been a more difficult kid. She's sassier and brassier and asserts her independence in more ways. She’s a little bit more foolish. I asked the AI about it. And it explained that sometimes when kids injure themselves, the sense of vulnerability it gives them is something they want to counteract. They're protean, they're in an early stage of development, so rather than going, gosh, I feel more vulnerable, they start acting in a way to assert more will. So when you say, “Put this on,” they say, “I don't wanna.” They want to feel their capacity to bravely strike out. That was very helpful.

Jasmine Sun (1:03:45)

Because AI will get very good, do you think about future-proofing and AI-proofing your kids, or parenting them with a different set of values or a different form of education?

Mills Baker (1:03:50)

No, no. I'll tell you the true way I think about this. I love my job, I love my career, I love what I do. But I don't have any particular respect for the way an economy is constituted at any given time in history. I'm a father of two children. There's a chance that there's a nuclear exchange and we all have to become farmers. And there's a chance that there's no nuclear exchange and we remain desk jockeys. There's a chance that AGI changes everything and we all become UBI party people. I don't know. I consider that all the weather.

The weather changes, and your job is to be adaptive and find ways to live successfully under whatever the weather is. Sometimes that's harder if it's a hurricane; sometimes it's easier if it's a sunny day. I don't think there's much use in trying to plan or control it beyond a general attitudinal orientation: this is the weather, it’s not the source of meaning.

The overwhelming interest I have for my children is that they become good people who are imaginatively capable of finding ways to create happiness in whatever forms and frameworks are resonant for them. I think of preparing your children as a moral and imaginative process, not an economic process.

Bluntly, I don't think money makes you happy. When I've had no money versus when I've had money, my life has been mostly the same. Everybody knows this on some level but resists it, largely for egalitarian political reasons, because it sounds like you're saying the poor don't deserve sympathy. Well, the poor deserve sympathy and the poor deserve help; you should always be helping your fellow human being. But I think of that as a moral and imaginative thing, not a thing having to do with particular economic arrangements.

In any event, the main thing I think about with their rearing, from the point of view of AI, is education: I really do think AI is going to change education quite a bit, so I don't have a college fund. First off, as you know, I wound up going to a state school that was free. They actually paid me $400 a year to go there because I kept myself above a C average. I don't plan on sending them to an elite school; I think those schools are fucking crazy, no offense. I went to an elite school for a few minutes. I'm not saving for that.

I expect AI to have an extremely good tutoring capability at some point. It's very close now. I can already set Kizzy up so she can endlessly ask questions and get good answers. The voice recognition isn't good with toddlers, which is funny, but that's probably a few years away. So I don't worry much about their education. I'm not worried about their ability to get any kind of information or training they could want. That was already a little bit true with YouTube. There's a time in your life when you think, I need to teach them how to do the things I think are important. And now they'll be able to learn anything they want. If they want to learn to play clarinet, it's all right there.

But like I said, the whole challenge of life is moral and imaginative formation. For that, AI hasn't really changed much.

Jasmine Sun (1:07:01)

In fact, AI may make that even more important, because AIs can't have morals and imagination, right? If large parts of the cognitive and professional economy will simply be automated because AIs can do the work better, then what better thing for children to spend their time on than developing morals, a sense of community, and other reasons to live? Trying to out-compete the AIs on economic potential is the stupidest thing you could do.

Mills Baker (1:07:41)

Truly. The steam engine comes out, do you train your sons to be really strong, so they'll be able to do the jobs the steam engine can’t quite do? No, the steam engine is going to do the strength jobs.

And as you know, in my career, I've really been catch as catch can. When I lived in Baton Rouge, and I didn't have access to jobs at startups, I worked at a call center, and I fucking loved it. The reason I loved it was my mother's formation of my moral and imaginative capacities to find people interesting, to find routine joyful, to find submission joyful, and to find letting go ennobling. All these things came from sources that are ancient and eternal and don't have any real relationship to AI.

Kizzy might have to find joy in a post-apocalyptic world where there's only fishing and farming. And I know that's possible because people in the past have found joy fishing and farming. And she might need to find joy in a world where there are no desk jobs. I think she'll be able to do that. That's much more my focus. Hopefully, she'll just catch as catch can as I did, and Raines will too.

Reasons for & against writing full-time

Jasmine Sun (1:08:44)

That makes sense. I am making much less money and am much more AI-replaceable now than I was three months ago. The nice thing about being a PM at Substack was that, one, the job was fully relational and therefore completely irreplaceable by an LLM. I firmly believed I had one of the most AI-proof jobs ever. Two, I had income.

Now I have no income and I'm very replaceable because I generate text, and I'm much happier.

Mills Baker (1:09:12)

I'm sure you've noticed this: there's this funny thing where LLMs are best at generating text, yet I don't know anyone who reads LLM-generated text for pleasure, right? I'm reading Girard. I could go to OpenAI and say, “Summarize Girard for me.” But that's the kind of fucked-up shit that San Francisco people do.

Jasmine Sun (1:09:32)

Wait, I do that. LLMs don't replace my entertainment reading. They replace my Wikipedia, the Stanford Encyclopedia of Philosophy, and sometimes a news article explainer-type thing. I’ll say: hey, explain Girard like I'm 12.

Mills Baker (1:09:37)

But I’m guessing that's a case where you actually have an instrumental need for that knowledge. You're about to interview a podcast guest who’s into Girard, so you just need the knowledge. You're not actually interested in Girard per se.

I think I had this argument with Patrick Collison about book length once. Some people read to acquire discrete knowledge units, but most people, most of the time, read for a different purpose entirely: to build models of a given thinker or moment in history. AI doesn't help with that at all. Like, I need to read Girard so that I can have Girard in my head, so that I can look at the world through the Girardian lens.

And also I find Girard extremely pleasurable to read. Now, I'm only on my first Girard book. I'm, as always, really late to the party. But it's a fucking blast. In no way do I want this made shorter.

Jasmine Sun (1:10:15)

Yeah, I just started reading The Power Broker with friends, which is very San Francisco. It's very long, and I have no interest in reading a Wikipedia page or a ChatGPT summary of Robert Moses's life, even though I know it would be far more efficient, because the point is Caro's descriptions and understanding what's going on in those 1,200 pages.

I do realize that whenever I care about something, I only want the primary text. And whenever I don't really care about something and it's instrumental, as you said, I will ask for the ChatGPT summary.

This is why I never worry about Substack writers getting replaced by AI. No one reads a Substack writer because they want Wikipedia. Like, that's the worst reason to read a Substack writer. You want to be able to model a particular thinker, sit inside their lens on the world. It's all relational, really.

One question that I have for you: Did you ever consider becoming a full-time writer? Because you used to blog all the time. And why didn't you?

Mills Baker (1:11:32)

God. There's a lot of reasons. A psychologist would probably suggest that I have a defensive insecurity that inclines me not to submit the things I care about most to any kind of judgment. So you can almost think about my whole life as an effort to protect myself from judgment in the things that are most important to me.

For example, in undergrad, people would often think that I was going to become a professor. I even thought that about myself. Then I had one professor tell me that it's a nightmare becoming a professor in a field you care about, because now the thing that you care about becomes instrumentalized, and it's subject to the discourse dynamics and the political dynamics of the world. You think you want to be a professor when you imagine sharing your passion with like-minded people and students. That sounds fun. But that's only a percentage of being a professor. A lot of it is trying to get shit published by people who fucking hate your guts. It's market dynamics, you know? It's competitive.

Anyway, I felt this with writing. I like writing the way I like writing. I like writing when I like writing. I don't like assignments. I don't do well with any kind of collaborative process. It's very, very hard for me to write something for someone else that they'll then want edits on because they have a slightly different vision of what it is. I get so bent out of shape so fast and moody and pathetic and cantankerous and my feelings get hurt.

That might be because I started writing publicly on the internet in the 90s, and you were free. You said whatever the hell you wanted and almost no one read it. It was a really fun time. So that level of creative freedom has always seemed to be only possible if you don't make it your profession.

When I got bigger on Tumblr, when I would read books, I would have an idea and think, that would be a good Tumblr post. I really didn't like that I was warping myself for what I thought people would like. If I were to make money off of writing, that would be an unavoidable reality. I would have to think about what works, what doesn't, what people like, what they find useful, what they find valuable. Whereas right now when I write, I feel like I'm in an infinite empty space. It's just me.

I obviously have some degree of audience calculation. I am a mammal, and I think about whether the tribe will kill me for what I'm saying and so on and so forth. I'm not brave, I don't speak my truth no matter what; but I like to do my creative operations pretty much completely freely.

Part of why I like software design is that it's one of my favorite things, but I don't care about it so deeply that it hurts my feelings when it doesn't go my way. When a given software design decision at Substack happens in a way that I don't like, it doesn't shake my fucking heart. You've seen me. I'll get aggravated and I'll get intemperate, but I won't want to open a vein.

Jasmine Sun (1:14:47)

This has always been one of our core differences. When I started working at Substack, I didn't think I was going to care very much. There was a hope that I would have that sort of relationship to Substack, where I would just do the job and not care and it would be okay. And I just never got there. I always cared too much.

With writing also, I have the ego about it, where I associate it deeply with my identity. And I do really care if people read it. I care how they respond. I can't imagine writing without an audience. I guess I've never figured out how to do the thing where I just separate myself.

Mills Baker (1:15:27)

Well, I can't write without an audience either. I mean, I write things that I don't post, but I always intend to post them. The audience for me is very abstracted, though.

I always imagine there may be someone out there for whom my writing is catalytic in some way that is useful. That's what happened to me with David Deutsch. I was putzing around reading stuff, and my father and I had emailed with Deutsch a little bit in the 90s or 2000s about his first book. We were both interested in physics.

When I read The Beginning of Infinity, it solved a major problem that had been emotionally oppressing me for a long time by introducing me to Popperian epistemology. The problem being that it's undeniable that progress happens, but I'm not an objectivist. How do you square those things? Popper's answer is that there may be something objective, but we can't access it; we never know what it is. Instead, we make better and better mistaken estimates. It's an asymptotic approach: everything is always a human guess, and everything can always be wrong. Deutsch says that theories should be called misconceptions. We just replace worse misconceptions with better misconceptions, but they remain misconceptions. That resolved a deep, troubled sense I'd had for 20 years that something about how I understood the world wasn't adding up.

That was a huge thing. My hope is that something I write will, for someone, be a catalytic step to some bit of personal development that they like. Keeping people moving on their path is a very exciting possibility to me.

Then, periodically I write because something's made me so frustrated that I just have to get it out there. That happens quite a bit too.

Jasmine Sun (1:17:07)

I wonder if the reason I feel like I can do this job is that I grew up in an era where this was always the case. It's not the 90s thing where people write online and nobody reads it. It was always that you put stuff out as a “personal brand”; you can never do anything without being judged on it. You post a picture of you and your friends on Instagram, and it will be measured in likes and views. There are no pure forms of life anymore. So it doesn't bother me that much.

I don't fear someone being in my comments saying “I think you're wrong.” In fact, I keep trying to get people to do it more, to email me their points of contention. I get really excited when someone deeply engages and tells me that I'm wrong. I write in order to be heard. When someone judges me positively or negatively, that makes me think that I'm not alone, that my expression has done something to the world. Whereas if I didn’t get comments or replies, I don't know if I would ever write.

Mills Baker (1:17:59)

Yeah. I'm sure if I got nothing, it would be quite different. Now I've gotten used to getting a little something, you know?

But I do think this could be generational. Gen X was told that you had to be authentic and do things for their own sake, so we all pretend we're doing that. I think we're actually just as mimetic and sensitive as any other generation. I imagine there are a lot of Gen Xers like me who never look at the stats. And it's true: I never look at the stats. But you could also say that's a performance I engage in for myself, or for some imagined audience of Ethan Hawke and whoever else sitting in the background going, yo man, that's cool, you don't even look at your stats.

Gen X does a lot of this stuff, performative indifference to outcomes. And my whole life, I've actually thought rather highly of people who can be more honest about what's happening. I think it's cool to want success. I personally just can't; it's not in my cultural makeup.

Yesterday, somebody was asking me why I write, and we got into the subject of how we think about our lives. I told this person that if nothing in my life changed and every day was exactly like this day forever, I'd be completely happy. And that's true. I don't think in terms of wanting change. That has certain virtues and upsides, and has certain tremendous downsides. You worked with me. The complacency that this attitude can generate is very frustrating for people who are trying to accomplish good things in the world. This is something people hate about Gen X. We were called slackers and it wasn't a compliment. Things need to get done.

Jasmine Sun (1:19:57)

Man, at some point we will have a much lengthier discussion about the historical conditions of Gen X because I'm really curious about this. It’s very foreign to me. I'm more of a Gen Z-millennial cusper, where you still have ambitions, as cringe as they are.

Girardian scapegoating

Jasmine Sun (1:20:05)

My last question is: What is a reading or research rabbit hole that you've been on recently?

Mills Baker (1:20:12)

I've developed a total neurotic fixation on a bunch of nutrition stuff that I won't bore you with. Literally 80% of my usage of these AIs is being like: listen, if I take berberine 45 minutes after I begin eating a mostly-protein meal, what can I expect the next day in terms of my energy? What I love about ChatGPT is that you can iterate in the conversation; it's stuff you could never find on forums.

But I'm fairly deep in my religious phase right now. I read a lot of religious texts, and at the same time I'm reading Girard's I See Satan Fall Like Lightning. I'm really interested in extensions and applications of mimetic theory. I did a podcast with [guest] where I hit him with a take that I'm into right now: the current most common scapegoat in American culture is the past. We constantly exhume and execute the generations of the past. We put all of our shame and guilt onto the past; we locate the source of all American evils in the recent past, or in its failure to ameliorate those evils. It's a never-ending prosecution. And I wonder how much American dynamism comes from an effective new scapegoat mechanism.

Prior scapegoat mechanisms generated things like wars and genocides. Then Girard's great theory is that Christianity is a major development because it posits that the scapegoat mechanism is fundamentally in error: the scapegoat is not only innocent, but perfectly innocent. And Christ is a scapegoat.

Lately I've been using ChatGPT to work through other forms of scapegoating, other forms of Girardian mimetic activity. It's what we talked about earlier: the idea that humans are actually much less than we seem strikes me as a form of self-scapegoating. What you're always saying is that the source of the world's evil is human exceptionalism. I find that very funny; it seems very Protestant to me, needing to relentlessly drive home the idea that nothing human is good. Cities are bad, technology is bad, human exceptionalism is bad, humans aren't different from the lowest forms of life, life is not different from non-life. This exhaustive desire to empty ourselves of a sense of election or specialness fascinates me. It's cropped up before in history: there are a lot of human movements that organize themselves around desacralization of various kinds. Especially in the past half millennium in Europe, it's a big theme. Whatever you thought was special isn't special. Whatever you thought was sacred isn't sacred. Whatever you thought needed to be protected didn't need to be protected, and in fact needs to be destroyed.

Basically, a research rabbit hole I'm in is trying to figure out if there's fruit here. I should just read the rest of Girard. He might've said all this, for all I know.

Jasmine Sun (1:23:38)

I hope you do write about the Girardian stuff. I like the self-scapegoating idea. I'm also curious: what are the historical conditions that cause humans to get into desacralization?

Mills Baker (1:23:50)

That's a great question.

Jasmine Sun (1:23:55)

Cool. We are wildly over time, and I'm also late for my next call.

Thank you for spending so long with me. This was so fun. Really appreciate having you on the pod Mills!

Mills Baker (1:23:58)

Likewise, thank you for having me! We'll talk soon.

Links & books

Thanks for reading or listening! Long live text.

Jasmine
