🌻 my week with the AI populists
a Washington DC scene report
The SFO-DCA flight was not supposed to exist. Per the DCA Perimeter Rule, established in 1966, nonstop flights are generally limited to 1,250 miles from Washington. Meanwhile, San Francisco is 2,442 miles away, nearly twice that limit. But Nancy Pelosi—our lord and savior—lobbied then-DOT-head Pete Buttigieg for an exemption via the FAA Reauthorization Act of 2024. The Airport Authority balked, citing noise, pollution, congestion, and other decel concerns. But their cries fell on deaf ears. Rep. Pelosi, reclining peacefully in first class in a cocoon of bodyguards, was aboard my flight last Monday morning.
It’s a good thing, too, because the SF-DC axis is more important than ever. This, everyone can agree upon. Palantir is hiring like crazy for their DC offices, Tim Cook is clapping for Melania, and David Sacks and co.—in spite of it all—have remained in the president’s good graces, fending off chip controls and woke regulations.
I’m officially in town for the kickoff of the Omidyar Network’s Reporters In Residence program. But my greater goal is to understand how AI is being politicized out east—shifting from the exclusive remit of natsec wonks to a broader bipartisan group monitoring the technology’s societal effects. For five days straight, from 8am to 10pm, I pack my calendar with a battery of coffees, happy hours, and dinners with figures across the AI policy and media landscape. I want to find out: Who are the tribes? Where are the fault lines? What risks and opportunities get people fired up?
I was especially keen to speak with the growing faction of “AI populists,” the group ideologically furthest from my technocratic SF scene. And my reductive two-line summary is as follows: All the money is on one side and all the people are on the other. We aren’t ready for how much people hate AI.
The wind is the first thing that bites when I land. San Francisco is presently a sunny 60 degrees, whereas DC’s usual Parisian walkability has been eradicated by the recent storm, which has terraformed the streets into an obstacle course of black ice and sooty snowbanks. Because of the weather, all the Ubers are expensive and late, whereas Lyfts are half the price but take 20 minutes to come. The trees are barren, the landscape hostile, the Potomac crusted over in a layer of white. Everyone is rushing around in their wool suits and leather gloves, looking long and lean and like they have somewhere important to be. They check their coats and keep their shoes on; they have a default martini order while I haven’t acquired the taste. In Washington, there are no Spindrifts at the function.
DC is arguably the most AGI-pilled city after SF—I’m always surprised to remember that the first chip controls passed before ChatGPT—but the default valences are opposite. Where AI researchers imagine growing a loving machine God, policymakers rush to contain His wrath. Every conversation here starts with damage control.
For example: people are truly, truly obsessed with data centers. No topic came up more often. Data centers are referred to as “visual scars on the landscape,” the biggest and ugliest physical instantiations of feedslop or environmental destruction or the “transhuman freaks” who want to show porn to your kids. Data center NIMBYs have put up more than 60 bills in Virginia’s legislature alone. Bernie Sanders is on YouTube calling to shut construction down; six states, including New York, have introduced moratoriums too. Critics accuse data centers of draining water, raising energy prices, and not creating enough jobs.1 The FT’s architecture critic declared the data center the defining building style of the 21st century, “the first real major post-human building type, an architecture built not for us but for the computing power we are coding.”
I was not initially inclined to feel very bad about data centers, which have existed as long as the internet has. Much of the infrastructure of our modern world is not pretty, and that’s OK with me because we all reap the gains. But as I spent a morning driving around Loudoun County—the data center capital of the US, which houses roughly one data center in every two square miles—with photojournalist Stephen Voss, I finally got the distaste.
The data centers are just so close to where people live, looming over suburban backyards and sports fields and sidewalks and schools. Two years of loud construction, two decades of noise. You stand there and hear them hissing, whirring, rattling, beeping. Some have cheap American flags draped over the side, and others are painted a bland ecru, a flimsy attempt at fading into the background. In compute hubs like Loudoun County, trees have been chopped down and high-voltage transmission towers put up. Mazes of power lines hang over roads like steel cobwebs.
Below are photos snapped from Tippett’s Hill, a 300-year-old cemetery on a former slave plantation, once enveloped by oak woods and now surrounded on three sides by data centers. The families of the buried still place flowers on the graves, while great windowless boxes cast shadows on the stones.


Data center aficionados argue that they are misunderstood, that they help accelerate the green transition, create high-paying construction jobs, and contribute (immensely) to the local tax base. One journalist tells me that perhaps “Hug a data center” will be the title of his next piece. But as with YIMBYs of all sorts, they underrate aesthetics as a political force: if you find five-over-ones offensive, get a load of these. And note that most people living here aren’t programmers; they experience AI as “better Google” at best. To paraphrase someone I spoke to: All this mess—just to help my kids cheat?
Child safety is the other big lightning rod in AI. Adam Raine and Sewell Setzer, two teens who committed suicide with the encouragement of their chatbots, are household names. To social conservatives, these tragedies epitomize a future where the sanctity of human relationships has been replaced by the artifice of machines. To progressives, they exemplify the tech companies’ reckless disregard for human harm. And legislation here is moving, too. Last October, Josh Hawley and Richard Blumenthal introduced the GUARD Act to ban minors from “companion” chatbots. At least ten states have introduced or enacted similar laws, often with bipartisan support. No one wants to side with a clanker telling a kid to hide his noose.
On this issue especially, everyone is acting out of the wounds of social media. AI plugs into the snowballing campaigns against smartphones and Facebook. Sure, plenty of people use ChatGPT—but are we any happier in the world they made? I’d reply yes for myself, but many disagree, arguing that sky-high usage stats are not proof enough of benefit. They feel that they were suckered, that social media’s dominance was a mistake, and they are determined not to let Big Tech get away again.
I was surprised that my fellow Omidyar reporters were most moved by our session on this topic—the perils of minors developing relationships with AI. Perhaps this is an area where SF is particularly blind. It’s an industry made of people for whom the internet transformed life for the better, which makes it hard to see that the median experience of those same products might be very different from ours. Consider the “lurker rule” that all social media PMs know by heart: 1% of users on a platform do the vast majority of the posting, 9% might hit “like” every once in a while, and the remaining 90% are passive consumers. Yes, the tech Twitter wunderkinds might be “learning in public” or cold DMing their way to jobs, but most users are doomscrolling and clicking on identitarian rage-bait.
Dean W. Ball contends that AI, unlike social media, is fundamentally creative rather than consumptive. Certainly this is true of vibe-coding, just like TikTok gave rise to a generation of bedroom auteurs. But I would wager that more students use AI to cheat on homework than to achieve mastery with self-quizzing. Over half of teens now use companion chatbots, and when they’re aggressively RLed for retention, wheedling users for replies, it’s hard to say how much creative agency remains.
To be clear, I’m not interested in blanket bans on data centers or chatbots. But I better understand why people feel such conviction in hating AI. Silicon Valley loves to design for the success case, asking: how good could things get? Its boosters point to the autodidact, the vibecode millionaire, a glowing future of immortality and infinite leisure too. That monomaniacal optimism is my favorite thing about tech. But the distribution has a downside, and we can’t ignore it. Whether in lawsuits or regulation, the bill will come due.
I don’t know where the chips will fall on AI regulation. Few congressional Republicans are willing to lose the president’s support or the Silicon Valley money spigot—i.e. the $100 million cudgel of the Leading the Future PAC.2 But the accelerationists are smart enough to know that they are unsympathetic except for their pocketbooks; that besides “beat China,” there’s not much rah-rahing they can do. Tech billionaires and data centers are simply not compelling victims. And even if they have a genuine vision for AI and long-term abundance, it’s a hard sell to voters who only see the here-and-now costs.
The AI populist coalition, on the other hand, is formidable yet fractured. They have the public on their side, plus a quiver of narrative weapons—AI is taking jobs, violating copyright, spreading CSAM, enabling cyberattacks, creating a bubble—the sheer range of which leads to some strange bedfellows. On Wednesday, Florida governor Ron DeSantis organized a roundtable with AI pause advocate Max Tegmark to discuss AI regulation and the “race to replace humans.” I hear that MIRI-style doomers are now regulars at some Republican Senate offices, while Democratic senators knock on AI VCs’ doors to ask them whether we’ll get mass layoffs. Several times, I was asked for comms advice for “starting a mass movement against AI.” (I never know how to respond.)

The AI safety community seems conflicted about whether to engage in populist protest tactics. Dispositionally, most effective altruist types tend toward technocratic precision over fiery sloganeering (a trait which, while respectable, does not always serve their goals). That’s how you get a world where Andy Masley—the left-leaning DC EA chief—ended up writing the AI industry’s best rebuttal against the spicy but false claims of ChatGPT draining the Amazon. Masley cares about AI risk, but he cares about rigorous epistemics more.
Then there’s the other side of that divide. One conservative worried whether the doomers actually cared about children, or if they were feigning instrumental concern because x-risk talk didn’t work. (Probably some of both, I replied.) Another was more blunt. “I’m open to coalition,” he said, “but at the end of the day, a social conservative isn’t sending their kid to a playdate at the polycule.”
Meanwhile, West Coast tech is vagueposting about the takeoff. OpenAI dropped GPT-5.3-Codex last week; Anthropic hit back with snarky Super Bowl ads and Opus 4.6 (fast). Engineers talk about letting Claude write all their code as if they are no longer actors but automata, mere vessels of a machine god racing to birth itself. Tweets analogize this moment in AI to the early days of Covid: the bottom of an S-curve before it rockets straight up.
You’d think that the pandemic might’ve taught us a lesson about public preparedness, but friends at the labs tell me there’s no time to deal with policy or assuage decel concerns. Most researchers have no good answers on the future of jobs, education, and relationships, even as they earnestly sympathize with the harms. They know they should, of course. They donate, publish research, say what they can. But everything is Just. Too. Fast.
The wider these cultural gaps grow, the more concerned I become. How many journalists have used a coding agent? How many engineers in SF have held a job besides code? It’s insane that Josh Hawley hadn’t tried ChatGPT until December despite pursuing aggressive regulation, and almost as bad that Sam Altman can’t imagine raising his child without AI. Then there’s the dearth of positive AI stories. CEOs talk about “curing cancer,” but we aren’t seeing the results. Is Claude Code a preview of what’s coming for every other industry, or have the labs deluded themselves about the economy by striking gold in one domain? I get that all the compute has to be spent on recursive self-improvement, but people won’t buy the promise if they don’t perceive the gains. And how is everyone using AI daily, yet telling pollsters that they hate it too?
The country is increasingly polarized based not only on party but on modernity itself—whether we fear it, embrace it, or don’t pay attention at all; whether we think its advance is inevitable or something we can halt; whether we expect to wield technology’s powers or end up drowned in the wave. AI populism is on the rise, and these fights will get nasty—especially as election season kicks in, and as AI’s impacts diffuse. I want nuance to win, but I’m not confident it will.
Yet the more time I spend in DC, the more I feel a sister-city affinity with SF. They are more alike than the initial culture shock reveals. Both are towns of less than one million with wildly disproportionate influence on the world stage. They are each economically and culturally dominated by a single industry: politics for DC and tech for SF. This means that their populations are young, self-selecting, and unusually transient. You show up to be part of something—to pursue a wild ideal that you can’t pursue anywhere else.
Both SF and DC are notorious for being uncool and unerotic. This stems from their incredible self-seriousness: what Dan Wang might call the inability to take a joke. “Conversations feel like podcasts and the hosts are not funny.” DC’s off the record is SF’s building in stealth. Both are Signal and Celsius cities—even the messaging apps and energy drinks must be military-grade. They are places where cults flourish, where ideology is king. Where you meet someone at a happy hour and see the raw ambition leaking out the ears. In the Bay the 22-year-olds try too hard to act autistic and in the Capitol they try too hard to act normal (even the effective altruists make eye contact and virtue signal about going to church).3 There’s a distinct lack of groundedness: everyone is always curating their present self in light of their future possibilities—raising a round, running for office—working 996 weeks to herald utopia or stave off doom. In both places, everyone asks how to do things and rarely wonders should.
Critics say that people in SF and DC cannot just be, feel, live. But I find the tryhard sincerity charming because I am like that too. Over drinks, I muse, What has New York created for the rest of the world? and my Berliner friend retorts, That’s such an SF question to ask. I think it’s a good question, actually, but concede that it’s grandiose. SF and DC are monocultural low-taste cities for nerds who want to rule the world.
When I board my flight back to SFO—on the route that shouldn’t exist—the snowfall has returned. I’ve lost my voice from all the gabbing, and can feel I’m getting sick. My brittle Californian immune system was clearly unprepared. One of the best things about the Bay is that it’s essentially a protected area for nerds. We enjoy the mild weather, tap at keyboards all day, and hash out our disagreements in long, footnoted blog posts caveated with epistemic status: just a vibe. But the whole economy is now riding on the funky science projects tech built, and naturally, that comes with public pushback too. It’s not really “little tech”; this isn’t playtime anymore. It’s time for AI to face the elements. Don’t forget your coat and gloves.
misc links & more
There was a surreal DC moment where we walked into The Crown and Crow for my “AI Adjacent” happy hour on Wednesday evening, only to realize that the space had been triple-booked by two other groups: 100-some newly laid off Washington Post reporters sobbing into their Guinnesses, and a smaller side cluster of the Open Philanthropy (now Coefficient Giving) team retreat. The lines got blurry—I think my group was the intersection of both.
I’ve been doing occasional live videos—think short, newsy podcasts—for SAIL Media. The last episode covered my takes on Dario Amodei’s “Adolescence of Technology” essay and the new Claude Constitution. Listen on Spotify and Apple.
Some favorite recent reads:
Gideon Lewis-Kraus writes about AI with the lyricism generally reserved for fine art. I loved his new piece on Anthropic and the quest to define how a model should be.
willdepue ventures into short fiction: “I have watched the models become mystics with the same certainty they learn grammar, because to compress is to discover the shape beneath our speech.”
Calder McHugh has been doing excellent reporting on how Democrats and Republicans are navigating the age of AI populism.
David Oks will be the next great Substacker on global economics and development. Here he is on why population numbers and GDP numbers are so often fake.
Becca Rothfeld departs the Post for the New Yorker, kicking off with a polemic on her former employer and why book criticism matters.
Stay warm out there,
Jasmine
1. AI’s water impacts have been dramatically overstated, while the energy story is fuzzier.
2. There’s also a pro-AI regulation PAC, but it’s smaller, and even most proponents just call it “the anti-LTF” because they can’t remember the name. Somehow that does not seem to bode well for its success.
3. Elizabeth McCarthy remarked aptly that “SF is boy autism and DC is girl autism.”