What a great conversation! I just ordered Fred's book. His thoughts on the history of Silicon Valley reminded me of a project I did in library school (really, a Master in Information Science program), where I compared the history of Facebook to the history of the American public library system. Both are information institutions with outwardly similar goals (provide access to information), but they have radically different cultures and value systems that grew out of their historical contexts.
Meta, and Silicon Valley's approach to an information society more broadly, is about creating private systems designed for "users" who pay a price for access (their data, attention and/or dollars), while libraries are publicly funded community organizations that provide services to "patrons" for free (no data tracked, no fee charged). Profit and growth drive the former, while an ethical commitment to democracy, education and community inspire the latter. It's interesting to consider where the cultural divide between the two came from and why we, as a society, are choosing the former and not the latter as our model for the future.
I also appreciated Fred's thoughts on how today's top-down, authoritarian tech company culture is rooted in Brand's brand of 60s counterculture (i.e. the Lone Male Genius hitchhiking the galaxy), which--in my eyes, anyway--has always seemed historically linked to colonization and imperialism. (So many lone men intent on conquering worlds.) It's not lost on me that American librarianship is a feminized profession built around ideas of service (ok, servitude) and carework. I guess it's not surprising, given our country's hero worship of Great Men and general apathy toward women doing the unglamorous work of supporting communities, that Big Tech is winning.
This was terrific; the guest is a tremendously engaging speaker. I did not expect to find myself caring about ideological divides in 50-year-old communalism movements this evening.
Really interesting conversation; thank you both.
Hopefully you don't mind a somewhat rambling comment because it was an odd feeling for me listening to this, because I feel a connection to many of the ideas being discussed but from a fairly different perspective -- being outside both California and the tech industry as it's generally conceived, and a generation between the two of you.
But much of it resonates. I've joked that the model for my life may have been reading Clifford Stoll's _The Cuckoo's Egg_ as a teenager and aspiring to bike to work and solve computer problems in relative obscurity, so I've always had a fondness for the counterculture aspirations of tech.
One oddity of my personal background is that my primary points of reference for counterculture experience come from growing up around the folk music community, rather than through tech.
So I kept wanting to suggest some additional ideas to throw into the mix. For example, I was surprised when Fred said, "The communes of the 1960s, I thought, were going to be wide-open social environments. On the contrary, they were incredibly segregated by race and incredibly heteronormative. They were dominated by charismatic men. I was very, very surprised by that." That is not a surprise to me at all, and one of my points of reference would be the documentary about Ina May Gaskin ( https://www.birthstorymovie.com/ ), which is both a fascinating movie in its own right and an interesting example of commune life.
But I would also say that when Fred talks about some of the contradictions within the counterculture movements, it's interesting to compare with what I know about the folk music community -- in which many of the same dynamics play out, but it never turns into something where there's big money to be made. For example, this is probably much farther into the weeds than either of you (or anyone reading this) are likely to want, but the middle chapter of this master's thesis about the Pacific Northwest folk community -- https://cedar.wwu.edu/cgi/viewcontent.cgi?article=2193&context=wwuet (pdf) -- matches up surprisingly well with the division Fred describes between the New Left and the New Communalists: broadly speaking, the Seattle folk group was more interested in organizing and political involvement, while the Bellingham scene (described as, "About 90 miles north of Seattle in Bellingham, folk music communities of the 1970s operated under a parallel ideology of authenticity and connection through folk music.") was more like the New Communalists. But, in that case, neither group ended up particularly connected to money, and they each worked through the strengths and limitations of their approaches with varying degrees of success while still basically existing in the same folk music world (rather than transforming into something like cyberculture).
Finally, when I was looking that up, I noticed that it included an epigraph from Herbert Marcuse which also seems related to some of what Fred is talking about -- "It seems that the poems and the songs of protest and liberation are always too late or too early: memory or dream."
I frequently have the same feeling about the claims of a liberatory cyberculture; the early Wired Magazine/Nicholas Negroponte vision was too early, and yet now it feels like we've passed a turning point and it's too late.
Nick -- Just a big thank you for your comment. I'm a veteran of the 1970s California folk scene myself, in a modest way, and I recognize those same dynamics -- in fact, I think that's one of the places my urge to study this stuff came from. Thanks again.
Thank you very much; I'm glad that's interesting.
Thinking about it a little more coherently in the morning, the two things I'd want to emphasize are:
1) How much the entire story you're talking about, with the counterculture roots of cyberculture, is a story of money as well as culture and personalities. The Ina May Gaskin reference is another example of someone coming out of the commune movement who ends up being really talented and influential, but doesn't have the same connection to money.
2) One of the interesting things about looking at the different strands that flow out of the counterculture is that they cover different time ranges. For example (and again, I realize I'm off on a significant tangent from the original conversation), I was just thinking about Sandy Paton recently. He co-founded Folk-Legacy Records in 1961 and, as Wikipedia puts it, "One of his obituaries notes Paton's wide-ranging work on the label's releases, including not only in production and engineering but also in photography. Sandy and Caroline Paton also wrote liner notes for many of their label's recordings, contributing "a wealth of cultural knowledge" to the printed materials accompanying these albums."
In addition to being a significant influence on the folk revival, he was also a major figure in The Mudcat Cafe digital forums (mudcat.org), which was for many years the best reference on the web for traditional music.
So it's interesting that he was influential in both the 60s analog and 90s digital communities around traditional music; he was also 10 years older than Stewart Brand, so he was doing these things at a slightly different time.
Lots of cool ideas here, and a very interesting guest! However, I'd disagree with the notion (at 1:36:30) that software engineers think less while they work than writers do. In my experience it's mostly the same in terms of figuring out on the fly "how to express the thing": you alternate between confusion, hard thought, and trying something to see if it works, rather than "writing out the idea smoothly".
Things are a little more constrained in enterprise software, where you usually work to implement a feature inside a large existing codebase rather than starting tabula rasa, but there are still multiple ways to write the feature, and it feels pretty similar to actual writing in my mind.
It's less visible, but I think the failure modes of "vibe coding" are therefore very similar in shape to the failure modes of letting an LLM write your paper from your notes without good supervision/the ability to correct itself.
agree that deskilling is still a concern with AI-assisted coding, but I think there is a nuanced difference in the failure mode I'm describing
in most engineering, you are trying to build toward a particular vision of an end product—and yes, can choose better or worse routes to get there. but the "engineering" and the "product/design" phases can be separated.
in writing, the spec itself is often undefined—I think if you are 100% sure of your thesis before you start writing, you're doing it wrong. it's very likely that the process of drafting totally changes what the end essay is, because you frequently encounter evidence/logical problems you didn't anticipate at the beginning. i.e. "how you express it" *is* the end product the user/reader sees
(the SWEs I've talked to often make some distinction between coding tasks that are just focused on "reach this end goal asap" vs where they need to be more strategic, I probably don't get all the nuances here, but it sort of reminds me of writing-as-a-craft vs. rote copywriting/email-writing/etc)
“That was in 1969, and '69 was the year of the highest mortality rate among American soldiers in the field during the entire war. So at a moment when hundreds of thousands of young American men are dying”
Total American deaths in Vietnam from 1965-75 were 58K, peaking at 17K in 1968, down to 12K in 1969, and rapidly fell off thereafter. And this guy wrote a book on the Vietnam War?
I went from here to reading Scott Alexander's latest Substack article, which includes: "But a study a year ago (ie already obsolete) found that 76% of doctors used ChatGPT for clinical decision-making. One member of our team (SA) is a medical doctor in the San Francisco Bay Area, and can confirm that this feels like the right number. He and growing numbers of his colleagues use language models regularly - often typing in a treatment plan and asking the AI if it sees any red flags or can think of anything being missed."--To me, changing medical decision-making seems to suggest a tech that is already Very Important.
I don't think Fred would deny that AI can be useful — he says in the interview that he uses it to review his book outline/drafts, and calls it 70% good. I think the claim he's making is that it will not lead to human extinction/salvation, as many proponents say, and is not an internet-scale technology (which you can disagree with, and I might, but isn't insane to think)
You are such a kind person. I want you on my side, to interpret this -- "My own take is that AI is a whole lot less important than people think it is. I'll hold that view until it becomes a lot more effective than it is." -- as meaning "only" that it is not going to kill or save us, when I thought he meant it is "a whole lot less important" than those who think it will transform education, science, most jobs, etc. believe. It was just a conversation, I know. But when I then read Scott A's post--the whole 2027 paper blows my mind--I had to come back and vent.
I appreciate your willingness to engage. And I really appreciate your anthropological work in the Valley!
This was a great interview--you two learned, as did I. As someone who went back and forth between the political and communal wings in the 60s (my 20s), I found Fred's analysis very strange--as if he were talking about what people write in books rather than my lived experience of those times. (e.g. The Whole Earth Catalog was way too expensive for anyone in my commune to buy!) But that's always the way with written history, right?
I was struck by both of your yearning for higher ed to become small seminars! Like yearning for pigs to fly, I think. It would be spectacular but impossible. Go to a Cal State campus, where most 4-year degrees in California are awarded, and imagine seminars there--how many profs would that take? Educated where?
But the part of the interview that made me think Fred perhaps should do a little field work was his claim, "My own take is that AI is a whole lot less important than people think it is. I'll hold that view until it becomes a lot more effective than it is." I thought that biologists, astronomers, mathematicians, medical diagnosticians, accountants, lawyers, mid-level managers, marketers, book publishers, taxi and truck drivers, not to mention coders and many other occupations, were already finding AI quite helpful/threatening to their way of working. You seemed to accept Fred's claim as reasonable, even going on to kind of support his view that the tech titans are caught up in their own, probably self-serving, bullshit.
I would be very interested to know if that is what you are starting to believe as you do your history of AI. Have I been misled to believe that, e.g., med schools and law schools are reeling over what the lives of doctors and lawyers will soon look like? That long-haul truck drivers may in a decade or so be out of jobs? It's kind of a big deal to me if Jasmine Sun--my woman on the ground, with her ear to the ground, whom I trust--believes Fred may be right that AI is "a whole lot less important than people think it is."
Kindly let me know if this is so. I would then begin to reexamine my priors :)
thanks for listening!
as you can hopefully tell from my other posts, I think AI is pretty societally important—which is why I spend so much time writing about it. but we were nearing the end of a long interview, and I view podcasts as an opportunity to be generative more than to debate (I push back where I see opportunities, but not at every instance).
I also think it's true that narratives of AI's importance—especially re: geopolitical/military strength, as I point out—can be self-serving, even while they are also earnest. like even if I was a real China hawk, I'd be far more worried about the US's decimated shipbuilding capacity than whether our LLMs are better than theirs.
re: small seminars; of course it's not totally realistic, but personally I'd rather have more seminars from TAs/PhD students than lectures from hotshot professors. I've learned more by working through texts peer-to-peer than sitting in a 500 person lecture hall.
I was sincerely wanting to know, since you seemed to be accepting his viewpoint as reasonable, and to me it seems like almost a crank opinion! Not that I don't love cranks--I'm one myself when I'm at my best.