No gatekeeper saying we don't do plane-ride aphorisms, no SEO expert demanding h2 headlines. Just dense, stimulating text. More of that, please!
:)
Most jobs are a complicated bundle of tasks. The complex part is that humans are needed, or used, to suffer through complication that often borders on chaos, and orgs don't want to have to work on that part.
Really enjoyed this and found your willingness to engage with the political economy of the world in which AI is actually being developed (rather than assuming technology will make it irrelevant) refreshing!
thanks! I think the political economy matters a lot!
Thanks. Sharing this with my educator circle. Appreciate your look into the markets
#25 – I've worked with folks/orgs to help them navigate complex systems since 2009, and this is the first time I've heard "unruly" used. May have to start using that. 😉 Thank you!
With GPS, or alfalfa, or a Photoshop spot brush, the benefit is clear and the drawbacks less obvious. With what's currently being presented as AI, many of the advertised use cases seem actively bad to me (I don't want a film or a picture generated from datasets; I don't want a computer to write my emails for me; I don't think responsibility should be obfuscated behind a machine). Nor does it seem like it will replace the existing environmental impacts of tools we already use - it will add to them, despite the urgent need to reduce our emissions.
Yet as you say, the economic incentives mean many of us have little or no say in how, where, and how fast this tech is rolled out. That's our reality, but it isn't a law of the universe; it's the result of political choices to disempower organised labour and insulate many corporate actors from democratic control. It's notable that two major challenges have come from industries where labour power still exists. I don't think, personally, that generative AI is important enough to be either our saviour or our doom. But it's one more technology that seems to be rolled out without thought for the consequences by a small number of people whose priorities, to put it mildly, are not necessarily aligned with the rest of the world. When it comes to getting humans out of machine-shaped jobs - a sentiment I agree with - Silicon Valley's track record is not good.
Maybe GPS isn't the best example, and Spotify/YouTube would've been more effective, but I just disagree that LLMs aren't useful to people. A lot of people benefit from AI helping write emails (I've spoken to several ESL speakers / immigrants who use it to navigate legal, corporate, small business marketing, etc.); I use it as an accompaniment to dense books on topics I don't fully understand; etc.
I'm not saying there aren't negative aspects too, and I agree we need to empower labor more (e.g. I am much more interested in how AI can improve work vs. surveil/substitute) + that the corporate labs are not to be fully entrusted with determining our destiny. But I don't think critics will succeed by denying the utility of AI.
I appreciate the reply! I didn't mean to say LLMs have no uses whatsoever; I take your point that there are situations where they might be genuinely useful. I'd be interested to know how much (and if) it improves learning for e.g. ESL speakers navigating bureaucracy, or guiding readers through dense books. I'd worry about it missing something, but then humans miss things all the time. Whatever my personal feelings, it strikes me as a fairly benign use of the tech.
But Spotify is an interesting example. It's helped me discover many artists, and I've bought albums I wouldn't otherwise know about as a result. At the same time, one of its biggest successes is figuring out how to not pay musicians. I see the same thing happening with LLMs: the use cases that I've seen promoted (and presumably get investment) are things like making movies or being your friend/partner, rather than less sexy ones like digitising old documents. I suppose what I'm trying to say is that I'm less concerned about LLMs in themselves, and far more concerned about tech companies once again being allowed to use "miracle tech" as a smokescreen to devalue labour. The ability to pay one's bills *is* a social justice issue.
#8
They're bought in and on AI's side for at least this administration in the US.
#9
And the populace that voted in our current admin is likely the least affected by AI in the immediate short term. Blue collar is safe until robotics hits.
#12
100% agree
#15
Don't know what you mean. They're generally getting better at everything over time. You could pause LLMs today and we'd still see 20% job loss.
#17
This feels like a lowbrow hit that means nothing.
#18
Is this supposed to relate to the IMO? Those are not calculator problems.
#22
Radiology is probably the first fucked in medicine.
#24
But a massive competitive pressure is changing that day by day.
#30
See Palantir Technologies' current valuation.
#31
More like your company getting forward-deployed onto by "AI engineers" who brain-drain you and run the evals to do your job for you.
#35
It's more like a shitton of specific knowledge extraction, just-in-time data insertion into prompts, and a chain of prompts.
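To make that concrete, here's a minimal sketch of what that kind of chain can look like; the `llm()` completion function, the prompts, and `fetch_context` are placeholders I'm assuming, not any particular product's API:

```python
# Rough sketch of a "knowledge extraction -> just-in-time data -> prompt chain" flow.
# `llm()` is a stand-in for whatever completion call you actually use.

def llm(prompt: str) -> str:
    raise NotImplementedError("swap in your model client here")

def answer_with_chain(question: str, fetch_context) -> str:
    # Step 1: extract the specific knowledge the request depends on.
    terms = llm(f"List the key entities and terms in this request:\n{question}")

    # Step 2: pull just-in-time data for those terms (docs, records, tickets).
    context = fetch_context(terms)

    # Step 3: insert that data into the next prompt and produce the answer.
    return llm(
        "Using only the context below, answer the request.\n"
        f"Context:\n{context}\n\nRequest:\n{question}"
    )
```

Here `fetch_context` could be anything from a keyword search over internal docs to a database lookup; the point is that the "AI" part is mostly plumbing around it.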
#39
I think about this daily and I don’t know what remains.
I think what bugs me the most about nearly all of the discourse around AI is that it's talked about in the same terms as the word "life." But there's no such thing as "life" -- there are only living things, you know?
I am starting to see some really interesting uses of AI in my own work; I saw one this week, in fact, that blew me away. But the examples I'm seeing that strike me as actually useful applications are all pretty narrow; they're for clear -- but limited -- uses, with pretty strict guardrails.
Publicly, though, I see the opposite. AI gets talked about as an everything machine -- how do you even get your arms around that, mentally?
And, conversational interfaces aren't necessarily the best interaction layer for all tasks at all times, you know? You'd never guess that, though, from the way it gets talked about right now. (Mostly, anyway.)
Re: #7, I remain less optimistic. Covid was a natural experiment in what happens when everyone is sent home, largely stuck inside and therefore online, and given a one-time cash infusion that doesn't re-anchor long-run expectations.
I believe (1) culture is downstream of material conditions, (2) absolute freedom from severe material deprivation (or the fear thereof) would allow culture to develop in largely positive directions, and (3) the Covid checks did not constitute (2). (Ofc I could be wrong, but I don't think our experience with Covid relief provides signal on that question either way.)
That's fair, Covid obv had a ton of other confounding stuff going on! But given the stats about how much Gen Z stays indoors/online, I'm not that optimistic that UBI sans an equally robust effort at social/community infra will do a ton
You mention it--"taste". I'd love to see you share your notes on how "taste" has recently become the au courant concept that will save us humans from those damn, tasteless machines!
Not about AI, and actually kinda contra taste-mania, but I wrote a blog post on taste a bit ago here: https://jasmi.news/p/taste?utm_source=publication-search
Thanks. Will go read that right now! I should have known you would have anticipated me. Enjoy your trip.
I've found Sangeet Paul Choudary's writing on the labor impact of AI the most interesting of what I've come across. Related to #38, he talks about how AI changes the logic of the system and how work is coordinated... here is one quote from his new book "Reshuffle," which I just started:
"system-based thinking doesn’t see jobs as fixed roles. It sees them as temporary groupings of tasks that make sense in a specific system of work. When AI changes the system, the job bundle gets unbundled, and the constituent tasks are reassembled in new forms. The same tasks may still exist, but they may no longer hold value in the new system. And even those that do will be rebundled into fundamentally new job bundles and roles. These new roles emerge in response to a new system. The impact of AI on jobs is, then, determined not just by AI’s impact on the job’s constituent tasks but by AI’s impact on the larger system of work."
From what I've read, his recommendation to individuals is to look beyond how AI automates or performs a given task and to think through how it restructures the system and whether your role will exist in that new system.
I haven't looked into his work, thanks for the rec!