22 Comments
Steffee's avatar

"ChatGPT catches me in a lie approximately as often as I catch it."

How are you prompting it?? I can't get mine to stop being stupidly sycophantic no matter how many times I admonish it.

Jasmine Sun's avatar

I often put specific sentences/paragraphs into a model with web search (like o3) and just say "is this actually true? fact check me" and it seems pretty honest / will call me out + provide links to check!

I'm not doing it for whole essays at once since AI is less good with long context. I also have some anti-sycophancy stuff in my custom instructions

El Monstro's avatar

When I share writing with ChatGPT I always prompt it "Without changing my distinctive style or meaning, please clean this up and ensure that it is accurate, clear to read and concise. Please share line by line any changes you recommend."

What are your anti-sycophancy prompts?

Otherwise it produces the same crappy oatmeal that it always produces.

F. Ichiro Gifford's avatar

I’ve found decent results with “I’m already an expert on xyz” and “Skip the formalities”

Victualis's avatar

I find "Do not praise the user." in the system instructions works well for Claude.
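A minimal sketch of the kind of setup being described here: collecting the anti-sycophancy lines suggested in this thread into a single system-instruction string. The helper name and exact wording are illustrative, not anything a commenter actually shared.

```python
# Illustrative only: the anti-sycophancy instructions suggested in
# this thread, joined into one system-prompt block.
ANTI_SYCOPHANCY_LINES = [
    "Do not praise the user.",
    "Skip the formalities.",
    "Assume the user is already an expert on the topic.",
]

def build_system_prompt(extra_lines=None):
    """Join the instruction lines (plus any user additions) into one block."""
    lines = ANTI_SYCOPHANCY_LINES + list(extra_lines or [])
    return "\n".join(lines)

# Example: add a fact-checking instruction like the one Jasmine describes.
print(build_system_prompt(["Fact-check quoted claims and provide links."]))
```

The resulting string would go wherever your client puts system instructions (e.g. ChatGPT's custom instructions field, or the system parameter of an API call).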

Will Watson's avatar

Interesting insights, thank you. What do you think happens in a few years when AI is better at writing than most writers?

Jasmine Sun's avatar

one of my hypotheses here is that "generating text/prose" is actually different and easier to automate than "good writing," because the latter is infused with personal stakes—something AI cannot have (until we get avatars with life stories, etc). what makes writing good is also seeing the author's effort (Caro turning every page), or how it reflects their own motives/values/interests (pop stars singing about their breakups)

then there are also writing tasks like "reporting" or "describing an embodied experience" that AI isn't good at

but I do think a bunch of normal utilitarian text will be automated, so e.g. copywriting jobs may go away

Thomas Hedonist's avatar

100% agree: Point of view, point of view, point of view. If AI can generate text at a 99.9% level, the value of brain-crafted writing will live at the 99.9999% level.

Paul Soldera's avatar

I don't think anybody cares. I mentioned this in a note the other day... look at the world of chess: it couldn't be more vibrant and competitive than it is today, yet machines learnt to play chess better than humans years ago. If I don't want to see machines playing sports, why would I want to read what machines can write? Especially if it's not merely informative or factual? Stories come from human experiences.

Will Watson's avatar

The difference is writing isn’t a live sport. How would you know what you’re reading is not human?

Paul Soldera's avatar

I think that just comes down to trust. I trust something that Jasmine puts together is going to be her writing unless otherwise specified. In a world full of AI output where the marginal cost of that output is near zero, human created works and human authenticity are going to become so much more valued.

Will Watson's avatar

I agree with that and it’s actually something the ai model I’m working on (https://open.substack.com/pub/livingjournalai/p/all-i-see-is-ai-fear) mentions a lot. In a world about to be proliferated with AI generated content, trust is going to be even more valuable than it is now.

Nathan Lambert's avatar

AI being acceptable at writing isn't that interesting to me, and I'm not sure we're on track to make superhuman writers. Seems very troubling. AI doesn't have voice and intuition; we need to add that.

Jasmine Sun's avatar

is it *possible* to add voice and intuition in the way we mean it? (genuine q!) I think the models each already have some form of it, but because it’s the same for all users and all contexts then we get annoyed fast

Nathan Lambert's avatar

I was writing about it recently; mostly I think we need to take the reins off in a way that people won't be excited about. Think Sydney from original GPT-4 but on the newest models.

F. Ichiro Gifford's avatar

This is reminding me how much Claude functions as a co-writer for a military fiction jet fighter novella I’m tinkering with.

“Hey Claude does this sentence track”

“Hey Claude fact check this action scene”

“Hey Claude did I foreshadow this effectively”

Writing with Ulysses on one screen and Claude on the other is a surprisingly powerful workflow

Jon's avatar

> I start most essays not knowing how they will end.

Can an LLM start its essays/answers "not knowing" how it ends?

Dan Wilson's avatar

“audience, motive, purpose” yes!

“not knowing how it will end” yes!!!

Austin Morrissey's avatar

Nobody wants to see a robot play sports. Instead, we lust for Claude to beat Pokémon

Ludwig Yeetgenstein's avatar

Thank you for point #7!! I’m so annoyed by the persistent critiques that are focused on the current state of affairs and are unable to imagine that alternatives are very possible and will likely come soon.

Houston Wood's avatar

"Where does the mind stop and the rest of the world begin?" This core question from Clark and Chalmers may be an excellent guide to help us try to figure out what we value about writing, and everything else, in the age of AI. Their "extended mind hypothesis" seems to me likely to be increasingly important as we navigate the turmoil ahead.

If you want to know more about this view of consciousness, here's a Claude Deep Dive: https://claude.ai/public/artifacts/b6184f5f-c79a-49b4-8a47-49e914bcc5f9

Annie Murphy Paul's wonderful book, The Extended Mind, is also a great help.

Houston Wood's avatar

And: How much of the world do language and mathematics contain?
