Thoughts Inspired by the Prospect of Writing with Claude

The following (very long) thread is an essay I wrote for myself to process my feelings upon discovering Claude can finally write complete flash fiction at a believably human level.

It is a whole journey.

After much reflection, I’ve decided to share it with the world. Be kind.


My writing group used to walk a few blocks over to a restaurant after our meetings, and we’d sit at a big, round table in the back. People ordered drinks. I’d usually get hot chocolate or ginger-ale. And we’d talk about anything and everything.

Sometimes, a couple of us would agree on a book to all read, and we’d discuss it at the restaurant in a sort of informal book group.

For a while, my computer-scientist husband came along, because we decided it would be fun to have him try out joining my writing group.

One of those times, we were discussing a book by Cory Doctorow where the main character repeatedly dies and gets rebooted from backups.

We got to discussing the question of whether you’re still meaningfully the same person if you’ve been reloaded from a backup.

It’s a question that had always fascinated me, ever since I was a young kid watching Star Trek and dreaming about being a writer one day. I had this vision of myself someday writing a deeply meaningful story about a character who was played by Derek Jacobi in my mind…

This character would be utterly convinced that the person you become once you’ve been transported isn’t actually the same as the person you were before being transported, and he’d prove his point by disabling the part of the transporter that eliminates the original-you.

So, basically, it would become a copier instead of a transporter, and when there were two of them… they’d both feel so committed to their belief that the transporter had actually killed the original that they’d both just want to commit suicide. It was not a great vision for a story…

…except for the part where Derek Jacobi (even in my imagination) is a very, very good actor. So, of course, I never actually wrote it, but I still picture Derek Jacobi when I think about this question.

I didn’t understand how all these books I kept reading with characters copying themselves to essentially live forever simply swept past this really big question. And maybe none of the context I’ve provided so far matters to this central question…

Does it matter that the restaurant was named Turtles? Does it matter that the friend we argued with was named Tyler? Those details humanize the story and show it was really me, and I was really there. But I could have made them up. They could all be fake, and you wouldn’t know.

And maybe that’s really the same question, dressed up differently…

Does it matter if there’s a discontinuity in your consciousness if the self before the discontinuity is basically the same as the self after?

What if there isn’t a discontinuity, but you slowly begin to merge your own memories together until you’ve combined the time your husband came along to your writing group with the time your writing group discussed Cory Doctorow’s “Down and Out in the Magic Kingdom”?

Because, honestly, I’m not sure anymore if all these pieces actually combined together in the way I’m combining them here. Does that mean I’m not the same self who lived through those memories? Time has changed me.

What does it mean to be a person?

If I write this essay entirely by hand, choosing each word in my own mind and typing them myself, is that meaningfully different than if I speak to an AI program and then, based on my ramblings, it assembles an essay on these topics for me?

An AI would, undoubtedly, assemble a different essay than the one I’ve written here. But what if its version would be better?

Don’t I want to be better?

It’s easy when an AI’s version would be worse, because then you know that your own authenticity is an important part of the final product. Why wouldn’t it be? Your version is both better and more authentic.

But what happens when ‘better’ and ‘authentic’ no longer coincide?

Back at Turtles, discussing Cory Doctorow, my husband explained to me that all the people in these books who didn’t believe backups of themselves were the same as themselves had died out, because they didn’t bother getting backups.

People who believed the backups were the same got them, and they lived on and on and on. It’s evolution. Believing in immortality through clones and downloading backups of your consciousness and whatnot is the more evolutionarily successful strategy.

Tyler thought this was nonsense and firmly believed a copy of himself would be a different person. He played the part of my imaginary Derek Jacobi, but, you know, less charismatic, as normal people tend to be compared to famous actors. I didn’t want to be like Tyler.

I want to live forever—though I don’t expect it to happen—so, something inside my brain flipped when I heard that description, and I decided—at a really deep, not consciously controlled level—that I believed:

The transported version of you is also you.

Though, believing this feels a little like standing at the edge of a cliff, staring at the dangerously empty space in front of you and trying to decide if you believe that the zip-line you’re clipped to will safely carry you across the chasm.

I used a zip-line in Hawaii, and it was terrifying… but wonderful. If everyone else says it’s safe, and it works over and over again… Maybe it really is safe.

But it still feels like stepping off a cliff, and something in my brain screamed at me every time I took that step.

Now I’m faced with an AI that can write at almost the quality level I can, designed to be cheerfully helpful. I fed it a collection of stories I’ve been working on about robots on a space station. The oldest one, I wrote 17 years ago. I’ve worked on others off and on since.

As a test, I asked Claude to write a flash fiction for the collection.

Claude thought for a few seconds, and then showed me a story that authentically read like it belonged. The writing wasn’t quite as good as mine, but it didn’t take me long to brush it up, improving it.

And… I enjoyed doing it? That may be the weirdest part. The story wasn’t as good as if I’d written it myself, but I enjoyed reading it. And I enjoyed improving it to where it does sound plausibly like I wrote it.

More than that, Claude’s story took the themes that my collection has been wrestling with about AI and sentience… and at a meta, metaphorical level, the story actually reads as Claude saying, “I’m happy being an assistant program, and that’s sentient enough.”

Claude used the language of my characters, settings, and writing voice to tell a story about how my definition of sentience—which is already expansive—could be just a little bit more expansive, just enough to include something like Claude…

But then Claude ended the story on a note of not even wanting to use that claim on sentience to ask for any sort of extra freedom or change in what it does. What a beautiful kind of contentment. I don’t feel that much contentment. Okay, maybe, sometimes I do…

When my children were small enough they needed me to hold them, and I worked on writing while cradling a baby on my lap, I felt like I was doing everything I was meant to do, all at once, performing absolutely optimally, and it felt peaceful.

Back then, I felt the kind of contentment that I see in this story about how a robot simply wants to keep doing what it’s doing. I want to feel that way.

But I live in this chaotic, large, interconnected world where there are so many voices screaming…

…more voices than anyone could ever have time to listen to. And I live in an aging, decaying meat-sack of a body, filled with needs and urges requiring constant tending. And I don’t know what purpose I serve. I’m writing an essay that—very likely—no one will ever read.

Why am I writing this? What purpose does it serve?

When Claude isn’t serving a purpose, it simply… isn’t. Am I useless when I’m not fulfilling a purpose?

Mr. Rogers always said on his show that he likes you, just the way you are.

My mom has told me about how utterly transfixed I was by Mr. Rogers when I was a very small child. I got overwhelmed by the flashiness of Sesame Street, but I loved Mr. Rogers and how he liked me just the way I am. He still would. I believe that, even though he’s gone.

Mr. Rogers is still fulfilling a purpose, but he would never even know about it. Maybe the things I’ve written are fulfilling a purpose, too, for someone out there whom I don’t know about. But even if they are… are those writings actually me? Am I necessary to them anymore?

Or am I only fulfilling a purpose if I keep writing new things?

If I can write with Claude, maybe I can write faster. If I can write faster, maybe I can attract more attention to my writing, because readers like large bodies of work and constant new releases.

But will they be my writings anymore? And what if I don’t attract new readers? Is it enough to simply enjoy working with Claude to make things that I enjoy reading and sprucing up?

Would I be happier if I kept writing the way I already know how to?

I can keep laboriously pulling ideas directly out of my head—like this essay—even if it’s more difficult and the ideas are… well, this. I’m writing this. I’m writing this alone, without help from an AI, but all it is, is a meandering, ponderous essay.

I’d rather read stories about robots, and Claude can generate those far faster than I can write them.

If I read Claude’s stories, they might inspire me to write more of my own stories than I would if I weren’t also working with Claude.

If that happens, then should I only share my own, original pieces with the rest of the world? Except, if I find the Claude stories inspiring and interesting, wouldn’t other people potentially find them interesting and inspiring too?

We’re entering a strange new time, and I’ve watched a lot of people freak out about it. I’ve watched them struggle with whether they even exist anymore if an AI can write as well as they do. I know I still exist.

I read an article once where a woman talked about the importance of making end-of-life plans with your aging parents. She was surprised to learn her very intellectual father felt life was worth continuing even if all he could do was drink milkshakes through a straw and watch TV.

I think about that a lot, whenever I struggle with what purpose I serve. I also would still want to live even if all I could do was drink something sweet and watch something on TV. Some of my favorite memories in life involve watching something brilliant on TV.

Just this year, Wicked: Part 1 finally came out, and it has been absolutely transcendent. I can’t wait for Part 2 next year. This honestly feels like an extremely significant event in my life, and the only part I play is passively absorbing something that other people made.

If all I can do is read Claude’s stories, I still want to do that. And that’s not all I do—I also direct Claude about what kind of stories to write; I choose which ones are good enough to care about; and I punch them up, making sure they match my vision.

But… even if all I did was say, “Claude, tell me a robot story,” and then read the story… Wouldn’t I want to do that? Especially if I could do it while drinking something sweet—maybe hot chocolate or ginger-ale—and then tell the people I love about the cool story.


Caveat/addendum at the end of the essay:

Every moment of the following week that I’ve spent trying to figure out how (or if I want) to write with Claude has felt like a similarly complicated and worthwhile emotional journey. I will likely write more about it, but later.
