🌲 A Personal Take on Using LLMs
Navigating the ethics -- and effectiveness -- of new AI tools as a writer.
One of the most common questions I get asked lately is how I’m using AI and large language models to help with my writing and research. I haven’t said much about it because I’m not leveraging it as much as other folks I see, and I don’t think I’m doing anything particularly revolutionary. Still, I do use various AI tools, and y’all do seem curious, so here goes…
A while ago, I asked some folks in Discord what the best Obsidian plugin with a ChatGPT integration was, and I got the somewhat cheeky response: “the user.”
Frankly, this annoyed me. I understand that there are reasons to object to how LLMs & other contemporary AIs were trained — although the jury’s still out on the legality of it all — but let’s be real here: pretending that people are as efficient as LLMs at the tasks LLMs are good at is roughly equivalent to pointing out how the longbow was more accurate and powerful than a musket, or that trained tailors make the best suits. It’s trivially true, but ignores the extreme usefulness of guns and factories.
Everybody has their own use-cases for LLMs, but here are three of my favorites:
Whenever I’m not sure whether a source has the answers I need, AI can summarize it for me, so I can judge whether it’s worth the time to dig into it on my own. While I would never rely on a computerized summary enough to cite it — any more than I’d rely on a newspaper headline — this sort of thing is great for quick evaluations, or what I and others sometimes call triage.
To that end, Elicit is the best implementation of AI I’ve seen for doing this with scientific papers. All it does is let you ask a plain-language question, then arrange information from scholarly journal articles into a handy table that can include anything from “the year and location this was published” to “an LLM-generated answer to your question based on this article.” You can even select the search results that are most like what you had in mind, and it’ll go find more along those lines. It is so much more useful for answering questions like “but why can’t I spend more than 10 minutes in a hot tub while I’m pregnant?” than Boolean searches through resources like the JSTOR database, and unlike most AI tools, it does a great job of letting you check the source yourself if it seems relevant.
But Elicit isn’t the only way to leverage AI for summarization.
My favorite prompt is this one:
The following content is provided by a Twitter bot. You are a newsletter author trying to summarize interesting non-visual content for readers, with a focus on history, productivity, archaeology and anthropology. Please reorganize the Twitter content into a bullet point list of links worth investigating with a brief statement of why they might be useful.
That said, Readwise’s Ghostreader feature is great for quickly summarizing articles like those in an RSS feed. You can also hook up something like Raycast (Mac only, alas) to an OpenAI API key and summarize things that way.
For example, you can copy a lengthy email to your clipboard, and have it replace the email with a brief summary of its content in whatever format you like, ordered by whatever prioritization you like, so you can quickly figure out if there’s an emergency. With the fancier email software options, you can even leave this summary as a note for yourself for when you go back to it later.
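For the curious, here’s a minimal sketch of what that clipboard-summary trick looks like under the hood. This assumes an OpenAI API key in the `OPENAI_API_KEY` environment variable, the `openai` and `pyperclip` packages, and the `gpt-4o-mini` model; the prompt wording and function names are my own illustration, not anyone’s official implementation.

```python
# Sketch of "summarize whatever's on the clipboard" (assumptions: OpenAI key
# in OPENAI_API_KEY; openai + pyperclip installed; prompt is illustrative).
import os


def build_summary_prompt(text: str, max_bullets: int = 3) -> str:
    """Wrap copied text in a triage-style summarization request."""
    return (
        f"Summarize the following email in at most {max_bullets} bullet points, "
        "most urgent item first, so I can tell at a glance whether "
        "anything needs a reply today:\n\n" + text.strip()
    )


def summarize_clipboard() -> str:
    # Imported lazily so the prompt builder stays usable without the packages.
    import pyperclip            # reads/writes the system clipboard
    from openai import OpenAI   # official OpenAI client

    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "user", "content": build_summary_prompt(pyperclip.paste())}
        ],
    )
    summary = resp.choices[0].message.content
    pyperclip.copy(summary)     # replace the copied email with its summary
    return summary
```

Tools like Raycast essentially wrap this same loop (read clipboard, send to model, paste back) behind a hotkey.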
Technically, I suppose you could feed the LLM several articles, ask it to summarize them and rank them thru a particular lens (for example, “read these 3 articles, extract information about how Roman militaries handled recruitment over the course of the republic and empire periods, then rank them according to how relevant they are to that question”), but I’ve never done this. If I did, I could theoretically use it as the basis of an article. A very rough basis, but not a terribly unethical one while AI still sucks at writing prose, because…
Whenever I see a really bad AI suggestion, it motivates me to keep writing. I can't stand seeing something wrong and not doing anything about it. I'm hardly unique in this phenomenon; one of the most obnoxious (but true) pieces of advice I've ever seen is that if you want to get a question answered on the internet, and merely asking doesn't get a response, say something on the topic that you know is wrong -- then people will leap to correct you, and you'll get an answer to your original question far more easily.
I've never quite been able to bring myself to do this — the reputational damage seems like more trouble than an answer is worth, and I'm certainly not going to go through the trouble of making a sockpuppet account in order to get an answer... but it's always on my mind.
To be clear, I don't think this happens because people are unhelpful or anything. Often people won't answer your questions because they feel like they are missing information. But if they see that you've said something very wrong, and they know it's wrong, then they feel motivated to point out that you are wrong.
So, I think part of the reason people get so irritated by AI suggestions is because they feel like the computer is telling them they're doing something wrong. And in some ways, I think that's true. If you're using a writing tool and the AI keeps suggesting changes, it can feel like the computer is saying your writing is bad. Being constantly nagged about low-stakes things is annoying whether it's a computer doing it, or a spouse, or a child.
I can imagine that it feels a bit like when you're talking and somebody interrupts you to finish the sentence. Some people like that, because it makes them feel like the other person is engaged and listening and paying attention and on the same wavelength as they are. Other people hate it because it throws off their train of thought and feels a little bit insulting — like the other person didn't want to hear what you had to say. So I imagine that preferences with these tools are, in some way, like conversation preferences.
For me, though, it's useful. Motivating, in the same way that some highly successful celebrities report that a big reason they kept pushing past the point of reason was a desire to spite someone who told them they wouldn't make it in their field. It's not really healthy for relationships, but computers don't care if we like them, so they just keep blindly pushing until we turn off the annoying feature.
So I think the AI suggestions and drafts can be helpful, even if (because!) they're irritating; they help me to focus and to stay on task, if only because fixing each little problem is an easier task than starting from scratch, even if — as is almost always the case — what I wind up with bears absolutely no resemblance to the original suggestion my computer shoved at me.
Overall, tho, using AI for actual longform writing is pretty terrible, with two exceptions:
Writing stories for children does not take a lot of originality in terms of prose-writing or insight. ChatGPT is godawful at writing good prose. As a mom, I am here to tell you that so are most children's book authors. The number of unforced errors in board books is just egregious -- a cow is not a "he"! -- and some of the rhymes are terrible, even before you start getting into things like whether kids' books have been ideologically captured by various culture-war factions, much less whether they fit your personal criteria of what is acceptable in a kid's book. Plus, finding a kid's book that is perfectly aligned with your particular kid's obscure interest of the day is tricky, even with library access, because you can't get one on short notice. I personally find the proliferation of talking animals and pure fantasy in books aimed at three-year-olds annoying, for example.
Happily, ChatGPT and DALL-E exist, and I'm also here to tell you together they do a remarkably good job of writing tolerable short stories aimed at children. You can even print them out with generated images for the kids to color themselves if you're feeling crafty, although realistically I just read from my phone.
I've used prompts like:
“Write a version of the oh my darling Clementine song where she is not gone forever and is rescued from the mine…” because my son had an existential crisis when he heard me idly singing the original version. “It’s ok, honey, her daddy rescued her, she wasn't really gone forever, he was just scared…” required some quickly amended lyrics, STAT, and ChatGPT came thru for me:
Then the miner, forty-niner,
Soon began to seek and find,
Thought he oughter find his daughter,
Now he's with his Clementine.

Coming home he still don't want me,
She's robed in garments soaked in brine,
Though in fear I used to hate him,
Now she's here, I'll call him kind

How I missed her, how I missed her,
How I missed my Clementine,
Till I kissed her in front of mister
Father of my Clementine.
Amazing prose? Nah. Would I ever submit it to a literary magazine? Lol. But that doesn’t mean it wasn’t handy to have in my metaphorical back pocket, and stuff like that is the moment where LLMs really shine for me in my life. My son wants to hear the story of Alice in Wonderland but I can’t remember it exactly and don’t want to read the entire book just then?
“Hey ChatGPT, write an Irish folk song in the style of the Irish Rover with a multitude of verses that use the same musical meter, but retells the story of Alice in Wonderland,” which gets you a surprisingly well-metered poem for whatever tale you care to name!
In the year of our Lord, eighteen sixty-three,
Alice, a lass, so curious and free,
Chased a rabbit, in waistcoat so fine,
Tumbling headfirst into a land divine.

(Chorus)
Oh, Alice, dear Alice, in Wonderland so grand,
A magical journey, in a strange and myst'ry land,
With creatures so peculiar, and wonders untold,
She braved through it all, with a heart brave and bold.

She met with a cat, with a grin so wide,
A Cheshire of sorts, his secrets inside,
He spoke in a riddle, and then disappeared,
But Alice pressed on, not a bit of her feared.

[…]
It’s at least as good as the board book version I eventually bought…
But of course, there are longer prompts one can use when your kid comes home wanting a story of a particular type and you just don’t have time to get to the library:
“You are the mother of a 3-year-old boy who loves construction trucks. You are also a professional children's author who writes in the style of Mr. Rogers and L. Frank Baum. You follow the Montessori philosophy of focusing on real things instead of fantasy at this stage. Tell a story about a bulldozer known as Benny being used by a man named Dan who is building a playground in Boston. Include dialog, show don't tell, be specific. The plot should follow the plot mountain style with an obstacle, a climax, and a resolution of conflict. Include scientific facts about digging, grading the earth, and threats to construction sites from weather. Integrate these organically into the story.”
“Using words appropriate for a 3-year-old boy, write a 500 word story about a plumber who goes to a preschool and helps fix a water main break. Be detailed, specific, and make sure to provide step-by-step instructions and explanations suitable for a kid interested in STEM and engineering. Use a style appropriate for a Montessori school, i.e. everything should be realistic, with no mention of talking animals or fantasy creatures.”
That said, the handiest prompts I have are ones that reformat original content, for example:
You are the secretary of a journalist. You are not original or creative and you are worried about getting fired for inserting your own thoughts into the text. Reword the following audio transcript in a way that losslessly turns the information relevant to questions of (whatever my core thesis is) into an article about (whatever my core thesis is).
Restructure this article into a twitter thread.
Condense this article into 15 ideas for a title that matches the style of (four examples of article titles I’ve used in the past).
You are the secretary of a journalist. I am going to provide you with a bullet point list of links and mini reviews of their content. Please reformat it into an article where the links are attached to relevant sections of the review and the review is updated into proper spelling and grammar. Do this without being a numbered list, get rid of the stupid headings, and put all of the information into a bullet point list where the ONLY change is proper spelling and grammar. Do not editorialize, losslessly update my original draft to have proper capitalization and punctuation. instead of linking to "link" embed the URL in the actual sentence, like so: I am a [visual inspiration](link.com) but I am amazing as well.
For me, the important thing with using LLMs is knowing what they’re good at, and what they’re bad at. They hallucinate a ton sometimes, and ChatGPT doesn’t like to do more than 500 words at a stretch, but with things like highly targeted stories for children, generating decent rhymes, putting something onto the page so you can fix it later, mindlessly reformatting something, or getting a quick summary of something to help decide whether it’s worth diving into, they’re great…
…the same way a dishwasher is great. It can’t do everything — do not put your coffee maker into a dishwasher! It’s not as good as doing dishes by hand — it definitely can’t handle whole globs of chicken dip! But it’s a great tool nonetheless, and I’m glad it exists and I’ve got access to one, even if sometimes it’s annoying to clean the filter or replace a busted button or re-learn what the latest context windows are.
How about y’all? Are you getting anything out of LLMs these days?