Several years of weekly Personal Science posts have me thinking of ways to apply what I’ve learned to this newsletter itself.
Meanwhile, AI-generated content has become pretty good. How should Personal Science Week change to reflect these new tools?
A friend and long-time business colleague recently started a food and nutrition Substack, and it’s excellent! As a non-native English speaker, she normally struggles to write long prose like this; you can tell from her accent and her emails, which show the usual non-native quirks of English grammar and diction. Yet her new posts flow perfectly. Did she hire a ghostwriter? A careful editor?
She has ChatGPT. She drafts her ideas, references, and images, and ChatGPT transforms these rough notes into a coherent draft. She tweaks it, gets more suggestions, and after a few iterations, her post is ready.
And I think: why don’t I do that? Personal Science could have 10x the output and suffer little (no?) noticeable loss of quality.
But look closer and you’ll see a subtle flaw in AI-generated content: it’s too perfect. I don’t mean it’s not good. Her writing includes solid details and references, and the organization is great, with a compelling introduction and a logical conclusion.
It lacks authenticity. I bet as AI-generated content increases in quantity and acceptability, we’ll find ourselves missing the human voice. Her post resembles a Wikipedia article: factual and concise but without soul. She comes across as a professional doing her job, and doing it well. No nonsense, no fluff. Good, useful information, well-packaged for maximal utility and effect.
New Rules for Content
In other words, as we integrate AI into our lives, maybe we’ll learn to prefer LLM-generated content when we’re looking for factual and practical information. When you need it to be correct and accurate, when you need it to be readable, go for the AI-generated stuff.
But when you need it to be human … you’ll want it written by a human.
Nobody wants an AI-generated thank-you note. The words in a Mother’s Day card are meaningless if they aren’t heartfelt.
Why We Read Human Authors
We humans are mimetic creatures; we can’t help but watch and imitate others. We’re social beings, relying on what Heidegger calls Das Man, “the They,” to help us navigate our lives, to make sense of an otherwise messy and incomprehensible world. We need that connection, often finding it in reading and observing others.
People read the New York Times not just for information but to understand what the They are thinking about. Even if I disagree with the content, staying connected to that collective voice is essential for communication and social coherence.
LLM-generated content essentially channels “the They.” It mirrors the collective consensus without any individual perspective, reducing content to a reflection of the majority view, with nothing personal to imitate.
Human content will be more compelling than LLM-generated content only to the extent that it enables mimesis, our inherent tendency to imitate others. This means moving beyond merely providing information or polished summaries. There’s no point in competing with AI on its terms; it excels at delivering facts and organizing ideas efficiently. Instead, focus on creating content that taps into our desire to connect with, and imitate, other people.
Emphasizing the Human Element
To stand out in a sea of excellent, LLM-generated content, human authors will have to include the kind of real, authentic asides that prove we’re human. Prioritize personal insights, unique perspectives, storytelling, and emotional resonance—elements that reflect genuine human experience and can’t be authentically replicated by AI.
If Personal Science is only good for practical tips or up-to-date news, then, hate to break it to you, AI is better. Have you seen Daily.AI or FutureLoop from Peter Diamandis? There’s a whole business built on high-quality, targeted, auto-generated content, and it’s pretty good.
Ultimately, human-generated content and LLM-generated content serve different but complementary roles. While AI content helps us align with broader societal contexts and understand the collective voice, human content draws us in with its mimetic appeal and perceived authenticity. The future of content creation lies in understanding and leveraging these distinct strengths—using human touch for connection and AI for coordination.
This week, I’ll summarize two other books of interest, plus my usual collection of ideas too dangerous for wide circulation.