What We Create Matters More Than How
A librarian recently asked me a question that perfectly captures where we are right now: "How can we make sure we're not buying books that were written by AI?"
I think my response surprised her: "If the content of the book is actually valuable, do you care?"
Her question reflects a common belief: that the process of creation determines the value of what's created. But that's not how we actually experience most things anymore. I'd like to suggest it might be time we acknowledged that shift.
The Photography Standard
In a previous post, I talked about how automatic and digital photography democratized visual storytelling. Photography once depended on the photographer's technical mastery of exposure and darkroom skills. But today we judge photographs by the output. Most people don't care whether a stunning photograph was taken with a film camera, a digital SLR, or an iPhone. We don't care about the f-stop settings or whether the photographer developed their own negatives. We look at the image itself and ask: Does this move me? Does this communicate something meaningful?
That's the standard we've now adopted for photography. And I think it's likely to become the standard we apply to written content, creative work, and problem-solving outputs in an AI-enabled world.
The Great Conflation
Here's an uncomfortable question: have we been conflating two separate skills? We've long treated writing ability as a proxy for thinking ability. The link is so ingrained in how we define intelligence and education that separating the two feels heretical.
If someone struggles to organize ideas into clear prose, we assume they're not a clear thinker. If someone can craft elegant sentences, we assume they have elegant ideas.
But what if that's only part of the story?
Some brilliant thinkers find writing torturous. Some skilled writers don't actually communicate anything profound.
There's a famous passage in Plato's Phaedrus where Socrates worries that writing itself will be "the death of thinking"—that it will make people rely on external marks rather than internal memory and understanding. That's not wrong. We did trade some cognitive capabilities for others when we adopted writing as our primary knowledge technology (when was the last time you recited an epic poem from memory?).
What if we're going through a similar shift with AI? We might ask: what becomes possible if we separate the skill of thinking from the mechanics of writing?
The Practical Reality
Here's why this isn't just philosophical musing: roughly half of the content on the internet is now reported to be AI-generated. We're past the point where we can pretend this is a niche issue we can screen out or work around.
Libraries cannot realistically avoid half of all published content based on creation method. Schools can't fail half their students for using AI assistance. Employers can't reject half of all applications. Publishers can't dismiss half of all submissions.
We're being forced to evolve our evaluation systems whether we're ready or not. The old gatekeeping methods simply don't scale in a world where AI collaboration is everywhere.
And let's be honest: human authors aren't perfect either. Books written entirely by humans often contain errors, weak arguments, and unclear prose. The creation method doesn't guarantee quality in either direction.
What We Evaluate
If we can't judge content by how it was made, what should we judge it by?
The same things that actually matter:
- Accuracy: Is the information correct and well-sourced?
- Usefulness: Does this solve a problem or answer a question?
- Clarity: Is it well-organized and understandable?
- Impact: Does this help someone, teach something valuable, or move a conversation forward?
- Insight: Does this offer a fresh perspective or make meaningful connections?
These are outcome-based criteria. They measure what the work accomplishes, not how much the creator worked (suffered) to produce it.
This is a fundamental shift from effort-based to outcome-based value. We're moving away from "this must be good because it was hard to make" toward "this is good because it works, because it helps, because it matters."
Output Shaping: The New Essential Skill
Just as "vibe coding" has entered our vocabulary to describe an intuitive, flow-state approach to programming (meaning, we come up with an idea for a program and AI does the heavy lifting), we need a term for the parallel skill with AI-generated content: output shaping.
How different is vibe coding from having an idea and hiring a programmer? How different is output shaping from hiring a professional writer?
Output shaping is the art of directing and refining AI-generated work to match your vision and intent. It's not about passively accepting whatever the AI produces first; it's about actively steering a collaboration until you get (and improve on) what you envisioned.
Someone skilled at output shaping would:
- Articulate what they want clearly enough to guide the AI
- Recognize when the output is close but not quite right
- Iterate and refine through multiple rounds
- Maintain their own voice and vision throughout the process
In our digital photography world, there is still skill in producing a good photograph. But it's much easier now, and there is arguably far more good output as a result, by orders of magnitude.
Output shaping becomes the dividing line between effective AI collaboration and passive use. The skill that matters now isn't whether you can manually craft each sentence, but whether you can shape the output to accomplish what you intended.
Again, I know this feels heretical. But I think it's inevitable. A new tool changes how things get done. I drive a car without knowing how to build one. Is this any different?
Find Your Problem
Claude's "Keep Thinking" campaign captures this beautifully. The ad I keep seeing opens darkly: "There's never been a worse time," with the word "problem" flashing across the screen. Then it pivots: "There's never been a better time to have a problem."
That reframing is really good. We're surrounded by challenges, yes—but we're also equipped with unprecedented tools to tackle them. The campaign positions Claude not as a shortcut or a replacement, but as a tool for people who "see AI not as a shortcut, but as a thinking partner to take on their most meaningful challenges."
AI is here to stay, so we can't see it only as a problem (though, like every new technology, it comes with tradeoffs); it's a remarkable tool that can help us address the problems we really care about.
So, that's an invitation: find a problem you care about. Bring your insight, your passion, your unique perspective. Then use AI to help you shape that vision into a solution.
Find your problem. Find a problem worth solving.
The Real Question
When that librarian asked me how to avoid AI-written books, it was a reasonable, understandable question. But I think we'll ultimately conclude it's the wrong one.
Aren't these the real questions?
- Does this book help someone?
- Does it solve a problem or answer important questions?
- Is the information accurate and well-presented?
- Will readers be better off for having read it?
We've accepted comparable standards for photography. We judge the image, not the process. It may be time to apply that same lens (smile) to AI. The question isn't whether you use AI to shape your output, it's whether your output shapes something meaningful in the world.