Why Writing Felt Like Torture Until AI
On the linearization tax, experimental minds, and the reps that remain
A reviewer once wrote back that I probably meant “piqued” — but allowed that it might have “peaked my interest” too. He wasn’t wrong on either count.
That note sat with me. Not because of the typo — I knew the word — but because of what it revealed. Here was someone who had spent years being paid to think, and he couldn’t reliably get ideas onto a page without them coming out wrong. Words slithered out in incomprehensible ways. Writing was torture.
But data analysis? That was different. I would design an experiment, collect data, run the analysis — and all of a sudden I had an answer. I could see how small changes affected behavior. Those changes told me something about how the mind worked. I would design another experiment and go again. That was the thrill of a lifetime. That was why I got into research.
Then AI appeared. My torture ended. Writing became just like analyzing data.
It took me a while to understand why.
The Experimental Mind
Economist David Galenson distinguished between two types of creative minds. Malcolm Gladwell’s 2008 New Yorker essay “Late Bloomers” brought the framework to a wider audience.
Conceptual innovators start with a clear vision and execute it. Picasso said: “To search means nothing in painting. To find is the thing.” He knew what he wanted and painted it, producing masterpieces in his twenties.
Experimental innovators work the opposite way. Cézanne made his art dealer sit for one hundred and fifteen sessions before abandoning the portrait as a failure. His mind worked by discovering the image in the process of making it.
I recognized myself immediately. What I hadn’t seen before was that I’d been living this pattern long before I became a researcher.
At 20, I moved to Brazil and learned Portuguese. At 22, I transitioned back to English and Spanish. German came a decade later. Each language wasn’t just vocabulary. It was a different way of chunking reality, a different architecture for thought.
I wasn’t specializing. I was stretching, repeatedly, into unfamiliar systems, learning each one through immersion and trial and error. Repetition with reach.
Streps — stretching plus reps — before I had a word for it.
That’s also what data analysis is. You design an experiment. You probe a system. You observe what emerges. You refine and go again. You don’t know the answer before you start. You discover it through iteration.
The experimental innovator isn’t a type I belong to. It’s what I’ve been doing since I was 20 years old.
The Linearization Tax
Here’s the problem. All of that stretching across languages, across domains, across experimental designs built a mind that works associatively, in networks, across multiple systems simultaneously. I see patterns across neuroscience and multilingualism and sports and education at the same time.
Writing demanded something different. Writing demanded that I force those networked, multidimensional thoughts into the sequential march of sentence after sentence after sentence. It demanded linearization, and it demanded it before I could even discover what I meant.
That’s the tax. In a fifteen-year period, I wrote more than fifty grant proposals to get four of them funded. Every rejected proposal represented hours, sometimes weeks, of wrestling my networked thinking into the sequential, bureaucratic prose that grant reviewers demand. Hours that weren’t spent on the actual experimental work I needed funding to do. The linearization bottleneck wasn’t just slowing down my creative output. It was actively impeding my scientific progress.
The multilingual dimension made it worse. I wasn’t just translating ideas into words. I was negotiating between language systems, each pulling in a different direction. No wonder words slithered out wrong. I was forcing a multidimensional semantic space through a one-dimensional channel.
Galenson would recognize this as the double tax experimental innovators pay: the inherent cost of discovery, compounded by a cognitive mismatch between associative minds and the linear demands of prose. For someone like me, that combination made writing feel like torture even when the ideas were genuinely good.
What AI Actually Does
AI leaves you exactly as you are, an experimental innovator, discovering through iteration and trial and error.
It eliminates the linearization tax while preserving the experimental discovery process.
AI doesn’t generate the insight. It externalizes what years of work already built.
I remember the first time it shifted. A little over a year ago, working with ChatGPT on an early draft, I realized I wasn’t grappling with each word anymore, wrestling it into place and then wondering if it worked. Instead I was weaving and reweaving. ChatGPT would respond to my thinking in ways that made me think differently. I would try a version, get feedback, try another. Writing stopped being pure output. It became output with input, which changed the output, which changed the input, on and on. It felt interactive in a way writing never had before.
It felt exactly like analyzing data.
The process feels like heaven because it finally matches how my mind actually works. Decades of stretching across languages and domains built a mind that thinks in networks. AI is the first writing tool that doesn’t punish me for that.
The Reps Remain
But one thing hasn’t changed.
I still need the one hundred and fifteen sessions. The extensive reading, the cross-domain synthesis, the years of experiments that gave me something to say. The Portuguese at 20, the German after that, the fifty grant proposals that taught me how institutions think even when they rejected how I thought.
As Cristina Lozano put it: you have to have done the work first.
AI removes the linearization tax. The reps remain.
But they were never just reps. They were streps, stretching and repetition, across languages, across domains, across decades. The discomfort was the mechanism. The reach was the point.
That part is still ours. It always was.