Last September, I moved to Boston with a plan. I’d buy a bike; I’d try to keep up with Anna on sunlit runs along the esplanade; I’d get a job slinging matcha; and, because it was new and engaged and a mode of sustained attention I liked, I would subscribe to a few magazines. After a drawn-out process of deliberation, I settled on The New Yorker, and, to compensate, the London Review of Books. Issues started arriving, and I started reading them, and then more issues arrived, and more, and I tried to keep up, and I failed, and before long the apartment at 81 Joy was stacked with to-be-reads.
Months later, after a few runs and no bike and lots and lots of matcha, I received a letter from Harper’s magazine. Somewhere in the dark alleyways of data tracking and sharing and selling, it seems I had been flagged as a potential subscriber to the Nation’s Oldest Monthly, in circulation since 1850. The letter was cordial and persuasive and just a little bit pleasingly ironic; it also included some facts pulled from something called “Harper’s Index,” like this fun statistic:
Number of college graduates currently working as astronomers, physicists, chemists, mathematicians, or web developers: 216,000.
As waiters and bartenders: 216,000.
Or this one:
Average number of times each week US surgeons operate on the wrong patient or body part: 40.
I read those facts, tried to square the second with my Pitt-based medical knowledge, re-read the letter, then checked the “yes” box at the bottom of the page to announce that I did, in fact, want to be sent one free issue of Harper’s. I dropped it in the mailbox on the corner of Columbus and Springfield the next time I took the orange line to Anna’s.
Then I forgot about it; and so, for now, should you.
It snowed a lot this February. For a while the weather went through an intermediate period of melting, which was characterized to me by a glass bottle held in a perfect sunlit fist of ice on Joy Street; then it snowed again. The repeat blizzards struck up a dismal mood among the baristas at Blank Street, who started whispering about something called “Blank Street 3.0,” an increasingly awful myth about the structure of the company’s newest stores. It’s said that 3.0 stores will not allow baristas to move from their assigned stations during work or, in some tellings, even to turn their heads; that duties (register, hot bar, cold bar, etc.) will rotate each hour, precisely on the hour; that, for some reason, baristas will have to pour the espresso shots into drinks in front of customers’ eyes; and that all this will be videotaped and monitored for compliance and productivity by AI.
There’s no way that any (well, most) of that is true, and if it is I will certainly be long gone before it comes to 122 Causeway. But it fit into a February through-line, one of those unavoidable monthly themes — unavoidable maybe because it is completely everywhere these days, in every corner and conversation and ad. I mean, of course, AI. The first weekend of February, Byron and I chatted about Claude’s “soul” and constitution (which I exclude from quotation marks because it’s, like, a literal constitution). The next weekend, I had to sign an AI rider on my contract as a production-assistant-slash-extra for Omri’s film. The following week, Anna and I bemoaned the influence of AI on the high schoolers she teaches and the students I tutor. And, recently, Jack and I jabbered back and forth good-humoredly about AI and copyright, while Jake, who had gotten us drunk for precisely this purpose, took notes for an assignment.
AI isn’t really a topic you can sum up, nor one it’s possible to be totally right about at this juncture, I think. I’m also inclined toward an AI-skeptical position that people with different exposure and expertise might validly disagree with. But after a month of thinking, here are a few things that, in February 2026, seem worthwhile to consider about AI:
One: AI makes intelligence a resource, like energy or materials. This is part of Anthropic CEO (and newfound liberal warrior, apparently) Dario Amodei’s point in a massive essay called “Machines of Loving Grace,” which Byron showed me and which, along with its darker twin “The Adolescence of Technology,” is worth a read if you have a spare six hours. AI, he argues, provides enough raw “intelligence” (i.e., logical, computational power) to puzzle through pretty much anything. Intelligence, then, will no longer be the bottleneck; solutions will be limited instead by other factors, like how much time it takes to run an experiment. The implications of this seem pretty big, to me, not least because “intelligence” is no longer a specifically human contribution; but then I wonder if that definition of “intelligence” was ever really the most important thing humans were bringing to the table.
Two: The real risk of AI is not job loss — it’s the impact on learning and abilities. AI will certainly change the job market (last fall, in the midst of writing a piece called “I Am Not AI” about my job writing study guides, I got replaced by AI), but its most concerning effect in the long term will be to make it unimportant for people to learn to think in the first place. To the high schoolers I tutor or Anna teaches, AI is a constant and basically unregulatable temptation to outsource learning. It means it’s easy for kids to avoid acquiring the hard skills of reading analytically, writing complexly, and thinking deeply. Above all, it means kids are no longer pushed in the same way to put in mental effort — and that’s bad for the future in pretty much every way.
Three: AI is really hard to regulate, for lots of reasons. One, it’s a stark collision between a private sector designed to move quickly and a government structured for slow and measured change. Two, the onus lies on, well, not the Congress one would hope for. Three, AI is a ridiculously powerful economic force, and it’s tough to get people who benefit from those trillions to support or agree on regulation. Four, AI takes society into spheres that have never had to be regulated before, and puzzling through the social, ethical, philosophical, practical, and political implications of that fact is just a really hard thing to do. At the same time, we have to regulate, both with policy and with economic incentives — because for every Dario Amodei there’s an Elon Musk, and in the interim AI just keeps getting more ingrained.
So, four: AI is a decision. It’s a decision made by a small group of people interested in “progress,” but actually interested, primarily, in money — the same people who gave us the collective “decision” of social media and other technologies that operate as vast experiments to the detriment of most. This time the decision is that we would rather have a society that’s frictionless than one that’s rigorous. We would rather be efficient than diligent. We would rather be productive but average than slow but exceptional. We want things to be easy, not hard. We prefer to be generic, not singular. AI is a decision; and it’s already been made.
On a gray day in late February when I was thinking bleakly about that fourth point, I checked the mailbox and was surprised by my one free issue of Harper’s. The cover featured a man sitting in a black-and-white, Beckett-style wasteland, his head replaced by a colorful bouquet of balloons. (A figure for dopamine? Fantasy? The creation of meaning amid a meaningless universe? Just a nice picture?) The article that caught my eye, called “Child’s Play,” described the adventures of columnist and notorious essay stylist Sam Kriss as he hung around in San Francisco talking with people who are considered “highly agentic.” As far as I can tell, that just means people who act without thinking very much. People like Roy Lee, a 21-year-old Columbia dropout who founded a poorly functioning AI-powered company called Cluely, which he designed to help people “cheat on everything.” (His net worth is around $20 million.) These people are willing to take risks because they don’t worry about them; they don’t consider implications, social cost, or really anything beyond financial payoff. They just do. The argument of the piece was that being “highly agentic” is seen as immensely profitable. People with lots of money in Silicon Valley are willing to bet big on the likes of Roy Lee — because the development of AI means anyone can do basically anything if they have enough initiative.
I thought that was alarming; but the more I considered it, the more I thought it was actually just… pretty stupid. And therefore reassuring. Here was my initial thought process:
Alarming: “Highly agentic” means acting without thinking. That means not weighing impact. Money offers the ability to put un-thought-through ideas into action fast. That means a world of companies like Cluely and an economy that reflects those values; it also means ignoring things like social good.
Stupid: “Highly agentic” means acting without thinking. That means not weighing anything. In other words, it means no judgement. AI can do a lot, but that only elevates the importance of making good decisions. (There’s a recent op-ed on this theme by a Harvard freshman in the Boston Globe, if you’re curious.) I mean that morally, but also just economically; even today, a smart decision will generally turn out better than a dumb one, regardless of how much money is involved.
Another article near the back of Harper’s helped me puzzle through these reactions with more nuance. It was a meditative piece about the painter Caravaggio, driven, like his paintings, by the particularly resonant light of the Mediterranean (which is one of those topics that unavoidably straddles the line between ridiculous and true). The author describes her visits to the beach upon which Caravaggio died, lingers in descriptions of pine branches turned to embers by a Roman afternoon, narrates her son’s new and unexpectedly fitting joie de vivre, and gives a remarkably gentle analysis of some of Caravaggio’s starkest works. It’s a good article, and one whose crux is a question of attention, of the attention people pay to life and to art, to beautiful things, the comfort to be found in beauty. The joy of a year in Italy, attributed at first to the light, is refigured in the end as due to the renewed sense of attention which comes with the privilege of time apart.
This was the article which tempted me to pay twelve dollars and give the Harper’s marketing team a total victory. It clarified the importance of judgement by spotlighting its relationship to deliberation, solace, beauty, patience — these particularly human attributes. It made me realize just how much I think the phrase “highly agentic” misunderstands agency. The most famous literary work about this topic is probably Hamlet, which is all about the titular (eponymous?) prince’s attempts to navigate the murky waters between thinking and acting while watching his murderous uncle, his rival Laertes, and Laertes’ sister Ophelia act decisively in one way or another. But closer readings reveal even Hamlet to really be a play about the complex tangle that makes up “agency” — about madness and mourning and remembrance, how they collide with social norms, how they disturb a person’s sense of self, how they destabilize the relationship between appearance and reality. Shakespeare seems more interested in the social and emotional factors that knock thought and action out of joint than in anything “highly agentic.” In other words, agency is a knot of so many things, of attention and identity and values and relationships and intellect and background and soul and constitution and just plain feeling, that to reduce it to “action” misses the point.
So maybe a human vision of agency means the “decision” of AI has not already been made. People still get to choose how they interact with AI: how and how much they use it, how they regulate it on an individual level, how they set social boundaries around it, how they talk about it, how they construct its cultural position, how they give it to their kids. They get to respond to how they feel about it. They bring themselves to the table. There are lots of systemic factors that make it hard to have faith in that; but I think it’s important to try.
Helpfully, when you pay attention, life is full of great little moments of that kind of agency. The end of February brought the launch of new spring-themed drinks at Blank Street, which, paired with great weather, meant the busiest day of the year; and though one person did say, verbatim, “I’m not a coffee drinker so I had to ask Chat what to order” before requesting a “big iced single-shot extra vanilla americano with skim milk” (a drink which no human barista in history has ever recommended), the range of interactions that day, the sun enlivening the street outside, the dogs lolling about the shop, the cherry glaze glistening across crowns of matcha, Regina’s pizza for lunch, the strollers and gossip and big puffy croissants, people just being happy and acting how they feel... well, it’s not so bad. The world is nice and vivid and material, and it turns out Boston has pretty great light. February’s over. It’s practically spring.