Why Love-Work Is Different Than Hate-Work

A great read, and not just because of how deeply I felt the distinction between love-work and hate-work. I also really enjoyed how Horton described research eras in psychology. All fields have eras like this, and knowing them helps make sense of how we arrived at our current way of thinking. (And that we’ll someday leave it behind!)

You must know, on some level, that doing work you love is psychologically different from doing work you hate. They don’t just feel different, emotionally. The two forms of work have different psychological textures. They involve different actions. They have a different cadence, different aims, different outcomes, and draw upon different wells of energy.

Why Your Brain Fights You - by James Horton, PhD.

AI verifiability, but compared to what?

Much of the advice around using AI is that if you use it, you need to verify what it produces. This is good advice for now. But I'm doubtful it will be good advice in the long run.

Consider how little verification leaders in large institutions actually do when making decisions. Of course, many bad decisions get made this way, but also many good ones. The difference is in the quality of the work put before the decision-makers. Eric Drexler explains it well in this recent article (emphasis mine):

Consider how institutions tackle ambitious undertakings. Planning teams generate alternatives; decision-makers compare and choose; operational units execute bounded tasks with defined scopes and budgets; monitoring surfaces problems; plans revise based on results. No single person understands everything, and no unified agent controls the whole, yet human-built spacecraft reach the Moon.

AI fits naturally. Generating plans is a task for competing generative models—multiple systems proposing alternatives, competing to develop better options and sharper critiques. Choosing among plans is a task for humans advised by AI systems that identify problems and clarify trade-offs. Execution decomposes into bounded tasks performed by specialized systems with defined authority and resources. Assessment provides feedback for revising both means and ends. And in every role, AI behaviors can be more stable, transparent, bounded, and steerable than those of humans, with their personal agendas and ambitions. More trust is justified, yet less is required.

Framework for a Hypercapable World | Eric Drexler

Professors Are Conservative, Actually

Politically, academics are much more liberal than the average person. But Paul Bloom makes the excellent point that, in areas related to their work, academics are actually deeply conservative.

Asking a prof about AI is like asking a taxi driver to weigh in on Uber. I think I have good reasons for my (conservative) defense of tenure, but you’d be forgiven for assuming that, having worked for and benefited from the protections of tenure, I don’t want them taken away. Part of professors’ unwillingness to give up on lectures is that they take a long time to prepare—once that time is invested, we don’t want to start anew. We certainly don’t want to transform the university in a way that risks making us obsolete.

I feel this deep in my bones. It’s so hard to get universities to change, and professors are the primary reason why. AI—as I’ve written before—is coming for us in a way that most of my colleagues are not at all prepared to face. But they will have to face it in the end.

Why are so many professors conservative? - by Paul Bloom

Joshua Gans on Vibe Researching

From an economist, reflecting on his extensive experiments with AI-driven academic research. I don’t have much of a research background, but his experiences land about where I’d expect.

My point is that the experiment — can we do research at high speed without much human input — was a failure. And it wasn’t just a failure because LLMs aren’t yet good enough. I think that even if LLMs improve greatly, the human taste or judgment in research is still incredibly important, and I saw nothing over the course of the year to suggest that LLMs were able to encroach on that advantage. They could be of great help and certainly make research a ton more fun, but there is something in the judgment that comes from research experience, the judgment of my peers and the importance of letting research gestate that seems more immutable to me than ever.

The creativity and nuanced judgment about what constitutes good research are still missing in AI. What 2026 will bring is less certain, in my opinion.

Reflections on Vibe Researching | Joshua Gans' Newsletter

A Rare-Blood Donor Saved Millions of Lives

Australia's most prolific blood and plasma donor, James Harrison, has died at age 88. Known as the "Man with the Golden Arm," Harrison is credited with saving the lives of 2.4 million babies over the course of more than half a century.

Harrison died in February of last year. Of course, many, many people played a critical role in all the good that he did (nurses, doctors, researchers, phlebotomists), but Harrison also did his part and showed up, time after time.

Is there a better illustration of what it takes to make such an impact? Whatever we do, we have to keep showing up.

(I also posted this over at my other site, How to Help. If you don't know it, check it out.)

Blood donor James Harrison, who saved 2 million babies, has died | NPR

Some Provo Street Photography

Since my focus has been learning landscape photography, I've never really done any street shooting. Thanks to a kind invitation, I had a chance to head downtown and try my hand at it. (Thanks Daren, Jason, and Justin!)

A bit of a grey day, and I need to get more comfortable taking photos of people. I mean, these look like a landscape photographer got lost downtown. 😂 But here are my favorite shots from today.

My Ten Favorite Photos of the Year

This is the year I made landscape photography an official hobby rather than just a thing I enjoyed doing with my iPhone. All of these photos were shot on a Fujifilm X-T5. These aren't in any particular order. It was hard enough just choosing ten!

1. Bryce Canyon

This is from spring break with the family last April. It was sunny and warm the day before, then an overnight snow blanketed the park. This was the first photo I took where I looked at it later and thought, “Holy cow. I took that?!?” I’ve since learned that I do that a lot, with my favorites being more accidental than deliberate. 😂

2. Flaming Gorge at Sunrise

On a campout with the young men in our congregation. I was up early in my tent and realized that I’d rather be out with the sunrise than lying in my sleeping bag failing to get more sleep. This photo is, I think, the best composition I made, even if I didn’t know it at the time of shooting.

3. Flaming Gorge at Sunset

Same camping trip, but at sunset. There’s something about the colors in this one—orange, blue, and deep green—that I absolutely love.

4. Flaming Gorge Overlook

My son and I were driving back from this trip and decided to take a different route home than the way we came. There was an overlook sign, so we pulled over. (How many overlooks have I driven past in my life?) I love the shapes and angles in this one.

5. Proposal Rock, Oregon

The couple in the distance here is my son and his now-wife, our first daughter-in-law. They were engaged at the time of this picture, but it wasn’t posed. I just happened to look up at the right time. This was a family trip in August and we had just one day of rain that week. Rather than spend it inside all day, we braved this little excursion. Bad weather makes some of the best photos.

6. Sea cave in Oregon

I love these colors so much. As I’m starting out with this hobby, I instinctively look for vistas. But I’m learning to see things closer to me.

7. The view from Timp

Katie and I hiked Mt. Timpanogos this fall. (Well, most of it. I had to turn around because of a strained calf.) This isn’t Timp itself, but the view across the valley on Timp’s north side. I don’t know how I got the sky to come out this color, but I love it so much, especially with the fall colors on the mountain.

8. John Irvine Trail, CA

We celebrated our 25th wedding anniversary and my 50th birthday in October, so Katie and I took a trip to the coastal redwoods in California. It was an absolutely magical week. This was a long hike, about 13 miles round trip. I’m glad I had more experience with my camera by the time of this hike, so I could better capture the contrasting light and dark of a redwoods trail.

9. King of Gold Bluffs

Coming back from this same hike. Elk roam this part of California, and I’d been hoping to see some but there weren’t any all day. And then on the drive out from Gold Bluffs Beach, we ended up driving through an entire herd of them. The patriarch was just ten feet from the car, so we paused to get his picture. That stare!

10. Sunrise over Capitol Reef

Another trip from the summer, while Katie was in charge of Girls’ Camp for our congregation. I came down to help cook dinners and woke up early one morning, couldn’t sleep, and went into the park. The funny thing about this picture is that it’s a pretty big crop of a much larger composition. Someday I’ll have a lens long enough to punch in on details like this without much cropping.


Looking back at this year just has me even more excited for the year to come! I don't know where I'll be going, but I look forward to seeing beautiful places.

Adversarial vs. Cooperative Teaching

Whatever your opinion of AI, I found this framing of teaching as either adversarial or cooperative really interesting. I definitely find myself using both perspectives depending on the situation (and the student). I’d rather be cooperative the vast majority of the time.

Your prediction about the effect of AI on education depends on whether you see teaching as an adversarial process or as a cooperative process. In an adversarial process, the student is resistant to learning, and the teacher needs to work against that. In a cooperative process, the student is curious and self-motivated, and the teacher is working with that.

AI has Educators Polarized - by Arnold Kling - In My Tribe

Low-Ambition Companies Will Suffer from AI

There are two ideas in the AI Zeitgeist that you come across almost daily. The first one is this:

"If you want to be a competitive worker, you need to know how to work with AI. Because if you don’t, you’ll be outpaced by the workers who do."

The second idea, sometimes even part of the same take, goes like this:

"Companies that adopt AI are going to do layoffs because AI agents can do the work of humans that are slower and more expensive."

It baffles me that these two ideas can somehow coexist when they are very obviously at odds with each other. At the very least, they misrepresent how AI agents work and the role that humans play in managing them.

The Overstated Autonomy of AI

My mom lives in Southern California, where she can take a Waymo to get around. She absolutely loves it. She’s relieved not to have to talk to an Uber driver, she likes the pace and consistency of how a Waymo drives, and she loves the convenience of doing it all from her phone.

But she still has to tell the Waymo where she wants to go. It doesn’t decide for her. Nor does it schedule the trips for her. Even if in the near future it started to recognize her habits, noting how she wants to go to the store at 8am on Wednesdays, it would still be deriving its purpose from my mom’s intentions. And this is all for a pretty narrowly defined task: go from point A to point B. AI today doesn’t self-generate intention.

A manager who decides to replace employees with AI agents might think, “I’ll just give these AI agents my intentions and manage the agents instead of people.” Even assuming a fleet of agents can actually do extensive work autonomously today (they can’t), there’s still a huge constraint: the manager’s intentions.

Intentions need detail to lead to good decisions. They need elaboration. You can’t just tell an agent, “I want to make a lot of money,” and expect it to fill in the blanks. There are too many blanks. If such a thing were possible, a manager could just tell their employees the same thing. “Go make me money.” That’s hardly management at all, if you think about it.
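To make that concrete, here's a minimal sketch of what elaboration looks like. (The task and the wording here are hypothetical, just for illustration.) The first instruction leaves every blank for the agent to fill; the second encodes actual intentions:

```python
# Two instructions for a hypothetical report-writing agent.

vague = "Write a report on our Q3 sales."

elaborated = """
Write a two-page Q3 sales report for the executive team.
- Compare Q3 revenue to Q2 and to Q3 of last year.
- Break results down by region and by product line.
- Flag any region that missed its target by more than 10%.
- Close with three recommended actions, each with an owner.
- Tone: plain and direct; no marketing language.
"""
```

Every line in the second version is a decision that someone with intention had to make. The agent can't make those decisions for you.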

Anthropic, the makers of Claude, illustrated all of this perfectly in a video they released just this morning. Meet Claudius, the AI agent who runs a vending machine business.

In this video, Anthropic is transparent about some of the pitfalls they encountered trying to get an autonomous AI agent to run a simple, profitable business. Claudius was easily manipulated by customers, confused about what was real and what wasn’t, and lost a lot of money. In the end, it only worked when they gave Claudius a boss (Seymour Cash, another AI agent). Of course, Seymour had his own bosses, the humans designing the experiment.

This is all part of work being done by Andon Labs, who designed a benchmark, VendingBench (recently updated to Version 2), around this experiment. The benchmark tests how well agents can sustain a set of complex tasks over a long time horizon. Even brand-new frontier models, while capable of making a profit, can still end the benchmark prematurely.

The reality today is that AI agents are not truly autonomous. In my opinion, they won’t be for a long time to come. (There are good arguments that truly autonomous agents should never exist.) To succeed, they need to know how to choose what to work on, especially for anything longer than just a few hours at a time. For now, they don’t have a way to make that choice meaningfully absent human direction.

(Perhaps in the near or distant future, AI agents will choose entirely on their own what problems to solve or what products to produce. In the dystopian versions of this, we have no reason to think that they’ll want to produce anything that’s actually helpful to human beings.)

Elaborate, detailed intention is what matters to a successful AI agent; otherwise it’s a Waymo with no destination. This is why prompt engineering is a thing. And if you want a team of agents, you need to elaborate intentions for each of them, repeatedly. No one human manager can do this at scale, just as they can’t effectively manage a team beyond a certain number of employees. The manager is the constraint.

Ambition

Recognizing the constraint, I can’t think of any reason for a manager to replace employees with agents, except if the manager is low-ambition, thinking, “My old team could do X, and now I can have agents that do X.” Why in the world, if you can use faster and cheaper AI, would you stop at X?

Instead, keeping employees and training them is the only reasonable thing to do, because it expands the constrained resource of intention. Employees who share the team’s vision can make decisions about intention, elaborating it until it’s granular enough for agents to go do the work.

Software development is where the biggest employment impacts are happening now. And companies are already starting to see the mistake of replacing developers with agents. It turns out junior developers are worth more with AI, not less. I’ve done a lot of coding with AI agents since May. When the agent screws up or produces something buggy, the likeliest cause is that I didn’t give it a clear enough set of intentions. I’ve had to learn extensively about how different technologies work so I can get the agent to write better code. Laying off developers, instead of giving them AI agents to direct, is low-ambition and shortsighted.
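The same pattern holds in coding. A made-up example (the endpoint, file, and requirements are invented for illustration, not from a real session):

```python
# Two ways to ask a coding agent for the same feature. Only the
# second gives it enough intention to avoid the usual failure modes.

vague = "Add caching to the products endpoint."

elaborated = """
Add caching to GET /products in api/routes/products.py.
- Cache in memory, keyed by the request's query string.
- Expire entries after 60 seconds; never cache error responses.
- Don't change the response schema or touch any other route.
- Add a test: a second identical request within 60 seconds
  must not hit the database layer.
"""
```

The detail isn't busywork. It's exactly the intention the agent can't supply on its own.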

There are definitely workers today performing commodity tasks, things like data entry that AI can do easily and quickly. But those workers are squandered if they’re just laid off. What goes out the door with them is the ability to manage AI agents with intention. Organizations will need more of that, not less.

Even just in the short run, I’m confident that the market will reward high-ambition companies that hire and train people to direct AI agents. Those companies will produce far more, and faster. And they will leave the low-ambition, fire-all-the-humans companies in the dust.

2025’s biggest impacts—for better or for worse

This is absolutely fascinating and I’ve already spent too much time on this page when I should be finishing grades. All the biggest scientific or technological changes of 2025, ranked.

How did the world change this year? Which results are speculative? Which are biggest, if true? We collected and scored 202 results. Filter by field, our best guess of the probability that they generalise, or their impact if they do.

Frontier of the Year 2025 — Renaissance Philanthropy

Journals are publishing fake citations, too

Apropos of my earlier post about the Springer textbook with fake citations, academic journals are seeing a rash of the same thing.

What Heiss came to realize in the course of vetting these papers was that AI-generated citations have now infested the world of professional scholarship, too. Each time he attempted to track down a bogus source in Google Scholar, he saw that dozens of other published articles had relied on findings from slight variations of the same made-up studies and journals.

Incidentally, the Heiss in this quote is my friend Prof. Andrew Heiss, one of the smartest people I know.

AI Chatbots Are Poisoning Research Archives With Fake Citations | Rolling Stone