The Age of Living Software

The world of software is changing so quickly that it's hard to keep up with all of the new ways we can use it. One of those ways became apparent to me yesterday.

Custom Software

I'm on the admissions committee for my department, and about three or four weeks ago we all met to discuss ways we could use AI to streamline and enhance our review of student applications to our Master of Public Administration program.

I took notes on everybody's feedback and ideas, and I also recorded the meeting and exported a transcript. I used both for a two-hour planning session with Claude, exploring and detailing what the app would be like. After that, Claude Code and I spent probably another five to six hours building it, tweaking it, and getting it ready for everybody. It's a fully functional Next.js/React app backed by a Convex database, with email auth, connected to my university's OpenAI endpoint.
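
For the technically curious, the AI plumbing is thinner than it sounds. Here's a minimal sketch, not the app's actual code, of a Convex action that sends an applicant's extracted text to an OpenAI-compatible endpoint like my university's. The function name, environment variables, and prompt are all stand-ins.

```typescript
// convex/ai.ts (illustrative only; names and env vars are hypothetical)
"use node"; // run this action in Convex's Node runtime so we can use the openai package

import { action } from "./_generated/server";
import { v } from "convex/values";
import OpenAI from "openai";

export const summarizeApplicant = action({
  args: { applicantText: v.string() },
  handler: async (_ctx, { applicantText }) => {
    // Point the standard OpenAI client at the university's endpoint
    // instead of api.openai.com.
    const client = new OpenAI({
      baseURL: process.env.UNIVERSITY_OPENAI_BASE_URL,
      apiKey: process.env.UNIVERSITY_OPENAI_API_KEY,
    });

    const response = await client.chat.completions.create({
      model: "gpt-5.2-thinking", // whatever model name the endpoint exposes
      messages: [
        {
          role: "system",
          content: "Summarize this MPA application for an admissions reviewer.",
        },
        { role: "user", content: applicantText },
      ],
    });

    return response.choices[0].message.content ?? "";
  },
});
```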

The app ingests a CSV file with all of the student applicants and their details, then ingests the PDFs of every student's application. Those PDFs contain letters of recommendation, statements of intent, resumes, and transcripts. Every PDF was then analyzed by GPT-5.2 Thinking, which assessed things like grades in quantitative classes, a demonstrated interest in public service, and so on.
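
The CSV step is the boring part, but for a flavor of it, here's a rough sketch assuming a papaparse-style parser; the row shape and column names are hypothetical, not our real export format.

```typescript
// Illustrative sketch of the CSV ingestion step; the ApplicantRow shape
// and its column names are made up for this example.
import Papa from "papaparse";

interface ApplicantRow {
  applicantId: string;
  name: string;
  emphasis: string; // declared program emphasis
}

export function parseApplicants(csvText: string): ApplicantRow[] {
  const { data, errors } = Papa.parse<ApplicantRow>(csvText, {
    header: true, // first row holds the column names
    skipEmptyLines: true,
  });
  if (errors.length > 0) {
    throw new Error(`CSV parse failed: ${errors[0].message}`);
  }
  return data;
}
```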

Everybody on the committee loved it. I’d show screenshots, but they contain private data, so I’ll describe it instead. As reviewers went through the applicants assigned to them, a panel on the left-hand side showed the AI summary of the applicant, their statement of intent, their transcript, and their letters of recommendation. In the middle was a view of the actual PDF, so the student’s full application was there to read. On the right-hand side was our scoring mechanism, where we scored each candidate on a variety of dimensions and left comments. Not a design breakthrough, but tidy, efficient, and orders of magnitude more convenient than our previous approach.

Living Software

This was all very cool, and it worked well. But the especially fun part (and the part I wanted to comment on in this short post) was the page I had created for our admissions decisions meeting. It listed all the applicants; clicking one expanded it to show each reviewer's comments and scores. We used it together to go through all 123 applications and make admit, deny, or waitlist decisions on each one.

But here's the amazing part: this meeting review page was just something I designed quickly, thinking through the basics of what we needed. Then throughout the first hour of the meeting, as we came across user interface improvements we could make, we just made them.

“It would be really nice if we could see a count of each declared emphasis and how many we’ve admitted so far.”

“Great idea! Give me a minute.”

“Can we make this part float like a frozen row in Excel?”

“Sure!”

Each time, I pulled up Claude Code to prompt the change and pushed to GitHub; Vercel rebuilt, we refreshed the page, and within a few minutes the software was substantially better. We easily made a dozen changes to the app on the fly.

As a treat, I secretly had Claude Code make a celebration screen that appeared when we made the final decision. Digital confetti makes everything better.

It was all frankly amazing, and it shows where we are now: software doesn't have to be a "take it or leave it" proposition, which is how most users have been forced to experience it for decades. Instead, the app was a living, adaptive thing that fit our needs in the moment. Such a model of software is mind-blowing when you think about it. "One size fits all" is an old paradigm now, and it's exciting to think about software that adapts and changes in a living way as you use it.

Anthropic Will Win Against DoD

Regarding what I wrote yesterday, this piece is an expert overview of the laws at stake and why DoD’s supply chain risk designation for Anthropic is doomed to fail.

From the government's perspective, Claude does pose some concerning vendor reliability issues. But the specific actions Hegseth and Trump took have serious legal problems. The designation exceeds what the statute authorizes. The required findings don't hold up. And Hegseth's own public statements may have doomed the government's litigation posture before it even begins.

Pentagon’s Anthropic Designation Won’t Survive First Contact with Legal System | Lawfare

Anthropic doesn’t have to work for anyone, including the government

I’ve seen enough takes on the Anthropic/DoD conflict since it all went down last week, and I’m surprised at how often this important principle is being left out of the conversation:

There are many freedoms enjoyed by Americans—and therefore American businesses. One of them is that we don’t have to work for the government if we choose not to.

If I want to be employed by the government, I can choose from the range of options the government offers. If they want to hire me, I can work for Reclamation and help maintain dams, or for the Social Security Administration to process claims, or for the military to defend the United States. But once I’ve decided to work for Reclamation, it doesn’t mean the U.S. Government can also require me to work as a janitor, a Congressional aide, or a spy. Note that this holds even when what the government wants is entirely legal. If we can’t come to an agreement, they can fire me or I can quit.

Anthropic chose to quit, and it’s nonsense that this is some sort of veto over the powers of a democratically elected government. You can argue that Anthropic shouldn’t have the beliefs they have about AI and military action or government surveillance. You can make a moral claim that they should want to support the military. But if your argument is that Anthropic refusing to do so is some sort of corporatocracy, then you're ignoring essential rights.

The point isn’t that corporations should have power over government. The point is that people, and therefore their businesses, have power above government. That power appears in the voting booth, of course. But it also comes in all the other freedoms we enjoy because of the limits the Constitution places on government.

The Department of Defense offered Anthropic a job, which the company accepted. When the terms of employment changed, Anthropic quit to uphold their values. This is fundamentally how a free society with a limited government should operate.


Footnote: I get that there are laws entitling the government to force its citizens into certain behavior, but these are constrained by the First, Fourth, Fifth, and Fourteenth Amendments of the Constitution, as a start. All of these favor Anthropic’s right to refuse the government’s demands.

If AI companies were consumer tech from a decade ago

Late-night noodling, but even in the light of day this still feels right to me.

If we mapped current #AI companies to consumer tech from the 2010s: Anthropic = Apple. Focused on high quality for a smaller market. Stubborn and opinionated in annoying ways, but innovating in important ones. Sets trends. Genuine in its principles, whether or not you agree with them. 1/x

— Aaron Miller (@aaronmiller.info) February 3, 2026 at 10:59 PM

OpenAI = Google. Market-defining from the start. Now wants to be everything for everyone. Staffed by nerds who are at odds with management, and management wins. Began with noble intentions (remember "Don't be evil"?), but revenue overruled. Has a graveyard of failed public projects. 2/x

— Aaron Miller (@aaronmiller.info) February 3, 2026 at 10:59 PM

Google/DeepMind = Microsoft. Workman-like quality, only the best at one or two things. Preserving the ecosystem drives every decision. Staffed by some of the smartest people around who get slowed down by the bureaucracy. Won't lose, but won't win either. 3/x

— Aaron Miller (@aaronmiller.info) February 3, 2026 at 10:59 PM

Meta = RIM (Blackberry). Already lost but doesn't know it. Wastes money on big swings that are only affordable because of its legacy business. Corrosive leadership doesn't realize the best hope for the company is to step aside. 4/x

— Aaron Miller (@aaronmiller.info) February 3, 2026 at 10:59 PM

xAI = Samsung. Fast follower only. Plays the scrappy underdog, but really just flash over substance. Run by a corrupt, image-obsessed leader who uses government influence for profit. Has a rabid, contrarian fanbase, mixed with people who don't care enough to pay for something better. 5/x

— Aaron Miller (@aaronmiller.info) February 3, 2026 at 10:59 PM

Perplexity = Snapchat. A truly unique offering, but the people who don't use it don't get why it exists. The people who do use it love it. Likes doing weird things as a way to stand out. Always treated like a quirky little brother. 6/x

— Aaron Miller (@aaronmiller.info) February 3, 2026 at 10:59 PM

DeepSeek/Kimi/Z.ai/etc. = Huawei/Xiaomi/Oppo/etc. Providing insane value as long as you are willing to ignore the idea that the Chinese government uses them to spy on you. Tinkerers & geeks love them, of course. An ecosystem of YouTubers will rush to review every new model. 7/x

— Aaron Miller (@aaronmiller.info) February 3, 2026 at 10:59 PM

Obviously I'm opinionated, and this is not a perfect list but fun to think about. Anything I missed? 8/8

— Aaron Miller (@aaronmiller.info) February 3, 2026 at 10:59 PM

Why Love-Work Is Different Than Hate-Work

A great read, and not just because of how deeply I felt the distinction between love-work and hate-work. I also really enjoyed how Horton described the research eras in psychology. All fields have eras like this, and knowing them helps make sense of how we got to our current kind of thinking. (And it reminds us that we’ll someday leave it behind!)

You must know, on some level, that doing work you love is psychologically different from doing work you hate. They don’t just feel different, emotionally. The two forms of work have different psychological textures. They involve different actions. They have a different cadence, different aims, different outcomes, and draw upon different wells of energy.

Why Your Brain Fights You - by James Horton, PhD.

AI verifiability, but compared to what?

Much of the advice around using AI is that if you use it, you need to verify what it produces. This is good advice at present. But I'm doubtful it will be good advice in the long run.

Consider how little verification leaders in large institutions do when making decisions. Of course, many bad decisions get made this way, but so do many good ones. The difference is in the quality of the work put before the decision-makers. Eric Drexler explains it well in this recent article (emphasis mine):

Consider how institutions tackle ambitious undertakings. Planning teams generate alternatives; decision-makers compare and choose; operational units execute bounded tasks with defined scopes and budgets; monitoring surfaces problems; plans revise based on results. No single person understands everything, and no unified agent controls the whole, yet human-built spacecraft reach the Moon.

AI fits naturally. Generating plans is a task for competing generative models—multiple systems proposing alternatives, competing to develop better options and sharper critiques. Choosing among plans is a task for humans advised by AI systems that identify problems and clarify trade-offs. Execution decomposes into bounded tasks performed by specialized systems with defined authority and resources. Assessment provides feedback for revising both means and ends. And in every role, AI behaviors can be more stable, transparent, bounded, and steerable than those of humans, with their personal agendas and ambitions. More trust is justified, yet less is required.

Framework for a Hypercapable World | Eric Drexler

Professors Are Conservative, Actually

Politically, academics are much more liberal than the average person. But Paul Bloom makes the excellent point that, in areas related to their work, academics are actually deeply conservative.

Asking a prof about AI is like asking a taxi driver to weigh in on Uber. I think I have good reasons for my (conservative) defense of tenure, but you’d be forgiven for assuming that, having worked for and benefited from the protections of tenure, I don’t want them taken away. Part of professors’ unwillingness to give up on lectures is that they take a long time to prepare—once that time is invested, we don’t want to start anew. We certainly don’t want to transform the university in a way that risks making us obsolete.

I feel this deep in my bones. It’s so hard to get universities to change, and professors are the primary reason why. AI—as I’ve written before—is coming for us in a way that most of my colleagues are not at all prepared to face. But they will have to face it in the end.

Why are so many professors conservative? - by Paul Bloom

Joshua Gans on Vibe Researching

From an economist, about his extensive experiments with AI-driven academic research. I don’t have much of a research background, but his experience matches what I would expect.

My point is that the experiment — can we do research at high speed without much human input — was a failure. And it wasn’t just a failure because LLMs aren’t yet good enough. I think that even if LLMs improve greatly, the human taste or judgment in research is still incredibly important, and I saw nothing over the course of the year to suggest that LLMs were able to encroach on that advantage. They could be of great help and certainly make research a ton more fun, but there is something in the judgment that comes from research experience, the judgment of my peers and the importance of letting research gestate that seems more immutable to me than ever.

The creativity and nuanced judgment about what constitutes good research are still missing in AI. What 2026 will bring is less certain, in my opinion.

Reflections on Vibe Researching | Joshua Gans' Newsletter

A Rare-Blood Donor Saved Millions of Lives

Australia's most prolific blood and plasma donor, James Harrison, died at age 88. Known as the "Man with the Golden Arm," Harrison is credited with saving the lives of 2.4 million babies over more than half a century.

Harrison died in February of last year. Of course, many, many people played a critical role in all the good that he did (nurses, doctors, researchers, phlebotomists), but Harrison also did his part and showed up, time after time.

Is there a better illustration of what it takes to make such an impact? Whatever we do, we have to keep showing up.

(I also posted this over at my other site, How to Help. If you don't know it, check it out.)

Blood donor James Harrison, who saved 2 million babies, has died | NPR

Some Provo Street Photography

Since my focus has been learning landscape photography, I've never really done any street shooting. Thanks to a kind invitation, I had a chance to head downtown and try my hand at it. (Thanks Daren, Jason, and Justin!)

It was a bit of a grey day, and I need to get more comfortable taking photos of people. I mean, these look like a landscape photographer got lost downtown. 😂 But here are my favorite shots from today.