Regarding what I wrote yesterday, this piece is an expert overview of the laws at stake and why DoD’s supply chain risk designation for Anthropic is doomed to fail.
From the government's perspective, Claude does pose some concerning vendor reliability issues. But the specific actions Hegseth and Trump took have serious legal problems. The designation exceeds what the statute authorizes. The required findings don't hold up. And Hegseth's own public statements may have doomed the government's litigation posture before the case even begins.
I’ve seen enough takes on the Anthropic/DoD conflict since it all went down last week, and I’m surprised at how often this important principle is being left out of the conversation:
There are many freedoms enjoyed by Americans—and therefore American businesses. One of them is that we don’t have to work for the government if we choose not to.
If I want to be employed by the government, I can choose from the range of options the government offers. If they want to hire me, I can work for Reclamation and help maintain dams, or for the Social Security Administration to process claims, or for the military to defend the United States. But once I’ve decided to work for Reclamation, that doesn’t mean the U.S. Government can also require me to work as a janitor, a Congressional aide, or a spy. Note that it doesn't matter if what the government wants is entirely legal. If we can’t come to an agreement, they can fire me or I can quit.
Anthropic chose to quit, and it’s nonsense that this is some sort of veto over the powers of a democratically elected government. You can argue that Anthropic shouldn’t have the beliefs they have about AI and military action or government surveillance. You can make a moral claim that they should want to support the military. But if your argument is that Anthropic refusing to do so is some sort of corporatocracy, then you're ignoring essential rights.
The point isn’t that corporations should have power over government. The point is that people, and therefore their businesses, have power above government. That power appears in the voting booth, of course. But it also comes in all the other freedoms we enjoy because of the limits on Constitutionally designed government.
The Department of Defense offered Anthropic a job, which the company accepted. When the terms of employment changed, Anthropic quit to uphold their values. This is fundamentally how a free society with a limited government should operate.
Footnote: I get that there are laws entitling the government to force its citizens into certain behavior, but these are constrained by the First, Fourth, Fifth, and Fourteenth Amendments of the Constitution, as a start. All of these favor Anthropic’s right to refuse the government’s demands.
Late night noodling, but even in the light of day this still feels right to me.
If we mapped current #AI companies to consumer tech from the 2010s:
Anthropic = Apple. Focused on high quality for a smaller market. Stubborn and opinionated in annoying ways, but innovating in important ones. Sets trends. Genuine in its principles, whether or not you agree with them.
OpenAI = Google. Market-defining from the start. Now wants to be everything for everyone. Staffed by nerds who are at odds with management, and management wins. Began with noble intentions (remember "Don't be evil"?), but revenue overruled. Has a graveyard of failed public projects.
Google/DeepMind = Microsoft. Workman-like quality, only the best at one or two things. Preserving the ecosystem drives every decision. Staffed by some of the smartest people around who get slowed down by the bureaucracy. Won't lose, but won't win either.
Meta = RIM (Blackberry). Already lost but doesn't know it. Wastes money on big swings that are only affordable because of its legacy business. Corrosive leadership doesn't realize the best hope for the company is to step aside.
xAI = Samsung. Fast follower only. Plays the scrappy underdog, but really just flash over substance. Run by a corrupt, image-obsessed leader who uses government influence for profit. Has a rabid, contrarian fanbase, mixed with people who don't care enough to pay for something better.
Perplexity = Snapchat. A truly unique offering, but the people who don't use it don't get why it exists. The people who do use it love it. Likes doing weird things as a way to stand out. Always treated like a quirky little brother.
DeepSeek/Kimi/Z.ai/etc. = Huawei/Xiaomi/Oppo/etc. Providing insane value as long as you are willing to ignore the idea that the Chinese government uses them to spy on you. Tinkerers & geeks love them, of course. An ecosystem of YouTubers will rush to review every new model.
A great read, and not just because of how deeply I felt the distinction between love-work and hate-work. I also really enjoyed how Horton described research eras in psychology. All fields have this sort of thing, and knowing them helps make sense of how we got to our current kind of thinking. (And that we’ll someday leave it behind!)
You must know, on some level, that doing work you love is psychologically different from doing work you hate. They don’t just feel different, emotionally. The two forms of work have different psychological textures. They involve different actions. They have a different cadence, different aims, different outcomes, and draw upon different wells of energy.
Much of the advice around using AI is that if you use it, then you need to verify what it produces. This is presently good advice. But I'm doubtful it will be good advice in the long-run.
Consider how little verification happens in large institutions by leaders who are making decisions. Of course, many bad decisions get made this way, but also many good ones. The difference is in the quality of the work put before the decision-makers. Eric Drexler explains it well in this recent article (emphasis mine):
Consider how institutions tackle ambitious undertakings. Planning teams generate alternatives; decision-makers compare and choose; operational units execute bounded tasks with defined scopes and budgets; monitoring surfaces problems; plans revise based on results. No single person understands everything, and no unified agent controls the whole, yet human-built spacecraft reach the Moon.
AI fits naturally. Generating plans is a task for competing generative models—multiple systems proposing alternatives, competing to develop better options and sharper critiques. Choosing among plans is a task for humans advised by AI systems that identify problems and clarify trade-offs. Execution decomposes into bounded tasks performed by specialized systems with defined authority and resources. Assessment provides feedback for revising both means and ends. And in every role, AI behaviors can be more stable, transparent, bounded, and steerable than those of humans, with their personal agendas and ambitions. More trust is justified, yet less is required.
Politically, academics are much more liberal than the average person. But Paul Bloom makes the excellent point that, in areas related to their work, academics are actually deeply conservative.
Asking a prof about AI is like asking a taxi driver to weigh in on Uber. I think I have good reasons for my (conservative) defense of tenure, but you’d be forgiven for assuming that, having worked for and benefited from the protections of tenure, I don’t want them taken away. Part of professors’ unwillingness to give up on lectures is that they take a long time to prepare—once that time is invested, we don’t want to start anew. We certainly don’t want to transform the university in a way that risks making us obsolete.
I feel this deep in my bones. It’s so hard to get universities to change, and professors are the primary reason why. AI—as I’ve written before—is coming for us in a way that most of my colleagues are not at all prepared to face. But they will have to face it in the end.
This is an incredible moment of courage that happened in 2023. I’d never heard about it until this article. Nathan saved dozens of lives.
Nathan started to think of himself as being in the right place at the right time. His instinct was to get this man and his bomb away from the front of the hospital, and from other people, and to keep him talking.
From an economist, about his extensive experiments with AI-driven academic research. I don’t have much of a research background, but his experiences line up with what I would have expected.
My point is that the experiment — can we do research at high speed without much human input — was a failure. And it wasn’t just a failure because LLMs aren’t yet good enough. I think that even if LLMs improve greatly, the human taste or judgment in research is still incredibly important, and I saw nothing over the course of the year to suggest that LLMs were able to encroach on that advantage. They could be of great help and certainly make research a ton more fun, but there is something in the judgment that comes from research experience, the judgment of my peers and the importance of letting research gestate that seems more immutable to me than ever.
The creativity and nuanced judgment of what constitutes good research are still missing in AI. What 2026 will bring is less certain, in my opinion.
Australia's most prolific blood and plasma donor, James Harrison, has died at age 88. Known as the "Man with the Golden Arm," Harrison is credited with saving the lives of 2.4 million babies over the course of more than half a century.
Harrison died in February of last year. Of course, many, many people played a critical role in all the good that he did (nurses, doctors, researchers, phlebotomists), but Harrison also did his part and showed up, time after time.
Is there a better illustration of what it takes to make such an impact? Whatever we do, we have to keep showing up.
(I also posted this over at my other site, How to Help. If you don't know it, check it out.)
Since my focus has been learning landscape photography, I've never really done any street shooting. Thanks to a kind invitation, I had a chance to head downtown and try my hand at it. (Thanks Daren, Jason, and Justin!)
A bit of a grey day, and I need to get more comfortable taking photos of people. I mean, these look like a landscape photographer got lost downtown. 😂 But here are my favorite shots from today.
This is the year I made landscape photography an official hobby rather than just a thing I enjoyed doing with my iPhone. All of these photos were shot on a Fujifilm X-T5. These aren't in any particular order. It was hard enough just choosing ten!
1. Bryce Canyon
This is from spring break with the family last April. It was sunny and warm the day before, then an overnight snow blanketed the park. This was the first photo I took where I looked at it later and thought, “Holy cow. I took that?!?” I’ve since learned that I do that a lot, with my favorites being more accidental than deliberate. 😂
2. Flaming Gorge at Sunrise
On a campout with the young men in our congregation. I was up early in my tent and realized that I’d rather be out with the sunrise than lying in my sleeping bag failing to get more sleep. This photo is, I think, the best composition I made, even if I didn’t know it at the time I shot it.
3. Flaming Gorge at Sunset
Same camping trip, but at sunset. There’s something about the colors in this one—orange, blue, and deep green—that I absolutely love.
4. Flaming Gorge Overlook
My son and I were driving home from this trip and decided to take a different route than the one we came in on. There was an overlook sign so we pulled over. (How many overlooks have I driven past in my life?) I love the shapes and angles in this one.
5. Proposal Rock, Oregon
The couple in the distance here is my son and his now-wife, our first daughter-in-law. They were engaged at the time of this picture, but it wasn’t posed. I just happened to look up at the right time. This was a family trip in August and we had just one day of rain that week. Rather than spend it inside all day, we braved this little excursion. Bad weather makes some of the best photos.
6. Sea cave in Oregon
I love these colors so much. As I’m starting out with this hobby, I instinctively look for vistas. But I’m learning to see things closer to me.
7. The view from Timp
Katie and I hiked Mt. Timpanogos this fall. (Well, most of it. I had to turn around because of a strained calf.) This isn’t Timp itself, but the view across the valley on Timp’s north side. I don’t know how I got the sky to come out this color, but I love it so much, especially with the fall colors on the mountain.
8. John Irvine Trail, CA
We celebrated our 25th wedding anniversary and my 50th birthday in October, so Katie and I took a trip to the coastal redwoods in California. It was an absolutely magical week. This was a long hike, about 13 miles round trip. I’m glad I had more experience with my camera by the time of this hike, so I could better capture the contrasting light and dark of a redwoods trail.
9. King of Gold Bluffs
Coming back from this same hike. Elk roam this part of California, and I’d been hoping to see some but there weren’t any all day. And then on the drive out from Gold Bluffs Beach, we ended up driving through an entire herd of them. The patriarch was just ten feet from the car, so we paused to get his picture. That stare!
10. Sunrise over Capitol Reef
Another trip from the summer, while Katie was in charge of Girls’ Camp for our congregation. I came down to help cook dinners and woke up early one morning, couldn’t sleep, and went into the park. The funny thing about this picture is that it’s a pretty big crop of a much larger composition. Someday I’ll have a lens long enough to punch in on details like this without much cropping.
Looking back at this year just has me even more excited for the year to come! I don't know where I'll be going, but I look forward to seeing beautiful places.