Q1 2026 AI thoughts
It's complicated.
The first quarter of 2026 has seen a seismic shift in the capabilities of AI tools directly applicable to the kind of work I do every day in a small consulting firm. AI agents that can work autonomously for long periods of time have become a reality, and their abilities are changing weekly.
The pace of change and the capabilities are, if nothing else, fascinating.
Whether they're good or bad, whether they produce meaningful net economic or societal value, and whether they contribute to or detract from human wellbeing are all up for debate. It's not at all clear to me, and I've had different ideas on different days.
Below are a few of my thoughts about AI as we stand here at the end of Q1 2026.

Infinite productivity is a compelling illusion
As I dove headlong into Claude Cowork, Perplexity Computer, agent "skills", and what these tools could mean for the kind of work I do, my initial reaction (after some shock) was one of extreme excitement and optimism. It seemed like these things unlocked near-infinite productivity.
Every idea could be brought to life in a fraction of the time. I could direct my agents to create plans and strategic frameworks, then execute on them step by step. They would faithfully work independently (sometimes for hours at a time), ticking tasks off their self-created lists, checking their work, and finally presenting me with imperfect but totally passable work product.
I connected various tools, gave my agents access, spun up new workflows and invented all kinds of neat ways to take advantage of these new capabilities. I was the conductor of an orchestra of agents.
As it turns out, that's exhausting. It's more or less the manager's schedule on steroids: a hundred 2-5 minute "meetings" a day is a good recipe for frying your brain. Here's another article on this same problem. Even the hosts of the All-In Podcast, all techno-optimist AI superfans, talked about experiencing it on their latest episode.
This is something anyone using AI agents is going to have to contend with and figure out how to manage. It’s also a pretty inconvenient finding for anyone hoping AI can deliver on its most optimistic promises.
Trying to keep up with AI is exhausting and not very much fun
Speaking of AI-induced exhaustion, trying to stay at the forefront of the wave is wearing me out. I’ve experienced many moments of wonder and excitement, but the predominant feeling is one of overwhelm and futility. Even as someone actively trying to stay current, and spending a lot of time and energy doing it, the technology is advancing at an incomprehensible rate.
For the moment it seems possible to at least stay ahead of the pack, if not at the bleeding edge, with reasonable effort. Most people know, or care, very little about AI. So, for now, you don't have to be an expert to derive an advantage over most of your peers.
I don't know how long this dynamic will last, though, for at least two reasons. First, recursive improvement (using AI tools to get better at using AI tools) could create a flywheel whereby an early lead becomes insurmountable, conferring such an advantage that all lesser relative advantages are lost in the noise. Alternatively (and, I think, more likely in the end), the capability of the underlying technology advances to the point where "being good at using AI" is no longer a relevant skill. Either it will be so easy that everyone can achieve the same results, or the very idea of "using" AI will become obsolete; AI will just do the thing.
For now I think it still makes sense to build skills and adapt to the extent possible while maintaining health and wellbeing. But that takes discipline and explicit boundaries, often by intentionally incorporating friction to counteract the allure of infinite productivity discussed above. That's my plan for now, anyway.
We are starting to see recursive self-improvement
I mentioned recursive improvement above in the context of people building AI skills. The more interesting application of the term is related to the AI models themselves, where in early 2026 we seem to be on the cusp of recursive self-improvement.
Recursive self-improvement means an AI that builds the next version of itself, all by itself. That may sound far-fetched, but the frontier AI labs are already heavily leveraging their AI to build the next versions of their products.
Recursive self-improvement is important because once it is possible, the pace of improvement is no longer governed by the availability of human labor. The AI labs can spin up as many instances of their AI researchers as they have the power and compute to sustain.
Time will tell how this pans out, and we'll see whether the pace of model improvement continues on the exponential curve it's been on, but there doesn't seem to be much evidence to suggest it won't. Both Anthropic and OpenAI are rumored to be releasing new models in the next few weeks, so we'll get some new data points.
The pace of change is hard to overstate
My last thought here is just to reiterate that the speed at which this technology is changing is like nothing I've ever experienced. If your information is even a month out of date, you're woefully under-informed. A strategy from December 2025 is meaningless in March 2026.


