Today and Tomorrow
Looking at both ends of the AI spectrum.
2025 - Models vs. Applications
There’s an interesting dynamic at the moment where it’s not clear where the value will accrue in AI development.
Will it be captured at the model layer, by companies like OpenAI, Anthropic, Meta, Google, or DeepSeek?
Or will it be at the application layer, in tools like ChatGPT, Claude, Perplexity, Covve (an AI-powered personal CRM app I just found), or myriad others?
In some cases, like ChatGPT and Claude, the application is made by the same company that developed the underlying model.
In others, like Perplexity, the app developers use a variety of underlying models (created by the model builders) to power their apps.
Here’s a recent example of this dynamic playing out:
Just a couple of weeks ago we witnessed DeepSeek (a model builder) make waves with its R1 model, which surprised a lot of people with its nearly-state-of-the-art capabilities, its efficiency, and the fact that DeepSeek released it open source. With DeepSeek being based in China, some people were hesitant to use the DeepSeek-hosted version of R1 due to concerns about data security and privacy. Perplexity (an app developer) quickly addressed this by incorporating a U.S.-hosted version of R1 into its product within days.
It will be interesting to see whether, over time, the value accrues more to consumer-facing tools (e.g. Perplexity) or to the underlying models (e.g. R1). Will the models themselves become a commodity? Or will the state of the art stay sufficiently far ahead of the pack to maintain its value?
2035 - Where are we going?
Now for the other end of the spectrum: with all of this new technology and the pace of change, what will the impacts be at a large scale?
The people who presumably know the most about what capabilities are coming down the road had some interesting things to say about this topic over the last week.
First, Sam Altman (CEO of OpenAI) described his vision of the future like this (link, emphasis added):
Anyone in 2035 should be able to marshall [sic] the intellectual capacity equivalent to everyone in 2025; everyone should have access to unlimited genius to direct however they can imagine.
Apparently spell check hasn’t fully proliferated yet, so perhaps take the prediction with a grain of salt. But can you imagine what the world could look like if Sam and his colleagues are successful and everyone has a team of Einsteins at their disposal?
Dario Amodei (CEO of Anthropic) is apparently concerned we’re not taking the issue seriously enough (link, emphasis added):
Time is short, and we must accelerate our actions to match accelerating AI progress. Possibly by 2026 or 2027 (and almost certainly no later than 2030), the capabilities of AI systems will be best thought of as akin to an entirely new state populated by highly intelligent people appearing on the global stage—a “country of geniuses in a datacenter”—with the profound economic, societal, and security implications that would bring.
Time will tell whether these companies, or others, actually create such powerful artificial intelligence. The more I learn and consider the topic, though, the more optimistic I get. What started as a fear reaction back in 2023 has become one of great curiosity and gratitude.
What a wild time to be alive. Of all periods in known history, having a front row seat to today’s current events seems to be a special gift.
The fear of the unknown is a reasonable, rational instinct that seems hard-coded to protect us from danger.
Looking back at actual history, though, shows that technological improvement is not only what humans instinctively do; overall, it is also responsible for a steady improvement in our safety, wellbeing, and capability.
Yes, there are bumps in the road. Yes, technology disrupts current ways of doing things and brings certain ways of life to an end.
But humans are incredibly adaptable. We have repeatedly used technology to create bountiful new ventures and exciting new ways of interacting with life that were previously inconceivable.
What I’m coming around to is a recognition that, while I can see neither the road nor the destination clearly, I feel a growing confidence that we’re headed in the right direction.
It’s more instinct and intuition than rational analysis, but part of it is a clear-eyed look at the past and extrapolation from human history to date.
I’m excited to see how this all shakes out.
So much to think about.