The rise of motivated software
Software isn't dead, but opinionated software might be soon. What comes next?
The chat trap
You've heard it, I’ve heard it: the growing sentiment that "software is dead."
Why build apps when ChatGPT can do everything? Why design interfaces when you can just type what you want? AI is eating software, just like software ate the world.
But if software is dead, why are people still opening dozens of apps every day? Why do companies still pay hundreds of thousands of dollars for CRMs when an LLM with an MCP connection to a database would cut so much spend? Why hasn't everyone switched to ChatGPT for everything?
The answer is simple. AI is powerful, but talking to a computer isn't always the best way to get things done. AI has a UX problem1.
What’s messed up is we’ve learned this lesson before. In 2017, big tech companies bet everything on chat. Apple and Facebook and X née Twitter all looked at WeChat in China, where you can order food, pay bills, and book flights all through messaging, and thought "this is the future!" They built chatbots for everything.
It just didn’t translate.
Why? Because when everything is possible, nothing is top-of-mind. Chat gives you infinite options, which means infinite surface area to test and infinite ways to get confused. Most chatbots became what Laura Burkhauser at Descript called "phone trees in trench coats"2. It’s the same frustrating "Press 1 for billing" experience, just dressed up in a chat bubble.
Now in 2025, we're making the same mistake again. But this time, we can learn from it.
Software is still hanging in there
The statistics on AI use are in flux, constantly. The MIT study claiming that 95% of enterprise AI use has no ROI is going around like crazy3. The picture is muddied further by the fact that, when you look outside the US, 75% of knowledge workers use AI tools at work, but only 7-22% (depending on country) are using work-provided tools4. A consistent 40+% are bringing tools from home instead, using their personal ChatGPTs instead of the Copilots and Bedrocks afforded them. Even so, only 44% of users (58% of the group using AI at work) say they’re “heavily-reliant” on AI while at work5.
What’s going on with the numbers? Well, inconsistent definitions of what it means to “use AI” and “get value,” for one. Strong differences between usage patterns at early adopter companies and non-adopter companies, for another. Older companies are slotting agents into existing bureaucracy, with amusing but low-ROI results6.
Different intent behind the adoption has an effect as well: some companies feel existential pressure to become “AI-native” (though they define that differently too), while others are checking a box from a board mandate, and that box does not include validating user stickiness over time.
And the user stickiness story in AI is one that isn’t being told much right now. AI companies are throwing down stellar (annualized) ARR and revenue-per-employee numbers, but lurking behind those numbers are skyrocketing churn metrics. Most demos look pretty sharp. The problem is what happens after the demo.
Here's an example: a friend who runs a proserv firm tried switching his team to an AI project management tool driven mostly by chat UX. The rep promised "it can do everything!" Naturally. And technically, that was true: you could badger the software into doing what you wanted, by chatting with it, for almost 100% of the feature portfolio. But after two weeks, his team was back on their old software.
"Every time someone wanted to check a deadline," he told me, "they had to type a question, wait for the response, and hope the AI understood what they meant. Burning tokens the whole time. We used to just… click on the calendar button.” One second versus thirty seconds, fifty times a day, every day, every week7.
This is the gap between potential and practice. AI can do amazing things, but if those things take longer than the old way, people don't switch.
Those who do: well, they’re there for the experiment, and they have no issue churning after a month if the value isn’t there. In fact, that’s the plan from the get-go. Some people on the leading edge of the “crossing-the-chasm” diagram aren’t early adopters; they’re just taste-testers.
The three ages of software
So, if software’s clearly changing but not-so-clearly surviving, where does that leave us? Evolving.
To understand what we're evolving into, let's look at where we've been. Software has advanced through three major ages, each fixing the problems of the last while creating new ones.
Age 1: Software That Just (Barely) Does Something (Sometimes)
In the beginning, there was embedded software, which came welded to hardware. Buy a DEC system, get DEC software. It did one thing, hopefully. When it broke - and it always broke - you called expensive consultants to fix it.
Users had one request: please don't crash. The bar wasn't high because there was no bar. Having any digital system at all felt like magic.
Age 2: Software That Does Everything (If You Can Figure It Out)
Then came the revolution: unopinionated software, freed from the shackles of predetermined hardware. Office. Lotus. Adobe. These platforms promised ultimate flexibility. Build anything! On any hardware! Customize everything! If you could dream it, you could (…probably) configure it.
But there was a catch. With great power came great confusion. It's like being handed a full set of professional chef's knives when all you wanted was to make a sandwich. Some people created masterpieces. Most people nicked themselves.
This era made consultants rich. Not fixing broken software anymore, but teaching people how to use it. Every company had that one person who "really knew Excel"—the keeper of mysterious formulas and secret shortcuts. Cottage industries like “Salesforce developers” cropped up, necessary experts in handling the arcane inner workings of systems designed to be so complex that they could theoretically do anything, given enough recursion (and enough budget).
Age 3: Software That Does What Its Founder Wants (Hope That’s What You Want Too)
Smart companies noticed something: most users were trying to do the same few things. What if, instead of infinite options, we only gave them the best way8?
Enter opinionated software. Box decided how folders should be organized. Slack decided how teams should communicate. Notion decided how documents should be structured. These tools didn't just provide features—they taught you a philosophy.
This worked brilliantly if you agreed with their philosophy. But what if you didn't? What if your team had spent years perfecting a different approach? Tough luck. As Stewart Butterfield put it: "There's no worthwhile software that doesn't involve behavior change."
The consultants didn't disappear. They just changed their pitch from "Let me teach you the features" to "Let me change your company culture."9
The fourth age - software that seeks your goals
Now, you could argue that with the rise of vibe coding, software from all three previous ages is basically commoditized. AI can copy any of it, at any time, right?10
But I posit we're approaching something new. Software that learns what you're trying to achieve and helps you get there. Not by forcing you into its workflow, but by molding itself to yours.
I call it motivated software.
Imagine you're planning a product launch. Today's software structures your UI and data, makes you think in terms of tasks, deadlines, and assignees. You create tickets, set dates, assign people. The software tracks whether Task A is done, but it doesn't understand that Task A only matters if it helps Product B succeed. (Sometimes, in the case of opinionated software, even if Product B succeeds the software will still complain that Task A didn’t get done in just the right ideological way).
Motivated software works differently. It has your same high-level goal: successful product launch. It takes inputs from connected systems: KPIs, documents, updates, system analytics. It notices that sales enablement for infrastructure products always takes longer than scheduled, so it quietly adjusts future timelines for impacted launches. It tracks that marketing needs extra lead time when engineering runs late, so it alerts them early: marketing doesn’t have to depend on someone from engineering manually reminding them each time. It detects that Sarah's qualification calls are thorough but slow, while Mike's are fast but sometimes miss edge cases, and it routes customer inquiries about the new product accordingly.
This isn't just automation. It’s open-ended pattern recognition, and it’s adaptive optimization11. The software develops a model of what you're trying to achieve and constantly searches for better paths to get there, building and tweaking its reward models to stay in line with yours.
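If that sounds hand-wavy, the core move is mundane: update a learned model from observed outcomes, then apply it to future plans. Here's a minimal sketch of the timeline-adjustment example above, with everything hypothetical and far simpler than a real system would be:

```python
from collections import defaultdict

class DurationModel:
    """Toy 'adaptive optimization': learn how much each task category
    slips versus plan, then pad future estimates accordingly."""

    def __init__(self, alpha: float = 0.3):
        self.alpha = alpha                    # how quickly old evidence is forgotten
        self.slip = defaultdict(lambda: 1.0)  # learned slip ratio per category

    def observe(self, category: str, planned_days: float, actual_days: float) -> None:
        # Exponential moving average of (actual / planned) for this category.
        ratio = actual_days / planned_days
        self.slip[category] = (1 - self.alpha) * self.slip[category] + self.alpha * ratio

    def adjusted_estimate(self, category: str, planned_days: float) -> float:
        # Quietly pad the raw estimate by the category's historical slip.
        return planned_days * self.slip[category]


model = DurationModel()
model.observe("sales-enablement", planned_days=10, actual_days=16)
model.observe("sales-enablement", planned_days=8, actual_days=12)
print(model.adjusted_estimate("sales-enablement", planned_days=10))  # ≈ 12.8 days, not 10
```

The interesting part of a real system isn't the moving average, it's discovering which categories and signals are worth modeling in the first place.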
Think of it like this:
Embedded software was a tool that, well, existed
Unopinionated software was a blank slate that did what you told it, for better or worse
Opinionated software was a teacher that told you what to do
Motivated software will be a partner that helps you succeed
Isn’t that just an agent?
For people up to speed on the AI ecosystem, the most obvious question about this philosophy is: why can’t an agent just do this too?
To be clear, agents plural are absolutely part of this infrastructure. Motivated software takes advances in LLMs, reasoning models, tool use, and agent-to-agent protocols and meshes them together in a logical way that would be obnoxious to do manually, every time, for every use case.
Individual agents lack three things that a parent motivated software platform brings to the table (a rough sketch of how the three fit together follows this list):
Dimensionality - heavy users of agents know that unlocking their best performance comes from giving them small, granular, well-scoped tasks. You could theoretically ask an agent “fix my company’s budget,” but you’d be much better off (in both token cost and outcomes) deploying and orchestrating lots of smaller targeted agents, each handling specific tasks (“audit our FP&A last quarter,” “review last week’s fraud reports for irregularities,” “design a net-payment terms strategy to improve our cash-on-hand”). Motivated software coordinates this effort and selects which tasks are most relevant, when. Think of agents as a point-vector in this system: one gust of wind, rolling up into motivated software’s weather front.
Persistence - it’s no secret agents run into context window issues. Modern agents typically operate with around 100-200k-token context windows; some are ramping up to 1M tokens in practice (and labs have experimental models with windows up to 10M tokens). So why is persistence an issue? Even with larger and larger context windows (and you have to assume that LLM builders will continue to innovate here), the cost of one agent holding on to the details of one conversation gets prohibitive, quickly. If you need nuanced details on every customer in your GTM strategy for each incremental agent call, you either need every single call to expend increasing numbers of tokens (or risk compacting away useful details), or you need a way to retain and sort through context, feeding it to the right agent at the right time. The human could manually copy-paste context in each time, but wow - what a waste of human potential! - and what a great way to introduce suboptimal context calls and room for error.
Hyperparameters - agents don’t dynamically improve themselves or think critically about when to call themselves under which conditions. We as users can do that, but there’s an upper bound on how much we want to be thinking about “the problem of how to optimize an agent to solve our problems” versus “our problems themselves.” Motivated software abstracts this away: we can program in the criteria for selecting the right agents, system instructions, and parameters, or better yet, we can have agents optimize this over time, creating meta-optimization layers - recursive customization that keeps the software hewing ever closer to our needs and preferences12.
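To make the three concrete, here's a loose sketch of how a parent platform might wrap agents: scoped decomposition for dimensionality, a long-lived context store for persistence, and tunable selection knobs for hyperparameters. Every name here (ContextStore, Orchestrator, call_agent) is invented for illustration, not any real framework's API:

```python
from dataclasses import dataclass, field

def call_agent(task: str, context: str) -> str:
    # Stand-in for a real LLM/agent invocation.
    return f"[agent result for: {task}]"

@dataclass
class ContextStore:
    """Persistence: long-lived memory queried per task, instead of
    replaying the whole history on every agent call."""
    notes: dict[str, str] = field(default_factory=dict)

    def relevant_slice(self, task: str, budget_tokens: int = 2_000) -> str:
        # Naive retrieval: keep only notes whose key appears in the task,
        # clipped to a token budget (~4 chars per token heuristic).
        hits = [text for key, text in self.notes.items() if key in task]
        return " ".join(hits)[: budget_tokens * 4]

@dataclass
class Orchestrator:
    """Dimensionality + hyperparameters: split a broad goal into scoped
    subtasks, and keep the selection knobs tunable (or learnable)."""
    store: ContextStore
    max_parallel_agents: int = 3  # a hyperparameter the platform could tune itself

    def decompose(self, goal: str) -> list[str]:
        # In a real system this would itself be an LLM call; hardcoded here.
        return [
            "audit our FP&A last quarter",
            "review last week's fraud reports for irregularities",
            "design a net-payment terms strategy",
        ]

    def run(self, goal: str) -> list[str]:
        subtasks = self.decompose(goal)[: self.max_parallel_agents]
        # Each scoped agent gets only the context slice it needs.
        return [call_agent(t, self.store.relevant_slice(t)) for t in subtasks]


orchestrator = Orchestrator(store=ContextStore(notes={"FP&A": "Q2 spend ran 12% over plan."}))
print(orchestrator.run("fix my company's budget"))
```

The real versions of decompose and relevant_slice are where the hard problems live; the scaffolding around them is the easy part.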
Why this matters
Here's why motivated software is different from just adding AI to existing apps:
It remembers. Not just your data, but your context. Not just from this conversation, but longitudinally: it knows where to pick up the thread where the last agent dropped it, without burning billions of tokens to do so. It knows that when you say "the Johnson project," you mean the proposal for the client you met last Tuesday, not the internal project run by the engineer who happens to have the same name.
It learns your patterns. Sometimes, you know you’re stuck repeating a mistake, but you don’t know how to fix it. Sometimes, you don’t notice you’re repeating your mistakes until someone catches you, often way too late. Motivated software can nudge us out of ruts: watching you reschedule the weekly standup three times because West Coast teammates can't make the morning slot, it suggests a better time that works for everyone. Seeing you struggle to convert cold emails, it proffers more customized openers and blocks time on your calendar to protect research time. Tracking stress signals in your meeting transcripts and messaging apps, it starts pushing back on late-night meetings and calls to help you recover and avoid burnout.
It connects intentions to actions. When you message "We need to speed up delivery," it doesn't just record the comment. It identifies bottlenecks in your workflow, suggests process improvements, and tracks whether changes actually improve delivery speed. Motivated software is fundamentally about ingesting your goals and reinforcing behaviors and processes that advance them, tuning all these RNNs around us to our advantage.
It incentivizes models to ask for help. Current AI confidently gives wrong answers because saying “I don’t know” in a chat window 50% of the time doesn’t feel great. Motivated software needs to get away from that: to understand and convey its confidence level, because being confidently incorrect sets you back from your goal. Motivated software calibrates AI answers against certainty over time. When uncertain, it asks clarifying questions or brings in human judgment (a toy sketch of this gating follows the next paragraph).
Most importantly, it pulls agents out of the chat window and into the underlying primitives. It’s not possible to get the level of performance you need from motivated software by tacking on a chat window: simply asking a bot to give an answer or run a tool does too little, too slowly, to have the software form-fit the customer and seek after their goals. LLMs have to be in the infrastructure, the libraries, the core utilities in order for this to work.
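Here's a toy sketch of that confidence gating. The threshold and the confidence score are both invented; a real system would calibrate them over time from logprobs, verifier models, or historical accuracy:

```python
CONFIDENCE_FLOOR = 0.7  # hypothetical threshold; a real system would tune this per task

def answer_or_ask(question: str, draft_answer: str, confidence: float) -> str:
    """Gate an AI answer on a calibrated confidence score.

    `confidence` stands in for a calibrated estimate; the point here is
    the branch, not the scoring method.
    """
    if confidence >= CONFIDENCE_FLOOR:
        return draft_answer
    # Below the floor: ask a clarifying question or escalate to a human,
    # instead of returning a confidently-wrong answer.
    return f"I'm only {confidence:.0%} sure. Can you clarify: {question}"

print(answer_or_ask("Which Johnson project did you mean?", "The client proposal.", 0.45))
```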
This is hard to build. Really hard. Current AI technology isn't quite there yet. We need:
AI that can accurately understand goals, not just commands
Memory systems that maintain context over months, not minutes
Context uptake that costs thousands of tokens, not millions (one possible shape is sketched after this list)
Confidence calibration so the system knows what it doesn't know
Search capabilities that explore solutions without getting stuck
Feedback loops that ensure the software improves, not just changes13
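Some of these are easier to sketch than others. For the memory and context-cost items, one plausible shape (with invented names and a naive relevance score) is a long-term store that packs compact summaries into a fixed per-call token budget rather than replaying raw history:

```python
import heapq
from dataclasses import dataclass

@dataclass
class Memory:
    summary: str   # compacted by an upstream summarizer, not a raw transcript
    score: float   # learned relevance; a real system would decay this over time
    tokens: int

class LongTermContext:
    """Pack the most relevant compact memories into a fixed per-call
    token budget, instead of replaying months of raw history."""

    def __init__(self) -> None:
        self.memories: list[Memory] = []

    def remember(self, raw_text: str, score: float) -> None:
        # Placeholder compaction: truncate. In practice an LLM writes the summary.
        summary = raw_text[:200]
        self.memories.append(Memory(summary, score, tokens=len(summary) // 4))

    def pack(self, budget_tokens: int = 4_000) -> str:
        # Greedy: highest-scoring memories first, until the budget runs out.
        chosen, spent = [], 0
        for m in heapq.nlargest(len(self.memories), self.memories, key=lambda m: m.score):
            if spent + m.tokens <= budget_tokens:
                chosen.append(m.summary)
                spent += m.tokens
        return "\n".join(chosen)
```

The scoring function is doing all the work here, and learning it well is exactly the open problem the list above points at.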
A whole new world
Even just three to five years from now, this could make day-to-day workflows look really different.
Let’s say, in 2030, you run a growing sales team. Your software isn't just tracking deals; it's trying to help you succeed. It notices you're adding more reps but deal velocity isn't increasing proportionally. It identifies that deal approval has become a bottleneck and suggests a solution: parallel approval tracks based on deal size and risk profile.
You're skeptical: do you need that much process for your team size? But you give it a shot. The software spins out a custom agent, adjusts your approval workflows, and monitors the experiment, measuring not just approval speed but discount rates, legal review quality, and rep satisfaction.
After two weeks, it reports: 31% faster approvals, discount rates held steady, slight improvement in contract quality scores. It brings receipts: a line-item audit trail of each deal by track, links to updated pipeline metrics, confidence scores against each routing decision it made, flags where human judgment might have differed (there's always room for interpretation). It asks: "What do we keep? What do we change? Do we codify this?"
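Strip away the narrative and the reporting step is just measurement plus honest comparison. A trivial sketch, with made-up numbers (not the ones in the story):

```python
from statistics import mean

# Invented two-week experiment data: days-to-approval per deal,
# before and after the parallel approval tracks went live.
baseline = [6.1, 5.8, 7.2, 6.5, 5.9, 6.8]
treatment = [4.1, 4.5, 3.9, 4.8, 4.2, 4.4]

speedup = 1 - mean(treatment) / mean(baseline)
print(f"Approvals {speedup:.0%} faster")  # the headline number in the report

# A motivated system wouldn't stop at the headline: it would run the same
# comparison on guardrail metrics (discount rate, contract quality) and only
# propose codifying the change if those held steady.
```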
This isn't science fiction. It's the logical next step from where we are today. The building blocks exist:
Large language models that understand context
Pattern recognition that spots trends
Optimization algorithms that search solution spaces
Feedback systems that learn from results
What's missing is the connective tissue—the infrastructure that lets these pieces work together seamlessly.
Deep dynamic customization as a moat
The companies that figure this out won't just have better software. They'll have sustainable competitive advantages that are nearly impossible to copy.
Why? Because motivated software's value isn't in its features. You can screenshot the UI and dump a feature list into Lovable, but the entire value prop is in the time-series bespoke context you’ve built up user-by-user, tenant-by-tenant: you can’t clone that. The value builds over time as the system learns the unique patterns of your organization.
It's like the difference between a new EA and one who's worked with you for years. They could have gone to the same school, worked for the same firms, but the experience is totally different. The tenured assistant doesn't just follow instructions: they anticipate needs, prevent problems, and make connections you might miss. That knowledge can't be transferred instantly to someone new.
The same protection applies to motivated software. A competitor can copy your interface, match your features, even poach your engineers. But they can't copy the accumulated understanding of how your specific organization works best.
Motivating your software
Building motivated software requires rethinking our entire approach to software development:
Start with goals, not features. Instead of asking "What should this button do?", ask "What is the user trying to achieve?" We express this as a meta-layer in “user journeys” now, but motivated software implants this directly into UI/UX and adds abstraction on top of that: goals are themselves composable and manipulatable.
Design for learning, not just usage. Build systems that get better over time, not just systems that work on day one. We compartmentalize this type of instability into our DS/ML teams in most engineering orgs today: motivated software will broadly assume that frontend, backend, data engineering, infrastructure, security, and analytics are all in a much greater state of flux, and in different states at different customers. Understanding what we’ve built will become its own form of challenge.
Embrace uncertainty. Create software that knows when it doesn't know and can gracefully ask for guidance. More to the point: create software that is guaranteed to not be in its optimal state, or even know its own optimal state, at the point of initialization.
Measure outcomes, not activities. Track whether users achieve their goals, not just whether they click the buttons. Metrics like intercept rate, derivatives, and precision will matter more than days active and clickthrough rates: “facts” will necessarily get more complex, more recursive, and require more storytelling to understand and follow over time.
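As a toy illustration of outcome-first measurement (names and numbers invented):

```python
from dataclasses import dataclass

@dataclass
class GoalRecord:
    stated: str
    achieved: bool
    clicks: int   # activity data; mostly noise for this metric

def outcome_rate(records: list[GoalRecord]) -> float:
    """Fraction of stated user goals actually achieved: an outcome 'fact',
    as opposed to activity counts like clicks or days-active."""
    return sum(r.achieved for r in records) / len(records) if records else 0.0

records = [
    GoalRecord("ship the Q3 launch", achieved=True, clicks=412),
    GoalRecord("cut approval time", achieved=True, clicks=57),
    GoalRecord("reduce churn by 2 points", achieved=False, clicks=980),
]
print(outcome_rate(records))  # ≈ 0.67; the clicks tell you nothing about it
```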
This is hard. It requires new technical capabilities, new design patterns, and new ways of thinking about software. But the payoff is enormous: software that doesn't just serve users but partners with them.
The future is motivated
Software isn't dead, it's evolving.
The chat interface revolution failed not because AI isn't powerful, but because power without direction is meaningless: donuts in a parking lot. Motivated software provides direction by understanding what users actually want to achieve.
This is a fundamental shift in how we think about tools. For the first time, our software can grasp our intent and seek out our goals, not just execute our commands.
The companies that understand this shift and build software motivated by user success rather than feature completeness won't just win in the market. They'll define entirely new categories of what software can be, and they’ll leave previous generations in the dust.
The age of motivated software is coming. The question isn't whether it will transform how we work. The question is: who will build it first?
It’s an older source, but there’s not much comparable data cross-border from recent months.
Different studies, so the numbers aren’t quite apples-to-apples. Take that with a grain of salt, but the baseline of companies using AI (67% vs 75%) is fairly close.
See Making agents work. There’s no good reason to throw compute (and therefore cash) at a task that would be waste if a human did it. (There is good reason to throw compute at friction).
At Slack we used to call these “papercuts”: small UX irritants that feel awful in aggregate when you use that particular software all the time (but don’t matter if you only need it once a month or once a quarter).
The title of this article is a callback to Stuart Eccles’ article from a decade ago, The rise of opinionated software. Aaron Levie and DHH are other notable advocates.
Ever notice how the concept of “digital transformation” survives all three eras? Wonder why that is.
There are a couple of notable arguments in the opposite direction [1] [2] [3] that bear calling out. But there’s a core issue: once you encode your opinion in source, nothing stops a competitor from identifying it and prompting an AI to replicate it. Try asking an agent to build a Linear or Superhuman or Replit clone. Their opinions aren’t moats. Their taste isn’t a moat.
It’s not “understanding” but it could feel like it on the user’s end. Anthropomorphization is not a good habit to get into when dealing with LLMs - it’s the root of like 90% of bad takes in the AI space.
I think this is where I distinguish these “eras” from Andrej Karpathy’s “Software 1.0 / 2.0 / 3.0,” which is less about how the software interacts with users and more about how the developer interacts with the codebase.
These may sound like givens, but there are a slew of agentic experiments that resulted in fragile, expensive failures that came nowhere near their stated objectives. If Anthropic and Cognition can’t “wing it” and succeed, odds are you and I can’t either.