Most of What We Called Loyalty Was Just Friction
Source: @VibeMarketer_

A recent post from @VibeMarketer_ argues that AI agents are about to become the buyer — and that businesses built on human inertia are exposed. It's gotten wide circulation. Some of it deserves the attention.
What actually lands
The piece names something most people in commerce know but rarely say out loud: a huge share of enterprise value sits on top of customer passivity. Subscriptions that renew because cancelling is tedious. Insurance policies that persist because comparison shopping is boring. Travel platforms that win because assembling an itinerary yourself is painful. The author calls these "friction moats" and argues agents will dissolve them. That framing is sharp, and it's correct.
But the strongest move in the piece is a specific, practical one that's easy to miss. Buried near the end, the author references "dual-native brand systems" — the idea that companies should now deliver two sets of assets: human deliverables (Figma files, PDFs, design systems) and agent deliverables (markdown, YAML, JSON that AI tools can parse). That's not a prediction. That's something you could act on this quarter. Most businesses haven't even considered whether their product information is machine-readable. That paragraph alone is worth your time.
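To make the "agent deliverable" idea concrete: a minimal sketch of publishing the same product facts a human reads on a landing page as plain structured data an agent can parse. The schema and field names here are illustrative assumptions, not a standard the post defines.

```python
import json

# Hypothetical "agent deliverable": product facts restated as structured
# data, published alongside the human-facing page so a buying agent can
# read facts rather than scrape marketing copy. Field names are invented
# for illustration only.
product = {
    "name": "Example Widget Pro",
    "price": {"amount": 49.00, "currency": "USD", "billing": "monthly"},
    "cancellation": {"self_serve": True, "notice_days": 0},
    "return_policy": {"window_days": 30, "restocking_fee": False},
    "support": {"channels": ["email", "chat"], "sla_hours": 24},
}

# Serialised once; could be served at a well-known URL next to the
# human-readable page, the way sitemaps sit next to HTML.
agent_feed = json.dumps(product, indent=2)
print(agent_feed)
```

The same facts could equally ship as YAML or markdown; the point is that the machine-facing copy is authoritative and parseable, not reverse-engineered from the design system.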
Where it doesn't earn its confidence
The piece falls apart at "compete on actual value, not friction." It sounds clarifying. It isn't. It's circular.
The entire argument rests on agents being able to determine what's "best." But "best" is precisely the hard problem. Best price? Best quality? Best return policy? Best for this specific user's unstated preferences about sustainability, or aesthetics, or risk tolerance? An agent optimising across those dimensions has to make tradeoff decisions that are, in practice, subjective. The post treats this as solved — agents "just optimise for price and fit" — and then moves on to the implications as if the foundation were solid. It isn't. The interesting question isn't whether agents will comparison-shop. It's how they'll weigh competing priorities for a human who hasn't articulated them. That question goes entirely unasked.
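The tradeoff problem can be shown in a few lines. This is my own toy illustration, not anything from the post: the same two offers rank differently under two equally plausible preference weightings, which is exactly why "optimise for price and fit" doesn't specify an answer.

```python
# Two offers scored on normalised attributes (0..1, higher is better).
# Values are invented for illustration.
offers = {
    "A": {"price": 0.9, "quality": 0.5, "returns": 0.8},  # cheap, mediocre
    "B": {"price": 0.5, "quality": 0.9, "returns": 0.6},  # pricier, better
}

def score(offer, weights):
    # Simple weighted sum; real agents would need something richer,
    # but the underdetermination problem is the same.
    return sum(weights[k] * offer[k] for k in weights)

frugal        = {"price": 0.7, "quality": 0.2, "returns": 0.1}
quality_first = {"price": 0.2, "quality": 0.7, "returns": 0.1}

best_frugal  = max(offers, key=lambda o: score(offers[o], frugal))
best_quality = max(offers, key=lambda o: score(offers[o], quality_first))
print(best_frugal, best_quality)  # "A" under frugal weights, "B" under quality-first
```

The weights encode exactly the preferences the post assumes users never have to articulate; change them and the "best" product changes with them.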
The post is also completely silent on delegation psychology. People will hand low-stakes purchasing to agents readily enough — office supplies, cloud storage, commodity goods. But it lumps in insurance, real estate, and travel as though the delegation threshold were the same. It isn't. We trust a sat-nav to pick the route. We don't trust it to pick the destination. The gap between "an agent could do this" and "a human will let an agent do this" is where most of these predictions quietly die, and the piece never looks at it.
The takeaway
Read it for the friction-moat framing and the machine-readability point. Discount the timeline by half and the confidence by more than that. The observation about what's vulnerable is strong. The prediction about when and how it breaks is doing work it hasn't earned.