Software Is a Social Contract
Source: George Sivulka on X

George Sivulka, CEO of Hebbia, writing in defense of vertical software:
"Lines of code are the least interesting part of software. Software that works means that somebody sat down and designed a workflow, and the logic of that workflow encapsulated the way that things are done. Software is a stored process. It's not a neutral tool: it's an opinion for how a group of people should collaborate, encoded in a durable system. Software is a social contract."
This is the best single paragraph I've read in the entire "do foundation models eat vertical software" debate. Most of that discourse treats enterprise software as code plus interface plus data — a stack that AI can replicate layer by layer. Sivulka's point is that the actual product is the agreement. Bloomberg isn't sticky because the terminal is hard to replace. It's sticky because an entire professional generation learned to think and communicate through it.
General-purpose AI tools, however strong they get, can't be opinionated in that way. That's not a criticism of them: it's a structural consequence of what they have to do, which is serve everyone at once.
Right. Anthropic and OpenAI aren't going to learn that two MDs at the same bank have entirely different standards for what a good CIM summary looks like, and that both standards are correct within their respective teams. That's not a solvable-at-scale problem. That's a firm-by-firm, desk-by-desk grind.
Now — Sivulka is the CEO of a vertical AI company making the case for why vertical AI companies should exist. So: grain of salt, obviously. And he glides past the real bear case, which isn't that foundation models will replicate the orchestration layer today, but that at some capability threshold the orchestration layer becomes too thin to sustain a business. He asserts that won't happen. He doesn't argue it.
The reliability point is where he's on the firmest ground. "90 percent right is the same as 100 percent wrong" isn't rhetoric in finance — it's just how the domain works. A confidently wrong deal summary formatted beautifully is worse than no summary at all. General-purpose tools are structurally bad at this because they can't know what "right" means at the level of a specific team's specific standards.
The question everyone in enterprise software should be asking isn't "can a model do this task?" It's "can a model replace the shared expectations my organisation built around this tool?" First question is getting easier to answer every quarter. Second question is a different problem entirely.