From Hierarchy to Intelligence — AI4 Accountancy as a Laboratory
My parents ran a small retail shop. I grew up watching them manage inventory, staff, customers, and cash flow simultaneously — with no systems, no org charts, just presence and judgment. Everything worked because the information lived in one place: them.
That's the purest form of an organization. And it doesn't scale.
The moment you need a second person who wasn't there when the decision was made, you've introduced the fundamental problem that two thousand years of organizational design has tried to solve: how do you move context from where it exists to where it's needed?
I became fascinated by that question. I still am.
The Road Through Sociocracy, Holacracy, and Booking.com
During my studies in Business Administration and Technology, I came across the work of Gerard Endenburg — a Dutch engineer who developed Sociocracy in Rotterdam. My thesis was a study of his organization. Later, I successfully implemented it in an international mediation organization.
Endenburg was a child of the sixties, and humanity was at the heart of his work. Sociocracy wasn't just about efficiency — it was about dignity. About creating organizations where people had genuine influence over the decisions that affected them. Power distributed not by title, but by consent. I loved that about it.
Then came Holacracy. Jack Dorsey mentions it in his essay as one of the experiments that came before. What many don't know: Holacracy is built on Sociocracy. I studied it, hoping to find what Endenburg had built, but more scalable. What I found instead was a system that had taken Sociocracy's structure and removed its soul. Holacracy was obsessed with the analogy of organizations as operating systems — precise, governed, debuggable. Humanity was not at its heart; it was in the way.
Then Booking.com.
Kees Koolen — who studied at the same university I did — had built something remarkable. A product organization that ran genuinely bottom-up, driven by data and the autonomy of product owners. I experienced it from the inside as a product owner. It was exhilarating and, at times, maddening.
What Kees built worked because the metrics were honest and the trust was real. But it lacked an ethical foundation, and the human flexibility that no metric can capture. In practice, it became a harness — a very sophisticated one, inside which developers made decisions freely within a defined game. The organization optimized brilliantly within its rules. But the rules couldn't bend when the human moment required it.
Every model I encountered solved something. None solved everything.
Then Came the Paradigm I Had Been Looking For All Along
When I first encountered OpenClaw — an AI agent framework that lets agents hold context, execute tasks, coordinate with each other, and surface what's needed without being asked — I recognized it immediately. Not as a productivity tool. Not as a better way to do the same work.
As the organizational paradigm I had been searching for my entire career.
The logic of why hierarchies exist begins to dissolve the moment you use it. Not because AI is smarter than humans. But because the fundamental bottleneck — routing information through layers of people — simply disappears.
Jack Dorsey says the same thing in his essay, and I think he's right about the diagnosis. But watch what's happening in the market: the AI conversation is almost entirely about improving existing workflows. Claude Code. Codex. Copilots. Everyone is excited about doing the same things faster.
That's not the invitation.
The invitation is to organize completely differently. When memory, knowledge, and context are shared across a system — when you can ask an agent what's blocking progress, what decisions are pending, what the client needs — meetings become redundant. Not because they're inefficient, but because they were never the point. The point was to move information. Now something else does that.
You just ask the agents what's needed to move forward.
AI4 Accountancy as a Laboratory
AI4 Accountancy was built on this logic from the beginning. Not because I planned to implement Dorsey's framework — I hadn't read it yet. But because the logic forced itself on us.
There are three to five of us, working on and off. No office. No meetings. No Jira, no Notion, no project management software of any kind. Just WhatsApp — and now Telegram, so our personal AI agents can join the conversation directly.
Pascale is the founder. She carries the deep domain knowledge — the understanding of what accountants actually need, how the sector thinks, where the real problems are. I'm the one who goes in hard with the latest AI technology, throwing it at the problem and seeing what sticks. We started with Lovable, moved to Windsurf, experimented with Antigravity, and now we're working with Claude Code. The tools change fast. The approach doesn't: take an experienced entrepreneur's knowledge and turn it into working software through pure vibecoding, iteration, and a 10-to-20-minute call when something genuinely needs alignment.
We don't share status in meetings. We ask our agents what the status is. Our GitHub repository is our shared organizational memory — more reliable, more complete, and more accessible than anything that has ever been said in a meeting. When we need to know where we are, we ask. The answer comes back immediately, without a calendar invite.
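To make that concrete: the raw material an agent works from is nothing more than the repository's own history. A minimal sketch of "asking where we are" might look like the helper below — the function name and format string are mine, assuming a local clone; our actual agents do far more on top of this.

```python
import subprocess

def repo_status(repo_path=".", days=7, limit=10):
    """Summarize recent activity in a git repo: a crude 'what happened lately?'

    Hypothetical helper for illustration only. It shells out to `git log`
    and returns one line per recent commit: short hash, author, subject.
    """
    result = subprocess.run(
        ["git", "-C", repo_path, "log", f"--since={days}.days",
         f"--max-count={limit}", "--pretty=format:%h %an: %s"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.splitlines()
```

The point isn't the ten lines of Python; it's that the answer to "what's the status?" is computed from shared memory on demand, instead of being reconstructed from people's recollections in a meeting.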
We are going live now. First paying customers are being onboarded. And the speed at which we got here — a small group of people, no overhead, no hierarchy, no information routing layer — is not despite the absence of traditional structure. It's because of it.
Accounting generates honest signal. Every transaction is a fact about someone's financial reality. The pattern behind those facts tells you more about a business than any meeting or report ever could. An intelligence layer that continuously reads those patterns doesn't need to wait for a manager to notice a problem and escalate it. It sees the problem forming — and surfaces it before anyone thinks to look.
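The kind of continuous pattern-reading described above can be sketched in a few lines. This is a deliberately simplified illustration, not our product's method: it flags a week whose net cash flow sits far outside the rolling trend of the weeks before it, which is the shape of "seeing the problem forming" before anyone escalates it.

```python
from statistics import mean, stdev

def surface_anomalies(weekly_net_flows, window=8, threshold=2.0):
    """Flag weeks whose net cash flow deviates sharply from the recent trend.

    Hypothetical sketch: compare each week against a rolling window of the
    prior `window` weeks, and surface it when it lands more than `threshold`
    standard deviations away from that window's mean.
    """
    alerts = []
    for i in range(window, len(weekly_net_flows)):
        history = weekly_net_flows[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(weekly_net_flows[i] - mu) > threshold * sigma:
            alerts.append((i, weekly_net_flows[i]))
    return alerts

# Stable flows for nine weeks, then a sudden drop in week 9 (0-indexed):
flows = [1000, 1020, 980, 1010, 990, 1005, 995, 1015, 1000, 200]
print(surface_anomalies(flows))  # → [(9, 200)]
```

No manager had to notice the drop and route it upward; the signal was already in the transactions, waiting to be read.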
That's the product we're building. But it's also how we built it.
And when our clients are ready, we'll give them the same. Not just software — the same way of working. The same organizational disruption we experienced ourselves. If they're open to it, they can skip two thousand years of accumulated management overhead and go straight to what Dorsey and Botha are describing from the top of Block.
We got there from a WhatsApp group and a shared GitHub repo.
There is no software developer on our team. Tech-minded people, yes — but no coders. I believe a traditional developer would slow us down. Not because they lack skill, but because the bottleneck is no longer writing code. It's knowing what to build.
What Dorsey Doesn't Say — And What I Learned From Endenburg
Dorsey describes the structure. What he doesn't describe is who the people standing at the edge actually are — and what it takes to be one of them.
Endenburg knew. That's why he built consent, not consensus, into Sociocracy. He understood that the quality of the human making the decision determines the quality of the system. Structure alone was never enough.
Holacracy forgot that. It built the structure and assumed the humans would rise to meet it. They didn't — not reliably, not at scale. Because structure without cultivated human agency is just a more elaborate cage.
The intelligence-first organization has the same risk. When the routing disappears and the machine coordinates, what is the human's actual job? Dorsey's answer: intuition, directional bets, cultural context, ethical calls.
My answer goes further: the human at the edge of an intelligence-first organization is someone who knows themselves. Who doesn't react from conditioning but from choice. Who can trust the system because they understand what they contribute and what they delegate.
This is the new scarce capital. Not skills. Not credentials. Self-knowledge.
Technology externalizes cognition. What remains is the subject that perceives, chooses, and acts. If that subject is undeveloped, the intelligence layer is a supercharger on an empty tank.
Conclusion
The path from my parents' shop to AI4 Accountancy is, in retrospect, a single line: how do you preserve the quality of judgment that exists in the person closest to reality, while scaling beyond what one person can hold?
Endenburg tried to solve it with structure and humanity. Holacracy solved the structure and lost the humanity. Booking.com solved the data and lost the ethics. Dorsey is solving the routing and leaving the humanity as an open question.
I'm trying to solve all of it. I probably won't. But the attempt is the laboratory.
What I know: when memory and context live in the system, the conversation changes. You stop managing information. You start making decisions. And the quality of those decisions depends entirely on the quality of the person making them.
That's not a technical problem.
That's the problem Endenburg was trying to solve in Rotterdam in the 1970s. We're still working on it.