On February 4, 2026, Anthropic launched a Claude Code derivative. Software stocks lost $285 billion in a single day. Thomson Reuters fell 16%.

The market didn't panic because of a better chatbot. It panicked because it saw an architecture that makes applications peripheral. The cockpit from which work gets done is no longer each individual software application. It's the AI platform, with applications reduced to data sources at the edges.

The SaaS disruption narrative is well understood by now. What most investment professionals can't yet articulate is what this architecture actually is, and why it creates edge that existing software cannot.

This paper explains the architecture: what it is, how it works, and why its structural properties matter for investment firms specifically.

What Claude Code Actually Is

Forget everything you associate with AI chatbots. Claude Code is an AI platform that operates directly on your computer. Your files, your data, your systems. It reads documents. It writes financial models. It pulls data from APIs. It produces memos, decks, analyses, emails. It executes.

The distinction from ChatGPT or Copilot is not a matter of degree. It's structural. ChatGPT is a conversation. Claude Code is a computational environment with the full reasoning power of a frontier AI model, connected to everything on your machine and anything accessible via the internet.

Think of it this way. ChatGPT is like calling a brilliant analyst on the phone. You can ask questions and get answers, but they can't touch your files, run your models, or produce deliverables. Claude Code is like sitting that analyst at a desk in your office, giving them access to every system you use, and saying: help me get this done.

That analyst happens to have read effectively everything ever written. And they work at machine speed.

This is the architecture that erased $285 billion of software market cap. Not a feature upgrade to existing products. A different way of working that makes the products themselves peripheral.

Why SaaS Can't Get There

AI's defining capability is global inference: reasoning across the vast majority of recorded human knowledge. Fragment that capability into application-sized pieces (one AI for your documents, another for your CRM, another for market data) and you don't get a collection of slightly smaller AIs. You cross a phase boundary. A change in kind, not degree. It's like having a cardiologist who knows nothing about the patient's medications, stress, or family history. That's not a slightly worse doctor. That's a categorically different service.

This is why buying more AI tools doesn't move you toward AI adoption. It's a phase boundary, not a staircase. The two sides of the boundary require different architectures.

That's the architectural fact underneath the SaaS disruption. Now the question is: what's the architecture on the other side?

The Architecture That Creates Edge

Four components, operating as one indivisible system. We call this the atomic unit of AI adoption. Atomic because removing any component doesn't give you a weaker system. It gives you a categorically different one, like removing a proton from gold and getting platinum.

Global inference. The full reasoning power of a frontier model, unconstrained by application boundaries. Not allocated to specific features. Available for whatever you need, whenever you need it. In a given morning, you might call on it for deep expertise in tax law, then pivot to strategic positioning for a portfolio company, then build an earnings model, then draft a board presentation. One continuous collaboration, not four separate tools.

Encoded local context. Your firm's knowledge, written lightly enough that the original insight survives. Deal memos, investment frameworks, past IC decisions, sector playbooks, operational lessons. Not a data warehouse. More like a field guide written for a very capable new colleague. This is what steers inference away from generic consensus toward the edges that matter specifically to your firm.

Most firms overvalue their data here. The model has already read everything publicly available, probably more thoroughly than your internal documents. The genuinely valuable local context is what the model can't already have: the specific deal dynamics, the competitive insight only a few people have noticed, the institutional knowledge that hasn't been written down because nobody thought to.

Computational environment. The ability to act, not just think. To produce a financial model, not describe one. To pull data from a source, transform it, output a deliverable, and feed that deliverable into the next task. Without this, AI is a conversation partner. Smart, but unable to do anything. Claude Code is this component. It turns reasoning into execution.

The live human mind. Twenty years of pattern recognition in an industry. The sense that something is off before you can articulate why. The decision to pursue this angle rather than that one. No encoding captures this fully, and it's not a gap to be closed by better documentation. It's permanently illegible. The human isn't a temporary limitation of the system; the human is a permanent component.

What Happens Inside the System

Here's where most descriptions stop. Four components, all necessary. But the interaction produces something none of them contain individually.

When a senior investment professional opens a session with full context loaded — the firm's knowledge base, computational tools ready to execute, and frontier inference available for any task — the model doesn't just answer questions better. It becomes a different kind of collaborator.

The mechanism is what we call joint search across two fundamentally different knowledge structures. The model's knowledge is weighted by prevalence — what appears frequently across all of human text is strongly connected. The investment professional's knowledge is weighted by consequence — what actually happened when real money was on the line. These are structurally different maps of the same territory. Neither is complete. Neither is reducible to the other.

In practice, it looks like this: the model surfaces a pattern from across its enormous knowledge base — perhaps a structural parallel between the current deal and dynamics in an adjacent industry. The human recognizes something. Not what they were looking for, but something significant in light of what they know from decades of experience. This recognition generates a new direction. The model follows it into territory neither could have reached alone.

The search objective is being rewritten in real time by what the search itself reveals.

A concrete example. A portfolio manager running this architecture on a hedge fund's research process dispatches what we call a hunting party — a directed, adversarial search for vulnerable consensus. The system identifies what the market believes about a name, decomposes the load-bearing assumptions, and attacks them with specific evidence. Each finding generates the next question. The attack is a chain, not a fork — path-dependent, where step three only exists because step two happened.

The PM reads the output in sixty seconds and catches what the system missed — not because the system was wrong, but because the PM's consequence-weighted experience flags a dimension the prevalence-weighted model wouldn't prioritize. The PM redirects. The system executes. The thesis either survives the full chain of attacks or it cracks. Either way, the output has a conviction quality that no amount of parallel bull-and-bear analysis can produce.
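The chain-versus-fork distinction can be made concrete in a few lines of illustrative Python. Everything here is a hypothetical stand-in: `attack` represents a model call, and the assumptions are invented. The point is structural — in the chain, every attack sees what the previous attacks found; in the fork, every attack starts from zero.

```python
# Illustrative sketch of a path-dependent "hunting party" chain versus a
# parallel fork. All names and data are hypothetical; in a real deployment
# each call to attack() would be a model invocation with live evidence.

def attack(assumption: str, evidence_so_far: list[str]) -> str:
    """Stand-in for a model call: the attack is conditioned on prior findings."""
    return f"finding on '{assumption}' given {len(evidence_so_far)} prior findings"

consensus_assumptions = [
    "margins are structurally stable",
    "churn stays below 5%",
    "pricing power survives new entrants",
]

# Chain: step three only exists because step two happened --
# each finding is folded into the context of the next attack.
chain_findings: list[str] = []
for assumption in consensus_assumptions:
    chain_findings.append(attack(assumption, chain_findings))

# Fork: parallel bull-and-bear analysis -- every attack starts from scratch.
fork_findings = [attack(a, []) for a in consensus_assumptions]

print(chain_findings[-1])  # the last link in the chain saw 2 prior findings
print(fork_findings[-1])   # the parallel version saw none
```

The difference is not speed; it is that the chain's third question could not even have been asked without the second answer.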

This is not a feature that can be added to Bloomberg.

Motor Versus Sail

Bloomberg's ASKB is an impressive engineering achievement. But it illustrates the fundamental limitation.

ASKB is a set of pre-defined agents, each specialized in querying a specific Bloomberg content silo. When it does quantitative work, it generates BQL — Bloomberg's existing query language. The data access boundary is the same one that existed before AI. Every new workflow needs to be designed, built, tested, and shipped by Bloomberg developers.

Bloomberg built a motor. Engineered velocity. Predictable, reliable, constrained. Same speed regardless of how much more powerful models become.

The Claude Code architecture is a sail. Maximum surface area exposed to model capability. Minimum structure constraining it. When models get 2x better — and they will, repeatedly — the motor needs to be rebuilt. The sail just goes faster.

But the deeper problem is strategic. Bloomberg's business model requires democratization. They sell the same terminal to 325,000 subscribers. ASKB makes every subscriber slightly better at the same things simultaneously.

This is the opposite of edge. If every fund running Bloomberg gets the same ASKB workflows on the same day, the insight window is zero. The analytical advantage is zero. Everyone does the same work slightly faster.

Put differently: Bloomberg democratizes AI across finance. The Claude Code architecture concentrates it within a single firm.

The AI-native startups — AlphaSense, Hebbia, Rogo, Brightwave — have the same structural limitation. They give you AI-powered read access to documents. They don't write to your systems of record. They don't accumulate your institutional knowledge. They don't compound your firm's IP. They deliver the same product to every customer.

As models improve, the orchestration layer commoditizes. What took a $350 million startup to build last year becomes reproducible by a capable team with direct access to Claude Code. These tools are useful today. Whether they remain durable businesses as models continue to improve is an open question.

Why This Compounds

The Claude Code architecture has a growth function with two independent variables.

Variable one: model improvement. Every frontier model release makes every deployment better automatically, with zero development work. When Anthropic ships a more capable model, everything the architecture does improves — reasoning quality, execution speed, breadth of capability. No feature request needed. No developer sprint. No upgrade cycle.

Variable two: institutional knowledge accumulation. Every analysis, every deal memo, every IC decision, every operational playbook adds to the firm's encoded local context. Six months in, the system knows your firm in a way no software product ever could. The output stops being generic and starts being distinctively yours — because the context steering inference is distinctively yours.

These two variables multiply, not add.

A firm running this architecture for two years has something fundamentally different from a firm starting today, and that gap widens. It does not narrow.
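The multiplicative claim is easy to check with toy numbers. The growth rates below are invented purely for illustration, not forecasts; the structural point is that the ratio between the two firms grows with the context term alone.

```python
# Toy illustration of multiplicative compounding. Growth rates are invented
# for illustration only -- the shape of the curves is the point, not the values.

MODEL_GAIN_PER_QUARTER = 1.3    # hypothetical: frontier models improve 30%/quarter
CONTEXT_GAIN_PER_QUARTER = 1.2  # hypothetical: encoded context grows 20%/quarter

def full_system(quarters: int) -> float:
    """The full architecture: the two variables multiply."""
    return (MODEL_GAIN_PER_QUARTER ** quarters) * (CONTEXT_GAIN_PER_QUARTER ** quarters)

def point_solution(quarters: int) -> float:
    """A SaaS AI tool rides model improvement only."""
    return MODEL_GAIN_PER_QUARTER ** quarters

# The gap between the two is the context term alone, and it widens every quarter.
for q in (4, 8):  # one year, two years
    print(f"quarter {q}: gap = {full_system(q) / point_solution(q):.2f}x")
```

Both firms benefit when models improve; only one of them also compounds on its own accumulated work, which is why the gap widens rather than narrows.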

No other product in the market improves on both axes. Bloomberg improves when Bloomberg ships updates. AlphaSense improves when AlphaSense ships updates. The Claude Code architecture improves when Anthropic ships a better model AND when a firm does more work through the system.

The compounding rate is structurally higher. And the accumulated institutional intelligence becomes the real switching cost — not contractual lock-in, not data format dependency. The knowledge a firm builds into the architecture is what makes leaving prohibitively expensive, because it cannot be ported to a point solution that doesn't understand it.

The MCP Accelerant

One more structural shift. The Model Context Protocol (MCP) is unbundling data from interfaces. FactSet announced MCP support in December 2025. Bloomberg has built MCP-compatible tool exposure. PitchBook is partnering with AI providers to make private market data consumable outside their terminal.

The data that used to require a specific vendor's interface is becoming directly consumable by any LLM — including the Claude Code architecture. The terminal is no longer the only way in.

For firms running the Claude Code architecture, MCP turns every data provider into a plug-in. One reasoning engine, accessing everything, constrained by nothing except the quality of the firm's judgment.
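In practice, wiring a provider into Claude Code is a configuration exercise. A sketch of what a project-level `.mcp.json` might look like — the server package names and environment variable names here are hypothetical placeholders, not the vendors' actual published connectors:

```json
{
  "mcpServers": {
    "factset": {
      "command": "npx",
      "args": ["-y", "factset-mcp-server"],
      "env": { "FACTSET_API_KEY": "${FACTSET_API_KEY}" }
    },
    "pitchbook": {
      "command": "npx",
      "args": ["-y", "pitchbook-mcp-server"],
      "env": { "PITCHBOOK_API_KEY": "${PITCHBOOK_API_KEY}" }
    }
  }
}
```

Once a server is registered, the model can call that provider's tools inside a session the same way it reads a local file — no terminal, no vendor interface in between.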

Implications

The firms building this architecture now are creating advantages that cannot be replicated by purchasing software. The atomic unit is institutional — the people, the context, the architecture. It can't be licensed. It can't be copied. And it compounds.

A firm that has run the full system for two years will operate at a level that no collection of SaaS AI tools can approach. Not because the tools are bad, but because the architecture is categorically different — the way a fully integrated organism is categorically different from a collection of specialized cells in separate dishes.

The technology is here. Models are intelligent. The computational environment exists. The remaining question is organizational: which firms can assemble the complete system, and how quickly the compounding advantages of early movers become structural.

Diego Espinosa

CEO & Co-Founder, Kith AI Lab. Former #1 ranked equity research analyst, Research Director at Bernstein, and $10B portfolio manager.