Perspective

The Practice Is the Moat

Last updated: March 2026

58% of small businesses say they use AI (U.S. Chamber of Commerce, 2025; a vendor-adjacent source, so treat as directional). An estimated 3% have integrated it into how they actually make decisions [primary source unverified: widely cited, but the attribution to Kellogg/Northwestern could not be confirmed; the 55-point gap directionally matches independent survey data].

That gap isn't a failure story. It's a window. And it's closing faster than most business owners realize.


The Wrong Question

The question most businesses are asking right now: Which AI tool should we get?

It's the wrong question — not because the tools don't matter, but because the tool is the part that will eventually be irrelevant. Within the next two to three years, AI assistants will be as common, as cheap, and as undifferentiated as email. Every competitor you have will have access to the same ones. The tool will be table stakes.

The question that determines who's ahead in three years is different: What will we have built by then that our competitors haven't?


What the History Actually Shows

Think about spreadsheets. Every business has them. Excel has been available to everyone for nearly forty years.

And yet: organizations that built systematic practices around their data — consistent models, clear ownership, habits of measuring what matters — reliably outperform peers with identical tools. The advantage isn't the software. It's the discipline, the muscle memory, and the accumulated years of knowing which numbers tell you what.

The same pattern shows up everywhere a technology went from new to ordinary:

CRM software. When Salesforce launched in 2000, the companies that built real sales processes around it — standardized qualification, tracked cycle times, accurate forecasting — developed advantages that lasted seven to ten years. The companies that bought the software without the process discipline had expensive contact databases and no competitive advantage. Same tool. Wildly different outcomes (per the organizational CRM adoption literature).

Cloud computing. Netflix began moving to Amazon Web Services in 2008. By the time cloud became standard infrastructure in the mid-2010s, Netflix had nearly a decade of engineering practice built on top of it — deployment culture, resilience patterns, the ability to ship hundreds of changes a day. Every company can now use the same infrastructure. Almost none can operate at that level. The infrastructure is the commodity. The capability isn't (AWS Netflix case study).

The pattern is consistent across every wave of business technology: first-mover advantage is not reliable. Classic pioneer research found that 47% of market pioneers failed, versus 8% of fast followers (Golder & Tellis, Journal of Marketing Research, 1993). What is reliable is the advantage built by organizations that used the window to develop genuine capability before the tool became ordinary.


Why Right Now Is That Window

Here's the math:

AI tools are on track to commoditize within roughly 18–36 months. The signs are already visible: foundation models from Anthropic, OpenAI, and Google are competing primarily on price and ease of use, which is what happens as a technology approaches commodity status. And the wave is compressing faster than any before it: generative AI reached 39% adoption in 21 months, a faster uptake than either the PC or the internet at the same stage (Harvard Kennedy School, 2024).

Building a real organizational practice — not just using a tool, but actually changing how decisions get made, how work gets done, how new people learn — takes 6–12 months (organizational learning research synthesis; Federal Reserve SMB survey data, 2025).

The window where building that practice creates durable competitive advantage: the time between now and when the tool becomes table stakes. Simple arithmetic.
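
To make the arithmetic concrete, here's an illustrative version using the midpoints of the ranges above (the specific numbers are assumptions for illustration, not forecasts):

  Commoditization horizon:              ~24 months  (midpoint of 18–36)
  Time to build a working practice:      ~9 months  (midpoint of 6–12)
  Window if you start now:              24 − 9 = ~15 months of differentiated capability
  Window if you start a year from now:  12 − 9 = ~3 months

Wait a year and the practice finishes forming just as the tools become table stakes; wait much longer and it finishes after they already are.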

But there's a counterintuitive piece: the current moment is actually better for building than a year ago. Gartner has moved generative AI into the "trough of disillusionment" — the phase where early hype gives way to frustration and many organizations step back. 95% of AI pilot programs are failing to deliver measurable returns (MIT, 2025). This looks like bad news. It isn't. The trough is when competitors give up and when the organizations that kept building quietly pull ahead. Every previous technology wave had one. The cloud companies that kept investing in 2009 when enthusiasm cooled built advantages that persist today.


Where AI Tools Actually Sit on the Map

There's a framework used in enterprise strategy called Wardley Mapping. Its core insight is simple: technologies evolve from rare and custom-built to common and commodity, and the competitive value of something drops as it becomes more standardized. The strategic move is to understand where a technology sits on that evolution curve — and what that means for how you should use it.

AI tools right now sit in a specific position on that curve:

Evolution:  Genesis → Custom-Built → Product → Commodity

Foundation models (GPT-4, Claude):   [                ===Product→Commodity==]
AI agents / RAG / workflow tools:    [          ===Custom-Built→Product=====]
Organizational AI practice:          [===Custom-Built=======================]

Foundation models are already commoditizing — the price wars between Anthropic, OpenAI, and Google are the signal. Specialized tools (AI agents, workflow automation, domain-specific models) are moving from custom-built toward product. Organizational practice — the habits, judgment, and institutional knowledge that make AI useful for your specific business — is still in the early custom-built phase. It hasn't even started commoditizing.

The strategic implication: value moves up the stack as the lower layers commoditize. The tool becomes the commodity. The practice built on top of the tool becomes the differentiator. This is not a prediction. It's the pattern that plays out in every technology wave, visible in hindsight with spreadsheets, CRM, and cloud. The difference is that right now, you can see it happening in real time.

What your competitor's AI subscription actually means: If a competitor tells you they're "using AI," they have what you can also have today, for the same monthly cost. The question is whether they've built anything on top of it that you can't replicate by subscribing. The gap between stated AI use (58%) and genuine strategic integration (estimated low single digits across multiple surveys) suggests almost nobody has yet. The 18–36 month commoditization clock suggests that window is finite.


The Difference Between a Subscription and a Practice

Here's a useful way to see the gap:

Most of the 58% who "use AI" are at the first stage — individuals experimenting, drafting emails, generating copy, asking questions. This is real use. It's also personal productivity, not organizational capability. If the one person who uses it well leaves, it leaves with them.

A practice looks different:

  • New people learn how to use it from colleagues, not tutorials
  • You can say specifically what it changed — time saved, error rate reduced, decisions made faster — not just "it's helpful"
  • The practice survives personnel changes because it lives in how the team works, not in one person's habits
  • When a better tool comes along, the practice transfers to it without starting over

That last point deserves more than a bullet. When a better AI tool arrives — and one will, probably within 18 months — a business that has only learned to use ChatGPT has to start over. A business that has built a practice around AI-assisted decision-making transfers that practice to the new tool. The habit of framing problems as AI-addressable questions, the institutional knowledge of which outputs to trust and which to verify, the workflow discipline of integrating AI into actual processes rather than treating it as a side tool — none of that is tool-specific. It moves. The learning compounds across tools rather than resetting with each one. This is the mechanism by which early practice-builders accumulate advantages that aren't erased by the next model release.

The research term for why this is hard to build quickly is "absorptive capacity" (Cohen & Levinthal, 1990): organizations can only absorb external knowledge if they already have related internal knowledge. Before AI can help a team make better decisions, that team needs to be able to articulate how they currently make decisions. Most can't, precisely. And that's where most AI implementations stall — not at the tool, but at the internal clarity that the tool requires.

This is also why competitors can't easily replicate a mature practice even once they recognize it exists. A business that has been running AI-assisted processes for two years has made thousands of small adjustments — which prompts work for their specific customer base, where AI judgment is reliable and where it needs human oversight, how to onboard new employees into AI-augmented workflows. That accumulated calibration can't be purchased. It can't be copied by subscribing to the same tools. It's built through iteration, which takes time, which is exactly what a competitor starting in 2027 won't have.

This is also why 70% of AI value comes from organizational change, not the algorithm (BCG research, 2024). The subscription is 10% of the job.


The Knowing-Doing Gap

There's another problem named specifically in organizational research: the gap between knowing what to do and actually building the habit of doing it (Pfeffer & Sutton, 2000).

Reading about AI is not the same as changing how you work. Attending a demo is not the same as running a two-week test on a real problem with real measurement. Understanding that AI could help your customer service team is not the same as watching the first ten AI-drafted responses, editing what's wrong, figuring out where the tool succeeds and where it needs guidance, and doing that until the team has collective judgment about when to trust it.

The knowing-doing gap is the reason 95% of pilots fail. They acquire knowledge. They don't complete the cycle. The organizations that close the gap do one specific thing differently: they pick a real problem — not a broad initiative — and they measure what happens.


What Minimum Viable Practice Actually Looks Like

Not: "We're implementing AI across the business."

Instead: one problem, one team, two to four weeks, one metric.

A small manufacturer picks: reducing the time to draft customer quotes from 3 hours to 45 minutes. They run it for a month. They track: how much time did it actually take? How many drafts needed significant revision? What prompts worked, what didn't? At the end of the month, they know something real. They've started building a practice.

A service business picks: responding to the ten most common customer questions faster and more consistently. They run AI-drafted responses for a month, track how often they go out without changes, track customer follow-up rates. At the end of the month, they know something real.
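
For either test, the measurement layer can be as light as a shared tracking sheet. The columns below are one illustrative layout, not something prescribed by the research cited here:

  Date | Task | Time with AI | Revision needed (none / minor / major) | Prompt used | Keep or change?

Reviewed weekly, a sheet like that yields the two numbers that matter at day 30: how long the process actually takes now, and what fraction of outputs ship without significant rework.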

The timeline from that first test to genuine organizational practice — where multiple people use it, it's embedded in regular workflows, and the practice survives when someone leaves — is 6–12 months if you're intentional about it. That's not long. But it requires starting, not planning to start.

The businesses that have actually integrated AI into strategy — whatever the precise percentage — didn't get there by evaluating tools. They got there by picking a problem, running a test, learning from it, and doing it again.

How to read the 30-day mark. After the first month of a focused test, three things tell you whether a practice is forming or stalling: (1) team members are modifying the AI's output based on collective judgment, not just accepting or rejecting it wholesale — this signals calibration is happening; (2) someone other than the initiator can run the process without a handoff document; (3) you can state one specific thing that changed, in a number. If none of these are true at 30 days, the test hasn't started — it's been observed from a distance.

A note on industry. The urgency is not uniform. Finance and tech sectors are moving fastest by most measures; retail has strong pilot-to-production conversion rates. Professional services and manufacturing are earlier in the curve. If your competitors are in a sector that's been slower to move, your window may be longer than 18 months. If they're in finance or tech, it's shorter. The math applies in all cases; the timeline adjusts.


The Honest Caveats

Not every business has strong AI use cases right now. The OECD surveyed 5,000+ small businesses across seven countries in 2024 and found that 57% of non-adopters cite "not applicable to my work" as the main reason — not cost, not skills, not distrust. If your business is primarily hands-on skilled work, the case is genuinely weaker.

The productivity paradox is also real: the original computing wave in the 1970s and 80s increased computing capacity dramatically before measurable productivity gains appeared — because the gains required organizational restructuring that took years to complete. The same dynamic is likely playing out now. Expect the benefits to lag your investment, not precede it.

And the risk of early adoption is real: 72% of CIOs report breaking even or losing money on AI investments (Gartner CIO Survey, 2025). The asymmetry cuts both ways. Moving early means paying the learning costs. Waiting means paying the catch-up costs. Neither is free.

The businesses that navigate this successfully are consistent in one way: they start with a specific problem that has measurable impact, not with the question of which tool to use. The tool follows the problem. The practice follows the test. The advantage follows the practice.


What Competitors Will Do

Most of them will wait. The Gartner trough creates permission to wait — when 95% of pilots are failing and expectations are cooling, waiting feels prudent.

When the tools mature and the framing shifts from "experimental" to "standard practice," the late majority will move. They'll subscribe. They'll run workshops. They'll experiment.

By then, the organizations that built practices in 2025 and 2026 will have two to three years of refinement, institutional knowledge, and the kind of team judgment that only comes from doing a thing long enough to get good at it. The new entrants will have the same tools. They won't have the same capability.

The spreadsheet analogy holds: everyone has had Excel for nearly forty years. The question was never who had it. The question was always who built the discipline to use it.


Sources: U.S. Chamber of Commerce, "Empowering Small Business" (2025; vendor-adjacent); U.S. Census Bureau, Business Trends and Outlook Survey (2025); Federal Reserve, Small Business Credit Survey (2025; n=6,525); OECD, "Generative AI and the SME Workforce" (2025; survey year 2024; n=5,000+ across 7 countries); MIT NANDA, "The GenAI Divide" (2025); BCG, "Where's the Value in AI?" (2024; n=1,000 CxOs); Bick, Blandin & Deming, "The Rapid Adoption of Generative AI," NBER/Harvard Kennedy School (2024); Cohen & Levinthal, "Absorptive Capacity," Administrative Science Quarterly (1990); Pfeffer & Sutton, The Knowing-Doing Gap (2000); Golder & Tellis, "Pioneer Advantage: Marketing Logic or Marketing Legend?," Journal of Marketing Research (1993); Gartner, Hype Cycle for Artificial Intelligence (2025); Gartner, CIO Survey (October 2025; n=506); Simon Wardley, Wardley Maps; Will Larson, "Wardley mapping the LLM ecosystem" (December 2024).