The Regulator nobody lobbied against

The AI industry spends big on lobbying. OpenAI’s CEO personally argued against safety regulations and transparency requirements. The industry’s message was consistent. Regulation will kill innovation. A patchwork of state laws will fragment compliance. Let us self-regulate.

They may have been fighting the wrong fight.

In January 2026, ISO (the Insurance Services Office; for readers outside the US: not to be confused with the International Organization for Standardization), the organization whose standardized forms underpin 82% of US property and casualty insurance policies, introduced two new endorsements. CG 40 47 excludes AI-related liability from bodily injury and personal injury coverage. CG 40 48 excludes AI-related personal and advertising injury. These are not proposals. They are live policy language that carriers can adopt immediately.

And they are adopting it. In the US: WR Berkley, Cincinnati Financial, Frederick Mutual, and Philadelphia Insurance have all filed their own AI exclusion wordings. Philadelphia Indemnity now excludes coverage for any claim involving generative AI-created content. Hamilton Select excludes any claim involving generative AI use, period. The same is happening elsewhere in the world.

This is what regulation looks like when it does not come from a legislature.

The mechanism is simple. Without liability insurance, a business cannot get a bank loan. Banks require it. Without a certificate of insurance showing adequate coverage, a business cannot become a vendor for any large enterprise. Procurement departments require it. Without coverage, a business in a regulated sector (banking, healthcare, manufacturing) cannot operate at all.

No amount of lobbying changes what an insurer writes into a contract. There is no congressional hearing, no public comment period, no executive order. An underwriter in Hartford or London looks at the risk, decides the price, and sets the terms. If the risk is too uncertain to price, they exclude it. Done.

We have seen this before. Cyber insurance is the template. In the early 2010s, companies treated cybersecurity as optional. Then insurers started requiring specific controls as conditions of coverage. Multi-factor authentication. Endpoint detection. Encrypted backups. Incident response plans. Companies that did not comply simply could not get insured. Within five years, the industry self-regulated, not because of any law, but because of a market mechanism that made noncompliance economically impossible.

The environmental liability precedent is even more dramatic. When insurers pulled coverage for asbestos-related claims in the 1980s, it effectively killed the asbestos industry. Companies that could not get insured could not operate. The market accomplished in a few years what regulators had struggled with for decades.

The EU adds a second front. The revised Product Liability Directive now extends strict liability to AI systems. Developers and importers are liable for harms without having to prove negligence. That liability has to be insured or absorbed. For most companies, absorbing it is not an option. They need coverage. And coverage now comes with conditions, or does not come at all.

Three things this means:

  • First, in the US, stop watching Washington and start watching Hartford. In the rest of the world, figure out where the ISO equivalent sits. The regulatory action that will actually change your AI operations is coming from insurance underwriters, not legislators.
  • Second, check your existing policies now. If your carrier has adopted the AI exclusions, your general liability coverage may already have a gap you have not noticed.
  • Third, build governance before you are forced to. The cyber insurance playbook is clear. Companies that had controls in place before insurers required them got better terms. Companies that scrambled after the fact paid more, got less coverage, and lost contracts while they caught up.

The AI industry spent millions lobbying against regulation that legislators had not even written yet. Meanwhile, the regulation that actually matters is already being written by actuaries.


Sources:

  • ISO AI exclusion endorsements (CG 40 47, CG 40 48): Independent Agent/Verisk (2026)
  • Carrier AI exclusion filings: Zelle Law (2026)
  • Cyber insurance precedent: Stimson Center (2024), UCI Law
  • EU Product Liability Directive: MIT/Harvard Digital Society Review
  • AI lobbying spend: Nature (2024), MIT Technology Review (2025)
  • Insurance as regulation mechanism: Modulos AI, NBC News
  • Regulatory markets framework: Schwartz Reisman Institute, University of Toronto

Posted in Artificial Intelligence, Business, Think Different | Leave a comment

The Token Tax

Every time an AI model gets called, GPU cycles get burned. GPUs cost 6-8x more per operation than traditional CPU compute. That is structural, and right now it is mostly hidden from everyday (enterprise) users.

OpenAI projects a cash burn of $25 billion in 2026 and $57 billion in 2027. They burn $2 for every $1 they earn on inference. Anthropic runs a similar ratio, burning roughly $3 billion a year against $5 billion in annualized revenue (though, as a private company with the recent Claude Code success, its top-line numbers may differ). Both companies are selling tokens below cost. The difference is covered by venture capital. OpenAI alone needs $665 billion in cumulative capital through 2030, with break-even not expected before then.

This is not a business model. It is a price war funded by other people’s money.

The question rarely asked: What happens when the subsidy ends?

Token prices have dropped aggressively. Anthropic cut Opus pricing by 67% in a single release. OpenAI launched budget tiers at $0.05 per million input tokens. The industry is racing to the bottom on price while racing to the top on cost. This math does not work forever.

The compounding problem is agentic workflows. A single user request that triggers an AI agent does not make one API call. It makes ten or twenty. The agent reasons, checks, iterates, calls tools, and reasons again. Each loop burns tokens. Enterprise AI budgets now spend 85% on inference alone, up from 55% two years ago. Even if the per-token price drops, the per-task cost keeps climbing because the number of tokens per task is exploding.
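The arithmetic behind that claim is worth making explicit. A minimal sketch, where all prices and multipliers are illustrative assumptions rather than actual vendor rates: a 67% per-token price cut combined with a 20x jump in calls per task still leaves the per-task cost several times higher.

```python
# Illustrative only: prices and call counts below are assumed numbers,
# not real vendor rates.

def cost_per_task(price_per_m_tokens, tokens_per_call, calls_per_task):
    """Dollar cost of one user-facing task."""
    total_tokens = tokens_per_call * calls_per_task
    return price_per_m_tokens * total_tokens / 1_000_000

# Yesterday: one direct API call per task at $15 per million tokens.
old = cost_per_task(price_per_m_tokens=15.00, tokens_per_call=2_000, calls_per_task=1)

# Today: per-token price cut by two thirds, but an agent loop makes 20 calls per task.
new = cost_per_task(price_per_m_tokens=5.00, tokens_per_call=2_000, calls_per_task=20)

print(f"old: ${old:.3f}  new: ${new:.3f}  ratio: {new / old:.1f}x")
# The per-token price fell 67%, yet the per-task cost rose roughly 6.7x.
```

This is why cost per task, not cost per token, is the number to track: per-token deflation is swamped by the tokens-per-task explosion.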

Here is where it gets uncomfortable. 84% of companies already report measurable gross margin erosion from AI infrastructure costs. 26% report erosion above 16%. And 90% of CIOs say cost management is limiting the value they can extract from AI. This is happening at subsidized prices. At real prices, the numbers get worse.

Think about which use cases survive a 3-5x price increase. Enterprise decision support where a single AI-assisted analysis saves a million-dollar mistake? That survives. A customer service chatbot handling routine queries at $0.002 per conversation? That probably survives. But the long tail of AI features bolted onto SaaS products (the auto-summarizers, the AI-generated email drafts, the routine code completion) is built on cheap tokens. When tokens stop being cheap, these features either get cut or their costs get passed to users who may not value them enough to pay.

The counterargument is efficiency. NVIDIA’s Blackwell GPUs deliver 50x better token output per watt than the previous generation. Google’s TPU v7 claims a 4x improvement with 67% better energy efficiency. Custom silicon can reduce inference costs by 40-60% compared to general-purpose GPUs. These gains are real. The question is whether they arrive fast enough to offset the demand growth from agentic workloads that multiply token consumption per task by 10-20x.

Three things to watch. First, track your cost per business outcome, not your cost per token. A cheaper token that gets used twenty times per task is not cheaper. Second, identify which of your AI use cases are viable at 3x current token prices. If the answer is “none of them,” you have a subsidy dependency, not a strategy. Third, watch the funding rounds. When the next OpenAI or Anthropic raise comes with down-round terms or profitability conditions, the price war ends and the repricing begins.

The venture capital subsidy on AI compute is the largest indirect price support in the history of enterprise software. It will not last. The businesses that planned for real costs will be fine. The ones that built on subsidized tokens will learn the same lesson every business learns when someone else stops paying part of the bill.

Sources

  • OpenAI burn rate projections: Medium, “The Burn Rate Crisis” (2026)
  • Anthropic revenue and burn rate: Finout (2026)
  • Token pricing comparison: Finout, “OpenAI vs Anthropic API Pricing” (2026)
  • Agentic inference cost growth: AnalyticsWeek, “Inference Economics” (2026)
  • Margin erosion data: CloudZero, “State of AI Costs” (2025)
  • NVIDIA Blackwell efficiency: NVIDIA blog
  • Google TPU v7: AI Ireland (2026)

Posted in Artificial Intelligence, Business, Think Different | Leave a comment

The Paradox Is back

In 1987, Robert Solow looked at two decades of corporate IT investment and wrote one of the most quoted lines in economics. “You can see the computer age everywhere but in the productivity statistics.”

Nearly forty years later, replace “computer” with “AI” and the sentence still works.

US nonfarm labor productivity grew 2.2% in 2025, down from 3.0% the year before. Total factor productivity sat at 0.8%. A study of CEOs found 90% reporting no measurable productivity impact from AI. Apollo’s chief economist put it plainly. “You don’t see AI in the employment data, productivity data, or inflation data.” Meanwhile, companies increased AI spending by 85% in a single year. $203 billion in venture funding flowed into AI startups. And the economy barely moved.

We have spent the last seven posts in this series explaining why. Each one, it turns out, describes a different reason the productivity statistics stay flat.

The faster horse problem. Most companies use AI to do existing work faster, not to create new economic value. A process that took ten hours now takes six. That is a 40% improvement inside the firm. But if the output is the same, GDP does not register it as growth. Productivity statistics measure output, not speed. When you accelerate the old process instead of building a new one, the numbers do not move.

The belt-and-shaft problem. Organizations have not reorganized around AI. They bolted it onto the existing layout, the same way factories bolted electric motors onto steam-era floor plans. Erik Brynjolfsson, who first formalized the Solow Paradox in 1993, now describes a “Productivity J-Curve.” When a general-purpose technology arrives, productivity actually dips before it rises. The dip happens because the real gains require reorganization, and reorganization is expensive, slow, and painful. The original computer paradox took a full decade to resolve, from 1987 to the mid-1990s. We may be in year two of a similar curve with AI.

The TikTok AI problem. Investment goes to demos, not to production. 80% of AI projects fail to deliver value. 95% of generative AI pilots never scale. The money is flowing, but it is flowing into experiments that do not reach the point where they affect output statistics. You cannot measure the productivity impact of a pilot that was abandoned after 14 months.

The job title problem. Organizations delegate AI to a role instead of redesigning operations. The Chief AI Officer gets a mandate but not the authority to change how the business actually works. Result: technology adoption without process change. The numbers reflect this. 75% of measured AI productivity gains are concentrated in 20% of firms. The ones that redesigned. The other 80% adopted AI and changed nothing structural.

The token tax problem. The economics are subsidized. Companies are building on artificially cheap compute, which means the cost side of the productivity equation is distorted. When 84% of companies report margin erosion from AI infrastructure, the net productivity impact, output minus input, shrinks even where output improves.

The recomposition problem. The real value of AI is not inside individual firms. It is in how firm boundaries and value chains get recomposed. But GDP measures what happens inside national boundaries through existing accounting categories. When a BPO firm shifts from selling headcount to selling outcomes, when a manufacturer stops owning a warehouse because AI-coordinated logistics made it unnecessary, those shifts are structural. They change where value sits, but the statistical framework was not built to capture value migration across organizational boundaries in real time.

So the paradox is real, but it is not mysterious. It has specific, identifiable causes. And every one of them is fixable.

The original Solow Paradox resolved when three things happened at once:

  • Hardware got cheap enough for broad deployment.
  • Organizations finally reorganized around the technology instead of layering it onto old structures.
  • A generation of managers who understood the technology natively entered decision-making roles.

All three took roughly a decade.

For AI, the hardware is already cheap, arguably too cheap given the subsidies. The organizational redesign is barely starting. And the generational shift in management thinking is years away.

Three implications.

First, do not use the flat productivity numbers to argue that AI does not work. It works. The problem is that most organizations are not doing the work that makes the gains visible at scale.

Second, do not wait for the macro statistics to validate your AI strategy. By the time productivity data catches up, the companies that reorganized early will have a structural advantage that is very difficult to close.

Third, reread the previous posts in this series. Each one describes a specific failure mode that keeps AI out of the productivity numbers. Fix those, and you will not need to wait for the paradox to resolve itself. You will be the resolution.

Sources

  • Solow original quote: New York Times Book Review (1987)
  • US productivity data: Bureau of Labor Statistics (2025)
  • CEO productivity survey: Fortune, “CEOs Admit AI Had No Impact” (2026)
  • Apollo economist quote: Torsten Slok
  • AI contribution to GDP: St. Louis Fed, “Tracking AI’s Contribution to GDP Growth” (2026)
  • Productivity J-Curve: Brynjolfsson et al., NBER Working Paper 24001
  • Gain concentration (75%/20%): PwC (2026)
  • McKinsey, “Is the Solow Paradox Back?” (2025)
  • MIT Sloan, “A Calm Before the AI Productivity Storm”

Posted in Artificial Intelligence, Business, Think Different | Leave a comment

The big Recomposition

In the previous posts I argued that most companies think about AI linearly. Make the process faster. But keep the layout the same. Bolt the electric motor onto the old belt-and-shaft system and call it AI progress.

But there is a bigger question that almost nobody is asking. What if AI does not just change how work gets done inside your company? What if it changes why your company is shaped the way it is?

Oliver Williamson won the Nobel Prize in 2009 for a deceptively simple insight. Companies exist because coordinating work internally is cheaper than buying it from the market. That is it. The boundary of any firm sits wherever the cost of doing something yourself becomes higher than the cost of getting someone else to do it.

Those costs have specific names. Search costs: finding the right supplier or partner. Monitoring costs: making sure they deliver quality. Coordination costs: managing handoffs across organizations. Contracting costs: negotiating and enforcing terms. Every company you have ever worked at is shaped by these four forces. The departments that exist, the functions that are in-house, the work that gets outsourced. All of it traces back to where those costs tip.

AI just tipped them all at once.

No previous technology did this. The telegraph reduced search costs. You could find a supplier in another city without traveling there. Containerization reduced coordination costs. Standardized boxes replaced custom loading at every port. The internet reduced distribution costs. You could sell directly without a physical storefront. Each technology moved one or two cost categories and reshaped one or two industries as a result.

AI hits all four simultaneously. It reduces search costs because an agent can scan, compare, and qualify suppliers in minutes. It reduces monitoring costs because machine-audited quality checks replace manual inspection. It reduces coordination costs because protocol-based integration replaces bilateral negotiation. It reduces contracting costs because automated compliance tracking shrinks the surface area for disputes.

When all four drop at the same time, firm boundaries do not just shift. They recompose. Functions that companies kept internal because market coordination was too expensive suddenly become cheaper to buy. And functions that were outsourced suddenly become worth bringing back inside, because AI made internal coordination cheap enough to justify full control.

Both movements happen simultaneously. That is what makes this different from a simple outsourcing wave or a simple insourcing trend. It is a recomposition. The pieces come apart and go back together in a different configuration. Same LEGO bricks, different structure.

You can see it happening now in three places.

In outsourced services, the BPO industry is being recomposed. FTE-based pricing, paying for headcount, dropped from 42% of contracts to 28% in three years. Outcome-based pricing, paying for results, grew from 20% to 39%. The relationship between buyer and provider is being redrawn because AI made it cheap enough to monitor outcomes instead of supervising people. The BPO firm that used to sell labor is becoming a firm that sells completed work. That is not an improvement to the old model. It is a different business.

In manufacturing supply chains, AI-driven demand sensing and real-time quality monitoring are changing which parts of the supply chain companies own and which they contract. When you can monitor a supplier’s output quality in real time through machine inspection, the case for vertical integration weakens. When you can coordinate just-in-time delivery across dozens of suppliers through automated scheduling, you no longer need to own the warehouse. The transaction cost that justified the old structure disappeared.

In professional services: legal research, financial analysis, compliance screening. These functions existed inside large firms partly because the cost of finding, coordinating, and monitoring external specialists was too high. AI is collapsing those costs. The result is not that law firms or banks shrink. It is that the boundary between what they do internally and what they source externally is moving. New specialist firms emerge to serve functions that used to require an in-house team. Old generalist providers lose work that gets absorbed back into the client organization.

The trap is thinking about this as “transformation,” a word that implies the same company changes shape over time. What is actually happening is faster and more structural. The economics that determined why your company has the shape it has are shifting underneath you. Departments that exist because coordination with the outside was too expensive may no longer need to exist. Partners you outsourced to because internal capacity was too costly may no longer be the right answer either.

Three questions:

  • First, list the functions you keep in-house. For each one, ask whether the transaction costs that justify internal ownership have actually changed. If AI made external coordination 5x cheaper, does the function still belong inside?
  • Second, list what you outsource. For each one, ask whether AI has made internal coordination cheap enough to bring it back. Sometimes the right move is the opposite of what you expect.
  • Third, look at where your industry boundaries are drawn. Companies that see recomposition coming will redraw their own boundaries first. Companies that do not will have the boundaries redrawn for them.

This is not about making your company faster. It is about whether your company is still shaped for the economics it operates in. The pieces are the same. The structure they fit into is not.

Sources:

  • Williamson/Coase framework: California Management Review, “From Coase to AI Agents” (2025)
  • Oliver Williamson Nobel Perspectives: UBS
  • AI and transaction cost reduction: Labor Market Matters, “AI, Transaction Costs, and Self-Employment”
  • BPO recomposition data (FTE to outcome pricing): a16z, “Unbundling the BPO”; FirstSource, “Future BPO Services”
  • Fluid firm boundaries: Raktim Singh, “The Fluid Boundary of the AI-Era Firm”
  • AI as coordination-compressing capital: arXiv 2602.16078
  • Healthcare AI transaction cost framework: arXiv 2604.16465

Posted in Artificial Intelligence, Business, Think Different | Leave a comment

Transformation is not a Job Title

The average Chief Digital Officer lasts 31 months. Shortest tenure in the C-suite, falling every year. 75% leave the company entirely when they go, not sideways into another role, out the door. Nearly half of CDOs themselves describe the position as a “revolving door.”

This is what happens when you try to solve a structural problem with a job title.

Digitization was supposed to transform businesses. What it mostly did was express existing processes digitally. Paper forms became PDF forms. Catalogs became websites. Manual approval chains became automated approval chains with exactly the same steps, the same bottlenecks, and the same logic. The container changed. The content did not.

The numbers confirm this. 70% of digital transformation projects failed to deliver their intended outcomes. $2.3 trillion in global spending, and most organizations got stuck at step one. Converting analog to digital. Not rethinking the business model. Not questioning which processes should exist. Just scanning the paper and calling it innovation.

Now the same playbook is being applied to AI. Hire a Chief AI Officer. Give them a mandate. Watch the revolving door spin again.

Gartner expected 35% of large enterprises to have a Chief AI Officer by now. 48% of FTSE 100 companies already have one, with 65% appointed in the last two years. The US government mandated the role at all federal agencies. But Harvard Business Review already published the warning. Chief Data and AI Officers are “set up to fail” through the same structural problems that killed the CDO. Poor alignment. Unclear mandate. No real authority over business processes. A technology title grafted onto a business problem.

The distinction matters. Digitization asked one question. How do I express this process digitally? AI asks a different one. Should this process exist at all? You cannot answer the second question from an office that reports to the CTO. You cannot answer it with a team of data scientists who have no authority over how the business actually operates. You cannot answer it with a 31-month runway before the next person walks through the revolving door.

The CDO role failed not because the people in it were bad. It failed because transformation is not a role. It is an operating model change. It requires rethinking how decisions get made, how teams are structured, how value flows through the organization, and who has authority to redesign processes end to end. No single hire can do that, no matter what you put on the business card.

Three things this means in practice:

  • First, stop hiring for transformation. Start redesigning for it. If your AI strategy depends on one person with a title, it is not a strategy. It is a prayer.
  • Second, look at the mandate, not the role. Does your AI lead have authority over business processes, or just over tools? If they can choose the model but not change the workflow, you have a technology advisor, not a transformation leader.
  • Third, measure tenure against outcome. If your AI leadership turns over every two years, the problem is not the people. It is the job.

The revolving door will keep spinning as long as organizations believe that transformation is something you delegate to a title. It is not. It is something you build into how the company works.

Sources:

  • CDO tenure data: DigitalDefynd, IMD
  • HBR, “Why Chief Data and AI Officers Are Set Up to Fail” (2023)
  • Digital transformation failure rate: CIO Magazine
  • CAIO adoption: Gartner, SearchSVC
  • FTSE 100 CAIO data: SearchSVC (2025)

Posted in Artificial Intelligence, Business, PracticalEconomics, Think Different | Leave a comment

The TikTok AI Trap

You have seen the pattern. A 45-second video of someone typing a prompt into ChatGPT. A flashy demo at a conference. A LinkedIn post with a before-and-after screenshot and the caption “AI just changed everything.” Three thousand likes. Zero production deployments.

This is TikTok AI. It looks impressive. It gets attention. And it has little to do with how AI creates value in a business.

The numbers are uncomfortable. RAND found that 80% of AI projects fail to deliver their intended value. MIT Sloan reports 95% of generative AI pilots never scale to production. Deloitte found 42% of companies abandoned entire AI initiatives in 2025, averaging $7.2 million in sunk cost per abandoned project. IDC tracked 33 pilots and only 4 graduated to production. Median time from approval to shutdown: 14 months.

Meanwhile, investment keeps climbing. 85% of organizations increased AI spending in 2025. 91% plan to increase further. And 95% report zero financial return.

We have been here before. In 1999, companies added “.com” to their name and watched their stock price rise. The ones that survived (Amazon, Google, eBay) were not the ones with the best launch party. They were the ones doing the boring work. Logistics infrastructure. Search indexing. Payment trust systems. The dot-com survivors built plumbing while everyone else built hype.

Gartner now places generative AI entering the Trough of Disillusionment. Demis Hassabis of Google DeepMind called parts of the market “probably a bubble, with seed rounds at multi-ten-billion valuations and basically nothing.” Yann LeCun said the industry is “completely LLM-pilled.” These are not doomers. These are the people building the technology, telling you that most of the money around it is going to the wrong places.

The boring work is where the value lives. Data quality. Process redesign. Governance. Integration architecture. Companies that invested in this (cleaning master data, building data pipelines, redesigning workflows around what AI can actually do reliably) are the ones seeing returns. An e-commerce company found its AI accuracy dropped from 92% on a clean test set of 50,000 records to 71% in production on 1.2 million messy SKUs. The gap between demo and production is not a technical problem. It is a data quality problem, a process design problem, and an organizational discipline problem.

Three patterns that signal TikTok AI thinking in your organization. First, the pilot that has been a pilot for 14 months. If it has not reached production, it is not a pilot. It is a hobby. Second, the AI strategy that starts with technology selection instead of process analysis. You are buying a spray can without checking which hinges are stuck. Third, the executive presentation where the demo gets applause but nobody asks about error rates, edge cases, or integration cost.

The question is not whether AI works. It does. The question is whether your organization is doing the work that makes it work, or just betting on vendors with TikTok videos about it.

Sources:

  • RAND Corporation (2025): 80% AI project failure rate
  • MIT Sloan (2025): 95% GenAI pilot failure rate
  • Deloitte: AI ROI paradox, 42% initiative abandonment rate
  • Gartner Hype Cycle for Artificial Intelligence (2025)
  • Demis Hassabis and Yann LeCun quotes: Fortune, AI Luminaries at Davos (2026)
  • E-commerce accuracy gap: BusinessWorld (2026)

Posted in Artificial Intelligence, Business, Think Different | Comments Off on The TikTok AI Trap

The Belt-and-Shaft Problem and its impact on AI

In 1881, the first factory switched from steam to electricity. You would expect a productivity revolution. It did not happen. For nearly 50 years, factories replaced the central steam engine with a central electric motor and kept everything else the same. Same belt-and-shaft layout. Same floor plan. Same workflow. Electricity was faster steam.

The productivity breakthrough came a generation later, when factory owners realized they no longer needed to organize the entire building around a single power source. Electric motors could be distributed. Every machine could have its own. The constraint had moved, but the layout had not.

This is exactly how most organizations think about AI. Linear. One process in, one faster process out. Input, acceleration, output. The mental model is a straight line.

But AI does not work in straight lines. It works like WD-40. It gets into every joint of an organization. Customer data connects to supply chain decisions connects to pricing connects to product development. The value is not in speeding up one link. It is in what happens when the links start talking to each other in ways they never could before.

Donella Meadows mapped this decades ago in her work on systems thinking. She identified pressure points, places where a small shift produces disproportionate system change. Most AI deployments target the surface. Speed, cost, volume. The deep influence sits elsewhere. In mental models. In how decisions get made. In the boundaries between departments, and eventually between companies.

Amazon did not use AI to make demand forecasting faster. They connected forecasting, inventory, logistics, and pricing into a system that optimizes itself across dimensions no human team could coordinate simultaneously. The value is not in any single prediction being better. It is in the network effect across predictions.

The human brain is wired against this. Research on exponential growth bias shows that even educated populations systematically underestimate nonlinear patterns. Early-stage exponential curves look indistinguishable from linear ones. So executives look at AI, see a 20% improvement in one process, and conclude they understand the impact. They are standing in an electrified factory, staring at the same belt-and-shaft layout, wondering why nothing really changed.

Three things to consider:

  • First, map where AI touches your organization today. If it is a list of isolated use cases, you are thinking linearly.
  • Second, ask what happens when those use cases connect. The second-order effects are where the real value sits.
  • Third, check whether your org chart reflects the old power source or the new one. If your AI team reports to IT, you are still organizing around the steam engine.

The constraint moved. Has your layout?

Sources:

  • Factory electrification history: Citrix (2025), “To Understand AI’s Future Impact, Check Out This Playbook from 150 Years Ago”
  • Donella Meadows, systems leverage points framework
  • Exponential growth bias research: Big Think, “Your Brain Is Wired for Linear Thinking”
  • Amazon AI integration: Stanford Digital Economy Lab

Posted in Artificial Intelligence, Business, Think Different | Comments Off on The Belt-and-Shaft Problem and its impact on AI

The faster Horse that never was

Henry Ford probably never said “If I had asked people what they wanted, they would have said faster horses.” The quote first surfaced in a 2001 marketing magazine letter, 54 years after Ford died. No historian has found it in his autobiography, his letters, or the Ford Museum’s archive of 200 verified quotes.

Which makes the irony perfect. The most famous quote about not thinking big enough is itself a myth we repeat without questioning. We do not even check the source of the story we use to warn people about not checking their assumptions.

That is exactly what is happening with AI right now.

Most companies are building faster horses. They take an existing process, bolt AI onto it, and call it transformation. Customer service gets a chatbot. Legal gets a contract summarizer. Marketing gets a content generator. The process stays the same. It just runs quicker.

McKinsey’s 2025 State of AI report puts a number on this. Only 55% of AI high performers actually redesign their workflows. The rest accelerate what already exists. MIT Sloan found the bottleneck is not model quality or data. It is organizational design. The technology is ready. The thinking is not.

Ford’s actual contribution was not speed. He did not make carriages faster. He made the carriage irrelevant. The assembly line was not an improvement to an existing production method. It was a fundamentally different way of organizing work, materials, and people. The resistance was not technical. It was conceptual. People could not see past the cart.

The pattern repeats with every general-purpose technology. Early automobiles looked like horse carriages with engines strapped on. Early television was filmed radio. Early websites were digital brochures. In every case, the first instinct was to pour the new capability into the old container. Real value only emerged when someone asked a different question. What can we do now that was impossible before?

A telecom customer of ours did not use AI to make customer service faster. They restructured it. Their AI assistant handles the workload equivalent of 50 agents, resolving issues in 45 minutes instead of 7 hours. The difference is not speed. It is architecture. They did not optimize the old model. They replaced it.

Three questions for any AI project. Are we making the existing process faster, or are we questioning whether the process should exist? Are we measuring speed improvement, or capability expansion? Would Henry Ford, the real one, recognize what we are doing as a faster horse?

If the answer to that last question is yes, you are building the wrong thing.


Sources:
  • Quote Investigator (2011), Snopes (2025), HBR (2011)
  • McKinsey, “The State of AI in 2025”
  • HBR, “The Last Mile Problem Slowing AI Transformation” (2026)

Posted in Artificial Intelligence, Business, Think Different | Comments Off on The faster Horse that never was

Think Different about AI

Many organizations treat AI like a coachman treats a combustion engine. They strap it to the old cart. The posts in this series each expose a different version of that fallacy, building from the individual mental model to the organizational operating model.

Posted in Artificial Intelligence, Business, Think Different | Comments Off on Think Different about AI

Groundhog Day, actually Year

Here we go again. The local journalistic farce just got an extension. The article (in German) for the 2026 edition of Groundhog Day.

Posted in Uncategorized | Comments Off on Groundhog Day, actually Year