AI Project Failure, or Governance Failure?

RAND[1] recently published a root-cause analysis of AI project failure, putting the failure rate as high as 80%. Given the amounts being spent on AI, this is worth taking note of. It's already way more than just a few early adopters burning small R&D budgets on long shots. No, this is proper heaps of money being set on fire.[2] What gives?

Imagine you had to pitch a regular, non-IT project to your board: a demonstrable 80% implementation failure rate, significant and unpredictable operating costs, a risk of being legislated out of practical use, and only a vague description of extremely tenuous ROI. I don't think any respectable governance board would agree to that.

And there's the rub. Isn't it the job of governance to prevent money being spent on things that won't give returns? Whether it's formal capital-G Governance, or simple enforcement of sane business principles by management, somebody's not doing what they're supposed to. So, if the technology implementation turns out to be a lemon, do we call it project failure, or governance failure? Maybe it depends on who's calling.

There's a reason why professional traders will pay a premium for shares in well-governed organisations. Hint: it's got something to do with management's propensity for setting cash on fire.

Then consider IT projects in general: most analysts put their failure rate at over 50%, with 90% failing to produce any measurable ROI[4], and process automation growing by less than 1% per year[5]. Something is clearly amiss here. A slot machine has better odds than this. Way better.

In fact, most organisations are making such fundamental blunders with their technology that they won't ever see returns on their expenditure.

From an EA perspective, I'm looking at this in disbelief. Partly because this is the nth time in a row that organisational leadership has somehow been duped into falling for the same hype-driven ruse, partly because of its mind-boggling scale, and partly because this is exactly the kind of hazard that any ethical technology professional should prevent. When sanity returns after the AI hype, a lot of value will be wiped out, and, with it, a number of organisations. Imagine if "quantum", or whatever's next, is bigger than this. There's every reason to believe it will be.

I firmly believe that organisational leadership is ill-equipped to apply sound judgement amid the chorus of people flat-out lying to them about technology (because they want to look smart), the well-meaning (but uninformed) advisors, "influencers" and "thought leaders" who lack the context to provide input of value, and the grifters who make a living from selling lemons.[3] Note that many of the crypto charlatans have successfully pivoted into AI. The only winners in this situation are the consultants and vendors.

It can be really difficult to successfully deploy technology into an existing organisation. Ask any EA. Dealing with large numbers of dependencies and moving parts is hard, and the staggering percentage of failed IT projects clearly shows that very few organisations are getting this right. If you want to increase your success rate, the long list of things to do can be overwhelming. Start at the wrong end, without the proper foundations, and your efforts won't amount to much. (Agile, anyone?) The ex-ante work, coordination and orchestration required, even in a moderately large organisation, make this a complex undertaking, and complex problems have no silver bullets. The absolute best place to start is to give all the work that should be done a solid foundation. Give it a reason to exist. That foundation is best laid at the governance level.

If the foundations for value contribution aren't in place, nothing will change. Let me repeat that differently, just to stress its importance: If people aren't incentivised or compelled to ensure that the money spent on technology results in some kind of quantifiable benefit, they will keep on throwing money at lemons. If demanding some kind of return is not an organisational priority, don't be surprised if people don't care about it. 

Without getting into the details of IT governance, it makes good sense to start with something that is proven to be true, and build from that. If a basic truth is that your organisation shouldn't spend money on things that provide no returns, start from there. Apply it, in its simplest terms, to new technology projects. Set a target for whatever returns are relevant, with timelines for reporting back, and make somebody accountable for gathering, presenting and owning the results. Don't be tempted to get drawn into the details and their accompanying buzzwords. Avoid unquantifiable feel-good visions of market dominance, endless profits and fame. The road to hell is paved with good intentions. Reduce things to simple numbers, slap a timeline on it, and hold somebody accountable, as in the sketch below. Keep this part of the process simple: that's how things get done.
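
To make that concrete: here's a minimal sketch, in Python, of what "simple numbers, a timeline and an owner" looks like when written down. Every name and figure here is hypothetical; the point is that a target which can't be expressed this plainly probably isn't a target at all.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ValueTarget:
    """One quantified, time-boxed, owned return target for a technology project."""
    project: str         # what the money is being spent on
    metric: str          # the return we expect, in measurable terms
    target_value: float  # the number to hit
    unit: str            # currency, hours saved, error rate, etc.
    report_by: date      # when the owner reports actuals back to the board
    owner: str           # the person accountable for the result

# Hypothetical example: no buzzwords, just a number, a date and a name.
target = ValueTarget(
    project="Invoice-matching automation",
    metric="Manual processing hours eliminated per month",
    target_value=400.0,
    unit="hours/month",
    report_by=date(2026, 6, 30),
    owner="Head of Finance Operations",
)
```

If a proposal can't fill in every one of those fields, that's the governance conversation right there.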

Treat technology projects like any other project. If you're building a new widget factory, you want to know how much that factory will cost to build, and to own. Maybe in the first, second and third years, maybe longer. Same with your technology project. The cost to own will be paid in real money, and it must be quantified, preferably beforehand. Insist on it. (Cloud invoices, anyone?) What will the benefits be? What will the profits be from selling the widgets? The costs (and risks) are real; they should be balanced with an upside that isn't fantasy. Don't be sidetracked into attaining some kind of buzzword compliance as a benefit. Attaining "singularity" may be valuable in some context, somewhere, to somebody. Unless you insist that the buzzwords are translated into something that can be measured, you won't know if they're actually worth pursuing. Who will be held accountable when the numbers aren't made? These are simple things that are already done in other areas.
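
The widget-factory arithmetic fits in a few lines. Here's a minimal sketch with made-up numbers, comparing cumulative cost to own against the quantified benefit, year by year; if the benefit figure can't be defended, the loop below is just counting losses.

```python
# Hypothetical figures, in the same spirit as costing a widget factory:
# a one-off build cost, a recurring cost to own, and a quantified benefit.
build_cost = 2_000_000         # implementation: licences, integration, migration
annual_cost_to_own = 600_000   # cloud invoices, support, upgrades, staff
annual_benefit = 1_100_000     # the upside, in real money; not a buzzword

cumulative = -build_cost
for year in range(1, 6):
    cumulative += annual_benefit - annual_cost_to_own
    print(f"Year {year}: cumulative net position {cumulative:+,}")

# Prints Year 1: -1,500,000 through Year 5: +500,000, with break-even
# at the end of year 4. Whoever owns the benefit number should be the
# one explaining it to the board when it isn't made.
```

None of this is sophisticated finance (no discounting, no risk weighting), and that's the point: even the crudest version of this calculation forces the fantasy numbers into the open.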

Once you have developed your IT governance to the point where you realise that it's a lot more complicated than a few simple, informal processes, you may want to start looking at implementing an IT governance framework. Yes, most medium to large organisations will inevitably reach this point. A framework gives you the benefit of battle-tested coverage of all your business/IT areas. You won't need to reinvent the wheel, or pay for the rookie mistakes made along the way. It's a lot easier (cheaper, faster) to tweak an existing governance framework to your specific needs than it is to invent IT governance from scratch.

At this point, you may also be ready to invest in an enterprise architecture capability. Because an EA capability that's not backed by governance principles will generally be ineffective. That's why around 60% of all organisations have been "restarting" or "refocusing" their EA capabilities, over and over, for, oh, roughly the last decade. And this will continue for as long as the evidence shows that most organisations fail at basic IT governance. It's kinda obvious: if you can't get your governance right, how do you expect your EA to make useful contributions? EA can be an incredibly powerful capability, if applied correctly. But it requires more than setting up an EA function and hoping it will somehow produce magic. The informed few who have figured this out are getting much more bang for their technology investments.

The numbers show that the IT industry, and AI in particular, has evolved into a massive gamble. People wildly throw money at things, hoping something sticks. It doesn't have to be that way. It never should have become that way. And the market won't tolerate it staying that way. One of the most significant contributing factors is the asymmetry of information: the buyer lacks the knowledge to judge whether they are being sold something of quality. This is taking a long time to correct. Instead of being drawn into the details, a good first step for governance is to insist that the outcomes (after implementation) are aligned with the interests of the organisation. It's not a perfect approach, but it's an implementable first step. And invest in that area between business and technology, to build the competencies that will close the knowledge gap. It won't fix itself.


[1] [RAND] Root-cause analysis report on AI project failure. https://www.rand.org/pubs/research_reports/RRA2680-1.html

[2] [Scale - El Reg] $95 billion has been invested in AI startups... And astonishingly, the Big Four hyperscalers have spent $200 billion in capex... https://www.theregister.com/2024/11/29/microsoft_preps_big_guns_for/?td=r

[3] [Akerlof] George Akerlof introduced the idea that information asymmetries distort markets and reduce the quality of goods, because sellers with more information can pass off low-quality products as more valuable than informed buyers would appraise them to be. https://en.wikipedia.org/wiki/The_Market_for_Lemons

[4] [Forbes] 70% of digital transformation projects fail. ...only 30% of digital transformation projects result in improved corporate performance... ...90% fail to deliver any measurable ROI... https://www.forbes.com/sites/steveandriole/2021/03/25/3-main-reasons-why-big-technology-projects-fail---why-many-companies-should-just-never-do-them/

[5] [WEF Future of Jobs] This report puts the total growth of business process automation between 2020 and 2023 at 1%, i.e. around 0.3% per year. https://www.weforum.org/publications/the-future-of-jobs-report-2023/