Why 50% of GenAI projects fail – and how to beat the odds

Organisations racing to implement generative AI (GenAI) find themselves caught between the pressure to innovate and the reality of what it takes to actually do so. As a result, Gartner research found at least 50% of GenAI projects were abandoned after proof of concept by the end of 2025.

When applied well, GenAI can help organisations tackle complex challenges and build sustainable competitive advantage. When applied poorly, it becomes just another costly experiment.

The single biggest reason GenAI fails isn't the technology itself – it's how organisations approach implementation. Organisations that don't establish specific success metrics and align GenAI with strategic objectives face the highest failure rates. GenAI must be treated as a business transformation initiative, not just a technology deployment.

To realise meaningful results from GenAI investments, leaders must look beyond hype and address the core reasons many projects fail. Understanding these pitfalls and knowing how to avoid them can be the difference between wasted resources and lasting competitive advantage.

Lack of business value

The most fundamental reason GenAI projects fail is lack of business value. Many organisations fall into the trap of chasing flashy demos or deploying GenAI everywhere simultaneously. This approach dilutes resources across low-impact initiatives.

Without clear prioritisation frameworks or defined success metrics, projects lack measurable business value, making them vulnerable when budgets tighten or executives demand proof of ROI.

To succeed, organisations should create a rigorous AI use-case prioritisation framework that aligns with overall AI ambition and technical feasibility. It is essential to identify specific measurable outcomes, such as productivity gains, cost reductions and customer satisfaction, and track progress continuously.
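As a rough illustration of what such a framework can look like, the sketch below scores candidate use cases on business value and technical feasibility and ranks the portfolio. The weights, scales and example use cases are all illustrative assumptions, not a Gartner methodology.

```python
# Hypothetical prioritisation sketch: score candidate GenAI use cases
# on business value and technical feasibility, then rank the portfolio.
# Weights, scales and example use cases are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    business_value: int  # 1-5: expected impact on strategic objectives
    feasibility: int     # 1-5: data readiness, skills, integration effort

def priority_score(uc: UseCase, value_weight: float = 0.6) -> float:
    """Weighted score that favours business value over ease of delivery."""
    return value_weight * uc.business_value + (1 - value_weight) * uc.feasibility

candidates = [
    UseCase("Contact-centre call summarisation", business_value=4, feasibility=5),
    UseCase("Automated contract drafting", business_value=5, feasibility=2),
    UseCase("Internal HR policy chatbot", business_value=2, feasibility=4),
]

for uc in sorted(candidates, key=priority_score, reverse=True):
    print(f"{uc.name}: {priority_score(uc):.1f}")
```

Weighting value above feasibility keeps the portfolio anchored to strategic outcomes rather than to whatever is merely easy to build.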

Data isn’t ready

Data quality is the foundation of any successful GenAI initiative. Poor data affects every department, leading to unreliable outputs, failed retrieval augmented generation (RAG) implementations and models that can’t be fine-tuned effectively.

Building an AI-ready data foundation is critical for scaling GenAI efforts. This means curating accurate, enriched and well-governed data across the enterprise, while investing in training teams on specialised data management for GenAI use cases.

Specifically, organisations should focus on creating robust pipelines for RAG, and on organising and retrieving information with knowledge graphs. Both contribute to more reliable outcomes.
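To make the retrieval step concrete, here is a minimal sketch of the retrieval loop at the heart of a RAG pipeline, assuming precomputed document embeddings and cosine-similarity ranking. The embed() function is a deterministic placeholder for a real embedding model, and the documents are invented.

```python
# Minimal sketch of the retrieval step in a RAG pipeline. embed() is a
# deterministic stand-in for a real embedding model, so the similarity
# scores here illustrate the structure only, not semantic relevance.
import hashlib
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding; a real pipeline calls an embedding model."""
    seed = int(hashlib.md5(text.encode()).hexdigest(), 16) % 2**32
    return np.random.default_rng(seed).standard_normal(384)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

documents = [
    "Refund policy: customers may return goods within 30 days.",
    "Shipping: standard delivery takes three to five business days.",
    "Warranty: hardware is covered for 12 months from purchase.",
]
doc_vectors = [embed(d) for d in documents]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by similarity to the query and return the top k."""
    q = embed(query)
    scored = sorted(zip(doc_vectors, documents),
                    key=lambda pair: cosine(q, pair[0]), reverse=True)
    return [doc for _, doc in scored[:k]]

# Ground the model's answer in retrieved context, not its own memory.
question = "How long do I have to return an item?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```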

Escalating total cost of ownership

Rising costs kill projects even when they’re technically successful and delivering user value. What appears as negligible per-token expenses during pilots can become a total cost of ownership nightmare when multiplied across thousands of users and hundreds of use cases.

Organisations often underestimate GenAI’s operational expenses due to limited visibility into how costs scale. Projects that appear viable in proof of concept become budget black holes in production, leading to abrupt cancellation.
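A back-of-envelope projection shows how quickly this compounds. All figures below – pricing, usage and user counts – are illustrative assumptions, not real vendor rates.

```python
# Back-of-envelope TCO projection: illustrative figures only,
# not real vendor pricing or actual usage data.
cost_per_1k_tokens = 0.01        # assumed blended input/output rate (USD)
tokens_per_request = 2_000       # prompt plus completion
requests_per_user_per_day = 20
working_days_per_month = 22

def monthly_cost(users: int) -> float:
    tokens = (users * requests_per_user_per_day
              * working_days_per_month * tokens_per_request)
    return tokens / 1_000 * cost_per_1k_tokens

print(f"Pilot (50 users):        ${monthly_cost(50):,.0f}/month")      # ~$440
print(f"Production (10,000 users): ${monthly_cost(10_000):,.0f}/month")  # ~$88,000
```

Under these assumptions, a pilot that costs a few hundred dollars a month becomes a near-$90,000 monthly bill at enterprise scale – before adding infrastructure, integration and support costs.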

To avoid this outcome, GenAI FinOps practices should be adopted from day one. Educate all stakeholders – not just IT – on cost implications tied to deployment approaches, model selection and token usage, and avoid unnecessary model customisation, which can be expensive.

It is also important to apply prompt caching strategies to reduce redundant application programming interface (API) calls; use model routing to send queries to appropriately sized models; and monitor costs continuously with proper allocation and visibility tools.
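A minimal sketch of the first two controls might look like the following, assuming a hash-keyed cache and a naive length-based router. The model names and the call_model() client are hypothetical stand-ins for a real LLM API.

```python
# Sketch of two cost controls (illustrative only): a hash-keyed prompt
# cache that avoids redundant API calls, and a simple router that sends
# short queries to a cheaper model. call_model() is a placeholder for
# a real LLM API client; the model names are hypothetical.
import hashlib

_cache: dict[str, str] = {}

def call_model(model: str, prompt: str) -> str:
    """Placeholder for a real LLM API call."""
    return f"[{model}] response to: {prompt[:40]}"

def route(prompt: str) -> str:
    # Naive length heuristic; real routers use classifiers or scoring.
    return "small-cheap-model" if len(prompt) < 200 else "large-capable-model"

def cached_completion(prompt: str) -> str:
    key = hashlib.sha256(prompt.encode()).hexdigest()
    if key not in _cache:  # only pay for the API call on a cache miss
        _cache[key] = call_model(route(prompt), prompt)
    return _cache[key]
```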

Responsible AI as an afterthought

Neglecting responsible AI exposes organisations to regulatory violations, reputational damage, user harm and project shutdowns – risks that resonate with the C-suite and the board.

GenAI perpetuates existing AI risks while introducing new ones like deepfakes and hallucinations. Without robust controls around safety, privacy, accountability and fairness, these risks multiply quickly.

Responsible AI must be central from day one. This means focusing on safety by preventing harmful outputs and ensuring model reliability; privacy by protecting sensitive information; accountability by establishing clear governance and ownership; and fairness by avoiding bias and ensuring equitable outcomes for all stakeholders.

Equally important is implementing critical tooling, such as model input validation and filtering; output monitoring and observability systems; compliance tracking and audit trails; and security controls for data and model access. Defining where GenAI shouldn't be used is also an important consideration to protect against predictable disasters.
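As a sketch of what the first two controls can look like in code, the snippet below validates inputs against a blocked-pattern policy and writes a structured audit record for each output. The patterns, log fields and logger name are hypothetical; production systems would use trained classifiers and policy engines rather than regular expressions.

```python
# Minimal guardrail sketch: validate inputs before they reach the model
# and log outputs for audit. Patterns and log fields are hypothetical.
import json
import logging
import re
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("genai.audit")  # hypothetical logger name

BLOCKED_PATTERNS = [re.compile(p, re.I) for p in (r"\bssn\b", r"credit card")]

def validate_input(prompt: str) -> None:
    """Reject prompts that match a blocked pattern before any model call."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(prompt):
            raise ValueError(f"Input rejected by policy: {pattern.pattern}")

def record_output(user: str, prompt: str, response: str) -> None:
    """Write a structured audit record for compliance tracking."""
    audit_log.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "prompt_chars": len(prompt),
        "response_chars": len(response),
    }))
```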

Poor change management

Without change management, even technically excellent GenAI tools see minimal adoption. Usage drops over time. Employees feel threatened rather than empowered. The organisation captures a fraction of potential value, while technical teams wonder why their capable solution sits unused.

Change management must be treated as a first-class requirement, not an afterthought. Leaders need to build empathy maps that reveal how GenAI impacts roles throughout their organisation, so they can focus on amplifying human capabilities instead of threatening job security.

To make it easier for employees to adopt GenAI, build it into existing workflows where possible rather than requiring new tools and processes. Involve employees in the pilot to ensure the user experience works for them, and make changes based on their feedback. This increases the chances they will actually use the technology over the long term.

Arun Chandrasekaran is a distinguished vice-president analyst at Gartner, specialising in AI.
