Companies Struggle to Scale Agentic AI as Data Gaps and Governance Hurdles Mount
Spending on agentic AI is accelerating at a breakneck pace. McKinsey predicts the market, worth roughly $5‑7 billion this year, could swell to more than $199 billion by 2034. The surge reflects a shift from generative AI assistants that merely suggest actions to autonomous agents that plan, interpret and execute tasks across enterprise systems.
Despite the hype, many firms are hitting a wall. Gartner estimates that over 40% of agentic AI initiatives will be scrapped by the end of 2027. A separate study by Qlik finds that while 97% of organizations have earmarked funds for the technology, only 18% have moved beyond pilot phases to full deployment. The gap between ambition and reality is widening.
Data Foundations Hold Back Deployment
One recurring theme is data immaturity. Agentic systems rely on a consistent, trustworthy view of information, yet many companies still wrestle with fragmented databases, duplicated records and murky ownership. In such environments, even the most sophisticated models generate outputs that teams cannot rely on.
Unstructured content compounds the problem. Internal emails, knowledge‑base articles and legacy documents often contain valuable context, but they lack clear provenance. When an AI agent draws on such sources, verifying the timeliness or accuracy of the data becomes a near‑impossible task, eroding confidence in automated decisions.
As agents begin to interact directly with operational workflows—triggering supply‑chain adjustments or initiating financial alerts—the margin for error shrinks dramatically. A misstep that a human could review before execution now translates into a potentially costly automated action.
Governance and Interoperability Challenges
Beyond data, accountability looms large. Companies must answer basic questions: Who owns the data feeding the agent? Who approves the actions it takes? When should a human intervene? Clear lines of responsibility are essential not only for trust but also for compliance, especially when AI‑driven decisions affect revenue, regulatory reporting or risk management.
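The governance questions above translate directly into system design. As a rough illustration, consider a minimal human-in-the-loop approval gate, where actions above a risk threshold are routed to a person before execution. This is a conceptual sketch, not any vendor's framework; the class names, the `risk_score` field, and the 0.5 threshold are all assumptions for illustration.

```python
from dataclasses import dataclass, field
from typing import Callable

# Illustrative sketch only: names, fields, and thresholds are assumptions,
# not drawn from any specific agent framework or vendor product.

@dataclass
class AgentAction:
    description: str     # what the agent wants to do
    risk_score: float    # 0.0 (benign) to 1.0 (high impact)
    owner: str           # team accountable for the underlying data

@dataclass
class ApprovalGate:
    """Routes high-risk agent actions to a human before execution."""
    risk_threshold: float = 0.5
    audit_log: list = field(default_factory=list)

    def execute(self, action: AgentAction,
                approve: Callable[[AgentAction], bool],
                run: Callable[[AgentAction], str]) -> str:
        if action.risk_score >= self.risk_threshold:
            if not approve(action):   # this is where a human intervenes
                self.audit_log.append((action.description, "rejected"))
                return "blocked"
        result = run(action)
        self.audit_log.append((action.description, "executed"))
        return result

# Usage: a high-impact financial action needs sign-off; here the
# reviewer declines, so nothing runs and the decision is logged.
gate = ApprovalGate(risk_threshold=0.5)
action = AgentAction("initiate financial alert", risk_score=0.8, owner="finance")
outcome = gate.execute(action, approve=lambda a: False, run=lambda a: "done")
# outcome == "blocked"; the rejection is recorded in gate.audit_log
```

The audit log is the point: it gives compliance teams a record of who owned the data, what the agent attempted, and whether a human intervened.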
Regulatory frameworks are beginning to shape the conversation. The European Union’s AI Act, for example, sets expectations around transparency, accountability and risk mitigation early in the development cycle. While some view such rules as a brake on innovation, many executives see them as a roadmap for responsible AI deployment.
Another hurdle is the proliferation of disparate AI assistants across organizations. Different teams often adopt varied tools—analytics platforms, internal bots, external services—creating a fragmented ecosystem. For agents to be effective, they need secure, standardized ways to access trusted data and interact with other systems.
Emerging standards such as the Model Context Protocol (MCP) aim to bridge that gap. By exposing data and analytics through consistent interfaces, MCP enables multiple AI tools to share information while preserving access controls and governance safeguards. Companies that adopt such protocols early can avoid costly custom integrations later.
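The idea behind such protocols can be sketched in a few lines: a single gateway registers data sources behind stable identifiers and enforces access controls, so every AI tool reads through the same interface instead of its own custom integration. To be clear, this is a conceptual sketch in the spirit of MCP, not the actual Model Context Protocol specification; the `ContextServer` and `Resource` names, the role-based check, and the example URI are all illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Callable

# Conceptual sketch of a standardized data-access layer; the names and
# access model here are assumptions, not the real MCP wire format.

@dataclass
class Resource:
    uri: str                      # stable identifier for a data source
    fetch: Callable[[], str]      # how to read it
    allowed_roles: frozenset      # governance: who may read it

class ContextServer:
    """One gateway many AI tools share instead of custom integrations."""
    def __init__(self):
        self._resources: dict[str, Resource] = {}

    def register(self, resource: Resource) -> None:
        self._resources[resource.uri] = resource

    def read(self, uri: str, role: str) -> str:
        resource = self._resources[uri]
        if role not in resource.allowed_roles:
            raise PermissionError(f"{role} may not read {uri}")
        return resource.fetch()

# Usage: an approved agent reads a trusted source through the gateway;
# an unapproved one is refused by the same access check.
server = ContextServer()
server.register(Resource(
    uri="warehouse://sales/q3",
    fetch=lambda: "Q3 revenue: 4.2M",
    allowed_roles=frozenset({"analytics-agent"}),
))
print(server.read("warehouse://sales/q3", role="analytics-agent"))  # Q3 revenue: 4.2M
```

The design choice worth noting is that access control lives in the gateway, not in each tool, which is what lets governance safeguards survive as the number of agents grows.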
Industry leaders agree that success hinges on preparing the underlying infrastructure before scaling beyond pilots. Strengthening data quality, establishing clear governance and embracing interoperability standards are the first steps toward realizing the transformative potential of agentic AI.
Until those foundations are in place, the promise of autonomous, business‑wide AI remains more aspiration than reality.