
Boards Will Demand ROI on AI in 2026 But Teams Aren’t Ready 

GUEST OPINION: 2025 was the year every board demanded an AI strategy. In 2026, they will demand results. After more than a year of pilots and experiments, most businesses have observed marginal gains at best. In fact, MIT reported that 95 per cent of generative AI pilots are failing. So, what’s the solution?

As AI matures, boards will expect experimentation to turn into ROI, but many teams don’t have the infrastructure or processes in place to make this a reality. Here are four key ways teams can overcome common challenges and start realising ROI on their AI investments in 2026.

Building a foundation for success

Many organisations are starting to realise that the real limitation on AI adoption is the complexity of fragmented legacy systems. Poor integration between old systems tends to lead to data silos, outdated information and missing context. For AI projects, where data is the lifeblood, poor data leads to poor outcomes. As such, businesses are realising that AI maturity is directly tied to integration maturity.

Without consistent access to clean, connected and up-to-date data, AI is fundamentally constrained. Organisations that move to address the underlying fragmentation of their technology environments can establish a more robust foundation to support their expansion of AI features.

Measures include investing in integration, improving data quality, simplifying application portfolios and aligning systems around shared workflows. Leaders should not view this as backend maintenance, but as a key enabler of AI readiness, as without a solid foundation, businesses risk inaccurate AI outputs and potential damage to reputation.

Trust will drive progress

Hallucinations, inconsistent outputs and unclear reasoning have harmed AI confidence for many teams, especially when AI is applied to anything beyond low-stakes tasks. When people are sceptical of AI's ability to produce reliable outputs, they are less likely to use it, and uptake stalls. However, when teams trust that AI behaves consistently and safely, they use it more often and more confidently. This results in more innovation, productivity and competitiveness.

To build this trust, organisations must invest in transparent governance, clear decision boundaries, risk controls and auditability. Equally important is building a culture where teams understand how AI reaches its conclusions, rather than treating it as a black box. When employees can trace decisions, challenge assumptions and see where human judgment still plays a role, confidence grows.

Trust is not created through better models alone, but through predictable behaviour, strong guardrails and clear expectations for when AI should act and when it should signal for human intervention.
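One common way to implement that "signal for human intervention" boundary is a simple confidence threshold around AI-proposed actions: above the threshold the action proceeds automatically, below it the action is routed to a human reviewer. The sketch below is illustrative only; the threshold value, function name and routing labels are assumptions, not any vendor's API.

```python
# Illustrative guardrail: an AI-proposed action runs automatically only
# when model confidence clears a governance-set threshold; anything less
# is escalated to a human reviewer with a recorded reason (auditability).

CONFIDENCE_THRESHOLD = 0.85  # assumed policy value, set by governance

def decide(action, confidence):
    """Route an AI-proposed action based on model confidence."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return {"route": "auto", "action": action}
    return {
        "route": "human_review",
        "action": action,
        "reason": f"confidence {confidence:.2f} below threshold",
    }

print(decide("approve_refund", 0.92))
print(decide("approve_refund", 0.60))
```

The point of the pattern is predictability: the boundary between autonomous action and human judgment is explicit, versioned and logged, rather than implicit in the model itself.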


Data sovereignty and internal governance are crucial

Australia’s new National AI Plan sits alongside more prescriptive frameworks for the public sector, including the AI Plan for the Australian Public Service, which outlines requirements around transparency, sovereignty, data handling, auditability and responsible use. The introduction of these frameworks signals that there will only be stronger governance expectations to come.

For private-sector businesses, this means customers and stakeholders will also expect them to be able to demonstrate where their data is stored, how it is protected, how AI decisions are audited and how risks are mitigated. Sectors such as financial services and healthcare already operate under stringent compliance rules, but as AI adoption accelerates, these expectations are likely to extend across more industries.

Rather than waiting for regulation to catch up, leading organisations will treat data governance as an enabler of AI readiness and efficiencies. Ensuring sovereignty, auditability and transparent data practices not only reduces risk but also prepares companies to meet rising expectations from boards, customers and regulators. As the government sets the scene, private companies need to ensure they have the technology and practices in place to follow suit.

Realising ROI quickly

Many businesses have experimented with broad conversational AI tools in 2025, with mixed results. However, real gains – the ones that boards will demand – will come from narrow, well-defined applications called "specialised agents".

These agents focus on specific tasks with clearly defined boundaries and, therefore, predictable behaviour that aligns with a single business outcome. Subscription agents are one example: some platforms have built agents for the narrow task of assessing corporate subscriptions, comparing the subscriptions purchased against those actually used to identify opportunities for cost savings.
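The core logic of a subscription agent like this is deliberately simple, which is what makes its behaviour predictable. The sketch below shows the idea; the data structure and field names are hypothetical, not taken from any particular platform.

```python
# Hypothetical core check of a subscription agent: compare seats
# purchased against seats actively used and flag cost-saving candidates.

def find_savings(subscriptions):
    """Return subscriptions where purchased seats exceed active users,
    largest estimated annual saving first."""
    opportunities = []
    for sub in subscriptions:
        unused = sub["seats_purchased"] - sub["seats_active"]
        if unused > 0:
            opportunities.append({
                "name": sub["name"],
                "unused_seats": unused,
                "annual_saving": unused * sub["cost_per_seat_per_year"],
            })
    return sorted(opportunities, key=lambda o: o["annual_saving"], reverse=True)

subs = [
    {"name": "CRM", "seats_purchased": 120, "seats_active": 85,
     "cost_per_seat_per_year": 600},
    {"name": "Chat", "seats_purchased": 200, "seats_active": 198,
     "cost_per_seat_per_year": 96},
]
print(find_savings(subs))
```

Because the task boundary is this narrow, the agent's output is easy to verify against source data – the property that makes specialised agents trustworthy enough to act on.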

Likewise, a Configure, Price, Quote (CPQ) AI Agent can automatically extract intelligence from previous client correspondence to propose quotes and upgrades tailored to the customer’s needs. Discounts can then be suggested based on historical conversion-rate data. While this would previously have taken hours to days of manual work, specialised AI Agents can achieve it within minutes.

Specialised agents are powerful because they overcome the challenge of ambiguity that often limits the effectiveness of large general-purpose models. Teams can start by identifying targeted workflows with clear rules and repeatable steps. Starting small with expert agents creates a practical path for scaling into more complex, multi-agent orchestration over time.

There will be no slowing of AI uptake in 2026. Both boards and consumers will demand more refined implementations of AI, moving beyond low-impact tasks to embed the technology into the core of the business.

Those business leaders who build resilient infrastructure, enforce governance rigour and scale specialised agents thoughtfully will not only meet rising board expectations, but also create value that compounds over time. The message is clear: 2026 won’t reward experimentation, but execution.

John Deeb is the GM ANZ and Field CTO APAC at Workato.
