AI is reshaping cyber risk, but are security leaders ready?
GUEST OPINION: It’s not news that AI is now deeply embedded in enterprise operations, from customer engagement to core business processes. Yet adoption is moving faster than the safeguards to manage it, creating a widening gap between innovation and security.

The 2025 ISACA AI Pulse Poll captures the extent of the gap. A staggering 81% of digital trust professionals say AI is used within their organisation, whether approved or not, yet only 28% report having a formal AI policy in place. Even more concerning, just 22% say their organisation provides AI training to all staff. The gulf between adoption and oversight is stark.

At the same time, the risks are mounting, including deep-fake attacks, AI-powered social engineering and automated misinformation. In fact, ISACA’s data shows that 66% of professionals expect deep-fake attacks to become more sophisticated in the next 12 months, but only one in five organisations is actively investing in detection tools.

Equipping Leaders to Govern AI Responsibly: Introducing AAISM

In response to these growing threats, ISACA has launched the Advanced in AI Security Management (AAISM) credential, the first and only certification focused entirely on AI-centric security management. Designed for professionals who hold a CISM or CISSP, it aims to close the gap between rapid AI adoption and the urgent need for governance, controls and ethical implementation.

The credential’s learning path spans AI governance, risk management and technical controls, reflecting that securing AI is about frameworks, accountability and leadership as much as algorithms. Importantly, it recognises that AI is not a bolt-on to traditional cybersecurity but a transformation of how organisations operate, decide and interact with data.

The AAISM is not just another certification. It marks a shift in what security leadership now requires. It compels leaders, even at the height of their careers, to think differently about how AI transforms risk, governance and trust. For me, earning it reinforced a truth I see everywhere: the leaders who will matter most in the AI era are those willing to grow when it is hardest, and to adapt when standing still feels safest.

As Jamie Norton, Vice Chair of ISACA, noted, “AAISM gives local security leaders a practical way to prove they can govern AI, reduce risk and enable innovation within our regulatory settings.” In Australia and New Zealand, where AI adoption is surging across sectors, that governance imperative is only growing.

Skills That Can’t Wait

The credential lands at a time when security leaders are feeling the pressure. ISACA’s poll found that 89% of professionals believe they will need AI training within the next two years just to stay relevant. For nearly half, that timeline shrinks to six months.

What’s more, almost a third of organisations plan to expand hiring in AI-related roles over the next 12 months. That’s good news, but also a challenge if the talent pool isn’t ready.

The AAISM certification is designed to create trusted AI governance at scale by equipping experienced security professionals to confidently assess, implement and oversee AI systems. It complements ISACA’s growing suite of AI-specific learning tools and credentials, like the Advanced in AI Audit (AAIA), and builds on existing security best practices with an AI lens.

In my advisory work, I see the same imperative play out daily. Organisations need defensible, business-aligned strategies on one hand, and AI governance frameworks aligned to emerging global standards on the other. Earning AAISM reinforced that these are not separate threads but parts of a single challenge: governing AI with confidence while enabling responsible innovation.

From Risk to Readiness

Despite widespread concern, most organisations aren’t yet treating AI risks as an immediate priority. Only 42% of ISACA survey respondents say their organisation has elevated AI risk to the top of the agenda. Yet nearly three in four professionals say AI skills are “very or extremely important” in their field right now.

The disconnect highlights the need for leadership, not just from CISOs and CIOs, but from boards and executives who must recognise that AI governance is not optional. It’s now a core part of building digital trust.

Ultimately, AAISM is about maturing how enterprises think about AI risk and equipping professionals to manage the transformation responsibly. Because while threat actors are already exploiting AI’s capabilities, many defenders are still catching up.

About the Author:

Chirag Joshi is a multi-award-winning CISO, author and global advisor recognised for shaping how organisations govern and secure technology in an era defined by AI. He is the Founder of 7 Rules Cyber, where he advises boards and executives on defensible cyber strategies, and the Co-Founder of Critical Front, a platform pioneering AI governance frameworks aligned with ISO 42001 and Australia’s AI Safety Standard.

Chirag has led cyber security and risk programs across government, financial services, critical infrastructure and technology sectors, and is frequently engaged by boards, regulators and policy makers on questions of resilience, governance and digital trust. He serves as President of ISACA Sydney Chapter and is a three-time CSO30 awardee, recognised among Australia’s top cyber leaders.

Source: http://itwire.com/guest-articles/guest-opinion/ai-is-reshaping-cyber-risk,-but-are-security-leaders-ready.html