
Fortinet’s Australia CISO, Cornelius Mare, says the biggest challenge with AI is not the technology itself, but that productivity gains may come with unintended workforce consequences.
Fortinet is a leading cybersecurity vendor that delivers a unified platform combining network security, threat intelligence and AI-driven protection to help organisations defend against evolving cyber threats across on-premises and cloud infrastructure.
Speaking to ITWire, Mare said many top performers describe the effect as "brain fry". "They are saying that their brain feels cloudy," he said, describing how constant reliance on AI can erode the deep thinking that underpins innovation.
Mare warned that over-reliance on AI risks diminishing not only skills, but also motivation. “What does winning look like in the business where you’ve solved real problems?” he said. “Removing the effort from problem-solving could reduce job satisfaction and long-term engagement.”
This concern extends to cognitive dependency. “If we keep on outsourcing everything to AI, we’re actually outsourcing our cognitive ability to think for ourselves.”
He also spoke about the fundamental shift in workforce dynamics, highlighting how AI is compressing performance gaps between employees. He cited Boston University research that showed a 22% gap between top and bottom performers shrinking to just 4% with AI assistance. “Now everybody can be good,” he said.
This levelling effect raises complex questions for organisations, Mare noted, as traditional approaches to managing underperformance may no longer apply in an AI-driven environment.
For Mare, the solution lies in balance by embedding AI into workflows, but with clear boundaries. “We want to bring in AI, but we also want to bring in guardrails,” he said.
That principle is particularly important in software development. While AI can accelerate coding, Mare stressed that foundational skills remain critical. “You still need hard coders that understand how to code.”
He said he knew of organisations that restrict employees' use of AI so staff can still troubleshoot and fix problems when systems fail.
“If you need to solve a problem, you need to know how code works. I’ll give you an example. I had a problem with an API and was trying to connect to a specific device to do a code change. I couldn’t get it working. It took me two or three days, and I eventually realised that one of the code changes from one version to the next was the issue.”
Mare said this also has a flow-on effect. "Once I solved the issue, you get that winning feeling. What is going to happen in business if we don't have that anymore?"
Beyond internal operations, Mare said AI is also reshaping the external threat landscape. Cyber attackers are increasingly leveraging AI to improve the sophistication of their tactics, particularly in phishing and social engineering. “It’s no longer just, here is one email, click, off you go,” he said.
Asked whether the good guys were winning or the bad guys were getting on top, Mare said: "It is always cat and mouse. We don't know what the bad guys are doing, but I think at the moment, if I look at the stats, I do believe that we have a winning streak."
He cited examples of AI uncovering hundreds of vulnerabilities in codebases, far beyond what traditional methods might detect. However, he emphasised that fundamentals still matter. “Cybersecurity hygiene, best practices, those things still make a big difference,” he said.
At a governance level, Mare acknowledged that many boards are still coming to terms with AI. “There is definitely that change to say let’s get experts in that know what they are doing.”
Mare said there were still challenges, particularly when it comes to investment decisions.
“The problem is AI doesn’t give you results right now,” he said. “It’s sometimes a two-year project before I can see a change in the business if I look at it from a profit and loss impact.” This lag between investment and return can make it difficult for boards to commit funding, especially in uncertain economic conditions.
Even so, Mare believes awareness is improving, with more organisations recognising the need to integrate technical expertise into board-level discussions. “We are seeing more security experts that actually now have a seat on the table,” he said.
Ultimately, Mare argues that AI should be seen as an enabler of human capability, not a replacement for it. “We still need people,” he said. “How can we invoke more creativity within the team to solve difficult problems?”
For Australian organisations, the path forward lies in moving beyond experimentation and focusing on disciplined implementation. That means embedding AI into business processes, establishing governance frameworks and ensuring employees are equipped to use the technology effectively.
“The technology works,” Mare said. “But how do we bring that back into the culture for the business?”
