
ZDNET's key takeaways
- The scale of the cyber threat continues to rise in an age of AI.
- Professionals must develop tactics to embrace AI while minimizing risk.
- Share your knowledge, work with partners, and use automation.
The same capabilities that make AI useful also make it exploitable. In fact, the rate at which emerging technologies are advancing intensifies that uncomfortable reality by the minute.
While professionals might not want to expose their organizations to new threats, they also recognize the risk of falling behind as other businesses seek to gain a competitive edge by implementing AI.
Also: AI agents are fast, loose and out of control, MIT study finds
So, what should you do about this conundrum? Five business leaders share five ways professionals can maintain strong security in an age of AI.
1. Share your knowledge
Barry Panayi, group chief data officer at Howden, an insurance intermediary group, said one of the big benefits of working for his organization is that many staff members know the cyber risks associated with AI.
“Because we provide cyber insurance as a business, we have people who understand this area,” he said. “So, therefore, it's not just a tech person who understands security, and it's not just a data or an AI specialist.”
As an executive charged with ensuring AI is implemented safely and securely, Panayi encouraged professionals across all organizations to boost their cyber credentials: “I think people have to know more about security in their roles.”
Also: Will AI make cybersecurity obsolete or is Silicon Valley confabulating again?
Panayi said the multifaceted nature of AI cybersecurity means professionals should expect new roles and responsibilities to emerge, with people sharing knowledge and swapping between teams to create a more powerful approach.
“I know the best security specialists are the ones talking to my AI teams and asking them, ‘How would this work, and how would that work?'” he said.
“And the AI teams, conversely, speak to information security experts and ensure their processes are not a blocker as we look to make systems more secure.”
2. Go back to basics
Nick Pearson, CIO at technology specialist Ricoh Europe, said that managing cybersecurity in an age of AI requires a multidimensional approach — and he finds new dimensions almost every day.
Pearson told ZDNET that professionals could feel overwhelmed by the breadth of threats associated with emerging technology.
Yet his conversations with other experts, including Ricoh Europe's CISO, suggest that it's important to place AI cyber threats in context.
“Great security still goes back to the basics of good practices,” he said. “So, we secure by design, we've got standards, we've got capabilities, and we've got teams that analyze, check, and balance.”
Also: Why enterprise AI agents could become the ultimate insider threat
Pearson said professionals should ensure that data is managed and governed effectively. Rather than reinventing the wheel, find a way to absorb AI into your existing frameworks.
“Otherwise, you can end up with something separate from what is good practice on data leakage, for example, which, in our case, has been there for 15 years,” he said.
3. Recognize the power of assistance
Martin Hardy, cyber portfolio and architecture director at Royal Mail, said one crucial component for his firm's cyber approach is an internal AI governance forum.
“We don't stop people using AI, but where we're building AI into applications, we're making sure it's got some level of governance around it,” he said.
“Understanding where our data is and what data is going into those AI solutions is the key to success, as is what we're then asking those solutions to do.”
Also: AI threats will get worse: 6 ways to match the tenacity of your digital adversaries
While not wanting to underestimate the potential power of emerging technology, Hardy told ZDNET that it's crucial professionals view AI as a tool rather than an end in itself.
Exploiting AI effectively and securely is about managing data and deciphering potential use cases.
“There are going to be instances where people use AI and get it wrong,” he said. “Success is about changing the mentality to one that suggests, ‘This is an aid, not the answer.'”
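Hardy's point about knowing what data flows into AI solutions can be made concrete with a simple pre-flight check before text leaves the organization. The sketch below is illustrative only, not Royal Mail's actual tooling; the patterns and the placeholder policy are assumptions.

```python
import re

# Patterns for data that should not reach an external AI service.
# These two are examples; a real policy would cover far more.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "uk_postcode": re.compile(r"\b[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a known sensitive pattern
    before the text is sent to an AI solution."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} removed]", text)
    return text
```

A check like this sits naturally inside the kind of governance forum Hardy describes: the forum decides the patterns and the policy, and the code enforces them at the point where data enters the AI pipeline.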
4. Build awareness of jaywalking
John-David Lovelock, chief forecaster and distinguished VP analyst at Gartner, said digital leaders and business professionals must consider cyber threats as they invest in AI through 2026.
Lovelock told ZDNET that one key issue is that organizations can't yet benefit from access to measurable, definable, and certifiable AI safety, meaning end-user security requirements are unlikely to be met by many of their providers.
“We're not at the point with AI that we can say, ‘Does it have a seatbelt? Will it survive a crash at 25 miles an hour?'” he said.
Also: 10 ways AI can inflict unprecedented damage in 2026
Lovelock likened the current state of AI safety to the rise of jaywalking in the 1920s, when the nascent auto industry lobbied government agencies to pass new laws.
Also: Why encrypted backups may fail in an AI-driven ransomware era
“We changed the responsibility from someone who was expressing their right of way and was a victim of the accident to somebody who ought to have known better and actually caused the accident,” he said.
“AI jaywalking is the attempt to do the same thing — it's an attempt to ensure that the jay is responsible for anything that goes right or wrong with their use of AI.”
In short, current vendor agreements will likely make end users, not the technology providers, responsible for AI safety, and professionals must be aware of that position.
“Acceptance of this situation is crucial,” he said. “We've seen this trend with other technologies. It's not new, in a sense, but it is a reality with AI, so at least be aware.”
5. Make AI part of your process
Jeff Love, CTO at the Professional Rodeo Cowboys Association (PRCA), recently explained to ZDNET how his organization, which has close to 100 years of history, used AI to overcome its intractable legacy IT challenge.
When gen AI models failed to make sense of older code, Love turned to Zencoder, an agentic platform that analyzes business logic and translates it into plain-English explanations.
After embracing emerging technology, Love told ZDNET that his team can now use AI as part of its processes to root out potential security issues — and he encouraged other professionals to look for similar opportunities.
“When we have issues come up, or even as we're putting out new code, we can say, ‘You know what? Check this for security issues. Check this for bad logic,'” he said.
“The AI is better at doing that work than a human is because it considers the complete overview. We're just so honed into specific areas we can't see the big picture all the time.”
Love said AI can also help his team to consider issues they might otherwise have neglected.
“It's always checking to see if there are security risks. And there are times that I've put out some code, and it says, ‘You know what, this could be a little bit better,'” he said. “In today's world, you've got to be concerned about the security risks.”
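A workflow like the one Love describes can be approximated with a small wrapper that frames code changes as a security-review request to a model. This is an illustrative sketch, not Zencoder's actual API; `ask_model` is a placeholder for whichever model client your organization uses.

```python
def build_review_prompt(filename: str, code: str) -> str:
    """Frame the request the way Love describes: check for
    security issues and bad logic, not just style."""
    return (
        f"Review the following file ({filename}) for security risks "
        "and flawed logic. List each finding with a severity rating.\n\n"
        f"```\n{code}\n```"
    )

def review_code(filename: str, code: str, ask_model) -> str:
    """Send the code for review. `ask_model` is any callable
    that takes a prompt string and returns the model's reply."""
    prompt = build_review_prompt(filename, code)
    return ask_model(prompt)
```

The point is less the plumbing than the habit: every time new code goes out, the same question — "check this for security issues, check this for bad logic" — gets asked automatically rather than when someone remembers.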








