By Cory Musselman, SVP, Chief Information Security Officer at Kyndryl

Cybersecurity professionals can dream big about generative AI.

Wouldn’t it be something if AI technology could continuously analyze a companywide set of telemetry, terabytes of data at a time from the myriad devices, identities and applications in the environment, and flag when patterns drift beyond their normal statistical bounds?

Imagine how that would revolutionize cyber threat detection, protection and incident-response capabilities — supercharging security teams’ abilities to thwart bad actors and protect corporate and customer assets.

A version of the capability is already operational. Security analysts can use generative AI to help collect, sort and perform base analysis on incident data. It’s just not yet at that grand, automated scale.
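
As a deliberately tiny illustration of that pattern-deviation idea, the sketch below scores hourly telemetry counts with a simple z-score and flags hours that fall outside a 3-sigma band. The field names, sample data and threshold are assumptions for illustration only, not a description of any production detection pipeline or Kyndryl tooling.

```python
# Minimal sketch: flag telemetry counts that drift beyond a z-score threshold.
# Field names, sample data and the 3-sigma cutoff are illustrative assumptions.
from statistics import mean, stdev

def flag_anomalies(hourly_event_counts, threshold=3.0):
    """Return (hour_index, count, z_score) for hours that deviate strongly from the baseline."""
    mu = mean(hourly_event_counts)
    sigma = stdev(hourly_event_counts)
    if sigma == 0:
        return []  # perfectly flat baseline: nothing stands out
    anomalies = []
    for hour, count in enumerate(hourly_event_counts):
        z = (count - mu) / sigma
        if abs(z) > threshold:
            anomalies.append((hour, count, round(z, 2)))
    return anomalies

if __name__ == "__main__":
    # 24 hourly failed-login counts for one identity (fabricated sample data);
    # the spike at hour 12 sits far outside the baseline and gets flagged.
    counts = [12, 9, 11, 10, 13, 8, 10, 11, 12, 9, 10, 11,
              240, 10, 12, 9, 11, 10, 13, 8, 10, 11, 12, 9]
    print(flag_anomalies(counts))
```

The real challenge, of course, is doing this continuously across an entire environment rather than over one hand-built list of counts.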


Welcome to Wave 1: the initial phase of this groundbreaking technology. While teams across various business sectors envision transformative applications for generative AI, most companies are just beginning to implement and derive value from it. The Kyndryl Readiness Report found that concerns about security and data privacy are top barriers for organizations across all industries. Furthermore, only 29% of executives feel their AI systems are prepared to handle future risks and disruptive technologies.

Currently, most conversations about AI security focus on one of two themes: how an organization can defend against bad actors who use AI to launch attacks and exploit data, or how security teams can use generative AI to improve operational efficiency and efficacy.

As we navigate Wave 1, we must separate generative AI’s long-term potential from what we can responsibly do with it today. That means balancing the vision of what AI could achieve with practical, ethical implementation, and working every day to close the gap between the two.


Three focus areas to improve security for AI projects in 2025

 

1. Insist on AI governance

There’s no overstating the importance of formalizing an AI governance framework. It’s foundational work for any organization that intends to responsibly use generative AI.

Good governance not only helps ensure the right security, privacy and regulatory controls, but also helps cut through the noise of ideas to determine how the company can get immediate value from AI technology.

Kyndryl has established a cross-functional governance committee that evaluates and approves use cases, ensuring they align with regulatory requirements and the organization’s risk tolerance. The group includes representatives from the cybersecurity team, the CTO office, corporate strategy, legal and other functions.

Through the governance process, a set of low-risk use cases has been prioritized, evaluated and approved. If a team brings forward a proposed use that fits in one of those boxes, it’s a quick approval. You know it’s non-sensitive data. It’s not regulated. It doesn’t create unwanted exposure. At the same time, if a proposed use doesn’t fit within the scope of what’s already been approved, it has to go through the full evaluation process.

So many organizations are trying to figure out how to use generative AI. Business teams feel pressure to adopt it, while security teams must ensure it’s done safely and protects privacy. The formality of good governance isn’t to quash innovation. It’s to help balance speed to market and return on investment with ensuring it’s done responsibly.

2. Reinvigorate cybersecurity education programs

It used to be easier to spot a phishing attack. Spelling mistakes were more common, and the grammar just wasn’t quite right. Unfortunately, generative AI can now be used to polish these deceptive tactics.

Using generative AI, lower-level bad actors can now orchestrate more sophisticated phishing schemes and create more convincing deepfakes and other social engineering attacks. According to Gartner, by 2027, 17% of total cyberattacks/data leaks will involve generative AI.

To combat these kinds of cyberattacks, your company’s first line of defense has to be cybersecurity training for employees.

Securing data and ensuring privacy is not just a security team function. It’s every employee’s responsibility, particularly because human error is the No. 1 risk in protecting digital information. Herd immunity comes when each of us is cyber-educated, aware and responsible for doing what we can to secure, protect and deliver results.

The challenge with cyber education programs is that they often don’t meet companies’ specific needs. As threats evolve, so must cybersecurity education. At a minimum, cybersecurity training must become more customized, role-specific, engaging and timely so employees can relate the guidance to their daily activities.

3. Consider a zero trust architecture

Also known as “deny by default” or “never trust, always verify,” zero trust models treat all traffic as untrusted. That’s not a bad posture in an environment where deepfakes and other sophisticated threats are becoming more prevalent.

Part of the strength of a zero trust model is in the many layers of security it encompasses — from the identity of the user and the device they use to the network, application and data they try to access.

This model employs a progressive validation process, so even if an initial defense like voice recognition is compromised, subsequent layers can still block malicious activity.
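
To make that layered, deny-by-default evaluation concrete, here is a minimal sketch. The layer names and request attributes are assumptions chosen for illustration; a real deployment enforces these checks through identity providers, endpoint management and network policy rather than a single function.

```python
# Minimal sketch of a deny-by-default, layered zero trust check.
# Layer names and request attributes are illustrative assumptions,
# not a reference implementation of any particular product or framework.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_verified: bool = False        # e.g., MFA or strong identity proofing passed
    device_compliant: bool = False     # e.g., managed, patched endpoint
    network_allowed: bool = False      # e.g., segment permitted to reach this app
    app_entitled: bool = False         # e.g., role grants access to the application
    data_sensitivity_ok: bool = False  # e.g., clearance matches the data label

# Each layer must independently say yes, mirroring the identity -> device ->
# network -> application -> data chain described above.
LAYERS = [
    ("identity", lambda r: r.user_verified),
    ("device", lambda r: r.device_compliant),
    ("network", lambda r: r.network_allowed),
    ("application", lambda r: r.app_entitled),
    ("data", lambda r: r.data_sensitivity_ok),
]

def evaluate(request):
    """Deny by default: the first failed layer stops the request."""
    for name, check in LAYERS:
        if not check(request):
            return False, f"denied at {name} layer"
    return True, "allowed"

if __name__ == "__main__":
    # A convincing deepfake might fool the identity check, but the request
    # still fails device posture and never reaches the data it targets.
    spoofed = AccessRequest(user_verified=True)
    print(evaluate(spoofed))  # -> (False, 'denied at device layer')
```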

U.S. government agencies are already mandated to adopt a zero trust architecture. McKinsey suggests that adoption rates for this security framework will continue to increase for companies of all sizes.

Zero trust is not a turnkey solution. It’s a process, an approach to fortify an organization’s cybersecurity posture. But like formalizing governance and customizing training, moving to a zero trust model can help move the needle on readiness for the next wave of generative AI.

Cory Musselman

Senior Vice President, Chief Information Security Officer