By Adeel Saeed
The surging use of generative AI has created a spy versus spy scenario for cybersecurity professionals.
Many of the companies I speak with are grappling with two sides of the same coin: How can we harness AI and generative AI tools in our defense? And how can we limit these same tools from exposing us to more risk—and being weaponized against us?
I offer five strategies to capitalize on these technologies for your blue team’s benefit—and to stay ahead of the would-be bad actors.
Shore up your defense
Kyndryl firmly endorses responsible AI, and effective management of the data on which these models are trained is inherent to any responsible AI practice.
At their core, generative AI tools function like creative search engines: they synthesize new content, ideas and insights by drawing on a model trained on a large corpus of existing material. That reliance on training data is what makes the tools themselves vulnerable to attack.
Therefore, robust data governance should be the first priority when deploying AI and generative AI in cybersecurity. Establishing stringent controls over data access and quality is critical to mitigating the risk of tampering or breach.
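One way to make "stringent controls over data access and quality" concrete is to gate every record before it reaches a training or fine-tuning pipeline. The sketch below is a minimal, hypothetical illustration in Python: the source allowlist, record structure and checksum scheme are all assumptions for the example, not a prescribed implementation.

```python
import hashlib
from dataclasses import dataclass

# Hypothetical allowlist of vetted data sources for model training.
APPROVED_SOURCES = {"internal-ticketing", "vetted-threat-feed"}

@dataclass
class TrainingRecord:
    source: str
    content: str
    checksum: str  # SHA-256 hex digest recorded when the data was first vetted

def is_admissible(record: TrainingRecord) -> bool:
    """Admit a record only if its source is approved and its content
    still matches the checksum taken at ingestion (i.e., untampered)."""
    if record.source not in APPROVED_SOURCES:
        return False
    return hashlib.sha256(record.content.encode()).hexdigest() == record.checksum

good = TrainingRecord(
    "vetted-threat-feed",
    "indicator: 203.0.113.7",
    hashlib.sha256(b"indicator: 203.0.113.7").hexdigest(),
)
tampered = TrainingRecord("vetted-threat-feed", "indicator: 203.0.113.7", "deadbeef")

print(is_admissible(good))      # True
print(is_admissible(tampered))  # False
```

The point is not the specific checks but the pattern: provenance and integrity are verified at the pipeline boundary, so poisoned or unauthorized data never reaches the model.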
Your organization will also need to develop specific AI guidelines tailored to your unique use cases, industry standards and regulatory demands.
Innovation can and will flourish within these guidelines if the tenets of responsible use—such as ethical standards, bias detection methods and privacy protocols—become second nature to your team, providing a lens through which they can evaluate current and future strategies.
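A privacy protocol like the one mentioned above can be as simple as redacting obvious personal data before text is used in prompts or training sets. The following is a minimal sketch only: the pattern set is hypothetical and deliberately small, and a production deployment would rely on a vetted PII-detection library rather than two regexes.

```python
import re

# Hypothetical redaction patterns for illustration; real PII detection
# needs broader coverage (names, phone numbers, addresses, etc.).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each detected PII span with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact jane.doe@example.com, SSN 123-45-6789."))
# Contact [EMAIL], SSN [SSN].
```

Running such a filter at the boundary between your data and the model turns the abstract guideline into an enforceable, auditable control.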