How do businesses improve security for AI projects in 2025?
Musselman: Most conversations about AI security focus on either how an organization can defend against bad actors who use AI to launch attacks and exploit data or how security teams can use generative AI to improve the efficacy and operational efficiency of what they do.
To improve security for AI projects in 2025, I suggest insisting on AI governance, reinvigorating cyber education programs and considering a zero trust architecture.
An AI governance framework is foundational for any organization that wants to use generative AI responsibly. Good governance not only helps ensure the right security, privacy and regulatory controls are in place, but it also helps cut through the noise of competing ideas to determine how a company can get value from AI in the near term.
Next, as threats evolve, so must cybersecurity education. At a minimum, this training must become more customized, role-specific, engaging and timely so employees can relate the guidance to their daily work. After all, well-trained employees are a company's first line of defense against cyberattacks.
Finally, implementing a zero trust architecture (ZTA) isn't a turnkey solution, but because zero-trust models treat all traffic as untrusted by default, a ZTA can help move the needle on readiness for the next wave of generative AI.
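For readers who want to see what "all traffic is untrusted" means in practice, the sketch below is a minimal, hypothetical illustration, not Musselman's or any vendor's implementation: every request to an internal generative AI service must pass identity, device and policy checks on each call, with no implicit trust from network location. The names (`Request`, `is_authorized`, the policy table) are invented for the example.

```python
# Illustrative zero-trust-style gate in front of an internal AI service (hypothetical names).
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    token_valid: bool       # identity verified for this specific request
    device_compliant: bool  # device posture checked per request
    resource: str           # which AI service is being called

# Explicit allow-list: which users may reach which AI resources; anything absent is denied.
POLICY = {
    ("analyst", "genai-summarizer"): True,
}

def is_authorized(req: Request) -> bool:
    """Treat every request as untrusted, regardless of where it originates."""
    if not req.token_valid:        # no implicit trust for being "inside" the network
        return False
    if not req.device_compliant:   # posture re-checked on every call
        return False
    return POLICY.get((req.user, req.resource), False)  # default deny

# Example: only the explicitly allowed user/resource pair gets through.
print(is_authorized(Request("analyst", True, True, "genai-summarizer")))  # True
print(is_authorized(Request("intern", True, True, "genai-summarizer")))   # False
```

The point of the sketch is the default-deny posture: access is granted only when identity, device state and policy all check out for that specific request.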