Businesses are investing heavily in artificial intelligence (AI), but only 42% are seeing a positive return on investment. The newly released Kyndryl AI Readiness Report — a supplement to the Kyndryl Readiness Report — explains why.

Here, Kyndryl experts Michael Bradshaw, Victoria Pelletier, Kim Basile and Cory Musselman detail how senior leaders must unite in service to a shared strategy for their companies to benefit the most from their AI investments.

How can businesses close the readiness gap and implement AI successfully?

Bradshaw: Closing the readiness gap starts with identifying use cases where AI can help organizations achieve their business goals. As enterprises evaluate their AI options, they must build a foundation of strong operational security, place people at the center of AI design to unlock new value and establish enterprise-wide trust in AI generally. At each step, business and technology leaders must play critical roles in driving the change that will help AI deliver on its transformative potential. They also should work with trusted partners who can help turn technology advances into competitive advantages.


What role does company culture play in the successful implementation of AI?

Pelletier: Successful implementation and adoption of any new technology depends less on the technology itself than on how deeply the implementation considers and integrates with human systems. At its core, AI readiness is a behavioral transformation, and navigating that shift requires enterprises to embrace human-centered design principles. That means understanding the people who interact with the system and building solutions that address their needs holistically while accounting for the inevitable exceptions to process and workflow standards.

By embedding human-centered design into every stage of AI implementation, making the work both operationally effective and emotionally resonant, enterprises can help ensure their AI initiatives deliver meaningful long-term outcomes.


How can organizations address the common barriers to AI adoption?

Bradshaw: Common barriers to AI adoption include concerns over data privacy, data security and emerging regulatory requirements throughout the global digital economy. There’s also uncertainty about value as leaders define short- and long-term success. Technical debt, insufficient data foundations and acute talent shortages also limit enterprises’ ability to fully integrate AI across their operations.

Simply investing in AI doesn’t guarantee readiness to deploy it at scale, manage emerging risks or extract long-term value. As enterprises look to the future, closing the readiness gap will require leaders who champion change. The results will depend as much on people as on technology.


What is the role of trust in AI readiness?

Basile: Trust is essential to AI readiness. It's about transparency, communication and empowering people to lean into change rather than fear it. While words like "automation" and "efficiency" may elicit concerns about jobs, the reality is that AI's true potential isn't in eliminating roles but in elevating them.

That said, trust doesn't happen by accident. It's built through communication. And that means an organization must be open about the good, the bad and the ambiguous. It also means developing processes — like governance frameworks — that help ensure every AI initiative is thoughtful and responsible.


How do businesses improve security for AI projects in 2025?

Musselman: Most conversations about AI security focus on either how an organization can defend against bad actors who use AI to launch attacks and exploit data or how security teams can use generative AI to improve the efficacy and operational efficiency of what they do.

To improve security for AI projects in 2025, I suggest insisting on AI governance, reinvigorating cyber education programs and considering a zero trust architecture.

An AI governance framework is foundational for any organization that wants to use generative AI responsibly. Good governance not only helps ensure the right security, privacy and regulatory controls, but it also helps cut through the noise of ideas to determine how a company can get value from AI technology in the near term.

Next, as threats evolve, so must cybersecurity education. At a minimum, this training must become more customized, role-specific, engaging and timely so employees can relate the guidance to their daily activities. After all, employee training is a company’s first line of defense against cyberattacks.

Finally, while implementing a zero trust architecture (ZTA) isn't a turnkey solution — the model treats all traffic as untrusted, so adoption touches every system — ZTAs can help move the needle on readiness for the next wave of generative AI.


Michael Bradshaw

Global Applications, Data and AI Practice Leader

Kim Basile

Chief Information Officer

Victoria Pelletier

Vice President, Consult Partner

Cory Musselman

SVP, Chief Information Security Officer