
ROI on the prize: How to drive outcomes with generative AI

Article | 4/04/2025
By Aaron Wright

Many companies struggle with how to scope their generative AI approach. In a recent Kyndryl and Altman Solon study on generative AI in the telecommunications sector, roughly 80% of executives surveyed reported exceeding their budgets when scaling their generative AI projects, with a third overshooting by up to 50%.

My role is to help our customers develop strategies that align their generative AI investments with their broader business outcomes. A major part of this work involves identifying possible obstacles to ROI early in the process to ensure that teams are mapping out the true costs and timelines required for these projects.

Here are some of my key takeaways from the challenges we’ve helped teams work through to scale their generative AI approach for success (and within budget).

Assess goals and data foundations

Scoping your generative Al approach starts with an honest conversation around goals and expectations.

My team often starts with questions like: What do you find to be the most intriguing aspects of generative AI? Where do you think they might apply to your business? And what have you done so far to start working towards these use cases?

Even basic lines of inquiry like these can reveal the bottom-line business outcomes that should provide the backbone for your investment, as well as the more practical obstacles standing in your team’s way.

For example, a common challenge many companies encounter early on is gaps in their data and AI foundations.

While it’s usually easy to envision the end goals of your investment — such as chatbots that personalize customer interactions or streamlined warehouse operations — your team will almost inevitably need to do some less glamorous work on your existing data infrastructure first.

Our research with Altman Solon, for example, found that industry leaders who achieved greater success in scaling generative AI tended to focus on key areas such as:

  • Access to quality data assets
  • Scalable data infrastructure
  • Efficient data integration  

It is hard to overstate the value of this foundational legwork. Your data and data architecture are the driving forces behind your generative AI play and, therefore, must be capable of meeting its demands.

To borrow a metaphor from my colleagues: think of generative AI as a car, with your data architecture as the engine and your data as the fuel. Just as a car needs a reliable engine and the right fuel to perform efficiently and safely, generative AI needs a strong data architecture and clean, quality data to function successfully.

Target strategic wins

Scoping your generative AI approach is also about identifying optimal entry points. I’ve found that successful projects often start by targeting quick but strategic wins that will help lay the groundwork for larger, more ambitious use cases down the line.

So, what does this look like in practice?

When working with a leading logistics and transportation company, we identified that their export process was a major bottleneck. Specifically: the highly manual work of inventory and compliance checks required by many importing countries.

By integrating X-ray technology with computer vision and an anomaly detection algorithm, we devised a new procedure that could reduce the week-long export process to just a few hours. (Our goal is to get it down to two hours.) That’s some pretty sweet, low-hanging fruit, right?
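To make the anomaly detection step concrete, here is a minimal sketch of the general idea: once computer vision has reduced each X-ray scan to numeric features, statistically unusual scans are flagged for manual inspection instead of inspecting every container by hand. The feature name (`mean_density`), the z-score method, and the threshold are illustrative assumptions, not the actual pipeline described in the article.

```python
# Hypothetical sketch: flag anomalous container scans for manual review.
# Assumes computer vision has already reduced each X-ray scan to a
# numeric feature; "mean_density" and the 3-sigma rule are illustrative.
from statistics import mean, stdev

def flag_anomalies(scans, threshold=3.0):
    """Return indices of scans whose density reading deviates more than
    `threshold` standard deviations from the fleet-wide mean."""
    densities = [s["mean_density"] for s in scans]
    mu, sigma = mean(densities), stdev(densities)
    return [i for i, d in enumerate(densities)
            if sigma > 0 and abs(d - mu) / sigma > threshold]

# Normal scans cluster around 0.5; one scan sits far outside that band.
scans = [{"mean_density": 0.5 + 0.01 * i} for i in range(20)]
scans.append({"mean_density": 5.0})
print(flag_anomalies(scans))  # → [20], the outlier's index
```

In practice a production system would use a learned model rather than a fixed statistical rule, but the workflow — score every scan, surface only the outliers — is what turns a week of manual checks into hours.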

Early wins can also serve as building blocks for larger, more ambitious use cases down the line.

For a retailer’s AI pursuit, we found these early wins in the deployment of an integrated, multi-modal point-of-sale tool. The tool serves as a dynamic interface for users across their sales and marketing landscape. The retailer uses it to integrate data capture and document management into their daily operations. Their employees and stockists use it for tasks like inventory management, campaign creation and campaign performance tracking. When consumers engage with campaigns, the tool captures their feedback in real time, providing behavior insights and trend analysis.

From here, however, our customer can continue to build. For example, they might deploy sentiment analysis to refine their campaign strategies based on consumer engagement. As these insights grow, they might then layer in additional models to identify which products to elevate and which consumer segments to target. Over time, by clustering consumers by sentiment and behavior, they could then train increasingly sophisticated models, leading to hyperpersonalized marketing and product recommendations.
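The clustering step described above can be sketched in a few lines. This is a toy k-means over two illustrative features (sentiment score and purchase frequency) — the feature names, the data, and the choice of two segments are my assumptions, not the retailer's actual models.

```python
# Hypothetical sketch: group consumers by sentiment and behavior with a
# tiny k-means so each segment can receive its own campaign treatment.
# Features, data, and k=2 are illustrative assumptions only.
import random

def kmeans(points, k=2, iters=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)          # pick k initial centers
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:                     # assign to nearest center
            nearest = min(range(k), key=lambda c: sum(
                (a - b) ** 2 for a, b in zip(p, centers[c])))
            groups[nearest].append(p)
        centers = [tuple(sum(v) / len(v) for v in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]   # recompute means
    return centers, groups

# (sentiment score, purchases per month) — two clearly separated segments.
consumers = [(0.9, 8), (0.8, 7), (0.85, 9), (0.1, 1), (0.2, 2), (0.15, 1)]
centers, groups = kmeans(consumers)
print(sorted(len(g) for g in groups))  # → [3, 3]
```

Real segmentation would start from model-derived sentiment scores and many more behavioral features, but the principle is the same: once consumers fall into stable clusters, each cluster can drive its own recommendations and campaign strategy.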

Map out cost efficiencies

Another challenge many business leaders are facing now is managing the cost of scaling and maintaining generative AI. In fact, our study on generative AI in the telecommunications sector found that executives view the high implementation and operational cost of generative AI as the single biggest barrier to achieving ROI.

When customers express hesitancy, especially around costs like these, I’ve found that the best way to regroup and move forward is through some good old-fashioned road mapping. This means clearly outlining where investments are needed, how costs should be allocated and, crucially, what efficiencies along the way will drive long-term savings.

Doing so will help your team set more accurate expectations for both budgetary demands and eventual returns — and establish a strategic, efficiency mindset from the start.

Take our point-of-sale tool example. While engagement with the tool by B2B users (like stockists) and B2C users (consumers) differs greatly, all interactions feed into the same data and document management systems. This unified backend allows our customer to analyze end-to-end engagement and then to develop standardized, repeatable workflows across both B2B and B2C touchpoints.

Sometimes, efficiencies are instead created by opting for the right solution at the right time. In our work with the transportation and logistics customer, we were able to optimize both our early use case and our budget by deploying a pre-trained model to validate our strategy before scaling further.

Finally, and because we all need a carrot from time to time, a good roadmap should also forecast when and where upfront investments will, eventually, be recouped. In this same example, for instance, scaling our solution is projected to save hundreds of hours of manual work per export — a truly undeniable impact.

While achieving results like these does take time, understanding the timeline and the steps it will take to get there will help to keep your team grounded — and clear-eyed about the long-term value generative AI can deliver.

Resist overcomplication

On a personal note, one of the biggest takeaways I’ve gained from working with our customers is this: Scoping your generative AI approach is also about resisting the urge to overcomplicate your solutions. Often, it’s the simplest, most obvious applications that deliver the crucial, early proof-of-concept that will be integral to successful scaling.

Read our report with Altman Solon, “Generative AI for Telecommunications,” for more ideas about building and scaling your AI blueprint.

Aaron Wright is a Creative Technologist and Experience Researcher for Kyndryl Vital’s Global Team.