4 steps to understanding your mainframe applications — before you modernize

Article | Feb. 19, 2025
By: Richard Baird

The importance of mainframes is undeniable: They run many business-critical applications for the world’s largest and most influential enterprises. In Kyndryl’s 2024 State of Mainframe Modernization Survey Report, 89% of respondents said their mainframes were extremely or very important to their business strategy and operations.

That same report showed 96% of enterprises that use mainframes do so within a hybrid IT environment — which means mainframe-based applications, like other parts of an enterprise’s IT estate, need to adapt. For mainframes to continue to provide high levels of security, reliability and performance, the applications running on them often need to be modernized.

Mainframe modernization presents human, technical and strategic challenges. Our mainframe modernization report found that organizations seldom have the in-house skills to modernize their applications, with 77% enlisting help from outside partners.

On the technical and strategic side, our research found two unexpected challenges to be particularly prominent: scope creep and an insufficient understanding of the applications and data sources to be modernized.

These two issues are tightly intertwined: scope creep often emerges from an insufficient understanding of applications and data. If a team doesn’t truly understand their applications and data, it’s hard to tightly scope a modernization initiative. It’s easy to throw in “one more thing” if you don’t know how much work that “one more thing” entails — or what other functions it might disrupt.

A lack of clarity around applications and data is also dangerous on its own. It can lead to delays in modernization, workarounds that produce technical debt and applications that are unreliable or insecure. 

Acquiring a deep understanding of your applications and data is a process. Generally, organizations need to go through four phases of exploration and discovery — but many stop at just one.

1. Uncover your dependencies

Most teams start with a mainframe application analysis tool that crawls through an application, taking inventory of the connections between different modules of code and the data they use. These tools unearth application dependencies and produce a visual map of the application and data landscape. They’ll also let you drill down into the relevant databases so you can see exactly which data is being accessed.

Without this information, it’s impossible to proceed with confidence. Any change in one part of the application is likely to have an impact on other parts of the application, or on other applications entirely. These tools are designed to help your teams understand that impact.
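
To make this concrete, here is a minimal Java sketch of the kind of inventory such a tool automates, assuming COBOL sources with a .cbl extension and static CALL statements. (Commercial analysis tools go much further, resolving dynamic calls, copybooks, JCL and database access.)

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.Map;
    import java.util.Set;
    import java.util.TreeMap;
    import java.util.TreeSet;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;
    import java.util.stream.Collectors;
    import java.util.stream.Stream;

    // Minimal dependency inventory: scan each COBOL source file for
    // static CALL statements and record which modules it invokes.
    public class DependencyInventory {
        // Matches static calls such as: CALL 'ACCTUPDT'
        private static final Pattern CALL = Pattern.compile("CALL\\s+'([A-Z0-9-]+)'");

        public static void main(String[] args) throws IOException {
            Map<String, Set<String>> callees = new TreeMap<>();
            try (Stream<Path> sources = Files.walk(Paths.get(args[0]))) {
                for (Path p : sources.filter(f -> f.toString().endsWith(".cbl"))
                                     .collect(Collectors.toList())) {
                    Set<String> deps = new TreeSet<>();
                    Matcher m = CALL.matcher(Files.readString(p));
                    while (m.find()) {
                        deps.add(m.group(1));
                    }
                    callees.put(p.getFileName().toString(), deps);
                }
            }
            // One line per module: the modules it depends on.
            callees.forEach((module, deps) -> System.out.println(module + " -> " + deps));
        }
    }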

As useful as this analysis is, it still doesn’t explain what a particular module of code actually does. You might learn, for example, that one module fetches data located in a particular column of a database. If the column is labeled “cust_number,” you can make an educated guess that the application is fetching customer account numbers. But if the column is called “fred2final,” it may not be as straightforward to figure out. That’s where generative AI comes in.

2. See inside your applications with generative AI

Generative AI has the potential to explain not only the connections between different modules of code and their data, but what these modules were designed to accomplish. It’s the difference between saying, “We’re going to 123 Elm Street,” and “We’re going to Cathy’s house.”

To use these tools, which are still in their infancy, you need access to your source code. In a way, they’re similar to Google Translate — the first time you used it, you probably weren’t impressed. But it’s improved dramatically.
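
As a rough sketch of how such a tool might be driven, the snippet below wraps a module’s source in an explanation prompt. The LlmClient interface, the explainModule method and the prompt wording are all illustrative assumptions, not any vendor’s actual API.

    // Hypothetical client for whatever model the tool uses.
    interface LlmClient {
        String complete(String prompt);
    }

    class ModuleExplainer {
        private final LlmClient llm;

        ModuleExplainer(LlmClient llm) {
            this.llm = llm;
        }

        // Hand the model the raw COBOL source and ask for its business intent.
        String explainModule(String moduleName, String cobolSource) {
            String prompt = "You are documenting a legacy COBOL application.\n"
                    + "In plain business terms, explain what module " + moduleName
                    + " is designed to accomplish, and list the data it reads and writes.\n\n"
                    + cobolSource;
            return llm.complete(prompt);
        }
    }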

In general, I’ve found that the makers of these tools understand that training data needs to be responsibly gathered and that the training itself needs to be responsible. But there are still questions you should ask, and they’re not too different from the ones you would ask before using a new chatbot. What is the source of the training data? How has the model been trained to follow best practices? And what will the output look like?

Just as in any other language, there is “good” code and there are slang and shortcuts. You want a translation tool trained on the former, whether that comes from training manuals or code that customers have agreed to contribute for training.

In the output, you’re looking for well-formed, object-oriented Java. This quality is not guaranteed: if COBOL is merely translated line for line, you’ll end up with something that may look like Java but lacks many of its important properties, leaving you with code that is difficult to maintain or extend. Before you commit to a tool, run a proof of concept to see the quality of the output you’re likely to receive.
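
The contrast is easiest to see side by side. The hypothetical account module below is shown both ways: first as a line-for-line translation, then as the idiomatic, object-oriented Java you should be looking for.

    // Line-for-line translation: working storage becomes static fields,
    // paragraphs become methods, and everything stays global and mutable.
    class Acctupd {
        static double wsBalance;
        static double wsAmount;
        static int wsRc;

        static void para2000Debit() {
            if (wsAmount > wsBalance) {
                wsRc = 8; // magic return codes survive the translation
            } else {
                wsBalance = wsBalance - wsAmount;
                wsRc = 0;
            }
        }
    }

    // Idiomatic Java: state is encapsulated, failures are exceptions, and
    // the class can be tested and extended in isolation.
    class Account {
        private double balance;

        Account(double openingBalance) {
            this.balance = openingBalance;
        }

        void debit(double amount) {
            if (amount > balance) {
                throw new IllegalStateException("Insufficient funds");
            }
            balance -= amount;
        }

        double balance() {
            return balance;
        }
    }

Both versions compile and both “work,” but only the second is code your Java developers will want to own.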

3. Determine the non-functional requirements

The next step is to discover your application’s non-functional requirements (NFRs). While functional requirements define what an application needs to do, NFRs dictate how well the application needs to do it. These NFRs include performance, reliability, security and scalability.

An application that performs its assigned functions but misses its NFRs is doing its job without succeeding in a broader sense: it may be so slow that users give up on it, or it may be insecure or glitchy.
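
It helps to capture NFRs as measurable targets rather than prose, as in the sketch below. The record’s fields and the numbers are illustrative assumptions for a hypothetical online banking application, not recommendations.

    // NFRs as measurable, testable targets (illustrative values only).
    public class NfrCatalog {
        record NfrTargets(
                long p95ResponseMillis,      // performance: 95th-percentile response time
                double availabilityPercent,  // reliability: uptime commitment
                boolean mfaRequired,         // security: multifactor authentication mandated
                int peakRequestsPerSecond    // scalability: sustained peak throughput
        ) {}

        public static void main(String[] args) {
            // The kind of targets a line-of-business owner might sign off on.
            NfrTargets onlineBanking = new NfrTargets(500, 99.95, true, 1200);
            System.out.println(onlineBanking);
        }
    }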

Identifying NFRs will require some legwork within your organization. If you’re trying to figure out the performance requirements for an online banking application, for example, your line-of-business owner may be able to tell you how fast the application needs to respond.

Then, reach out to your development teams. The person who originally wrote or maintained the application may no longer be with your organization, but a colleague of theirs may have useful information and may even know where to find written documentation.

Application performance monitoring software is another resource, providing historical data on the peaks and troughs of various workloads. This data will give you an idea of what your throughput and performance requirements should look like.
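
A first-cut performance target can be read straight off that history. The sketch below computes a 95th-percentile response time from sampled latencies using the nearest-rank method; the sample values are invented for illustration.

    import java.util.Arrays;

    // Derive a performance target from monitoring history: the smallest
    // latency value with at least 95% of samples at or below it.
    public class PercentileTarget {
        static double percentile(double[] sortedMillis, double pct) {
            int rank = (int) Math.ceil(pct / 100.0 * sortedMillis.length); // nearest rank
            return sortedMillis[rank - 1];
        }

        public static void main(String[] args) {
            double[] latencies = {120, 180, 95, 240, 310, 150, 2050, 175, 130, 160};
            Arrays.sort(latencies);
            System.out.printf("p95 response time: %.0f ms%n", percentile(latencies, 95));
        }
    }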

If you’re in a regulated industry, there may be industry standards that govern your NFRs, such as the use of multifactor authentication in the financial industry.

4. Run test cases

The last step is to find or develop test cases that will demonstrate the modernized application’s ability to perform as expected.

Most organizations do not have clean test cases. If they have any test cases at all, they may be outdated and unable to meet current standards. An old test case for an online banking application may approve an application if it responds in three seconds — but in today’s world, three seconds is much too slow.

If your application is updated frequently, then the test cases need to be updated often as well. For applications that haven’t changed much, some old test cases may still be relevant.

Ideally, test cases and test data should not be written by the same people who write the application. Subconsciously, developers always test the happy path — the path through their code that always works. But too often, that’s not where the user ends up. Tests are best written by people who know nothing about how the application is structured internally. Line-of-business people should also contribute to the testing process, as they’re likely to know all the odd edge cases that have historically caused trouble for the application.

Consider an application that matches U.S. zip codes to their respective geographies. Developers might test to see if “85002” is successfully matched with “Phoenix.” That’s just the start. It’s also important to test a non-existent zip code like “34567,” or better yet, “34*K7.”
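
As a sketch, here is how those cases might look as JUnit 5 tests. The ZipLookup class and its behavior for bad input are assumptions about the application under test, not a real library.

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.junit.jupiter.api.Assertions.assertThrows;

    import java.util.Optional;
    import org.junit.jupiter.api.Test;

    // A minimal stand-in for the application under test.
    class ZipLookup {
        Optional<String> city(String zip) {
            if (zip == null || !zip.matches("\\d{5}")) {
                throw new IllegalArgumentException("Not a zip code: " + zip);
            }
            return "85002".equals(zip) ? Optional.of("Phoenix") : Optional.empty();
        }
    }

    class ZipLookupTest {
        private final ZipLookup lookup = new ZipLookup();

        @Test
        void knownZipResolvesToItsCity() {
            // The happy path a developer would naturally write.
            assertEquals(Optional.of("Phoenix"), lookup.city("85002"));
        }

        @Test
        void wellFormedButUnassignedZipReturnsNoMatch() {
            // Syntactically valid, but no such zip code exists.
            assertEquals(Optional.empty(), lookup.city("34567"));
        }

        @Test
        void malformedInputIsRejectedOutright() {
            // Garbage input should fail fast, not corrupt downstream logic.
            assertThrows(IllegalArgumentException.class, () -> lookup.city("34*K7"));
        }
    }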

Once you’ve made it through all four steps, congratulations! Your understanding of your applications is now clear enough to accurately scope an application modernization initiative, ensuring your systems remain stable, secure and high-performing.

Richard Baird is senior vice president and CTO of Core Enterprise and ZCloud for Kyndryl.