1. Uncover your dependencies
Most teams start with a mainframe application analysis tool that crawls through an application, taking inventory of the connections between different modules of code and the data they use. These tools unearth application dependencies and produce a visual view of the application and data landscape. They’ll also let you drill down into the relevant databases so you can better understand exactly which data is being accessed.
Without this information, it’s impossible to proceed with confidence. Any change in one part of the application is likely to have an impact on other parts of the application, or on other applications entirely. These tools are designed to help your teams understand that impact.
As useful as this analysis is, it still doesn’t provide important information about the functionality of a particular module of code. You might learn, for example, that one module fetches data located in a particular column of a database. If the column is labeled “cust_number,” you can make an educated guess that the application is fetching customer account numbers. But if the column is called “fred2final,” it may not be as straightforward to figure out. That’s where generative AI has the potential to help.
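To make the inventory step concrete, here is a deliberately tiny sketch of the kind of scan such a tool performs. The class name, the regex, and the COBOL snippet are all invented for illustration; commercial analysis tools also resolve dynamic calls, copybooks, JCL and database access, which a regex cannot.

```java
import java.util.*;
import java.util.regex.*;

// Toy dependency scanner: finds static CALL targets in a COBOL source string.
// Real analysis tools go much further (dynamic calls, copybooks, DB2 access).
public class CallScanner {
    private static final Pattern CALL = Pattern.compile("\\bCALL\\s+'([A-Z0-9-]+)'");

    // Returns the sorted set of module names invoked via static CALL statements.
    public static Set<String> calledModules(String source) {
        Set<String> targets = new TreeSet<>();
        Matcher m = CALL.matcher(source);
        while (m.find()) {
            targets.add(m.group(1));
        }
        return targets;
    }

    public static void main(String[] args) {
        String cobol = "PROCEDURE DIVISION.\n" +
                       "    CALL 'ACCTLKUP' USING CUST-NUMBER.\n" +
                       "    CALL 'FRED2FIN' USING WS-REC.\n";
        System.out.println(calledModules(cobol)); // prints [ACCTLKUP, FRED2FIN]
    }
}
```

Even this crude pass yields a module-to-module edge list; repeated over every program, those edges are what the visual dependency map is built from.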
2. See the insides of your applications with generative AI
Generative AI has the potential to explain not only the connections between different modules of code and their data, but what these modules were designed to accomplish. It’s the difference between saying, “We’re going to 123 Elm Street,” and “We’re going to Cathy’s house.”
To use these tools, which are still in their infancy, you need access to your source code. In a way, they’re similar to Google Translate — the first time you used it, you probably weren’t impressed. But it’s improved dramatically.
In general, I’ve found that the makers of these tools understand that training data needs to be responsibly gathered and that the training itself needs to be responsible. But there are still questions you should ask, and they’re not too different from the ones you would ask before using a new chatbot. What is the source of the training data? How has the model been trained to follow best practices? And what will the output look like?
Just as in any other language, there is “good” code and there are slang and shortcuts. You want a translation tool trained on the former, whether that comes from training manuals or code that customers have agreed to contribute for training.
In the output, you’re looking for well-formed, object-oriented Java. That quality is not guaranteed. If COBOL is merely translated line for line, you’ll end up with something that may look like Java but will be missing many of its important properties. The result is code that is difficult to maintain or extend. Before you commit to a tool, run a proof of concept to see the quality of the output you’re likely to receive.
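The difference between the two kinds of output can be shown with a hypothetical contrast. All names here (WS_BAL, P2000_APPLY_DEBIT, Account) are invented; the point is only the shape of the code.

```java
// 1) Line-for-line translation: static global state and COBOL-style
//    paragraph names. It compiles, but it is not object-oriented Java.
class LineForLine {
    static double WS_BAL;
    static double WS_AMT;

    static void P2000_APPLY_DEBIT() {
        WS_BAL = WS_BAL - WS_AMT;
    }
}

// 2) Idiomatic translation: the state is encapsulated and the paragraph's
//    intent becomes a named behavior that is easy to maintain and extend.
class Account {
    private double balance;

    Account(double openingBalance) { this.balance = openingBalance; }

    void applyDebit(double amount) { balance -= amount; }

    double balance() { return balance; }
}

public class TranslationContrast {
    public static void main(String[] args) {
        LineForLine.WS_BAL = 100.0;
        LineForLine.WS_AMT = 25.0;
        LineForLine.P2000_APPLY_DEBIT();

        Account account = new Account(100.0);
        account.applyDebit(25.0);

        // Both compute the same result; only one is maintainable Java.
        System.out.println(LineForLine.WS_BAL + " " + account.balance());
    }
}
```

Both versions produce the same numbers, which is exactly why a proof of concept matters: functional correctness alone won’t tell you which kind of output a tool generates.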
3. Determine the non-functional requirements
The next step is to discover your application’s non-functional requirements (NFRs). While functional requirements define what an application needs to do, NFRs dictate how well the application needs to do it. These NFRs include performance, reliability, security and scalability.
An application that performs its assigned functions but isn’t meeting its NFRs is doing a task but may not be succeeding in a broader sense: for example, it may be so slow that the user gives up on it, or it may be insecure or glitchy.
Identifying NFRs will require some legwork within your organization. If you’re trying to figure out the performance requirements for an online banking application, for example, your line-of-business owner may be able to tell you how fast the application needs to respond.
Then, reach out to your development teams. The person who originally wrote or maintained the application may no longer be with your organization, but a colleague of theirs may have useful information and may even know where to find written documentation.
Application performance monitoring software is another resource, providing historic data on the peaks and troughs of various workloads. This data will give you an idea of what throughput and performance requirements should look like.
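One way to turn that historic monitoring data into a concrete NFR is to compute a percentile latency and compare it against a target. This sketch uses made-up sample values and a made-up 2,000 ms target; the percentile logic is the reusable part.

```java
import java.util.*;

// Sketch: deriving a measurable performance NFR (e.g. "p95 latency under
// 2,000 ms") from historic monitoring samples. Numbers are illustrative.
public class LatencyNfr {
    // Returns the latency (ms) at the given percentile, e.g. 0.95 for p95,
    // using the nearest-rank method on a sorted copy of the samples.
    public static long percentile(List<Long> latenciesMs, double pct) {
        List<Long> sorted = new ArrayList<>(latenciesMs);
        Collections.sort(sorted);
        int idx = (int) Math.ceil(pct * sorted.size()) - 1;
        return sorted.get(Math.max(idx, 0));
    }

    public static void main(String[] args) {
        List<Long> samples = Arrays.asList(
                120L, 180L, 250L, 300L, 95L, 400L, 210L, 160L, 275L, 1900L);
        long p95 = percentile(samples, 0.95);
        System.out.println("p95 = " + p95 + " ms, meets 2000 ms target: "
                + (p95 <= 2000));
    }
}
```

Using a percentile rather than an average matters here: the one 1,900 ms outlier barely moves the mean, but it dominates the p95 figure, which is what users at the peaks actually experience.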
If you’re in a regulated industry, there may be industry standards that govern your NFRs, such as the use of multifactor authentication in the financial industry.
4. Run test cases
The last step is to find or develop test cases that will demonstrate the modernized application’s ability to perform as expected.
Most organizations do not have clean test cases. If they have any test cases at all, they may be outdated and no longer reflect current standards. An old test case for an online banking application may pass the application as long as it responds within three seconds, but in today’s world, three seconds is much too slow.
If your application is updated frequently, then the test cases need to be updated often as well. For applications that haven’t changed much, some old test cases may still be relevant.
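A modernized response-time test case can be as simple as timing a call against the current SLA rather than the legacy one. The 1,000 ms threshold and the stubbed lookup below are assumptions for illustration; a real test would call the application’s actual API.

```java
import java.util.function.Supplier;

// Hypothetical regression test: the modernized service must respond within
// a current SLA (here 1,000 ms), not the legacy three-second threshold.
public class ResponseTimeTest {
    static final long SLA_MS = 1000;

    // Times a single call and reports whether it meets the SLA.
    static boolean meetsSla(Supplier<String> call) {
        long start = System.nanoTime();
        call.get();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        return elapsedMs <= SLA_MS;
    }

    public static void main(String[] args) {
        // Stand-in for a real call to the banking application's API.
        Supplier<String> fakeBalanceLookup = () -> "balance: 100.00";
        System.out.println("meets SLA: " + meetsSla(fakeBalanceLookup));
    }
}
```

When the SLA changes, only the `SLA_MS` constant moves, which is one way to keep frequently updated applications and their test cases in step.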