
Intelligence Hub · Enterprise Software · Edition 1

Enterprise Modernization: Defusing the Core System Timebomb

Agentic AI is dismantling the rigid legacy stack — and converting the human workflows running on top of it into digital labour.

Published May 12, 2026
Expert
Fredrik Hjelm
Co-founder & CEO, Pit

Fredrik Hjelm is one of Sweden's most prolific technology entrepreneurs. A former intelligence officer in the Swedish Armed Forces, he co-founded Voi Technology - Europe's leading micromobility company, with nearly 1,000 employees across 150+ cities. In 2026 he co-founded Pit with Voi, Klarna, and iZettle alumni, raising a $16 million seed round led by a16z with backing from OpenAI, Anthropic, and Google executives.

QA & Methodology
Narankar Sehmi
Oxford AI · University of Oxford

Executive Summary

The contemporary enterprise operates in a state of profound architectural contradiction. Organizations are aggressively allocating capital toward generative AI, predictive analytics, and real-time personalization engines - and yet they are deploying these dynamic applications on top of foundation layers engineered in the 1970s, 1980s, and 1990s. The traditional modernization paradigm - multi-year, billion-dollar manual rewrites that frequently fail - is being rendered obsolete by specialized, autonomous AI agents capable of analyzing, dependency-mapping, and refactoring millions of lines of legacy code into modern, cloud-native microservices. This report maps the demographic and economic gravity of legacy systems, the agentic AI vendor ecosystem now defusing the core system timebomb, and the architected guardrails enterprises need to avoid the 40% project-cancellation rate Gartner forecasts by the end of 2027.

recommendations

Top 3 Executive Implications for Enterprise Tech Leaders

Three concrete actions for CIOs, CTOs, and enterprise architects to translate the findings of this report into 2026 budget and roadmap decisions.

1. Treat the COBOL talent cliff as a board-level operational risk

Approximately 10% of COBOL programmers retire each year, with one-third of the global workforce gone by 2030. Every mission-critical system whose business logic lives only in the head of a single retiring engineer represents a catastrophic single point of failure.

Action

Mandate immediate logic-extraction sprints using agentic AI tooling (IBM watsonx Code Assistant for Z, AWS Transform, GitHub Copilot agents) to convert undocumented COBOL into versioned Markdown documentation - before the institutional memory walks out the door.

2. Reclassify technical debt from IT line item to strategic liability

The Global 2000 carries $1.5-$2 trillion in accumulated technical debt, with 70% of organizations citing it as the primary inhibitor to innovation - rising to 78% in financial services and 82% in biotech.

Action

Move technical debt scoring into board-level risk reporting alongside cyber and regulatory exposure. Tie modernization budgets directly to projected ROI - Kyndryl reports 362% ROI for full mainframe migration - rather than to incremental maintenance baselines.

3. Adopt the Strangler Fig pattern - kill the Big Bang rewrite

Identify one high-value, highly coupled module (authentication, pricing engine, account balance retrieval) and refactor it into a discrete microservice fronted by an API gateway that routes traffic between the new service and the legacy monolith. Prove ROI per module, then scale.

Action

Structure your 2026 modernization roadmap around incremental, independently shippable services - this discipline is the single biggest reason well-run programs avoid the project cancellations Gartner predicts will claim over 40% of agentic AI initiatives by the end of 2027.

industry context

The Paradox of Modern Enterprise Architecture

The contemporary enterprise operates in a state of profound architectural contradiction. In the current technological epoch, organizations are aggressively allocating capital toward generative artificial intelligence, predictive analytics, and real-time personalization engines. Yet, these enterprises frequently discover that they are attempting to deploy these advanced, highly dynamic applications atop rigid foundation layers engineered in the 1970s, 1980s, and 1990s. The endeavor to connect a state-of-the-art AI personalization agent to a monolithic Common Business-Oriented Language (COBOL) mainframe or a batch-processing database from 1998 fundamentally constrains the agility, scalability, and performance of the modern enterprise.

Jennifer Li, General Partner at Andreessen Horowitz (a16z), articulates this systemic vulnerability with precision, noting that the true constraint on scaling enterprise AI is the structural disarray beneath it. She emphasizes that this messiness causes "agents to break in subtle, expensive ways" when deployed on outdated, unstructured foundations, making infrastructure entropy the primary limiting factor for AI success. This sentiment is heavily echoed across a16z's enterprise analysis, with General Partner Martin Casado highlighting that while AI works spectacularly in agile, homogeneous environments, enterprise adoption stalls because "data is fragmented across legacy systems, and workflows are tightly coupled." Legacy systems are no longer merely a technical inconvenience or a maintenance burden; they represent a structural liability that limits market responsiveness, drains operational budgets, and locks enterprises into obsolete operating models.

However, the modernization paradigm has recently undergone a seismic and non-obvious shift. The traditional approach to legacy transformation - characterized by multi-year, billion-dollar manual rewrites that carry immense operational risk and frequently fail to deliver projected returns - is being rendered obsolete. The emergence of specialized, autonomous AI agents capable of analyzing, dependency-mapping, and refactoring millions of lines of legacy code into modern, cloud-native microservices has introduced a new era of software engineering. This transformation effectively defuses the core system timebomb. It solves an acute demographic crisis characterized by a rapidly shrinking pool of legacy developers, while simultaneously converting technical debt into a fast-tracked modernization asset. By mapping hidden dependencies and preserving decades of encoded institutional memory, agentic AI is transforming the enterprise infrastructure into a real-time, API-driven foundation capable of supporting advanced AI workloads at an unprecedented scale.

data analysis

The Demographic and Economic Gravity of Legacy Systems

The urgency surrounding legacy modernization is driven by a convergence of severe demographic attrition and compounding economic pressures. The underlying infrastructure of global commerce relies heavily on aging technology maintained by an aging workforce, creating a systemic risk that threatens the stability of financial markets, government services, and global supply chains.

The Looming Talent Cliff and the COBOL Paradox

The programming languages that power the world's most critical systems are facing an unprecedented and accelerating talent scarcity. COBOL, a language created in 1959 through a collaboration between private industry and governmental institutions, remains the undisputed backbone of electronic commerce and administrative processing. Currently, there are between 220 billion and 240 billion lines of COBOL code in active operation worldwide, with approximately 5 billion new lines added to the global repository annually. The scale of its utilization is staggering: this infrastructure processes 95% of all automated teller machine (ATM) swipes globally and handles approximately $3 trillion in commercial transactions every single day.

Despite this critical reliance, the workforce capable of maintaining, patching, and evolving these systems is disappearing. The average age of a COBOL programmer is currently 58 years old, and approximately 10% of this highly specialized workforce retires each year. Projections indicate that nearly one-third of all active COBOL programmers will leave the workforce permanently by 2030. Consequently, 60% of organizations utilizing COBOL report that sourcing skilled developers is their most significant operational challenge, and 46% of IT professionals say the shortage already affects daily operations. Finding developers who understand the intricacies of decades-old COBOL, Job Control Language (JCL), or Assembler has been likened to "finding unicorns".
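
The attrition arithmetic is easy to verify: a 10% annual retirement rate compounds to roughly one-third of the workforce departing by 2030. A minimal sketch - the four-year 2026-to-2030 window is my reading of the projection, not a figure stated in the report:

```python
# Compounding attrition: each year, 10% of the *remaining* COBOL
# workforce retires. The four-year horizon (2026 -> 2030) is an
# assumption for illustration.
def remaining_workforce(annual_attrition: float, years: int) -> float:
    """Fraction of the original workforce still active after `years`."""
    return (1 - annual_attrition) ** years

retained = remaining_workforce(0.10, 4)
departed = 1 - retained
print(f"Departed by 2030: {departed:.1%}")  # 34.4% - roughly one-third
```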

Brent Ellis, a Senior Analyst at Forrester, observes this phenomenon by noting that "Great technology doesn't really go away - it finds the niche that it was made for". The challenge is not that the technology is inherently flawed - indeed, mainframe COBOL remains the optimal choice for processing several million transactions within a six-hour overnight batch window - but rather that the skills gap between retiring experts and the next generation of cloud-native developers is widening into a chasm.

This demographic reality is equally severe and potentially more destabilizing in the public sector. United States federal agencies depend heavily on legacy languages for critical citizen services. Bob Stevens, Public Sector Area Vice President at GitLab, highlights the stark contrast in government technology deployment, noting that while NASA uses AI to guide rovers on Mars, agencies like the Department of Health and Human Services (HHS), the Social Security Administration (SSA), and the Centers for Medicare and Medicaid Services (CMS) depend on systems built with COBOL - a language older than the moon landing. The attrition of technical experts capable of maintaining these systems increases the probability of major system breakdowns, exposing citizen data to modern security vulnerabilities and risking the interruption of critical benefit payments.

The Compounding Cost and Strategic Paralysis of Technical Debt

The financial burden of maintaining legacy architecture severely limits the capital available for innovation. Mainframe environments can cost organizations up to $4,500 per MIPS (million instructions per second) annually, depending on the specific workloads and licensing structures in place. When hardware procurement, software licenses, and specialized support personnel are aggregated, the cost of maintaining legacy systems consumes between 60% and 80% of traditional enterprise IT budgets.
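
To put the per-MIPS figure in budget terms, consider a hypothetical 10,000-MIPS estate at the report's upper-bound rate; the estate size is an invented example, not a figure from the report:

```python
# Illustrative mainframe run-cost model using the report's upper-bound
# figure of $4,500 per MIPS per year. The 10,000 MIPS estate size is a
# hypothetical example chosen for illustration.
COST_PER_MIPS = 4_500   # USD per MIPS per year (upper bound from report)
estate_mips = 10_000    # hypothetical estate size

annual_run_cost = estate_mips * COST_PER_MIPS
print(f"Annual run cost: ${annual_run_cost:,}")  # $45,000,000
```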

This financial drain is explicitly classified as technical debt, which is no longer just a developer inconvenience but a strategic boardroom risk. Research from Pegasystems estimates that the average global enterprise wastes more than $370 million every year due to inefficiencies and the inability to modernize outdated applications. On a macroeconomic scale, the Global 2000 is estimated to be carrying between $1.5 trillion and $2 trillion in accumulated technical debt. Unmanaged technical debt consumes 20% to 40% of active development time, actively diverting highly skilled engineering resources away from strategic, value-generating initiatives toward endless system maintenance.

Furthermore, this technical debt manifests in two distinct but equally destructive forms:

  1. Visible Debt: This encompasses identifiable legacy codebases, monolithic architectures, legacy tech stacks, and unsupported frameworks (e.g., outdated Java modules or deprecated APIs).
  2. Invisible Debt: This represents process inertia, undocumented dependencies, siloed data ecosystems, long build cycles, and a lack of system observability. Invisible debt is particularly pernicious because it obscures the root causes of system fragility, meaning organizations only recognize the debt when it triggers a catastrophic failure.

In an economic environment marked by tighter capital availability and stringent return on investment (ROI) expectations, the strategy of layering new AI tools over fragile legacy foundations is no longer viable. According to McKinsey data cited in industry playbooks, the average developer spends 17.3 hours each week navigating technical debt and bad code, creating a vicious cycle of firefighting that hinders innovation. Currently, 70% of organizations cite technical debt as a primary inhibitor to innovation, with the impact rising to 78% in financial services and 82% in biotechnology.

| Financial Impact Metric | Value / Estimate | Strategic Consequence |
| --- | --- | --- |
| Average Annual Enterprise Waste | $370 Million | Capital drained from innovation budgets due to modernization inability. |
| Global 2000 Accumulated Tech Debt | $1.5 to $2 Trillion | Represents a structural liability locking enterprises into obsolete models. |
| Development Time Lost | 20% to 40% | Engineering talent diverted to system maintenance rather than value creation. |
| Mainframe Operational Cost | Up to $4,500 per MIPS | Hardware, software, and personnel consuming 60% to 80% of IT budgets. |
| Weekly Developer Time Wasted | 17.3 Hours | Time lost to debugging, bad code, and managing invisible architectural debt. |

industry context

The Evolution of Modernization: From Manual Rewrites to Agentic AI

Historically, enterprise modernization required "Big Bang" manual rewrites. These projects were notoriously prone to failure, characterized by massive budget overruns, multi-year timelines, and the high probability of degrading critical business logic during the transition. The paradigm is shifting rapidly toward AI-assisted modernization, leveraging large language models (LLMs) built on transformer architectures that have been trained on billions of lines of code.

However, standard generative AI - where a developer prompts a chatbot for a code snippet - is insufficient for enterprise-scale modernization. The industry is rapidly advancing to "Agentic AI." Gartner defines agentic AI systems as those that can set objectives, plan multi-step actions, execute complex workflows, and demonstrate autonomous decision-making within defined operational boundaries. The IEEE Global Survey released in November 2025 found that 96% of technology leaders expect agentic AI adoption to continue at lightning speed, fundamentally rewiring the software development life cycle (SDLC).

The Three-Step Framework for AI-Powered Modernization

The successful application of AI in legacy modernization generally follows a rigorous, multi-phased framework. This approach is exemplified by Julia Kordick, a Microsoft Global Black Belt, who successfully modernized highly complex COBOL systems without ever learning the COBOL language herself. As Kordick notes regarding the advent of generative AI, "When this whole idea of Gen AI appeared, we were thinking about how we can actually use AI to solve this problem that has not been really solved yet". Her methodology proves that by partnering AI expertise with the domain knowledge of seasoned legacy experts, modernization can be executed securely and rapidly.

The established framework operates across three distinct phases:

Phase 1: Code Preparation and Reverse Engineering. The primary barrier to legacy modernization is a lack of understanding regarding what the existing system actually does, as years of patching have obscured original design intents. AI agents act as automated archaeological tools, parsing through decades-old codebases to extract embedded business logic, identify complex call chains, and separate active functional logic from historical "dead code" or irrelevant logging artifacts. Multi-agent systems can autonomously trace CALL statements across thousands of files and automatically generate visual dependency graphs (such as Mermaid diagrams), instantly illuminating how various monolithic components interact without requiring manual human tracing.
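
A toy illustration of that dependency-mapping step: scan COBOL sources for `CALL` statements and emit a Mermaid graph. Real agents do this across thousands of files with far richer static analysis; the program names here are invented.

```python
import re

# Toy Phase-1 pass: extract CALL targets from COBOL source text and
# emit a Mermaid dependency graph. All program names are invented.
CALL_RE = re.compile(r"CALL\s+'([A-Z0-9-]+)'", re.IGNORECASE)

def call_targets(source: str) -> list[str]:
    return CALL_RE.findall(source)

def mermaid_graph(programs: dict[str, str]) -> str:
    lines = ["graph TD"]
    for name, src in programs.items():
        for target in call_targets(src):
            lines.append(f"    {name} --> {target}")
    return "\n".join(lines)

programs = {
    "BILLING": "PROCEDURE DIVISION.\n    CALL 'TAXCALC' USING WS-REC.\n    CALL 'AUDITLOG'.",
    "TAXCALC": "PROCEDURE DIVISION.\n    CALL 'RATETBL'.",
}
print(mermaid_graph(programs))
```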

Phase 2: Knowledge Distillation and Enriched Documentation. Legacy code, particularly COBOL, is often heavily criticized for its rigidity. However, from the perspective of an advanced LLM, this rigid, verbose syntax is highly advantageous. Statements such as `ADD TOTAL-SALES TO ANNUAL-REVENUE` are fundamentally self-documenting. AI agents synthesize these constructs into plain English, generating comprehensive Markdown documentation that translates COBOL syntax into human-readable business rules. This documentation becomes the immutable "source of truth" for the remainder of the modernization effort. This phase is vital because it captures the encoded institutional memory that would otherwise be permanently lost during a manual rewrite.
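
A drastically simplified sketch of the distillation step: mapping verbose COBOL verbs to plain-English Markdown rules. Production agents use LLMs rather than a pattern table; the two-verb rule set below is purely illustrative.

```python
import re

# Toy Phase-2 pass: translate self-documenting COBOL arithmetic verbs
# into plain-English Markdown business rules. Real agents use LLMs;
# this pattern table covers only two illustrative verbs.
RULES = [
    (re.compile(r"ADD\s+([\w-]+)\s+TO\s+([\w-]+)"), r"- Increase `\2` by `\1`."),
    (re.compile(r"MOVE\s+([\w-]+)\s+TO\s+([\w-]+)"), r"- Set `\2` from `\1`."),
]

def distill(cobol: str) -> str:
    doc = ["## Extracted business rules"]
    for line in cobol.splitlines():
        for pattern, template in RULES:
            match = pattern.search(line)
            if match:
                doc.append(match.expand(template))
    return "\n".join(doc)

print(distill("ADD TOTAL-SALES TO ANNUAL-REVENUE.\nMOVE NET-TOTAL TO INVOICE-AMT."))
```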

Phase 3: Automated Test-Driven Modernization. Once the logic is documented and verified by human domain experts, specialized agents orchestrate the physical transformation of the codebase. This involves a highly structured, test-driven modernization pipeline. A primary agent extracts the exact business rules; a secondary agent automatically generates rigorous test cases to validate those specific rules; and a tertiary agent writes the modern microservice code (for example, in Java or Python) that must seamlessly pass the generated tests. This triad of agents ensures semantic equivalence between the legacy system and the new architecture, proving programmatically that the new system behaves identically to the monolith it replaces.
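
The semantic-equivalence idea can be sketched in miniature: run the documented legacy rule and its modern replacement against the same cases and require identical results. The interest-calculation rule and the test cases below are invented examples.

```python
# Toy Phase-3 equivalence check. "Legacy" stands in for behaviour
# extracted from the COBOL module; "modern" is the refactored
# replacement. The rule (2% interest, truncated to whole cents) and
# all test cases are invented for illustration.
def legacy_interest(balance_cents: int) -> int:
    # Documented legacy rule: balance * 1.02, truncated to whole cents.
    return balance_cents * 102 // 100

def modern_interest(balance_cents: int) -> int:
    # Structurally different refactor that must match legacy behaviour.
    return balance_cents + (balance_cents * 2) // 100

def assert_equivalent(cases: list[int]) -> None:
    for balance in cases:
        assert legacy_interest(balance) == modern_interest(balance), balance

assert_equivalent([0, 1, 99, 100, 12_345, 10**9])
print("legacy and modern implementations agree on all cases")
```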

The Orchestration of Specialized AI Agents

The deployment of agentic AI relies on specific persona-driven agents working under a cohesive orchestration layer. Modernization is no longer the domain of a single developer interface; it is a collaborative exercise among autonomous digital entities. An overarching "Supervisor Agent" coordinates these specialized sub-agents to execute the migration.

When an enterprise architect requests the modernization of a specific application - such as a Java-based application connected to an on-premises SQL database hosted on a Linux Virtual Machine - the Supervisor Agent dispatches the following specialized entities:

  • The Assessment Agent: This agent deeply analyzes the existing application, mapping infrastructure dependencies, codebase complexity, and performance characteristics. It provides an autonomous cloud platform suggestion (evaluating the merits of AWS versus Azure, for example) along with cost estimations, presenting a proposed plan to the human architect for approval.
  • The Migration Strategy Agent: Utilizing the assessment data and referencing the latest architectural standards of the organization, this agent determines the most appropriate transformation path. It evaluates whether the workload requires rehosting, replatforming, or extensive refactoring, subsequently generating a detailed, step-by-step migration plan.
  • The Transformation Agent: This entity executes the physical code-level modifications required for cloud compatibility. It automates tasks such as updating legacy database connection strings, adapting the code to cloud-specific APIs, and refactoring monolithic routines into distributed, modular services.
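
The dispatch sequence above can be reduced to a minimal orchestration sketch, with each agent collapsed to a stub that records its contribution; the agent roles follow the report, everything else is invented:

```python
from dataclasses import dataclass, field

# Minimal supervisor/sub-agent dispatch sketch. Each "agent" is a stub
# that appends its finding to a shared plan; real platforms run these
# as autonomous LLM-backed services with human approval gates.
@dataclass
class MigrationPlan:
    app: str
    steps: list[str] = field(default_factory=list)

def assessment_agent(plan: MigrationPlan) -> None:
    plan.steps.append(f"assessed {plan.app}: dependencies mapped, target cloud proposed")

def strategy_agent(plan: MigrationPlan) -> None:
    plan.steps.append("strategy: refactor (not rehost), step-by-step plan drafted")

def transformation_agent(plan: MigrationPlan) -> None:
    plan.steps.append("transformed: connection strings updated, routines modularized")

def supervisor(app: str) -> MigrationPlan:
    plan = MigrationPlan(app)
    for agent in (assessment_agent, strategy_agent, transformation_agent):
        agent(plan)  # in practice, each hand-off awaits human approval
    return plan

print(supervisor("billing-java-app").steps)
```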

These systems are further refined by emerging methodologies such as "Mentorship-as-code." Advanced platforms utilize structured languages, like MentorScript, to define mentorship rules for AI agents, ensuring that team-specific best practices, architectural principles, and proprietary coding styles are codified, versioned, and applied consistently across the entire automated refactoring process.
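
Mentorship-as-code can be pictured as versioned rule data applied as a lint pass over agent output. The rule format below is a loose sketch of the idea; it is not MentorScript's actual syntax, and both rules are invented.

```python
# Loose sketch of "mentorship-as-code": team conventions stored as
# versioned data and enforced over agent-generated code. This is NOT
# MentorScript's real syntax - just an illustration of the concept.
MENTOR_RULES = [
    {"id": "no-raw-sql", "forbid": "cursor.execute(", "hint": "use the repository layer"},
    {"id": "log-not-print", "forbid": "print(", "hint": "use structured logging"},
]

def review(generated_code: str) -> list[str]:
    findings = []
    for rule in MENTOR_RULES:
        if rule["forbid"] in generated_code:
            findings.append(f'{rule["id"]}: {rule["hint"]}')
    return findings

print(review('print("migrated")\ncursor.execute("SELECT 1")'))
```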

industry context

Architectural Transformation: From Batch Monoliths to API-Driven Microservices

The modernization of enterprise architecture is not merely a linguistic translation from COBOL to Java; it is a fundamental structural paradigm shift. The ultimate objective is to transition from rigid, batch-processing mainframes to distributed, API-driven microservices that natively support continuous integration/continuous deployment (CI/CD) pipelines and real-time AI data consumption.

The Strangler Fig Pattern and Incremental Migration

To mitigate the catastrophic risks historically associated with "Big Bang" rewrites, agentic AI platforms heavily leverage the "Strangler Fig" pattern. This architectural approach involves incrementally wrapping the existing legacy system with modern, cloud-native interfaces, minimizing operational disruption.

Using AI-driven codebase analysis, architects identify high-value, highly coupled modules within the monolith - such as user authentication, pricing engines, or account balance retrieval. The transformation agents then refactor these specific components into discrete, independent microservices. An API Gateway is introduced into the architecture to route external traffic. New requests for the modernized functionality are directed to the new microservice, while legacy requests continue to flow uninterrupted to the mainframe. Over time, as successive modules are extracted, refactored, validated, and deployed, the legacy monolith is gradually "strangled" of its responsibilities until it can be safely and permanently decommissioned. This allows organizations to modernize their environments iteratively, proving ROI at each stage while maintaining uninterrupted business continuity.
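
The routing decision at the heart of the pattern is simple enough to sketch: the gateway consults a set of already-extracted modules and sends everything else to the mainframe. The paths and module names below are invented.

```python
# Toy strangler-fig router: requests for already-extracted modules go
# to the new microservice; everything else still hits the mainframe.
# The extracted-module set grows as successive modules are migrated.
EXTRACTED = {"/auth", "/pricing"}  # invented example paths

def route(path: str) -> str:
    prefix = "/" + path.strip("/").split("/")[0]
    return "microservice" if prefix in EXTRACTED else "mainframe"

assert route("/auth/login") == "microservice"
assert route("/accounts/42/balance") == "mainframe"
# Extracting another module is a one-line routing change:
EXTRACTED.add("/accounts")
assert route("/accounts/42/balance") == "microservice"
print("routing table verified")
```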

Modernizing Legacy Databases: Navigating VSAM, IMS, and Db2

The application code constitutes only half of the modernization equation; the underlying data structures present an equally formidable, if not greater, challenge. Legacy mainframe applications typically rely on decades-old data storage paradigms, including Virtual Storage Access Method (VSAM) flat files, hierarchical Information Management System (IMS) databases, or early relational Db2 databases. These legacy repositories have accumulated decades of complex, often entirely undocumented structural idiosyncrasies and highly specific business rules that dictate data integrity.

Transitioning an enterprise from overnight batch processing to real-time, event-driven functions requires reimagining these legacy data models entirely. Agentic AI tools provide deep structural analysis of legacy data access patterns, identifying how the COBOL code reads and writes to the mainframe disks. The tools automate the data generation phase, transforming outdated hierarchical or network data models into modern, cloud-native relational or NoSQL database schemas that natively support microservices architectures. For example, AI agents can ingest a flat VSAM file structure, automatically normalize the data relationships, and generate the corresponding Data Definition Language (DDL) scripts required to instantiate a modern PostgreSQL or Amazon Aurora database environment.

Furthermore, these agents are instrumental in identifying hidden data invariants - the unwritten rules governing data integrity that were known only to the system's original developers. By syntactically analyzing the COBOL code that manipulates the data, the AI reverse-engineers the constraints and explicitly encodes them into the modern target architecture, ensuring that decades of data fidelity are strictly maintained across the migration.
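
A toy version of the schema-generation step described above: take field definitions recovered from a copybook-style record layout and emit PostgreSQL DDL, encoding one recovered invariant as a CHECK constraint. The field names, type mappings, and the invariant are all invented.

```python
# Toy data-modernization pass: turn copybook-style field definitions
# into PostgreSQL DDL, with a reverse-engineered business invariant
# expressed as a CHECK constraint. All names here are invented.
COBOL_TO_SQL = {"PIC 9(9)": "BIGINT", "PIC X(30)": "VARCHAR(30)", "PIC 9(7)V99": "NUMERIC(9,2)"}

fields = [  # recovered from the legacy copybook (illustrative)
    ("ACCT-ID", "PIC 9(9)"),
    ("ACCT-NAME", "PIC X(30)"),
    ("ACCT-BALANCE", "PIC 9(7)V99"),
]
invariants = ["acct_balance >= 0"]  # recovered from COBOL validation logic

def to_ddl(table: str) -> str:
    cols = [f"    {name.lower().replace('-', '_')} {COBOL_TO_SQL[pic]}" for name, pic in fields]
    checks = [f"    CHECK ({inv})" for inv in invariants]
    return f"CREATE TABLE {table} (\n" + ",\n".join(cols + checks) + "\n);"

print(to_ddl("accounts"))
```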

Bridging the Gap: Agentic AI as a Universal API

In specific operational scenarios where immediate code refactoring is practically impossible due to intense regulatory constraints, extreme system criticality, or a lack of modernization budget, agentic AI offers a profound alternative pathway: acting as an intelligent intermediary. Inside facilities like Amazon's Artificial General Intelligence (AGI) Lab, teams are training AI agents not on idealized, modern application programming interfaces, but on high-fidelity simulations of raw legacy systems.

These agents learn to navigate the exact quirks, input delays, error states, and invisible dependencies of 1990s green-screen terminal interfaces. By managing these eccentricities autonomously behind the scenes, the AI agent effectively becomes a "Universal API". Modern applications, SaaS platforms, and generative AI consumer tools can query this agentic layer using natural language or standard RESTful calls. The agent then autonomously navigates the legacy terminal, executes the required mainframe keystrokes, retrieves the data, and returns it in a formatted structure to the modern application. This integration pattern rapidly heals the systems that cannot be immediately replaced, allowing enterprises to extract immense value from siloed legacy data without waiting for a multi-year architectural overhaul to conclude.
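
Stripped to its essentials, the "Universal API" pattern is an adapter: a modern caller issues a structured request, the agent drives the terminal session, and structured data comes back. The terminal session below is a stand-in stub; the screen commands and field layout are invented.

```python
# Sketch of the "Universal API" pattern: a modern, structured call is
# fulfilled by an agent driving a legacy terminal session. The session
# here is a stub; real agents handle timing quirks and error screens.
class LegacyTerminalStub:
    def send_keys(self, keys: str) -> str:
        # Stand-in for green-screen navigation; returns a fixed screen.
        if keys.startswith("BAL "):
            return "ACCT 0042  BAL 0001234.56  OK"
        return "ERR UNKNOWN COMMAND"

def get_balance(terminal: LegacyTerminalStub, account_id: int) -> dict:
    """Modern, JSON-shaped facade over the terminal workflow."""
    screen = terminal.send_keys(f"BAL {account_id:04d}")
    if "ERR" in screen:
        raise RuntimeError(screen)
    balance = float(screen.split("BAL")[1].split()[0])
    return {"account_id": account_id, "balance": balance}

print(get_balance(LegacyTerminalStub(), 42))
```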

industry context

The AI Modernization Vendor Ecosystem and Platform Capabilities

The market for AI-driven legacy modernization has expanded explosively. As enterprises demand faster realization of value, the industry is transitioning away from traditional, labor-intensive system integrators toward highly automated, software-driven platforms. This diverse ecosystem is broadly divided into hyperscaler enterprise solutions, evolving global system integrators (GSIs), and highly specialized AI startups.

Hyperscaler Solutions: IBM and AWS

The major cloud and infrastructure providers have heavily productized agentic modernization, recognizing that unresolved technical debt is the primary friction point preventing massive cloud consumption.

IBM watsonx Code Assistant for Z. IBM has leveraged its deep, proprietary knowledge of the mainframe ecosystem to engineer a purpose-built AI assistant targeting the Z architecture. Fine-tuned extensively on mainframe-specific patterns - including Customer Information Control System (CICS) transactions, JCL job control, Db2 access patterns, and PL/I - watsonx provides comprehensive, end-to-end modernization lifecycle support.

It offers automated application discovery, natural language code explanation, and generative refactoring. A standout feature is its capability to transform COBOL directly into highly optimized Java "in minutes," while automatically generating unit tests that validate the semantic equivalence between the old and new codebases. Beyond mere translation, the platform includes the IBM AI Optimizer for Z, which conducts deep source-level analysis of COBOL modules to identify performance bottlenecks. Notably, IBM advises users of this tool to rely exclusively on CPU time as the primary performance metric, explicitly warning against tuning test data to manipulate elapsed time measurements, which are highly vulnerable to extraneous system variables. In internal deployments, IBM's own CIO Organization realized 10% faster incident resolution and a 50% reduction in the time required to patch Db2 systems by incorporating these AI capabilities into its workflows.

AWS Transform. Amazon Web Services (AWS) approaches the market with AWS Transform, a collaborative enterprise modernization workbench powered by agentic AI, designed to refactor mainframe, VMware, and Windows workloads. AWS Transform automates the categorization of legacy components (JCL, BMS, COBOL) and provides visual representations of complex application dependencies. It specifically supports the complex modernization of z/OS mainframe apps and Fujitsu GS21 applications, handling specialized formats like Presentation Service Access Method (PSAM) and Network Data Base (NDB) files.

The platform utilizes shared virtual workspaces where cross-functional engineering teams collaborate natively with AI agents via natural language chat. These agents autonomously handle assessments, codebase analysis, target database generation, and holistic transformation planning. Telecommunications giants like AT&T are actively utilizing AWS Transform to migrate massive mainframe environments to Java, employing generative AI to automate documentation and testing, thereby compressing modernization timelines from years to mere months while retaining human oversight at every crucial decision node.

| Platform | Primary Target Architecture | Differentiating Agentic Capabilities |
| --- | --- | --- |
| IBM watsonx Code Assistant for Z | IBM z/OS, COBOL, PL/I, Db2 | Native mainframe pattern training; automated semantic equivalence unit testing; CPU-based AI optimization. |
| AWS Transform | z/OS, Fujitsu GS21, VMware, Windows | Shared collaborative workspaces; automated visual dependency mapping; PSAM/NDB support. |
| Google Cloud Gemini Code Assist | Enterprise software portfolios | Gemini 2.5 Pro integration; automated routine task completion leading to 30% efficiency gains (e.g., Wipro deployment). |
| Microsoft Azure Migrate | Windows Server, SQL Server, Linux | Automated dependency analysis and discovery; native Azure ecosystem mapping; free migration tooling. |

The Evolution of Global System Integrators (GSIs)

Traditional GSIs are pivoting aggressively from labor-arbitrage models toward AI-accelerated frameworks. Enterprises are demonstrating deep dissatisfaction with legacy service models. Industry data reveals that 49% of enterprise leaders believe existing system integrator services focus too heavily on maintaining legacy systems through "armies of coders" rather than structurally eliminating complexity. Consequently, 74% of enterprise leaders explicitly expect the industry to pivot entirely toward highly autonomous, "Services-as-Software" delivery.

Venture capital firm Andreessen Horowitz (a16z) has explicitly highlighted this shift toward automated execution. As a16z investing partner Kimberly Tan notes regarding the evolution of modernization, "With LLMs, there is an opportunity to build a more intelligent RPA system that can contextually understand the inputs and actions it's taking and will be able to dynamically adjust to create a more robust solution" for legacy environments. Furthermore, a16z partner Sarah Wang emphasizes that AI is fundamentally restructuring modernization by "turning messy discovery (meetings, docs, tickets) into structured requirements, then auto-producing the implementation workstream: process and field mappings, config and code, test scripts, cutover plans, and migration playbooks." In response to this mandate, integrators like Publicis Sapient have developed delivery models such as Sapient Slingshot - built on the Bodhi agentic AI platform - designed to bind persistent context across the software development life cycle, ensuring every software artifact generated is grounded in organizational logic.

Other major integrators have developed similar proprietary platforms to retain market share. Infosys utilizes Topaz to enable AI-accelerated refactoring; Accenture deploys GenWizard for cross-industry modernization at a global scale; and Wipro leverages its HOLMES AI platform alongside generative AI to automate legacy environments, heavily focusing on operational efficiency.

| Global System Integrator | Flagship AI Modernization Platform | Core Modernization Strengths |
| --- | --- | --- |
| Publicis Sapient | Bodhi / Sapient Slingshot | Agentic AI workflow orchestration; persistent context binding across SDLC. |
| Accenture | myWizard / GenWizard | Enterprise-scale, multi-year program management; cross-industry AI acceleration. |
| Infosys | Topaz / Cobalt | AI-accelerated code refactoring; automated transition mapping. |
| Wipro | HOLMES AI | Automation-driven modernization; AI-powered migration accelerators. |

The Vanguard of Specialized Agentic Startups

While hyperscalers and GSIs provide broad ecosystem integration, a new vanguard of highly specialized AI startups is driving the leading edge of agentic legacy modernization. These firms are capturing market share by focusing obsessively on specific technical niches, highly regulated environments, or holistic operational automation.

  • Pit. Emerging from stealth in May 2026 with a $16 million seed funding round led by Andreessen Horowitz (a16z), this Stockholm-based startup takes a radically different approach to modernization. Rather than solely refactoring legacy code, Pit operates as an "AI product team as a service," aiming to replace the fragmented, manual workflows run on rigid legacy SaaS tools and spreadsheets. The AI-native platform (comprising Pit Studio and Pit Cloud) analyzes how organizations operate and automatically generates customized, production-grade software that integrates natively with existing systems. Underscoring the strategic vision of transforming legacy technical debt into scalable operational efficiency, co-founder Fredrik Hjelm stated, "We are addressing the global white collar TAM for business operations by turning human labour into digital labour".
  • Stride 100x. Engineered for high-stakes, highly regulated modernization efforts (such as complex .NET or core financial systems), Stride pairs proprietary GenAI tools with rigorous human engineering oversight. It focuses deeply on code and database tracing to generate modern architectural backlogs directly from legacy technical debt, ensuring auditability and compliance.
  • Rhino.ai. A speed-first entrant leveraging agentic AI and workflow automation to facilitate rapid legacy-to-cloud transitions. It specializes in schema transformation and automated application replatforming. However, analysts note it is best suited for organizations prioritizing rapid turnaround over complex, risk-managed dependency refactoring.
  • CloudFrame & Devox Software. Devox brings rigorous architectural system analysis, utilizing AI-powered tooling to uncover bottlenecks and restructure systems at the fundamental code level, providing a strong foundation prior to cloud-native stack transitions.
  • Mid-Market Innovators (ScalaCode, Simform, Fingent). These specialized firms deliver cloud-first, microservices-based modernization without the massive overhead of traditional GSIs, integrating AI for intelligent workflows and application re-platforming to serve the mid-market enterprise sector.
second-order effects

Risk Factors, Hallucinations, and the Gartner Reality Check

Despite the unprecedented capabilities of agentic AI platforms, legacy modernization remains a high-risk endeavor. The integration of non-deterministic artificial intelligence with highly deterministic, mission-critical financial and operational systems creates novel and highly volatile failure vectors. Consequently, Gartner has issued a stark prediction: by the end of 2027, over 40% of enterprise agentic AI projects will be canceled.

Understanding the root causes of these anticipated failures is essential for enterprise architects attempting to navigate this technological transition safely and securely.

The Threat of "Agent Washing" and Escaping the Hype Cycle

The primary driver of projected project failure is rampant vendor misrepresentation, a phenomenon Gartner formally terms "agent washing". Driven by intense market hype and capital influx, numerous software vendors are aggressively rebranding legacy robotic process automation (RPA) tools, basic predictive models, or simple conversational chatbots as "agentic AI". Gartner explicitly estimates that of the thousands of vendors currently claiming to offer agentic technology, fewer than 130 possess actual autonomous, multi-step reasoning capabilities.

When enterprises deploy these "agent-washed" tools against the immense, undocumented complexity of a COBOL mainframe, the tools fail catastrophically. They lack the maturity, environmental awareness, and computational agency required to parse millions of lines of intertwined logic or to execute nuanced, contingent instructions over long time horizons. Organizations expecting a "silver bullet" solution quickly find themselves mired in stalled proof-of-concepts that disrupt workflows, incur costly modifications, and ultimately offer zero tangible ROI.

Hallucinations and the Irreversible Loss of Institutional Memory

The second major risk factor stems from the fundamental architecture of Large Language Models: they are probabilistic engines prone to hallucination. When refactoring a standard contemporary web application, a minor AI hallucination may result in an easily patched user interface bug. However, when refactoring a core banking system that processes millions of financial transactions daily, an AI hallucination can silently corrupt financial ledgers or break critical compliance algorithms.

Decades-old legacy code is rarely clean or straightforward. It is riddled with undocumented logic, localized edge cases, and temporary patches that calcified into permanent infrastructure over the decades. Legacy code must not be viewed merely as "technical debt"; it is deeply encoded institutional memory. Mahesh Kumar Goyal, Senior Data and AI Expert at Google, explicitly warns that separating modernization strategies from AI initiatives is an "architectural lie". He cautions that "You can rewrite the code, but you can't rewrite the knowledge". If an AI agent autonomously refactors a COBOL module but hallucinates away an undocumented security check or a critical business invariant, the modernized system will inevitably fail in real-world deployment, potentially causing massive financial and reputational damage.
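One concrete defense against this failure mode is characterization ("golden master") testing: record the legacy system's observable behavior before any refactor, then require the AI-generated replacement to reproduce it exactly. The sketch below is illustrative only; the `legacy_interest` routine and its sub-dollar edge case are hypothetical stand-ins for the kind of undocumented invariant the text describes.

```python
# Characterization ("golden master") testing: capture the legacy system's
# actual outputs for representative inputs, then require any refactored
# replacement to reproduce them exactly before it can ship.

def legacy_interest(balance_cents: int, days: int) -> int:
    # Hypothetical stand-in for a decades-old routine, including an
    # undocumented edge case: balances under $1 accrue no interest.
    if balance_cents < 100:
        return 0
    return balance_cents * 5 * days // 36500  # 5% simple annual interest

def capture_golden_master(fn, cases):
    """Record input -> output pairs from the legacy implementation."""
    return {case: fn(*case) for case in cases}

def verify_refactor(candidate, golden_master):
    """Return the cases where the refactored code diverges from legacy."""
    return [case for case, expected in golden_master.items()
            if candidate(*case) != expected]

cases = [(50, 30), (100, 30), (1_000_000, 365), (99, 365)]
golden = capture_golden_master(legacy_interest, cases)

# An AI rewrite that "cleans up" the sub-dollar check fails verification:
def refactored_interest(balance_cents: int, days: int) -> int:
    return balance_cents * 5 * days // 36500

divergent = verify_refactor(refactored_interest, golden)
print(divergent)  # the omitted edge case surfaces before deployment
```

The point of the pattern is that the undocumented business invariant is caught by recorded behavior, not by anyone remembering it.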

The Absolute Necessity of the Human Governance Layer

To systematically mitigate the risk of catastrophic failure, successful agentic modernization absolutely mandates a strict "human-in-the-loop" governance model. AI agents are exceptionally capable at executing scale-heavy, labor-intensive tasks - such as mapping multi-layered dependencies, extracting syntax rules, and drafting thousands of unit tests - but they cannot and must not act as the final arbiters of enterprise business risk.

Nicholas Kathmann, Chief Information Security Officer at LogicGate, stresses that fast and responsible AI implementation is entirely dependent upon strict alignment to organizational governance, risk, and compliance (GRC) needs. Enterprise architects must serve as the ultimate gatekeepers. They must enforce system integrity, validate code-level consistency, and ensure that every AI-generated transformation strictly complies with security protocols and transaction flow requirements. Governance, rather than technical capability, is rapidly emerging as the single largest barrier to scaling AI-driven modernization across the enterprise.

The most successful and secure implementations utilize AI as a high-powered, tireless assistant that conducts the "heavy lifting," while human technical experts retain total, uncompromised steering control over the project's strategic direction and final code commits. Furthermore, emerging technical frameworks, such as the Model Context Protocol (MCP) and GraphRAG (Retrieval-Augmented Generation using Knowledge Graphs), are being heavily adopted to anchor these AI models. By grounding the probabilistic LLMs in highly structured, deterministic representations of historical systems, engineers can effectively suppress hallucinations and unlock accurate architectural insights that would otherwise remain dangerously obscured.
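A rough sketch of the GraphRAG grounding idea: retrieve a module's dependency neighborhood from a deterministic knowledge graph (for example, one built by static analysis of the legacy codebase) and inject it into the refactoring prompt, so the model cannot silently drop a dependency. The graph contents and module names below are invented for illustration.

```python
# GraphRAG-style grounding (sketch): anchor a probabilistic LLM in a
# deterministic dependency graph before asking it to refactor a module.

# Adjacency list standing in for a knowledge graph built by static analysis.
KNOWLEDGE_GRAPH = {
    "LEDGER-POST": ["AUTH-CHECK", "CURRENCY-CONV"],
    "AUTH-CHECK": ["AUDIT-LOG"],
    "CURRENCY-CONV": [],
    "AUDIT-LOG": [],
}

def neighborhood(module: str, depth: int = 2) -> set[str]:
    """Collect every module reachable within `depth` hops of `module`."""
    frontier, seen = {module}, {module}
    for _ in range(depth):
        frontier = {dep for m in frontier
                    for dep in KNOWLEDGE_GRAPH.get(m, [])} - seen
        seen |= frontier
    return seen

def grounded_prompt(module: str) -> str:
    """Build a prompt that pins the module's known contracts explicitly."""
    deps = sorted(neighborhood(module) - {module})
    return (f"Refactor {module}. It MUST preserve its contracts with: "
            + ", ".join(deps))

print(grounded_prompt("LEDGER-POST"))
```

Because the dependency list comes from the graph rather than the model's recall, an omission like the hallucinated-away security check discussed earlier becomes a detectable prompt-construction failure instead of a silent one.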

Root Cause of Failure | Manifestation in Modernization | Architected Mitigation Strategy
Agent Washing | Using basic RPA tools masquerading as AI to parse complex COBOL dependencies. | Rigorous vendor assessment; demanding proof of multi-step autonomous reasoning.
Logic Hallucinations | AI omitting undocumented edge cases or business invariants during refactoring. | Employing MCP and GraphRAG to ground models in deterministic legacy documentation.
Governance Deficits | Allowing AI to commit code directly without validation, violating compliance protocols. | Architects acting as mandatory gatekeepers; implementing "Compliance-as-Code" pipelines.
Unclear ROI / Hype | Launching pilot projects without tying AI modernization to specific business cost reductions. | Prioritizing enterprise productivity (cost, quality, speed) over theoretical technical capability.
opportunities

Strategic ROI, Executive Value, and the Future State

When executed with proper architectural governance and robust, genuinely agentic platforms, the financial and operational returns on legacy modernization are historically unprecedented. Organizations that successfully move from fragile mainframe monoliths to cloud-native, AI-ready platforms shift their IT spending from defensive, reactive maintenance to offensive, market-leading innovation.

Quantifiable Financial Returns and Accelerated Velocity

Extensive industry data demonstrates that strategic modernization yields compounding, multi-dimensional financial benefits. According to Kyndryl's 2025 State of Mainframe Modernization Survey, enterprises achieve an astonishing 362% Return on Investment (ROI) when successfully moving off the mainframe entirely into modern cloud architectures. Even partial or hybrid modernization strategies yield immense, rapid value; integrating mainframe systems with modern cloud architectures delivers a 297% ROI, while modernizing applications natively on the mainframe achieves a 288% ROI. These figures represent a massive increase from previous years, fueled directly by the lower project costs enabled by AI automation tools.
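For readers checking the arithmetic, ROI here is the standard ratio (benefits − cost) / cost, so a 362% ROI means every dollar invested returns $4.62 in gross benefits. A trivial sketch with hypothetical dollar amounts:

```python
# ROI = (total benefits - total cost) / total cost. The dollar figures
# below are hypothetical, chosen only to illustrate what a 362% ROI implies.

def roi(benefits: float, cost: float) -> float:
    return (benefits - cost) / cost

cost = 10_000_000              # hypothetical modernization spend
benefits = cost * (1 + 3.62)   # gross benefits implied by a 362% ROI
print(f"ROI: {roi(benefits, cost):.0%}")
```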

These high returns are driven by a dual-axis financial impact: drastically reduced project implementation costs (facilitated by agentic AI) and significant, permanent ongoing operational savings. In heavily regulated sectors, such as Banking, Financial Services, and Insurance (BFSI), successful modernization typically yields a 25% to 40% permanent decrease in annual infrastructure and software maintenance expenditures. Furthermore, organizations realize a 20% to 40% reduction in overall hardware and cloud resource costs as workloads are finally optimized for dynamic, elastic scaling.

Beyond pure capital reduction, agentic modernization unlocks unprecedented organizational agility. By successfully breaking down monolithic applications into discrete microservices, enterprises eliminate the profound bottlenecks associated with overnight batch-release cycles. Organizations routinely achieve 30% to 50% faster software release cycles post-modernization, allowing product development teams to respond to market shifts in real-time, matching the development pace of digitally native competitors. Furthermore, modernized architectures suffer 40% fewer systemic failures and recover from unexpected disruptions up to 5 times faster, drastically reducing the massive financial liability of Tier-1 application outages.

Macro-Economic Impact and AI Readiness

The implications of this modernization wave extend far beyond individual enterprise balance sheets. The IT industry is currently standing on the brink of one of its most transformative eras. IDC projects that by 2027, Global 2000 agent use will increase tenfold, with the number of actively deployed AI agents exceeding 1 billion worldwide by 2029. Supporting this massive transformation, worldwide spending on AI-supporting technologies will surpass $749 billion by 2028. Ultimately, IDC predicts investments in AI solutions and services are projected to yield a global cumulative impact of $22.3 trillion by 2030, representing approximately 3.7% of the global gross domestic product (GDP).

However, participating in this economic explosion requires foundational AI readiness. Legacy systems inherently trap critical enterprise data in inaccessible silos. Once data structures are normalized and architectures expose real-time, scalable APIs, organizations can reach true AI readiness within roughly 12 months. Modernized enterprises can safely and securely deploy predictive analytics, autonomous agentic workflows, and real-time generative AI models directly against their core operational data, unlocking net-new revenue streams and highly personalized customer experiences that were physically impossible on their legacy foundations.

The Modernization Maturity Pathway

To navigate this transition, organizations are adopting structured maturity models. The Anthropic Code Modernization Playbook outlines a precise four-level evolution for enterprise architecture:

Maturity Level | Operational Characteristics | Role of Artificial Intelligence
Level 1: Ad Hoc | Reactive firefighting; patching 30-year-old systems during outages; zero documentation. | None. High risk of catastrophic system failure and talent attrition.
Level 2: Planned | Siloed, budgeted conversions (e.g., manual COBOL-to-Java); heavy reliance on GSIs. | Basic generative AI for code snippets; manual verification required.
Level 3: Systematic | Dedicated modernization teams using standardized playbooks and tracking technical-debt scores. | Integrated AI assistants for documentation and syntax translation.
Level 4: Optimized | Continuous modernization pipeline; cloud-native microservices with total test coverage. | Autonomous AI agents proactively scan systems, understand intent, and automatically refactor code.

By advancing through these maturity levels, organizations ensure they are systematically eliminating the technical debt that hinders their strategic objectives, replacing it with a robust, AI-native infrastructure capable of sustaining decades of future innovation.

outlook

Conclusion

The enterprise technology landscape has reached a critical, unforgiving inflection point. The demographic collapse of the legacy engineering workforce, colliding with the exorbitant and compounding costs of technical debt, has transformed application modernization from a long-term corporate aspiration into an immediate, existential mandate. Organizations can no longer afford to layer highly sophisticated, predictive artificial intelligence upon brittle, undocumented, batch-processing foundations built in the preceding century.

Agentic AI has fundamentally redefined the mechanics, economics, and velocity of this transformation. By autonomously mapping complex dependencies, systematically translating syntax, and generating comprehensive testing suites, multi-agent systems eliminate the multi-year timelines and massive operational risks historically associated with architectural rewrites. The technological ecosystem - spanning hyperscaler solutions from AWS and IBM to specialized, hyper-focused platforms like Stride 100x and Claude Code - proves definitively that decades-old COBOL monoliths can be systematically fractured and rebuilt into agile, API-driven microservices.

However, success in this new paradigm requires extreme discipline. As market data indicates, organizations that succumb to hype-driven "agent washing" or attempt to deploy probabilistic AI models without stringent, architect-led human governance will face high failure rates and severe operational disruptions. The strategic imperative for executive leadership is to recognize that legacy code is not merely obsolete syntax - it is deeply encoded institutional memory that must be preserved. The objective is not to unleash AI to blindly rewrite history, but to utilize highly capable agentic systems as powerful surgical tools guided continuously by expert human architects.

By executing a governed, AI-accelerated modernization strategy, enterprises can successfully defuse the core system timebomb. In doing so, they not only solve an immediate demographic crisis but convert their heaviest structural liabilities into a highly resilient, data-fluid foundation uniquely positioned to dominate the incoming era of autonomous, AI-driven business operations.

methodology

Works cited

  1. Smash through tech debt: Why AI is the jackhammer - Publicis Sapient
  2. The Hidden Tech Crisis That Could Crash Your Bank Account Tomorrow: Mainframe Skills Gap in 2025 - Franklin Skills
  3. Transforming Legacy Systems with AI - Anthropic
  4. How agentic AI helps heal the systems we can't replace - Northwest Quantum
  5. COBOL in 2025: Legacy or Opportunity? - Sysmatch
  6. 2025 Legacy Code Stats: Costs, Risks & Modernization - Pragmatic Coders
  7. Legacy Mainframe Modernization: A Complete Guide for 2025 - Quinnox
  8. How GitHub Copilot and AI agents are saving legacy systems - The GitHub Blog
  9. How AI can fix government's legacy code problem - GitLab
  10. Leveraging AI to Modernize Legacy Code in Federal Civilian Agencies - ACT-IAC
  11. Average Global Enterprise Wastes More Than $370 Million Every Year Through Technical Debt - Pegasystems
  12. Turning technical debt into an AI enabler - IDC
  13. Why 2025 is the Year Tech Debt Becomes a Strategic Risk - Zartis
  14. How Generative AI Can Assist in Legacy Code Refactoring - ModLogix
  15. How Agentic AI is Redefining the Carrier-Agent Partnership - Patra
  16. Agentic AI Projects: 15+ Best Ideas, Tools and Source Code for 2025-2026 - NextAgile
  17. Accelerating Software Development - Harnessing Agentic AI - Infosys
  18. Agentic Software Engineering: Foundational Pillars and a Research Roadmap - arXiv
  19. Legacy System Modernization: Strategy, Cost & ROI in 2026 - Mobisoft Infotech
  20. Legacy Modernization Services: 40% Faster ROI & AI Readiness - Ciklum
  21. Modernization of mainframe applications - AWS Transform Documentation
  22. Reimagine your mainframe applications with Agentic AI and AWS Transform - AWS Blog
  23. How Generative AI can help with Legacy Code Refactoring? - Techolution
  24. IBM watsonx Code Assistant for Z announcement - IBM
  25. IBM watsonx Code Assistant for Z - IBM Products
  26. 12 Best Legacy Modernization Tools for Companies in 2026 - Launchpad.io
  27. Best practices - IBM watsonx Code Assistant for Z
  28. IBM CIO watsonx Assistant for Z - Case Studies
  29. Smash tech debt and get AI ready - AWS Transform
  30. AWS Transform for mainframe
  31. AWS for mainframe modernization: re:Invent 2025 guide - AWS Blog
  32. re:Invent 2025: AT&T's Mainframe Modernization and Legacy - Zenn
  33. AWS re:Invent 2025 - Accelerate Telco Transformation: AT&T (IND201) - YouTube
  34. Real-world gen AI use cases from the world's leading organizations - Google Cloud
  35. AI Code Generation: Definition, Uses and Tools - Google Cloud
  36. Top 10 AI-Driven Legacy Modernization Solutions to Watch in 2026 - Opteamix
  37. Top 10 Legacy System Modernization Companies in 2026 - Cyber Management Alliance
  38. Top 10 AI-Driven Legacy Modernization Platforms of 2025 - Stride Consulting
  39. AI-Driven Legacy Modernization at Scale with Rhino.ai - Carahsoft
  40. Top Application Modernization Companies in the USA - Reddit r/aistacks
  41. Gartner Predicts Over 40% of Agentic AI Projects Will Be Canceled by End of 2027 - Gartner
  42. AI Legacy Modernization: Why AI Alone Won't Fix Your Legacy Code - Board.org
  43. AI Appreciation Day Quotes and Commentary from Industry Experts in 2025 - Solutions Review
  44. Kyndryl's 2025 State of Mainframe Modernization Survey Report
  45. Agent Adoption: The IT Industry's Next Great Inflection Point - IDC
  46. IDC Unveils 2025 FutureScapes: Worldwide IT Industry Predictions - MyIDC
  47. AI-powered success - with more than 1,000 stories of customer transformation and innovation - Microsoft Cloud Blog