Before committing to a multi-million-euro legacy modernization program, no one wants to sign off on functional parity they cannot prove. We delivered ten internal POCs to calibrate the method BEFORE any client mission. Here are the numbers and the five learnings that stabilized ATLAS Legacy.
On a COBOL mainframe in production for twenty or thirty years, most business rules are no longer in the documentation. They are in the code, sometimes in branches of conditional logic accumulated over several generations of maintainers long gone.
The CIO who wants to modernize this system faces a question simple to formulate and dreadful to answer: how do you prove, after migration, that the new system behaves exactly like the old one? Without a clear answer, the program cannot start. With a vague answer, it is certain to derail during validation.
Our conviction: before proposing a program to a client, the method had to be calibrated on public code, without commercial pressure. Ten POCs later, we have a contractual answer.
Deliberately public sources — IBM carddemo, GenApp, CBSA, Raptor (COBOL), DGFiP property tax (Delphi), BizTalk Insurance (BizTalk Server). No NDA, no sensitive client code, just legacy code representative of the families our clients ask us to modernize.
Each POC follows the same protocol: defined scope, characterization tests written BEFORE migration, AI-assisted pattern-by-pattern conversion (Claude for reasoning, Copilot for autocomplete), parallel legacy/target runs, signed discrepancy registry.
Duration per POC: one to three weeks. Typical team: 1 senior architect + 1 target developer + AI assistance. Technical target: TypeScript on Cloudflare Workers (for ease of public deployment) or Azure Logic Apps for BizTalk.
35,524 lines analyzed in total across the ten POCs. 39 COBOL patterns stabilized (PERFORM loops, indexed files, subprograms called via CALL, environment sections, FILE declarations, EBCDIC handling, etc.) and 9 BizTalk patterns (orchestrations, XSLT maps, send and receive pipelines, schemas).
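To make "pattern" concrete, here is a minimal sketch of what one stabilized conversion rule produces. The COBOL fragment and the TypeScript names are illustrative, not extracts from the POC library:

```typescript
// Illustrative COBOL source (not from the actual POC corpus):
//
//   01 WS-TOTAL    PIC 9(7)V99 COMP-3.
//   PERFORM VARYING WS-I FROM 1 BY 1 UNTIL WS-I > WS-COUNT
//       ADD LINE-AMOUNT(WS-I) TO WS-TOTAL
//   END-PERFORM
//
// The "PERFORM VARYING" pattern maps to a bounded for-loop. COMP-3 fields
// are fixed-point decimals: amounts are kept as integer cents (bigint) to
// reproduce COBOL truncation behavior, never as IEEE floats.
function sumLineAmounts(lineAmountsCents: bigint[]): bigint {
  let totalCents = 0n; // WS-TOTAL, scaled by 10^2 like PIC 9(7)V99
  for (let i = 0; i < lineAmountsCents.length; i++) {
    totalCents += lineAmountsCents[i];
  }
  return totalCents;
}
```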
44 discrepancies identified and traced across the ten POCs — that is 0.12% of the lines. Each discrepancy documents a behavior where the target code does not exactly reproduce the legacy — typically COMP-3 arithmetic side effects, EBCDIC vs ASCII sort conventions, character-handling edge cases. No discrepancy affected a major production case.
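The EBCDIC vs ASCII family is easy to reproduce. A minimal sketch using three real CP037 code points:

```typescript
// EBCDIC (CP037) orders lowercase < uppercase < digits; ASCII is the
// reverse. Three real code points are enough to show the divergence.
const EBCDIC_CP037: Record<string, number> = { a: 0x81, A: 0xc1, "1": 0xf1 };

const byEbcdic = (x: string, y: string) => EBCDIC_CP037[x] - EBCDIC_CP037[y];

const keys = ["A", "1", "a"];
console.log([...keys].sort());         // ASCII/UTF-16 order: [ '1', 'A', 'a' ]
console.log([...keys].sort(byEbcdic)); // EBCDIC order:       [ 'a', 'A', '1' ]
// A target system sorting a mainframe extract "naturally" silently
// reorders records — exactly the kind of gap the registry traces.
```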
8 demo applications are now live, executable, with their discrepancy registry and test suite: portfolio, carddemo, cbsa, genapp, carddemo-ext, raptor-invoice, dgfip-property-tax, biztalk-insurance-migration. See the ATLAS methodology and the COBOL to Java path.
One — Vibe coding accelerates conversion 2x to 3x, but does not replace Discovery. AI is efficient on recognizable patterns. It stays silent on unwritten business rules, which must be extracted by humans. Cutting Discovery short because AI speeds up conversion = guaranteed validation failure.
Two — The client signs a discrepancy registry, not a number of lines. The contractual deliverable is not "we migrated 100,000 lines". It is "here are the 47 gaps between legacy and target, each documented, arbitrated, and signed by your program committee". That signature unblocks production.
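What one registry entry contains is easy to sketch. The field names below are illustrative, not the contractual ATLAS Legacy schema:

```typescript
// Illustrative shape of one discrepancy registry entry.
interface DiscrepancyEntry {
  id: string;                       // e.g. "DISC-0042"
  pattern: string;                  // e.g. "COMP-3 rounding on ADD"
  legacyBehavior: string;           // observed legacy behavior
  targetBehavior: string;           // what the target does instead
  evidence: string[];               // test cases, comparative logs, screenshots
  impact: "none" | "minor" | "major";
  arbitration: "accept-as-is" | "fix-target" | "pending";
  signedBy?: string;                // program committee member, once arbitrated
  signedOn?: string;                // ISO date of sign-off
}
```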
Three — Characterization tests are written BEFORE migration, never after. Without an upstream behavioral reference, parity becomes an opinion. Three POCs started without this discipline; all three had to redo the capture phase, doubling the timeline.
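A characterization test does not assert what the code should do; it pins down what the legacy actually does. A minimal sketch with Node's built-in test runner; `runTargetBatch` and the fixture paths are hypothetical:

```typescript
import { test } from "node:test";
import assert from "node:assert/strict";
import { readFileSync } from "node:fs";

// Hypothetical harness: replays a captured legacy input and compares the
// migrated implementation against the recorded legacy output — the golden
// file captured BEFORE migration, on the frozen legacy version.
import { runTargetBatch } from "./target/batch"; // hypothetical module

test("invoice batch reproduces legacy output byte for byte", async () => {
  const input = readFileSync("fixtures/invoice-2023-10.input.dat");
  const golden = readFileSync("fixtures/invoice-2023-10.legacy.out");
  const actual = await runTargetBatch(input);
  assert.deepEqual(actual, golden); // any difference becomes a registry entry
});
```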
Four — Parallel legacy/target runs remain the only definitive proof. No big bang. For two to six months, legacy and target run side by side, fed by the same inputs, compared on their outputs. Any divergence becomes a ticket. It is expensive in infrastructure, and it is what saves you from a production catastrophe.
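Conceptually, a parallel run is a diff loop over paired outputs. A simplified sketch (function names hypothetical; real programs compare files, queues, or database snapshots rather than plain strings):

```typescript
// Simplified parallel-run comparator: both systems receive the same
// inputs, and every output divergence is turned into a ticket.
type Runner = (input: string) => Promise<string>;

async function compareRuns(
  inputs: string[],
  legacy: Runner,  // hypothetical adapter exposing the mainframe's output
  target: Runner,  // the migrated implementation
  openTicket: (input: string, legacyOut: string, targetOut: string) => void,
): Promise<number> {
  let divergences = 0;
  for (const input of inputs) {
    const [legacyOut, targetOut] = await Promise.all([legacy(input), target(input)]);
    if (legacyOut !== targetOut) {
      divergences++;
      openTicket(input, legacyOut, targetOut); // every divergence becomes a ticket
    }
  }
  return divergences;
}
```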
Five — Discovery quality drives target code quality, not the other way around. A rushed POC upstream cannot be salvaged by good engineering downstream. If you want a predictable program, pay for Discovery. It is the investment with the highest ROI we have measured.
Fuzzy scope. "We migrate the invoicing module" without specifying which transactions, which datasets, which screens. The POC must define in writing what is in and what is out before the first converted line.
Client refusal to freeze ground truth. If the legacy keeps evolving in parallel with the POC, the migration chases a moving target. Either we freeze a version for the POC duration, or we accept that the POC measures nothing.
Unkept AI promises. "AI will migrate everything alone" is a sales statement, not a technical protocol. The vendors who promise it do not deliver programs in production. Our method: AI-assisted, human-responsible, signed discrepancy registry.
No tests. Without characterization tests, you do not migrate — you speculate. Three weeks of well-written tests beat three months of blind migration.
Ten internal POCs allowed us to stabilize a method (ATLAS Legacy), a pattern library (39 COBOL + 9 BizTalk), and a discrepancy library (44 documented cases). This is the methodological capital you buy when you engage Access on a modernization program.
For a client program, the protocol is the same as on the public POCs — adapted to your legacy estate, your target, your governance. Intake scoping is free. A POC on your scope can be delivered in two to six weeks before committing to the full program. See the ATLAS Legacy product and delivery models.
**Two to six weeks** depending on volume and complexity. A single-user Delphi application of 50,000 lines: 3 to 4 weeks. A COBOL mainframe module of 10,000 lines with copybook dependencies and JCL: 4 to 6 weeks. The Discovery phase typically represents half the duration — that is normal and healthy.
The POC proves three things: (1) the technical target is viable on your specific legacy estate; (2) real AI-assisted productivity is measurable and reproducible; (3) the expected discrepancy ratio is known. These three measures allow scoping the full program with a precision unreachable without a POC.
**Your program committee**, after internal arbitration. Access delivers the registry filled with evidence (test cases, screenshots, comparative logs), but the decision to accept or request correction stays with you. That is what makes the deliverable contractual: Access does not self-validate.
That is precisely the value of the POC: transforming uncertainty into a number. If the functional scope turns out to be three times more complex than expected, the full program is rescoped on that basis, or its perimeter is revised. **Better to discover this in a 50 k€ POC than in a 5 M€ program.**
**No.** AI recognizes syntactic patterns but does not understand why a business rule exists or why an exception was added in 2008. Our senior architects remain indispensable to arbitrate ambiguities, validate structural choices, and sign the discrepancy registry. AI accelerates writing, humans remain responsible.
Three levers: (1) **comprehensive documentation** delivered with the program — identified patterns, arbitrated discrepancies, automated test suite; (2) **pair-programming** with your teams during the final weeks; (3) **transferable pattern library** your teams can reuse on future evolutions. At delivery, you are autonomous.
**No, they are public by construction.** Sources deliberately taken from open code (IBM carddemo, GenApp, CBSA, BizTalk Insurance) precisely so the methodology can be shared without NDA. You can consult the live demos and the discrepancy registry of each POC. On your missions, NDA applies normally.
**Yes, and it is even recommended on multi-application programs.** One POC per subsystem identifies specific complexities of each functional domain and calibrates the global scoping more precisely. The additional cost of parallel POCs is largely offset by program-scoping precision.
We frame every program at Intake, with transparent budgeting. A short POC of a few weeks can be delivered before committing to the full program.
Launch a POC on my scope →