The Role of T+1 in Post-Trade Systems Quality Assessment

Dmitry Doronichev, Head of Post Trade, Exactpro

While a number of countries have successfully transitioned to the compressed settlement standard, many are yet to follow suit. In the meantime, market infrastructure operators on both sides of the transformation are impacted by occasional incongruent workflows, mostly resulting from discrepancies between post-trade and related processes, such as FX management or securities lending, taking place across several time zones. It is worth examining what adjustments are taking place on the side of the technology teams tasked with validating the quality of new post-trade platform setups. Does the impact compare in significance to the changes that clearing and settlement organisations have made or are about to undergo?

The T+1 transition collaboratively initiated by SIFMA, ICI and DTCC – backed by the SEC in the US and joined by the CDS and CIRO on the other side of the border – clearly defines the direct impact on business and operations activities. However, it does not formally prescribe how the change influences the configuration of frameworks and processes that ensure the quality of the systems involved. Determining the extent to which software testing operations are affected by T+1, in fact, reveals the strengths and vulnerabilities of the organisation’s underlying approach to quality assessment.

Heavily manual quality assurance operations, the kind still widespread in in-house industry set-ups and reliant on specifying tests step-by-step in Excel and executing them manually in the system's user interface, are likely to be affected both the least and the most. On the one hand, such teams may keep ploughing through manual scripts and introducing technical changes to them by hand. On the other, doing so will be incredibly time-consuming. With compressed clearing and settlement timeframes representing a global trend and a systemic shift, it is doubtful that this can be considered a competitive practice.

For venues that already have automated test libraries (built for T+2 or longer) set up, deployed and functioning as regression testing tools, validating a transition to T+1 will be much less of an effort. It will still not be as simple, however, as mass-replacing expected result values in test scripts or implementing any other typical change request. Disparate processes will have to be reconfigured and realigned so that the test framework matches the new lifecycle structure. In this case, test library optimisation would still involve a considerable amount of manual effort.

What is crucial to understand is that it is impossible to test the T+1 transition in isolation (i.e. as an add-on feature) in any given system, if it is to be done comprehensively and efficiently. Such an approach would ignore the fact that the quality of the system’s functions and all its interconnected processes should be verified holistically. That is why venues that already have an automated test library in place, and thus the ability to migrate it to the T+1 lifecycle, are much better positioned for the transition. Those that do not would have to take a step back and start by developing one from scratch. In a way, T+1 can be thought of as a measure that levels the playing field for quality assessment practices: clearing and settlement organisations that have not been dedicated to developing a streamlined software testing framework will be inclined to do so now.

The impact of T+1 on quality assessment is reduced even further where testing is a data-driven practice. This may encompass end-to-end (E2E) model-based testing (MBT) and automated input data generation, execution and results analysis using machine learning (ML, also known as subsymbolic artificial intelligence (AI)) methods.

In this approach, neither the input data nor the expected results are specified manually. They are generated automatically, based on the logic defined in the test input generators and on the model of the system under test (SUT), respectively. In the case of changes to the system logic at any scale, including the shortening of the settlement cycle, the only place where changes need to be introduced is the corresponding model code. Following the change, new test inputs and expected outcomes aligned with the T+1 cycle are generated as part of the usual test workflow.
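To illustrate the principle, here is a minimal, hypothetical sketch of a model fragment in which the settlement cycle length is a single parameter. The function names and the simplified calendar logic are illustrative assumptions, not part of any real platform: moving from T+2 to T+1 changes one value in the model, and every expected settlement date regenerated from it follows automatically.

```python
from datetime import date, timedelta

# Hypothetical model fragment: one parameter drives all expected
# settlement dates, so a T+2 -> T+1 change is a one-line model edit.
SETTLEMENT_CYCLE_DAYS = 1  # was 2 under T+2

def is_business_day(d: date) -> bool:
    """Weekends excluded; a real model would also consult a holiday calendar."""
    return d.weekday() < 5

def expected_settlement_date(trade_date: date,
                             cycle: int = SETTLEMENT_CYCLE_DAYS) -> date:
    """Roll forward `cycle` business days from the trade date."""
    d = trade_date
    remaining = cycle
    while remaining > 0:
        d += timedelta(days=1)
        if is_business_day(d):
            remaining -= 1
    return d

# A Friday trade under T+1 is expected to settle on the following Monday.
print(expected_settlement_date(date(2024, 5, 31)))  # 2024-06-03
```

Because expected values are derived rather than hard-coded, no test script has to be touched when the cycle changes; regenerating the library produces T+1-aligned expectations throughout.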

The resulting process is less time- and resource-consuming than the other approaches. Instead of engaging large numbers of specialists to support and change test cases and test scripts, it requires the involvement of fewer experts, who are tasked with model maintenance and results analysis. If a post-trade organisation does not yet have an efficient testing practice in place, developing a test library for a T+1 transition would inevitably involve investing in a model of its clearing and settlement system. Software testing processes that are already efficient, by contrast, only require model customisation. With an existing test library in place, adjusting the assessment of the system’s functioning in line with the T+1 standard includes the following steps:

  • Introducing changes to the SUT model’s code based on the new requirements;
  • Configuring input data and test scenario generation in line with T+1 priorities;
  • Iteratively improving the test library until the most comprehensive and yet resource-efficient version is achieved and analysing the effectiveness of the test coverage;
  • Checking if the expected values and system actions conform to what is received from the system while executing the test library against it;
  • Analysing the test results and configuring traceability between the business requirements and the test report items.
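The conformance-checking step above amounts to comparing model-generated expectations against what the system under test actually returns. The sketch below assumes hypothetical names and data shapes (`model_expected`, `sut_actual`) purely for illustration; it is not a description of any specific platform's interface.

```python
# Hypothetical conformance check: expected instruction states come from
# the model, actual ones from the SUT; mismatches are collected per trade.
def check_conformance(model_expected: dict, sut_actual: dict) -> list:
    """Return (trade_id, expected, actual) tuples where the SUT diverges."""
    mismatches = []
    for trade_id, expected in model_expected.items():
        actual = sut_actual.get(trade_id)
        if actual != expected:
            mismatches.append((trade_id, expected, actual))
    return mismatches

model_expected = {"T001": {"settle": "2024-06-03", "status": "SETTLED"},
                  "T002": {"settle": "2024-06-03", "status": "SETTLED"}}
sut_actual = {"T001": {"settle": "2024-06-03", "status": "SETTLED"},
              "T002": {"settle": "2024-06-04", "status": "PENDING"}}  # still on T+2

print(check_conformance(model_expected, sut_actual))
```

In a real framework, each mismatch would carry traceability metadata linking the failing check back to the business requirement it verifies, supporting the results-analysis step above.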

The result of end-to-end modelling would be an automated test library covering the system’s functionality. How comprehensive and efficiently automated that library is determines how quickly E2E testing of a given implementation can start and how quickly it can be completed. Using ML methods enables deeper system exploration and leads to a more extensive test suite accounting for more unique parameter combinations and more versatile edge cases, with faster, more resource-light results triage and analysis.

To best prepare for a T+1 transition, whether it arrives in their location in 2025 or 2027, clearing and settlement technology operators should be concerned with developing an efficient integrated test library and ensuring its comprehensive coverage of their system. In that case, a major part of testing would be taken care of before the transition happens and, once the systems are transition-ready, a limited scope of the test library can be fine-tuned to fit the new schedule.