Phil Penhaligan
Head of Equity Product Technology (LSE and Turquoise) at London Stock Exchange
Low Latency: What are we measuring?
- 3 distinct attributes (each sketched in code below)
  - Measured from just outside the LSE/TQ firewalls
    - Excludes transmission time to/from the external client/venue
    - Excludes latency of the client/venue's own system(s)
    - Excludes network packet retransmissions due to slow consumers
  - Order latency
    - Time from order receipt to order acknowledgement
    - Commonly referred to as Order:Ack
  - Market Data latency
    - Time from order acknowledgement to public broadcast
    - Commonly referred to as Ack:Tick
  - Reference price latency (Turquoise Dark Book only)
    - Time from tick receipt to matching-engine usage
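A minimal sketch of the three measurements, assuming capture timestamps taken just outside the firewalls (the function and field names are illustrative, not the production capture code):

```python
def order_ack_us(t_order_received: float, t_ack_sent: float) -> float:
    """Order latency (Order:Ack): order receipt -> order acknowledgement."""
    return t_ack_sent - t_order_received

def ack_tick_us(t_ack_sent: float, t_tick_broadcast: float) -> float:
    """Market data latency (Ack:Tick): acknowledgement -> public broadcast."""
    return t_tick_broadcast - t_ack_sent

def ref_price_us(t_tick_received: float, t_engine_used: float) -> float:
    """Reference price latency (TQ Dark Book only): tick receipt -> matching-engine usage."""
    return t_engine_used - t_tick_received
```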
- 2 key metrics (computed in the sketch below)
  - Average
    - Mathematical mean
  - Consistency
    - Max 99.9th percentile
    - Discard the worst 0.1% of orders and then take the maximum
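A minimal sketch of both metrics over a day's latency samples (assuming samples in microseconds; `latency_metrics` is an illustrative name):

```python
import math

def latency_metrics(samples_us: list[float]) -> tuple[float, float]:
    """Return (average, consistency) for a set of latency samples.
    Average = mathematical mean.
    Consistency = max 99.9th percentile: discard the worst 0.1%, take the max."""
    average = sum(samples_us) / len(samples_us)
    # Sort ascending and keep the best 99.9%, discarding the worst 0.1%
    kept = sorted(samples_us)[:math.ceil(len(samples_us) * 0.999)]
    consistency = kept[-1]
    return average, consistency
```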
- Production Systems
  - Continuous real-time capture
  - Historical data stored in a database for statistical analysis and reporting
- QA Systems
  - New software releases evaluated during technical tests (a sketch of such a gate follows below)
    - To confirm/verify expected latency improvements
    - To detect unexpected latency regressions
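One way to frame that evaluation, as a minimal sketch (the 5% tolerance and all names are illustrative assumptions, not LSE's actual tooling): compare the candidate release's metrics against the current production baseline.

```python
def release_gate(baseline: tuple[float, float],
                 candidate: tuple[float, float],
                 tolerance: float = 1.05) -> bool:
    """Each tuple is (average, consistency) in microseconds.
    Pass only if neither key metric regresses beyond the tolerance."""
    base_avg, base_cons = baseline
    cand_avg, cand_cons = candidate
    return cand_avg <= base_avg * tolerance and cand_cons <= base_cons * tolerance

# e.g. baseline (120us avg, 400us 99.9p max) vs candidate (110, 380) -> passes
print(release_gate((120.0, 400.0), (110.0, 380.0)))
```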
Low Latency: How Fast Is Fast Enough?
- Customer feedback
  - LSE/TQ averages are already fast enough; better consistency is most important!
- Our Focus
  - Since the initial deployment of Millennium (Turquoise Oct 2009, LSE Feb 2010):
    - Order:Ack average has improved by 30% and consistency by 90%
    - Multicast Ack:Tick has improved by 50%
  - We continue to focus on improving consistency (see later slides for details)
- My Observations
  - Since the migration to MillenniumIT, exchange latency is an order of magnitude lower than the latency within client applications plus transmission to/from our venues
  - Therefore, for the majority of clients, who are not co-located, further improvements to exchange latency will have little impact
    - E.g. if the exchange accounts for roughly a tenth of the end-to-end path, a 10% improvement in exchange latency is only a ~1% improvement for the client (worked through below)
  - Co-location customers whose algorithms interact exclusively with LSE/TQ markets have the most to gain from further reductions in latency
  - Our latency is a tiny fraction (<0.0005%) of human reaction times (typically <200 ms)
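A worked version of that observation, with illustrative (not published) numbers for a non-co-located client's round trip:

```python
# Illustrative round-trip breakdown in microseconds; the split, not the
# absolute values, is the point: the exchange is ~10% of the path.
client_app_us = 500.0     # client's own systems
transmission_us = 400.0   # network to/from the venue
exchange_us = 100.0       # exchange Order:Ack

before = client_app_us + transmission_us + exchange_us
after = client_app_us + transmission_us + exchange_us * 0.9  # 10% faster exchange
print(f"end-to-end improvement: {100 * (1 - after / before):.1f}%")  # -> 1.0%
```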
Low Latency: Who really needs it?
- Clients/Participants

Category | Least Sensitive | Most Sensitive |
---|---|---|
Manual Trading | Brokers, 'High Touch' Traders | Arbitrage Traders, Portfolio Traders |
Automated Trading | Retail Service Providers (RSP), DMA Clients | Market Makers |
Algorithmic Engines & Smart Order Routers | Remote Algorithms (e.g. VWAP, POV, IS), typically driven more by historical data curves than real-time ticks | HFTs, Best Execution SORs, Co-lo Algorithms (e.g. arbitrage, momentum) |
- The Venue/Exchange
  - Venues with the lowest-latency market data are at the front of the queue when BBO updates occur
  - The quality of dark book trades depends on reference price latency
  - Builds a strong technology reputation
  - Generally good for sales & marketing
Low Latency: What's the next best thing?
- Consistency
  - Fewer outliers, with those outliers closer to the average, rather than simply an ever-lower average latency, is the most important feature for any trading strategy, whether manual or automated
  - If your system cannot compete on average latency, you can still improve it by making it more consistent
- Resilience
  - Closely tied to consistency is resilience: an unresilient system will never be able to provide consistent results!
Low Latency: Technology
 | MIT/LSE/Turquoise | Customers |
---|---|---|
How did we get here? | | |
What next? | | |
Performance & Capacity Management - KPIs
# | KPI | Requirements / Target |
---|---|---|
1 | Total Daily Transactions | Max (4 x average, 2 x peak) |
2 | Total Daily Trades | Max (4 x average, 2 x peak) |
3 | Order Latency | Agreed levels per market |
4 | Market Data Latency | Agreed levels per market |
5 | Reference Price Latency | Agreed levels per market (TQ only) |
6 | Transactions per second (1 sec peak) | Max (4 x average, 2 x peak) |
7 | Transactions per second (10 sec average) | Max (4 x average, 2 x peak) |
8 | Transactions per second (60 second average) | Max (4 x average, 2 x peak) |
9 | System Availability | 100% |
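The "Max (4 x average, 2 x peak)" targets read as a headroom rule: provision for whichever is greater, four times the observed average or twice the observed peak. A minimal sketch (names and example figures are illustrative):

```python
def capacity_target(observed_average: float, observed_peak: float) -> float:
    """Headroom rule from the KPI table: provision for the larger of
    4x the observed average and 2x the observed peak."""
    return max(4 * observed_average, 2 * observed_peak)

# e.g. 5M transactions/day average, 12M/day peak -> target of 24M/day
print(capacity_target(5_000_000, 12_000_000))
```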
Performance & Capacity Management
- Each system is technically tested with every software release to:
  - Prove KPI levels
  - Reconfirm/prove behaviour and known breaking points
- Preliminary tests take place on pre-production
- At least one test cycle takes place on the actual production hardware on a weekend
Golden Rules

1. A client message will always get a valid response (ack or nack); a minimal sketch of this rule follows below
2. Any component that is taken so far above its KPI level that it fails must do so gracefully, and the system must continue to obey golden rule 1
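A minimal sketch of golden rule 1, assuming a simplified gateway (`handle_order`, `Response`, and the exception handling are illustrative assumptions, not MillenniumIT's API): every inbound message yields exactly one valid response, even when downstream processing fails.

```python
from dataclasses import dataclass

@dataclass
class Response:
    order_id: str
    accepted: bool       # True = ack, False = nack
    reason: str = ""

def handle_order(order, matching_engine) -> Response:
    """Golden rule 1: a client message always gets a valid response."""
    try:
        matching_engine.submit(order)
        return Response(order.order_id, accepted=True)
    except Exception as exc:
        # Graceful degradation: a nack, never silence (golden rule 2)
        return Response(order.order_id, accepted=False, reason=str(exc))
```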
- Because we know the system/component limitations & bottlenecks, we can manage growth as the markets evolve