By Donal Byrne, CEO of Corvil
In the world of modern trading, latency has emerged as a key factor in determining the success of automated trading and market-making strategies. This has driven an unprecedented level of investment in low-latency technologies and infrastructure by the world’s leading exchanges, banks, electronic market makers, and service providers to financial markets. All are seeking a relative speed advantage over their competitors.
Against this backdrop, the landscape of high-performance trading is changing rapidly. Competition has intensified. Cost and complexity continue to escalate while any latency advantage is being eroded. The potential for future regulation of the sector fuels uncertainty. This forces market participants to consider more carefully their decisions about strategy, tactics, and the level of investment needed to succeed in modern trading:
- Which markets should I trade?
- Do I need to co-locate?
- Should I take direct feeds?
- Which service providers should I use?
- How will I verify the latency?
- How much will it all cost?
These are just a sample of the questions that need to be addressed, and answering them requires access to high-quality information about speed. It is our view that in the not-so-distant future, information about speed will become more valuable than speed itself. Why? It is very unlikely that you will be the fastest to every market you wish to trade, all of the time. It is far more reasonable to expect that you could be the fastest to some of the markets some of the time.
Therefore it is critical to understand when you are likely to be fast and when you are likely to be slow across all paths of your trading network. If you know this, then you can decide where and when you should trade and increase the overall probability of achieving the desired fill.
It is also critical to understand the underlying performance parameters of the trading services you use, e.g. market data and Direct Market Access (DMA). The right information about these services enables you to consume them more efficiently, improving the probability of being fast more often. Those with access to this level of latency information, and the ability to use it, can therefore gain a competitive advantage. This drives the requirement for comprehensive, relevant latency statistics to be provided to market participants with full transparency. This is the motivation behind the creation of LatencyStats.com.
Our journey with LatencyStats.com begins with market data. We start here because high-speed order execution is no guarantee of success if one’s market data is slow. If competing traders receive market data updates faster, they can react first and capture the fill opportunity. In the world of high-performance trading, a tape price is not guaranteed to still exist by the time an order is sent to and received by the market center; the market may have moved in the elapsed time. Market data latency can therefore be thought of as a proxy for the likelihood that the tape price still exists when you are ready to send an order.
The specific objectives we are looking to achieve with LatencyStats.com are:
- PROVIDE A NEW LEVEL OF LATENCY TRANSPARENCY FOR DIRECT MARKET DATA. Traditionally the industry has done a poor job of providing latency statistics for direct market data feeds, mainly because of the technical difficulty of measuring the one-way latency of multicast market feeds with microsecond accuracy between geographically diverse locations. Our approach is to measure and publish the one-way latency, with microsecond precision, from the market data source to the client hand-off point. Average, peak, and 99th-percentile statistics are published for a variety of important time scales, including the busiest 1-minute interval of the trading day. To our knowledge, this is the first time this level of latency information has been made available to consumers of market data feeds.
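To make the statistics above concrete, here is a minimal sketch in Python of aggregating one-way latency samples into per-minute average, peak, and 99th-percentile figures. The sample format and the nearest-rank percentile method are illustrative assumptions, not a description of how the site’s published numbers are actually produced.

```python
def percentile(sorted_vals, p):
    """Nearest-rank percentile of an already-sorted, non-empty list."""
    idx = max(0, int(round(p / 100.0 * len(sorted_vals))) - 1)
    return sorted_vals[idx]

def minute_stats(samples):
    """samples: iterable of (timestamp_seconds, latency_us) pairs.
    Returns {minute_index: (average, peak, p99)} in microseconds."""
    buckets = {}
    for ts, lat in samples:
        # Group samples into 1-minute buckets by integer minute index.
        buckets.setdefault(int(ts // 60), []).append(lat)
    stats = {}
    for minute, lats in buckets.items():
        lats.sort()
        stats[minute] = (sum(lats) / len(lats), lats[-1], percentile(lats, 99))
    return stats
```

Identifying the busiest 1-minute interval of the day is then simply a matter of selecting the bucket with the most samples (or the highest peak).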
- PUBLISH NEW PERFORMANCE PARAMETERS THAT ENABLE CLIENTS TO OPTIMALLY CONSUME DIRECT MARKET DATA FOR LOW LATENCY. One example is answering the question of how much bandwidth is needed to consume a specific market feed without introducing excess latency. Many providers today publish recommended bandwidth levels, but without any guideline on the latency to expect if that amount of bandwidth is provisioned. The difficulty is that latency grows with traffic load. Market data feeds often have highly variable message rates and can spike over short timescales. These spikes, referred to as microbursts, induce latency at bandwidth choke points. Our approach is to continuously measure the bandwidth needed so that latency never exceeds a specified bound. The initial version of the site publishes the bandwidth required such that no more than 1 ms of latency will be added. This again is the first time such information has been made available to consumers of market feeds.
Please refer to our white paper “Latency Transparency for Market Data” for more details on the information published on the site and how to use it. We greatly welcome your feedback and comments and hope that you find the site useful.