Managing Margin (Part I) – Five Key Challenges from a System-Wide View
In today’s markets, margin requirements are a critical component of assessing trades and pursuing investment strategies. Broker/dealers, prime brokers, and the buy-side clients they serve are demanding ever-increasing levels of risk awareness and transparency with respect to margin calculations. In this article, TS Imagine discusses the key challenges in this arena; future articles in this blog series dedicated to margin management will dive deeper into the specifics of solving them.
Computing, optimizing and monitoring real-time margin requirements across a multitude of instruments and customer accounts is a Herculean task. It requires assembling a massive array of interdependent data elements along with rules and calculations that all have to be aligned simultaneously, like an enormous Rubik’s Cube. Then, one twist – a new trade or a market move – and the requirements have to be recomputed. Among the key challenges facing those responsible for this effort: gathering, storing and updating all of the necessary data; implementing multiple sets of margin rules; generating various analytics; and providing transparency, actionable reporting and a clear understanding of what drives the output.
The business case for managing margin requirements correctly and efficiently is clear – the results have a direct impact on profitability and customer relationships, and violations can be costly. At the same time, the process can be pull-your-hair-out frustrating. Therein lies an opportunity, as doing this well has a myriad of benefits. In this first in a series of articles on Margin Requirements, we provide an overview of why this process is so complex, and what it takes to do it successfully.
The Five Key Challenges
#1 – Data.
Calculating margin is incredibly data-intensive. For starters, a firm must have position-level data on all relevant trades as well as existing positions. Also needed: current market data, terms and conditions for each security, data on collateral positions, data to drive the margin calculations and related analytics such as VaR (which has its own data requirements), a global Legal Entity Identifier (LEI) database, information about Credit Support Annexes (CSAs), haircut rules, etc. Those who are charged with accessing, scrubbing, and updating all of these scattered inputs, and storing them in historical datasets to support audit and compliance requirements, usually feel they are drowning in data.
But that’s just the beginning. The data must be readily usable by the margin calculation engine. It should be structured efficiently, in compatible databases that can “talk” to each other; otherwise, the calculations will be cumbersome, generating results will take too long, and it will be difficult or even impossible to drill down beyond a top-level number, or to recompute quickly enough to be useful when the inputs change.
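To make the data-assembly problem concrete, here is a minimal sketch of what a margin engine’s inputs might look like when gathered into one versioned snapshot. The field names and the `MarginInputs` structure are our own illustration, not a real TS Imagine schema; the point is that the inputs are interdependent and must be validated together before any calculation runs.

```python
from dataclasses import dataclass

@dataclass
class MarginInputs:
    """One snapshot of the data a margin run depends on (illustrative only)."""
    positions: dict    # instrument id -> quantity (trades + existing positions)
    market_data: dict  # instrument id -> latest price
    terms: dict        # instrument id -> terms & conditions
    collateral: dict   # asset id -> posted amount
    haircuts: dict     # asset id -> haircut (e.g. 0.15 = 15%)
    as_of: str         # snapshot timestamp, kept for audit/compliance history

    def missing_prices(self):
        """Flag positions with no usable market data before a run starts."""
        return [i for i in self.positions if i not in self.market_data]

inputs = MarginInputs(
    positions={"ESZ5": 10, "CLF6": -5},
    market_data={"ESZ5": 4800.0},
    terms={}, collateral={}, haircuts={},
    as_of="2024-06-30T16:00:00Z",
)
print(inputs.missing_prices())  # ['CLF6'] -> this position cannot be margined yet
```

A check like `missing_prices` is the kind of upfront validation that prevents a margin run from producing a number that silently ignores part of the book.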
#2 – Transparency.
Firms want the ability to drill down into summary-level margin numbers to answer the question, “what positions/clients/etc. are driving this result?” That is, in theory, a perfectly reasonable goal. Unfortunately, in practice it is often difficult to achieve. Top-level reports that show “The current margin requirement for ABC Exchange = $X” may provide no way for trading desks, risk managers, and others who rely on these outputs to drill down into them. Which clients are largely responsible for a given result? What types of exposures caused a large increase in margin? Which exposures were grouped together? If positions were moved to another group to reduce overall margin requirements, what was moved and why? A lack of transparency in margin systems often makes it almost impossible to answer these questions.
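The mechanical prerequisite for this kind of drill-down is that the system retains position-level detail rather than only the aggregate. A toy sketch (client names and margin amounts are invented for illustration):

```python
from collections import defaultdict

# Hypothetical position-level margin contributions. A transparent system
# keeps this leaf-level detail, so the top-line number can always be
# decomposed on demand instead of being reported as an opaque total.
position_margin = [
    {"client": "Fund A", "position": "ES futures", "margin": 1_200_000},
    {"client": "Fund A", "position": "CL futures", "margin": 300_000},
    {"client": "Fund B", "position": "ES futures", "margin": 500_000},
]

def drill_down(rows, key):
    """Aggregate leaf-level margin along any dimension (client, position, ...)."""
    out = defaultdict(int)
    for row in rows:
        out[row[key]] += row["margin"]
    return dict(out)

total = sum(row["margin"] for row in position_margin)
print(total)                                  # 2000000
print(drill_down(position_margin, "client"))  # {'Fund A': 1500000, 'Fund B': 500000}
```

When only `total` is stored, the questions in the paragraph above become unanswerable; when the rows are stored, every aggregate is just a view over them.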
#3 – Workflow.
When approaching any problem that involves a number of calculations, the natural inclination is to construct the solution linearly – e.g., “to get the answer, first compute A, then B and then C.” Calculating margin, however, is inherently non-linear: the result of one step can branch the workflow in different directions; an input may depend on an additional set of calculations that, to maximize efficiency and reduce redundancy in the code base, should be carved out into a separate routine; and a grouping choice can lead to a different answer. To determine margin requirements optimally, one must design for this non-linearity up front (including how to access the data needed for the calculations on each branch – see #1 above). As a simple example, a system must be aware of both in-house rules and exchange rules, calculate both, take the larger of the two, and branch accordingly. The system’s architecture should accommodate and control for client-specific detours without starting over from scratch; without this, the entire exercise becomes cumbersome to the point of failure. Structuring a non-linear workflow with complex data requirements that is adaptable, maintains transparency, and has safeguards against manipulating how a given client’s margin is determined is a major challenge.
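The in-house-versus-exchange example above can be sketched in a few lines. The rule functions here are placeholders with invented rates; real rule sets are far more involved, but the branching shape is the same:

```python
def exchange_margin(position_value: float) -> float:
    """Placeholder exchange rule set: a flat 8% of absolute position value."""
    return round(0.08 * abs(position_value), 2)

def house_margin(position_value: float) -> float:
    """Placeholder in-house rule set: a stricter 10% requirement."""
    return round(0.10 * abs(position_value), 2)

def required_margin(position_value: float) -> float:
    exch = exchange_margin(position_value)
    house = house_margin(position_value)
    # Branch point: whichever requirement binds drives everything downstream
    # (collateral checks, client reporting, escalation workflow).
    return max(exch, house)

print(required_margin(1_000_000))  # 100000.0 -> the house rules bind here
```

Note that both branches must be computed before the workflow can proceed, which is exactly why a linear “first A, then B, then C” design breaks down once rule sets, groupings, and client-specific detours multiply.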
#4 – Reporting.
Results of margin calculations should be presented in a way that provides useful, actionable information that goes beyond “this is the P&L per client, this is how much margin they need, this is how much we have, this is the surplus/shortfall”. Those who make decisions based on the results of margin calculations want reports that help them answer questions such as “which are the most important positions contributing to this number?” and “which positions or clients represent a large long/(short) net exposure to a spike in a given risk factor (e.g., oil prices, short-term EUR or USD interest rates, etc.)?” It is also critical to understand the extent of margin coverage, i.e., the ratio of attributed margin to risk, so that traders and managers know in advance whether there is a sufficient cushion and whether a threshold is in danger of being crossed, so that risk mitigation can be pursued.
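As a sketch of what threshold-aware coverage reporting can look like, the snippet below monitors one common notion of coverage – collateral held relative to margin required – with an early-warning band so mitigation can start before a breach. Client names, amounts, and the 1.25 warning threshold are all invented for the example:

```python
def coverage_report(clients, warn_ratio=1.25):
    """Return (client, coverage ratio, status) rows for a book of clients.

    `clients` maps name -> (margin required, collateral held).
    Status is OK above warn_ratio, WARN between 1.0 and warn_ratio,
    and SHORTFALL below 1.0 (under-collateralized).
    """
    report = []
    for name, (margin_req, collateral) in clients.items():
        ratio = round(collateral / margin_req, 2)
        status = "OK" if ratio >= warn_ratio else (
            "WARN" if ratio >= 1.0 else "SHORTFALL")
        report.append((name, ratio, status))
    return report

clients = {
    "Fund A": (1_500_000, 2_100_000),  # comfortable cushion
    "Fund B": (500_000, 550_000),      # cushion eroding
    "Fund C": (800_000, 760_000),      # under-collateralized
}
for row in coverage_report(clients):
    print(row)  # ('Fund A', 1.4, 'OK'), ('Fund B', 1.1, 'WARN'), ('Fund C', 0.95, 'SHORTFALL')
```

The value of a report like this is the WARN row: it surfaces a threshold in danger of being crossed while there is still time to act.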
#5 – Attribution.
The concept of an attribution analysis for margin calculations is poorly understood and often confused with “transparency” (see #2 above). Transparency deals with “what” – it allows stakeholders to drill down from a summary number to the position-level, if need be, to see which clients and positions comprise that result. In contrast, an attribution analysis focuses on “why”.
Conceptually, an attribution analysis for margin calculations is similar to attribution analyses that deconstruct a portfolio’s total return or tracking error, or explain VaR or total risk. The goal in all of these cases is to understand the key drivers of a given outcome as a function of changes in market levels, exposures and other key factors. However, techniques used in those other analyses are not readily applicable to a margin attribution analysis. For example, the “marginal contribution” approach that is often used to decompose total risk (i.e., “how would the total risk number change if this exposure increased by X”) does not satisfactorily address the question of how a margin requirement would be affected if a given exposure were to change by a small amount. In this context, attribution requires an approach that recognizes the nature of offsets that is so central to margin calculations.
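A toy illustration of the offset point (our own construction, not a production attribution methodology): under a netting-based rule, margin is driven by the absolute *net* exposure of a group, so a short that hedges a long can carry a large negative incremental contribution – something a small-perturbation “marginal contribution” view does not surface well.

```python
def margin(exposures):
    """Toy netting rule: 10% of the absolute net exposure of the group."""
    return round(0.10 * abs(sum(exposures.values())), 2)

book = {"long ES": 1_000_000, "short ES hedge": -600_000}

with_hedge = margin(book)
without_hedge = margin({k: v for k, v in book.items() if k != "short ES hedge"})
hedge_contribution = with_hedge - without_hedge

print(with_hedge)          # 40000.0
print(without_hedge)       # 100000.0
print(hedge_contribution)  # -60000.0 -> removing the hedge would RAISE margin
```

The hedge’s incremental contribution is negative: taking it off would increase the requirement from 40,000 to 100,000. An attribution approach for margin has to account for these offsets explicitly rather than treating each position’s contribution as independent.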
Awareness is the first step
If you are new to the challenges of calculating and managing margin, we hope this has been enlightening. If your firm has been struggling with these issues, know that you are not alone. Often, so much time and effort has been spent patching together sub-par workarounds to shortcomings in a margin system that those who are in the trenches feel oddly committed to sticking with what they have despite its deep flaws – “We’ve put so much into this, we have to keep on with it,” is the comment we’ve often heard. But that’s the trap of looking at a sunk cost as though it might somehow help to guide a decision going forward – it interferes with the ability to step back and admit that a current approach is simply not up to the task.
At TS Imagine, our team’s first-hand experience informs the way we have designed a real-time margin system to address these key issues from the ground up, and in upcoming articles we will dive into the items we summarized above.
To speak with one of our margin experts, please contact us.
The Margin Series
HVaR – by Dr Lance Smith, Chief Strategy Officer, TS Imagine. Historical VaR (HVaR) has become a standard measurement of risk, in which a current portfolio is subjected to the market conditions of a prior day and the resulting P&L is recorded.
HVaR assumes that present market dynamics are captured in past behavior – so what should we do when the world changes? Since Russia’s invasion of Ukraine, commodity markets have entered new trading patterns, and Lance Smith, TS Imagine’s Chief Strategy Officer, explains some of the tools that can be used to adapt HVaR without disturbing other risk factors.