The Art of Failure: Predicting Financial Distress Pre-Crisis
October 3, 2016 - Monocle Research Department
It is generally agreed amongst economists, bankers, politicians and the man on the street that the 2007/8 financial crisis was the worst economic crisis to befall western economies since the Great Depression of the 1930s. Some call it the Great Recession and some call it the Financial Crisis, but its full impact is not yet understood.
During the Great Depression, policy makers took far too long to respond adequately to the severe stock market decline of 1929. By contrast, in the case of the 2007/8 crisis, policy makers in western economies, in G8 countries and across the world reacted extremely quickly, through extreme monetary interventions as well as through some – most likely insufficient – fiscal interventions. This was entirely necessary, given that confidence not only in the markets but in the very notion of capitalism itself was under attack. Several commentators even questioned the long-term viability of capitalism as a political economic system.
In order to save the world from itself, central banks pumped enormous amounts of liquidity into the market, while regulators wrote thousands of pages of legislation requiring banks to hold more capital and more liquid assets and to meet far more exacting standards than previously. To a large degree, these alterations to a previously laissez-faire economic playground have been successful. For one thing, the more extreme predictions that immediately followed the initial crisis in 2008 have not come to pass. Europe, although growing at a very slow pace, is still growing. Asian economies have not collapsed and the US is actually growing reasonably well.
What is masked by these extreme acts of government intervention, however, are the basic statistical realities of what fundamentally changed within the global banking system. Some very simple statistics are not as transparent as one would like them to be from an analytical perspective. For example, if one were to ask how many of the world's top 1000 banks failed during and after the 2007/8 crisis, one would find a relative absence of information on the subject. The main reason is that – following the repercussions of allowing Lehman to fail – policy makers used a variety of interventions to prevent further explicit failure. From these interventions emerged the phrase ‘Too Big to Fail’. Lehman’s failure, and the extreme market impact that followed, all but destroyed market confidence, leading policy makers to react with measures never before contemplated. From a monetary perspective in particular, the extent of the interventions clouded over the breadth and depth of banking and corporate failure – and to some extent continues to do so, not only within individual banks but across the banking fraternity itself.
Monocle Solutions, in its continued research efforts, became particularly interested in whether the new banking legislation written and codified by the Basel Committee on Banking Supervision (BCBS) post-crisis – in what was known as Basel 2.5 and then Basel III – would effectively address the main causes of the crisis in the first place. As an example, we have noticed that far more regulation appears to be imposed upon the banks than on the peripheral corporations that helped to manufacture the crisis to the severity that it reached. The credit rating agencies, for example S&P and Moody’s – agencies that are paid by the very banks whose debt and debt instruments they rate – seem vastly less affected by regulation than the banks themselves. Yet they issued many thousands of triple A-rated stamps of approval on collateralized debt obligation (CDO) structures that later exploded.
To further this point with a recent example: the $14bn fine that the US Justice Department intends to levy against Deutsche Bank, for its negligence in selling toxic mortgage-backed securities (MBS), could leave Deutsche Bank severely undercapitalised and may even lead to its failure. This would affect not only the jobs of those who work at Deutsche Bank, but could have a severe systemic impact. Note that these fines target Deutsche Bank specifically but not the credit agencies that rated these securities.
In essence, if one boils down the BCBS regulations, ex-post their imposition on the banking system, they effectively achieve three things. Firstly, they almost double the amount of core tier-1 equity capital that must be held against the risk-weighted asset loan book of a bank. Secondly, they raise the standard of that capital, limiting it to only the purest form, i.e. unencumbered equity, excluding instruments that are pseudo-capital in nature. Thirdly, they introduce liquidity ratios with which banks must now comply. Specifically, the two critical ratios are the Liquidity Coverage Ratio (LCR) and the Net Stable Funding Ratio (NSFR), which the BCBS constructed to address the ultimate cause of failure in the majority of banks that did fail.
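The two liquidity ratios can be sketched schematically. The LCR divides a bank's stock of high-quality liquid assets (HQLA) by its projected net cash outflows over a 30-day stress horizon; the NSFR divides available stable funding by required stable funding; both must reach at least 100%. The figures below are purely hypothetical, and a real calculation applies the detailed BCBS haircuts and run-off weights to each balance-sheet item:

```python
# Illustrative sketch of the two Basel III liquidity ratios.
# All inputs are hypothetical aggregates; actual calculations weight
# each asset and liability class per the BCBS rules text.

def liquidity_coverage_ratio(hqla, net_outflows_30d):
    """LCR = HQLA / net cash outflows over a 30-day stress period (>= 100%)."""
    return hqla / net_outflows_30d

def net_stable_funding_ratio(available_stable_funding, required_stable_funding):
    """NSFR = available stable funding / required stable funding (>= 100%)."""
    return available_stable_funding / required_stable_funding

lcr = liquidity_coverage_ratio(hqla=120.0, net_outflows_30d=100.0)
nsfr = net_stable_funding_ratio(available_stable_funding=105.0,
                                required_stable_funding=100.0)
print(f"LCR:  {lcr:.0%}")   # 120%
print(f"NSFR: {nsfr:.0%}")  # 105%
```

The difficulty discussed later in this note lies not in the division itself but in obtaining the weighted inputs, which are not publicly disclosed in the form required.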
To be clear, whilst most banks during the crisis experienced severe pressure on their loan books and the value of their assets, leading them to substantially increase provisions to absorb the forthcoming losses, their failure as institutions was primarily due to an inability to meet immediate liability demands.
In fact, it is technically incorrect to say that banks failed from a credit crisis, since most of these credit losses were experienced by banks as revaluations of their asset books rather than as realised losses. The effect of the credit crisis, and the extreme devaluation of MBSs and CDOs, led market participants in the interbank market – i.e. banks themselves – to turn on their own. We witnessed during the crisis the severe effects of banks hoarding cash, and an unwillingness of these same banks to accept from counterparty banks any collateral that was of less than the highest quality. This meant that MBS and CDO paper, even if triple A-rated, was not accepted by banks as collateral in repo-style transactions. This led particular banks to experience severe short-term liquidity shortfalls, which led in some cases to extreme government intervention and in other cases to failure.
It was, at the outset of this study, our supposition that banks whose liability structures were more reliant on the interbank market prior to the crisis would have been more likely to fail. Several problems present themselves in attempting such an analysis. The first is the difficulty of establishing a stable definition of default, owing to the extreme interventions performed by governments in different ways across the world. It is very difficult to say, for example, whether Royal Bank of Scotland (RBS) failed or did not fail. It was certainly bailed out. Certain banks definitely failed, however, for example Landsbanki in Iceland.
Goldman Sachs, as a further example – which was not in fact a bank at the time but was forced post-crisis to become a bank holding company – was, according to its CEO Lloyd Blankfein, forced to take bailout money from the Troubled Asset Relief Program (TARP). This $700bn bailout fund was set up by Hank Paulson, a former CEO of Goldman Sachs.
From an analytical perspective, therefore, it was essential to settle upon a clear definition of default that we could use in our study of the post-crisis frequency of bank default, as well as of which ratios might have been indicative of failure pre-crisis. The second problem is that the rules created by the BCBS for the LCR and NSFR were based on new underlying information forming either the numerator or the denominator of those ratios. That information was not publicly available prior to the Financial Crisis, and has not been publicly available since – in all the forms needed to reconstruct those ratios for banks – unless one had insider information.
As such, we decided to address these two issues in the following manner. Firstly, through literature research, we came across a McKinsey study – Working Papers on Risk, Number 15 (2009): Capital Ratios and Financial Distress: Lessons from the Crisis. This study had been conducted on the same basis as ours, i.e. to examine whether financial ratio analysis would have been a good indicator of financial distress post-crisis. It examined a sample of 115 banks, their financial distress outcomes post-crisis, and the ability of their ratios to predict those outcomes. The criterion for inclusion was a minimum asset size of $30 billion; together the sample represented $62.2 trillion in total assets – about 85 percent of developed-market banking assets and 65 percent of total banking assets worldwide. Broker-dealers were excluded from the analysis, as data on risk-weighted assets for such institutions in December 2007 was unavailable. Performance between 1 January 2008 and 1 November 2009 was used to identify banks that became distressed.
McKinsey defined a bank as financially distressed if any one of the following four criteria was met: the bank had declared bankruptcy, been placed into government receivership, been acquired under duress, or received bailout capital in excess of 30% of its tier-1 equity.
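The four criteria reduce to a simple disjunction: three binary events plus one threshold test on bailout capital. A minimal sketch of the classification rule follows; the field names are our own invention, not McKinsey's:

```python
# Sketch of the McKinsey financial-distress test described above.
# A bank is flagged FD if any one of the four criteria holds.
# Dictionary keys are hypothetical labels for illustration only.

def is_financially_distressed(bank):
    bailout = bank.get("bailout_capital", 0.0)
    tier1 = bank.get("tier1_equity", float("inf"))
    return (
        bank.get("declared_bankruptcy", False)
        or bank.get("government_receivership", False)
        or bank.get("acquired_under_duress", False)
        or bailout > 0.30 * tier1  # bailout exceeding 30% of tier-1 equity
    )

# A bank whose bailout equalled 40% of its tier-1 equity is flagged FD.
print(is_financially_distressed({"bailout_capital": 4.0, "tier1_equity": 10.0}))  # True
# One whose bailout was only 20% of tier-1, with no other trigger, is not.
print(is_financially_distressed({"bailout_capital": 2.0, "tier1_equity": 10.0}))  # False
```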
A range of financial and risk ratios were then calculated as of 31 December 2007 to determine their ability to predict financial distress. McKinsey found that the tangible common equity (TCE) to risk-weighted assets (RWA) ratio was the best predictor of future distress.
Whilst we believe this was a good starting point, we were concerned that McKinsey had not established a clear default frequency for the top 1000 banks. In fact, we could find no study of the frequency of default across the world’s top 1000 banks following the largest single financial crisis since the Great Depression. We therefore conducted initial analysis to calculate the default frequency of banks during and after the crisis based on the McKinsey definition. We also decided to use a longer observation period, from 2008 through to 2010, and to cover the full top 1000 banks.
For each of the 1000 banks on The Banker List (2007), an investigation was conducted into the status of the bank from 1 January 2008 through to 30 June 2010, identifying whether it met any of the McKinsey criteria for financial distress in that period. Banks were classified as either financially distressed (FD) or non-financially distressed (NFD). The results are surprising – of the top 1000 banks investigated, 106 conform to the definition of financial distress. This implies that the effective default frequency for banks during the Financial Crisis of 2007/8 was in excess of a staggering 10%. To put this number into perspective, recall that a poorly performing mortgage book in a classic retail banking environment would have shown a default frequency of the order of 2-3% at the height of the crisis. The default frequencies that caused the ripple effect into the MBS and CDO markets were of the order of 10-15% in the worst-performing areas of the United States, such as Las Vegas. Banks themselves, therefore, experienced previously unheard-of default frequencies, comparable only to the defaults that occurred post 1929.
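The headline figure follows directly from the counts above:

```python
# Default frequency across the top 1000 banks, per the McKinsey
# definition of financial distress applied over 2008-2010.
distressed_banks = 106
total_banks = 1000
default_frequency = distressed_banks / total_banks
print(f"{default_frequency:.1%}")  # 10.6%
```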
The second area of interest was the ratios themselves. The Monocle Research Team decided to return to fundamental ratio analysis from first principles. In 1966, William H. Beaver used financial ratios with a univariate technique to predict financial distress. He classified financial distress as bankruptcy, insolvency, liquidation for the benefit of a creditor, default on loan obligations, or missed preferred dividend payments.
Beaver’s technique accurately classified 78 percent of the sample of distressed firms up to five years prior to failure. His research concluded that the cash flow to total debt ratio was the single best indicator of bankruptcy. To overcome many of the inconsistencies found in Beaver’s research, Edward I. Altman improved on Beaver’s univariate model in 1968 by introducing a multiple discriminant approach. He found five financial ratios to be significant predictors in the financial distress prediction model: working capital to total assets, retained earnings to total assets, earnings before interest and taxes to total assets, market value of equity to book value of total debt, and sales to total assets.
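Altman combined these five ratios into a single discriminant score, the z-score, using the fixed coefficients from his 1968 paper (1.2, 1.4, 3.3, 0.6, 1.0). A sketch with hypothetical balance-sheet figures:

```python
# Altman's 1968 z-score for industrial firms. The coefficients are
# Altman's published discriminant weights; the example inputs are
# hypothetical figures for illustration.

def altman_z(working_capital, retained_earnings, ebit,
             market_value_equity, sales, total_assets, total_debt):
    x1 = working_capital / total_assets       # liquidity
    x2 = retained_earnings / total_assets     # cumulative profitability
    x3 = ebit / total_assets                  # operating profitability
    x4 = market_value_equity / total_debt     # leverage
    x5 = sales / total_assets                 # asset turnover
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

z = altman_z(working_capital=50, retained_earnings=100, ebit=40,
             market_value_equity=300, sales=500,
             total_assets=1000, total_debt=400)
# Altman's original cut-offs: Z < 1.81 "distress zone", Z > 2.99 "safe zone".
print(round(z, 3))  # 1.282 - inside the distress zone
```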
Banks and financial institutions in particular are generally more highly leveraged than the industrial firms Beaver and Altman studied. We could not therefore make use of the same ratios, or the same coefficients on those ratios, to create anything like the Altman z-score. In order to create our own database and conduct the necessary analysis, the income statements and balance sheets of a sample of banks were extracted, constructed and normalised into a single common format. The sample consisted of 20 financially distressed banks selected from the top 1000 banks according to size and demographics, and 20 non-financially distressed banks chosen randomly from the top 60 non-financially distressed banks on the Banker List. It is essential to note that the selection process was not perfectly scientific and was based on our desire to observe outcomes in different demographic regions of the world. Our total sample of 40 banks was also small, owing to the time constraints involved in normalising banks’ income statements and balance sheets into a single common format. Further research is currently underway within our research team, and these should be noted as preliminary results.
We found that of all the ratios we examined – more than nine in total – by far the strongest was customer loans to deposits. This makes perfect sense, because the ratio effectively captures the concept of reliance on the interbank market. Recall that a bank, in its liability structure, meets its asset demand either through deposits made by corporate or individual customers or through the interbank market. We can therefore deduce that the higher the ratio of total loans to customer deposits, the more reliant a bank’s liability structure is on the interbank market.
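The deduction above can be made concrete. Any loan book in excess of customer deposits must, by the balance-sheet identity, be funded elsewhere – predominantly the interbank or wholesale market. A minimal sketch, with hypothetical figures:

```python
# Loans-to-deposits ratio: the strongest pre-crisis distress indicator
# found in our sample. Figures are hypothetical.

def loans_to_deposits(customer_loans, customer_deposits):
    """Ratio above 100% implies the loan book is funded beyond customer
    deposits, i.e. greater reliance on interbank (wholesale) funding."""
    return customer_loans / customer_deposits

# A bank lending 130 for every 100 of deposits must fund the 30 gap
# elsewhere - typically in the interbank market.
ratio = loans_to_deposits(customer_loans=130.0, customer_deposits=100.0)
print(f"{ratio:.0%}")  # 130%
```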
Based on the limited sample size, we conclude that had the extent to which banks were funded by interbank loans versus customer deposits been known pre-crisis, it would have been a powerful indicator of future financial distress, with an accuracy level of 80% as at 31 December 2007.
The ultimate findings of the study are, firstly, that 10.6% of banks failed – an extremely high proportion, and one for which we could find no prior primary research. Secondly, customer loans to deposits is significantly indicative of risk and would have been a very useful indicator prior to the Financial Crisis.
It goes without saying that the LCR and NSFR, the latter in particular, are extremely difficult for banks to comply with. To a large degree these ratios conflict with the original notion underlying the business of banking – that is, to use an upward-sloping, convex yield curve to arbitrage on tenor, earning interest rate differentials to make a profit.
Given the current environment of very low interest rates in advanced economies, we believe this study indicates that a simpler ratio could have been used by policy makers as early as 2008/9 instead of the LCR and NSFR. Such a ratio might have had a less deleterious effect on banks’ funding structures. Ultimately, the combination of the NSFR and the radical levels of monetary intervention – which have led to very low interest rates – has had extreme effects on the profitability of banks. If these conditions persist for another 5 to 10 years, it is questionable whether banks will be able to attract sufficient equity to remain attractive to investors. A simple ratio such as customer loans to customer deposits could be a potential solution to avoid banking ultimately becoming a utility.
We wish to acknowledge that the primary research was conducted by a Master’s student of North-West University’s Business Mathematics and Informatics (BMI) faculty over a six-month period, working closely with Monocle Solutions. We are grateful for all the support and assistance from the University’s professors.