Research

In May 2012, the Basel Committee on Banking Supervision (BCBS) issued the first in a series of consultative documents aimed at revising the then-current Market Risk Framework. The purpose of the proposed amendments is to improve the requirements for, and calculation of, regulatory risk capital for the trading book. These proposals are collectively referred to as the Fundamental Review of the Trading Book (FRTB). The ensuing industry feedback, along with a hypothetical portfolio exercise and an interim impact study, culminated in the finalised standards being published in January 2016 under the title Minimum Capital Requirements for Market Risk. The new standards replace the existing framework, which represents 20 years of Market Risk regulation. The BCBS implementation deadline was originally set for January 2019, but has subsequently been moved out to January 2022, with the first reporting disclosures expected in the same month.

 

A review of the regulations was necessary because the framework was deemed weak and inadequate, following lessons learned during the 2008 Financial Crisis. One area that required revision was the boundary between the banking and trading books. The old regulation lent itself to being exploited by institutions, which could create arbitrage opportunities by transferring assets between the banking and trading books to game capital requirement rules. Furthermore, there was no logical or natural relationship between the capital calculated using the Standardised Method and the Internal Model Approach, leading to large discrepancies between these two approaches. These discrepancies contributed to the need for a universal method of calculating a bank’s capital charge.

 

The following table summarises the main FRTB changes and objectives:

Trading vs banking book boundary

  1. Limits regulatory arbitrage
  2. Clear designation of instruments
  3. Clearer guidance on internal risk transfers

New Standardised Approach (replaces the Standardised Measurement Method)

  1. Sensitivities-based approach
  2. Default risk charge
  3. Residual risk add-on
  4. Closer calibration with the IMA structure

Internal Model Approach (IMA)

  1. Expected shortfall with liquidity horizons
  2. Default risk charge and stressed expected shortfall
  3. Constraints on hedging and diversification
  4. Desk-level model approval/eligibility test

 

 

The BCBS has redefined the classification of instruments between the banking and trading books. The new definitions are more prescriptive, less ambiguous and, thus, less open to interpretation. The new regulations aim to promote a common, objective understanding among regulators as to what types of instruments belong exclusively in either of the two books. Previously, the distinction was based mainly on a subjective definition of “intent” – an intention to trade versus an intention to hold until maturity. The definitions now include explicit position descriptions and characteristics for both the trading and the banking book. Trading characteristics include instruments held for short-term resale, to profit from short-term price movements, to lock in arbitrage profits, or to hedge risks arising from any of these activities. Instruments presumed to be held for these purposes include those resulting from underwriting commitments, those forming part of a correlation trading portfolio, and those giving rise to net short credit or equity positions in the banking book. Instruments to be included in the banking book include unlisted equities, real estate, retail credit and equity funds.

 

Previously, the calculation of capital requirements for market risk was addressed through two approaches: the Standardised Measurement Method (SMM) and the Internal Model Approach (IMA). Under the new standard, the changes to the SMM are so fundamental and comprehensive that they constitute a new approach, called the Standardised Approach (SA). The SA will be relevant to all banks, regardless of whether they also adopt the IMA, as it will serve as both a floor and a fallback to the IMA. As such, the SA is designed to share a common risk data infrastructure with the IMA and to be sufficiently risk sensitive without compromising its implementation by banks with more conservative trading operations.

 

Standardised Approach

The most notable change from the SMM is that the main SA charge is based on risk sensitivities within all the broad asset (or risk) classes, and includes two additional components, namely the Default Risk Charge (DRC) and the Residual Risk Add-on (RRA).
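For illustration only, the sketch below shows how a sensitivities-based delta charge of this kind can be aggregated: weighted sensitivities are combined within a bucket and the bucket charges are then combined across buckets. The risk weights, correlations and bucket structure here are hypothetical placeholders, not the calibrated FRTB parameters.

# Illustrative sketch of a sensitivities-based delta charge for one risk
# class, with assumed (non-regulatory) risk weights and correlations.
import math

def bucket_charge(weighted_sens, rho):
    """Aggregate weighted sensitivities within one bucket."""
    total = sum(ws * ws for ws in weighted_sens)
    total += sum(rho * ws_k * ws_l
                 for i, ws_k in enumerate(weighted_sens)
                 for j, ws_l in enumerate(weighted_sens) if i != j)
    return math.sqrt(max(total, 0.0))

def delta_charge(buckets, gamma):
    """Aggregate bucket-level charges across buckets."""
    k = {b: bucket_charge(ws, rho) for b, (ws, rho) in buckets.items()}
    s = {b: sum(ws) for b, (ws, _) in buckets.items()}
    total = sum(kb * kb for kb in k.values())
    total += sum(gamma * s[b] * s[c]
                 for b in buckets for c in buckets if b != c)
    return math.sqrt(max(total, 0.0))

# Hypothetical example: two buckets of sensitivities already multiplied by
# their risk weights, with assumed intra- and inter-bucket correlations.
buckets = {
    "bucket_1": ([120.0, -45.0, 30.0], 0.5),   # (weighted sensitivities, rho)
    "bucket_2": ([80.0, 20.0], 0.5),
}
print(round(delta_charge(buckets, gamma=0.25), 2))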

 

 

Internal Model Approach

The most notable change in the revised IMA is that it now calls for market risk to be calculated and reported via a single Expected Shortfall (ES) metric at a 97.5% confidence level, replacing the well-known Value at Risk (VaR) and Stressed VaR metrics at 99% confidence. The aggregated ES values for modellable risk factors, together with a Stressed Capital Add-on (non-modellable or stressed ES), form the basis of the capital charge for market risk. There is also a Default Risk Charge component, as in the SA, and an SA Calculated Charge (CC) for unapproved desks.
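As a rough illustration of the change in metric, the sketch below computes a 99% VaR and a 97.5% ES on the same simulated P&L distribution. The normal P&L assumption is purely for the example; in the revised IMA the ES would be computed on stressed scenarios with liquidity horizon adjustments.

# Minimal sketch contrasting 99% VaR with 97.5% expected shortfall on a
# simulated (normally distributed) daily P&L series.
import numpy as np

rng = np.random.default_rng(42)
pnl = rng.normal(loc=0.0, scale=1_000_000, size=250_000)  # simulated daily P&L

losses = -pnl
var_99 = np.quantile(losses, 0.99)                             # 99% VaR: loss quantile
es_975 = losses[losses >= np.quantile(losses, 0.975)].mean()   # 97.5% ES: average tail loss

print(f"99.0% VaR: {var_99:,.0f}")
print(f"97.5% ES : {es_975:,.0f}")

For a normal distribution the two numbers are close, which is one reason the 97.5% ES level was chosen; the difference becomes material for fat-tailed distributions.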

 

 

Of further importance is the incorporation of varying liquidity horizons into the revised SA and IMA to mitigate the risk of an abrupt and severe impairment of market liquidity across asset markets. This will replace the static 10-day horizon assumed for all traded instruments under VaR in the current framework.

 

The BCBS has indicated that these changes will result in a significant increase in market risk capital requirements. It is, therefore, imperative that banks adopt prudent consideration and planning to ensure that adequate structures and resources are in place.

 

With a first reporting date of January 2022, it is important that banks take the necessary steps to prepare for this change:

 

  • Banks must understand the effects of FRTB on the reclassification of instruments between the banking and trading books.

  • Institutions must understand the impact on Risk Weighted Assets (RWA) to reflect the updated liquidity horizons, risk weights and other changes.
  • Banks need to start the collection and management of quality market data in sufficient volumes, especially those who wish to use internal models.
  • The collection and management of quality market data, in turn, calls for enhanced data architecture, as well as the realignment of processes and functions within the bank.
  • The anticipated increase in capital requirements coupled with the increase in costs of implementing and maintaining FRTB compliance will force banks to reassess their business strategy, as well as look at the future sensitivity of all currently affected profit centres.

 

These effects need to be communicated to senior management and the front office to raise awareness and understanding of the fundamental changes that FRTB will bring about.

 

In conclusion, the effects of FRTB are profound, reaching across all organisational and technical levels of a bank. Throughout the bank, there are additional and more rigorous regulatory requirements on data, business models, processes, technological infrastructure, reporting, governance and management. Banks must not be caught dragging their feet: the timelines are tighter than they appear. Regulators will need a head start to evaluate and approve desks for IMA, meaning desks will need to be ready well ahead of their regulator’s set deadline. Even before all of this, banks will need to evaluate internally whether the IMA is worth pursuing, and to what extent. Then there is the matter of sourcing the relevant data and enhancing data systems central to many of their existing operations. For smaller banks not used to the level of complex data and modelling required, it is going to be a fundamental adjustment, not only in the upgrading of systems, but also in training and preparing personnel to operate under the Standardised Approach.


Research

Despite the regular updating of technologies and the tightening of regulations designed to keep them at bay, criminals are finding increasingly clever ways to commit financial crimes such as money laundering and terrorist financing.

 

Money laundering occurs when money is obtained illegally and then transferred through a bank or any other legal institution without truthfully disclosing its origins. Terrorist financing, on the other hand, refers to money used to finance terrorist acts. According to the International Monetary Fund (IMF) and the World Bank, between USD 2.17 trillion and USD 3.61 trillion is laundered annually – between 3% and 5% of global GDP.

 

The beneficiaries of money laundering are thought to be diverse and exist all around the world. Successful laundering has a major effect on global society from a social, economic and financial standpoint. Money laundering enables further criminal activities, affects tax revenue collection, enables anti-competitive business behaviour, and undermines the integrity of the banking system. These are only a few of the reasons why anti-money laundering (AML) and counter-terrorist financing (CTF) regulations are extremely important. The Financial Action Task Force (FATF) is an inter-governmental policy-making body that sets standards and promotes the implementation of legal, regulatory and operational measures for combating money laundering, terrorist financing and other related threats to the integrity of the international financial system.

 

From a South African perspective, the Prevention of Organised Crime Act (POCA), the Promotion of Access to Information Act (PAIA) and the Financial Intelligence Centre Act (FICA) currently support the goals and standards set by the FATF. Following the FATF’s last assessment of South Africa’s compliance with international FATF standards in 2009, it was found that local authorities had improved mechanisms to cooperate on combating money laundering and the financing of terrorism; however, both the FATF and the Bank Supervision Department’s (BSD) 2016 annual report agree that there is still much that can be improved upon within South Africa’s AML and CTF regime. Issues such as beneficial ownership and measures to deal with politically exposed persons and correspondent banking have been highlighted as priorities.

 

The BSD 2016 annual report found that, whilst no banks were guilty of explicitly facilitating transactions linked to money laundering or terrorist financing, there existed several control gaps and non-compliance issues. The banks involved were fined R46.5 million in total during 2016, and China Construction Bank alone was fined R75 million in 2017 for non-compliance. The next FATF assessment is due to be completed in November 2019, when South Africa’s progress in AML and CTF prevention will be audited and new recommendations made on how best to address the challenges facing the global financial system.

 

Changes to the FICA – which came into effect in 2003 – have been introduced under the Financial Intelligence Centre Amendments Act (FIC Amendments Act), which was published and signed by the Finance Minister of South Africa, Malusi Gigaba, in 2017. South Africa is thus starting the process of its implementation, thereby bringing the country in line with current global standards as set out by the FATF.

 

The FIC Amendments Act includes: a risk-based approach to assessing and managing risks within organisations; beneficial ownership requirements to improve transparency in the financial system; the management of relationships between institutions and prominent influential persons so that potential risks can be controlled; and the establishment of a legal framework for applying and administering sanctions such as the freezing of assets.

 

The amendments mark a shift to a stronger strategy for combating money laundering and terrorist financing. Historically, risk has been managed by applying blanket risk controls to all banking customers. As the number of clients per institution grows, this task becomes more tedious and prone to error. The FIC Amendments Act encourages organisations to better understand their customers and to apply controls based on an individual’s specific risk profile.

 

The Act’s risk-based approach also involves fewer requirements for low-risk clients, which in turn gives banks the opportunity to provide services to customers who previously could not open a bank account. Previously, banks could only provide financial services to clients that had the required Know Your Customer (KYC) documentation. This requirement has now been relaxed for clients classified as low risk.

 

Money laundering and terrorist financing will most likely never be totally eradicated; however, continued improvement in the technologies and regulations used to prevent and police them will reduce the scale and severity of the threats. South Africa has made significant improvements in combating money laundering and the financing of terrorism, but there are still shortcomings, gaps and room for improvement. The successful implementation of South Africa’s FIC Amendments Act in full by the end of 2018 will go a long way towards improving South Africa’s compliance with international FATF standards in 2019.


Research

The liquidity and credit crisis of 2007 and 2008 left the global financial community concerned over liquidity risk. Suddenly, multi-national banking groups realised they had to broaden their understanding and measurement of risk beyond market, credit, and operational risk. They needed to be able to anticipate liquidity strains in the markets – and strategise contingent funding.

 

While Basel II guidelines had not adequately addressed liquidity risk, Basel III proposals aimed to address this, with what are viewed by many as onerous liquidity ratios that were introduced in 2015. The ratios present a conundrum for banks; they need to comply with them, while still maintaining a competitive funding structure. But it’s not the ratios themselves that bring value, it’s the data accuracy and precision they demand.

 

The validation required for liquidity risk models will actually mean banks can get both a holistic and a granular view of their risks. For example, the proposals imply that banks will need to distinguish between the different behavioural aspects of diverse customers within a particular product. That means enhanced risk management capability, as well as improved pricing and customer selection.

 

If banks want to achieve the ambitions of the Basel regulations – and create a single, integrated platform for liquidity risk management, pricing, capital management, and strategic customer selection – they’ll need to implement a data-centric, market-factor driven, liquidity risk management framework: a framework that integrates credit, market, interest rate, and liquidity risk into a consistent set of metrics. And, in order to get that integrated risk view, and look precisely at the contribution of different risks to the liquidity risk solution, banks need granular data differentiation.

 

For Monocle, that means a comprehensive measurement and management approach for a deeper understanding of liquidity risk and its potential interaction with other risks. It’s an integrated way to treat Liquidity-at-Risk (LaR). The process enables stress and scenario testing under market crises, leading to quantification of levels of contingent funding. It also helps to identify more appropriate loan-to-deposit ratios by investigating reliance on the wholesale funding markets.

 

LaR is a framework built on the simulation of a large number of future cash flow profiles, replicating the entire cash flow process – the contractual cash flows, behavioural cash flows, growth in asset and liability sizes, and interest rate re-pricing of each position. Each simulation offers a picture of how the balance sheet and, ultimately, the cash inflows and outflows, may evolve under a different scenario. Ultimately, it assists banks in anticipating strains, and managing liquidity risk.

 

Liquidity Risk

When we talk Liquidity Risk, we talk, simply, about “the ability to fund increases in assets and meet obligations as they come due, without incurring unacceptable losses”. When we drill down, liquidity risk encompasses both market liquidity risk (the risk that a position cannot be offset or eliminated without economic loss) and funding liquidity risk (the risk that cash flow and collateral needs cannot be met in the normal course of business).

 

Conventionally, liquidity risk has been managed and measured within the Asset and Liability Management (ALM) function. But then the strains in the wholesale funding markets in August 2007 and September 2008 highlighted the interrelationships between funding and market liquidity risk, funding liquidity risk and credit risks, funding concentration and liquidity risk, and the effects of reputation on liquidity risk. We now know that liquidity risk is consequential – and cannot be viewed in isolation. So, today, banks veer towards an integrated risk management approach, in which all risk types are measured inclusively.

 

Monocle’s LaR model quantifies credit, market, liquidity, and interest rate risk using a single set of underlying risk factors, allowing a bank to view various ‘future states of the world’ from an integrated risk perspective. It also allows the risk management function in a bank to isolate the impact of these risk types on the liquidity shortfall for a particular tenor, with a particular confidence level. The goal? A stable, robust metric for the measurement of a liquidity gap, which includes the impact of credit, market, and interest rate risk.

 

Why LaR Measurement is Crucial

When wholesale funding markets seized up, existing liquidity models failed. Why? Because they were predominantly based on point estimates, rather than LaR distributions. Point estimates cannot accommodate the scenario tests and stress testing necessary to assess a bank’s liquidity position under market extremes, except in an overly simplified manner. The result was that many organisations didn’t fully understand the speed and severity with which their liquidity position would deteriorate in these extreme markets. And banks were not prepared for deleveraging debt markets, in which there were more sellers of debt securities than buyers. The desperate need for liquidity forced many banks to issue debt at previously unthinkable spreads, and liquidate assets at previously unthinkable prices. But if they’d been able to anticipate or predict liquidity strains in the markets – and the contingent funding requirements – that desperation could have been halted.

 

A contingent funding requirement measure needs to look at the inter-relatedness of different risk types, and give a response on funding requirements at different levels of confidence. Of course, banks are not only funded via wholesale markets, but also by deposits, as reflected in the bank’s overall loan-to-deposit ratio.

 

LaR is a more distribution-based approach to the measurement of liquidity risk, under a range of different market rate scenarios. It looks at the effect that these scenarios may have, not only on the wholesale funding markets, but also on the behavioural aspects of the overnight core to non-core deposit base, and gives banks substantially greater insight.

 

Who will particularly benefit from this approach? Banks that, in the past, relied too heavily on wholesale market funding, rather than retail and commercial deposit funding. Rapid liquidity deterioration was particularly severe at these organisations, with high loan-to-deposit ratios.

 

These banks will be able to focus on creating deposit products that have longer periods of limited redemptions, by analysing the differential behaviour of different client types. This will reduce reliance on the wholesale market, identify more ‘ideal’ loan-to-deposit ratios, and may well provide cheaper funding for the bank, affording them the ability to improve pricing.

 

All that is needed is a commitment to carefully monitoring the metrics, to enhance their understanding of funding volatility and the specific circumstances that could result in a sudden funding requirement. The bank will also be able to continuously monitor access to money markets as a function of evolving macroeconomic conditions.

 

This increased awareness will provide the breathing space needed to strategise alternative funding mechanisms – before any seizing up of liquidity markets. Ideally, it can also offer real-time identification of the order in which assets should be liquidated and made available for liquidation, depending on the expected duration and severity of liquidity strains.

 

So how does the simulation work across contractual cash flows, behavioural cash flows, growth in asset and liability sizes, and interest rate re-pricing of each position?

 

Behavioural Modelling of Product Cash Flows

The LaR Model is derived from a distribution of simulated economic factors, which drive simulations of the bank’s balance sheet and result in simulated net cumulative cash flow (NCCF) profiles over the funding horizon. Distributions of the bank’s NCCF at different times are generated as the outcome of the factors impacting the behaviouralisation of product cash flows over defined intervals.

 

The framework simulates many versions (say, 100 000) of future cash flow profiles by replicating the entire cash flow process – the contractual cash flows, behavioural cash flows, growth in asset and liability sizes, and interest rate re-pricing of each position. Asset Liability Management (ALM) is generally responsible for predicting the value and timing of daily cash inflows and outflows – both on a contractual and a behavioural basis.
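A skeleton of that simulation loop might look as follows. The model functions here are hypothetical placeholders standing in for a bank’s own contractual, behavioural and growth models; only the overall structure (simulate a scenario, build monthly cash flows, accumulate an NCCF profile) is the point.

# Skeleton of the LaR simulation loop with placeholder models.
import numpy as np

N_SIMS, MONTHS = 10_000, 12
rng = np.random.default_rng(0)

def simulate_rate_path(rng, months):
    """Hypothetical market rate path (placeholder for the rate model)."""
    return 0.07 + np.cumsum(rng.normal(0.0, 0.002, months))

def monthly_net_cash_flow(rates, month, rng):
    """Placeholder combining contractual, behavioural and growth cash flows."""
    contractual = 100.0
    behavioural = -20.0 * (rates[month] - 0.07) * 100   # e.g. prepayment sensitivity
    growth = rng.normal(5.0, 10.0)
    return contractual + behavioural + growth

nccf = np.zeros((N_SIMS, MONTHS))
for i in range(N_SIMS):
    rates = simulate_rate_path(rng, MONTHS)
    gaps = [monthly_net_cash_flow(rates, m, rng) for m in range(MONTHS)]
    nccf[i] = np.cumsum(gaps)          # one net cumulative cash flow profile

print(nccf.shape)  # one simulated 12-month NCCF profile per iteration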

 

Contractually, the cash flows of assets, liabilities, and off-balance sheet items are known with relative certainty. Term loans, which follow standard amortising schedules, stipulate a monthly payment from the term loan customer. These cash flows can be disaggregated into interest and principal components for simulations of Net Interest Income (NII) and other measures of bank profitability.
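For example, the contractual cash flows of a standard amortising term loan can be generated and split into interest and principal as below. This is a generic annuity schedule for illustration, not any particular bank’s product terms.

# A minimal amortising term-loan schedule, splitting each contractual
# monthly payment into its interest and principal components.
def amortisation_schedule(principal, annual_rate, months):
    r = annual_rate / 12
    payment = principal * r / (1 - (1 + r) ** -months)   # standard annuity payment
    balance, schedule = principal, []
    for m in range(1, months + 1):
        interest = balance * r
        repayment = payment - interest
        balance -= repayment
        schedule.append((m, payment, interest, repayment, max(balance, 0.0)))
    return schedule

# First three months of a hypothetical 20-year loan at 10% p.a.
for month, pay, interest, principal_part, balance in amortisation_schedule(1_000_000, 0.10, 240)[:3]:
    print(month, round(pay, 2), round(interest, 2), round(principal_part, 2), round(balance, 2))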

 

There is, however, no world-wide standard on the contractual terms or conditions which may be imposed on a bank’s products. Cash flow functions must be customised to cater for the unique product types and payment structures a bank offers its customers.

 

The risk, across many product types, is that the cash flows expected under the contractual profile differ from the actual, experienced cash flows. This is because of the options customers have to deviate from the initial terms and conditions of their account: Pre-Payment Risk (the risk that a customer will repay an asset before the contractual maturity date); Early-Redemption Risk (the risk that a client will withdraw a deposit before the contractual maturity date); or Rollover Risk (the risk that a liability will reach its maturity date and a higher funding cost will be demanded by the depositor to roll it over). But it could also be because of behavioural market impacts, such as credit risk, resulting in the cessation of a particular set of cash flows.

 

Behavioural modelling, using econometric techniques and statistical methods, can estimate future client behaviour, and transform the results of these models into predicted cash flows. It also includes statistical techniques used to predict the behaviouralised cash flows of portfolios of fluctuating products, such as savings and current accounts. Simply, it adjusts contractual cash flow calculations to reflect the most likely client behaviour in the future.

 

Behavioural models used by banks have typically been long-run averages of past behaviour, so they’re not sensitive to the prevailing economic environment, or to potential future economic developments. But it is possible to directly relate the level of each behavioural risk to an economic factor. Let’s look at pre-payments, for example. We can create a statistical model which relates levels of pre-payments in housing loans to selected interest rates or interest rate changes. Logically, as interest rates decrease, banks and their competitors are able to reduce the level of interest charged on products such as housing loans. And, of course, offers of lower rates entice customers to refinance.

 

So we can estimate the level of prepayment on sub-portfolios, given the prevailing level and changes in key interest rates. Behavioural cash flow models address the full range of possible behavioural adjustments for each and every product, for diverse client types.
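A minimal sketch of such a model is shown below, assuming hypothetical historical data and a simple least-squares fit rather than any particular bank’s econometric specification.

# Sketch: regress sub-portfolio prepayment rates on changes in a key rate.
import numpy as np

# Hypothetical history: quarterly change in the housing-loan rate (%) and
# the observed prepayment rate (% of balance) on a sub-portfolio.
rate_change = np.array([-0.50, -0.25, 0.00, 0.25, 0.50, -0.75, 0.00, 0.25])
prepay_rate = np.array([ 3.1,   2.6,  2.0,  1.7,  1.5,  3.6,  2.1,  1.8])

# Ordinary least squares: prepayment = a + b * rate_change
b, a = np.polyfit(rate_change, prepay_rate, 1)
print(f"intercept {a:.2f}%, slope {b:.2f}% per 1% rate change")

# Applied inside a simulation: predicted prepayment under a simulated rate move
simulated_move = -0.40
print(f"predicted prepayment: {a + b * simulated_move:.2f}%")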

 

There are also other behavioural aspects that impact the asset and liability sides of the balance sheet. In particular, banks tend to lend on an internal or ‘prime’ rate, which is adjusted non-linearly to prevailing market rates. And much liquidity risk modelling has excluded the impact of business growth and of budgeting and forecasting targets. What we need to do, then, is combine interest rate sensitivity models and asset/liability portfolio growth models with the liquidity risk behavioural models, for a holistic view.

 

Interest Rate Sensitivity

Cash flow amounts are usually dependent on the amount of interest charged on/accrued to the asset/liability. One of the challenges faced in modelling future cash flows is the fact that bank rates are often determined by the bank itself, rather than by the market. This is problematic when predicting the bank’s response to changes in market interest rates.

 

Our methods would estimate the relationship between a change in market rates, and the resulting change in internal lending and deposit rates. This is then input to a simulation to determine the level of interest charged on individual and corporate accounts, and therefore the value of projected cash flows.

 

For products with a reference rate based on an internal lending or deposit rate, rather than a market rate, Monocle has a methodology to predict future levels of internal lending and deposit rates, given the levels and changes in market rates.

 

Historically, there’s been a strong relationship between internal rates and market rates, but this may not always be the case. As market rates decrease, banks generally apply these ‘savings’ to their customers by reducing internal lending rates, to remain competitive. Similarly, as market rates rise, the banks pass these costs onto their customers by raising internal lending rates.

 

By looking at 5 to 10 years of market rates of all tenors, we can estimate the relationship between these rates and internal bank rates over the same period. This relationship can then be embedded within a statistical model which translates movements in market rates into a probability of change for internal rates. When the probability of change reaches a key value, the reference or prime rate is assumed to increase or decrease, depending on recent movements in market rates.
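A sketch of that translation step is shown below. The logistic form, the reference spread, the 25bp step size and the trigger thresholds are all assumptions chosen for illustration; the calibrated relationship would come from the 5 to 10 years of history described above.

# Sketch: translate simulated market rate moves into a prime rate path.
import numpy as np

def prime_rate_path(market_rates, prime_start, spread=0.035, step=0.0025):
    """Prime moves in 25bp steps when the probability of change, a logistic
    function of the gap between (market rate + spread) and prime, is extreme."""
    prime, path = prime_start, [prime_start]
    for r in market_rates[1:]:
        gap = (r + spread) - prime                        # re-pricing pressure
        p_change = 1.0 / (1.0 + np.exp(-gap / step))      # probability of a move
        if p_change > 0.75:
            prime += step
        elif p_change < 0.25:
            prime -= step
        path.append(prime)
    return np.array(path)

rng = np.random.default_rng(1)
market = 0.07 + np.cumsum(rng.normal(0.0, 0.001, 24))     # hypothetical 24-month path
print(prime_rate_path(market, prime_start=0.105)[-5:])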

 

Since market rates are easier to simulate and project into the future, the model allows the bank to understand how each market rate scenario in a large simulation (typically 100 000 iterations) translates into changes in internal rates, resulting in a prediction of internal rates into the future. This can then be used for Net Interest Income (NII) and other profitability scenario analyses.

 

Asset/Liability Portfolio Growth Models

Typically a liquidity risk model is premised on a run-off basis, where assets and liabilities are not replaced as they reach maturity, or on a business-as-usual basis, where assets and liabilities are replaced as they mature. However, to achieve true business value, we need to understand at a product level, and at a client-type level, what the growth in a particular asset or liability will be. LaR is an ‘adjusted business-as-usual’ approach, in which a projected growth in each asset and liability portfolio, as a result of changes in economic factors, is taken into account.

 

As part of the simulation process, there is a series of techniques to predict the future growth in asset and liability values on a bank’s balance sheet. The models rely on the historical relationship between growth rates and interest rates, which can be combined in a multi-variable regression. Of course, we know that relationships between interest rates and growth levels have not always remained intact – particularly in the recent financial crisis. Market interest rates (interbank rates) have been near all-time lows for some time, which, under normal circumstances, would be a leading indicator of higher growth rates. But financial institutions have decreased their appetite for risk, and for extending credit, drastically reducing growth levels below what would normally be expected.

 

Because of this, growth models incorporate a “desirability” factor, which is a numerical indication of the bank’s appetite for extending credit, or for growing a particular product type. This “desirability” factor adjusts the output of the growth models, which are purely linked to interest rates.
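A sketch of such a growth model follows, with hypothetical data and an assumed desirability value; in practice the regression would include the additional leading indicators discussed below.

# Sketch: portfolio growth regressed on interest rates, scaled by a
# "desirability" factor expressing the bank's appetite for the product.
import numpy as np

# Hypothetical history: interbank rate (%) vs annual growth (%) of a loan book
rates  = np.array([9.5, 8.5, 7.5, 7.0, 6.5, 6.0, 5.5, 5.0])
growth = np.array([4.0, 5.5, 7.0, 8.0, 9.0, 10.5, 11.0, 12.5])

slope, intercept = np.polyfit(rates, growth, 1)

def projected_growth(rate, desirability=0.6):
    """Rate-implied growth, scaled by the bank's appetite for the product."""
    return desirability * (intercept + slope * rate)

print(f"{projected_growth(5.0):.1f}% p.a. at a 5% interbank rate")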

 

Economic variables other than interest rates can also be taken into account, particularly those which are shown to be leading indicators, such as inventories, sentiment indices, and money supply growth.

 

Combining Predictive Models in the LaR

LaR typically requires generating 10 000 to 100 000 simulations of underlying market factors over a one-year horizon. At a minimum, LaR will use simulated market interest rates as a driving economic factor, because interest rates usually show the strongest predictive power in behavioural modelling. The interest rate simulation produces 10 000 to 100 000 observations of possible future interest rate paths, along with paths for other market factors.
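For illustration, the sketch below simulates short-rate paths with a Vasicek-style mean-reverting model. The document does not prescribe a particular rate model, so the dynamics and parameters here are assumptions; any calibrated rate or multi-factor model could be substituted.

# Sketch: simulate monthly short-rate paths with mean-reverting dynamics.
import numpy as np

def simulate_short_rates(n_sims=10_000, months=12, r0=0.07,
                         kappa=0.3, theta=0.075, sigma=0.01, seed=7):
    rng = np.random.default_rng(seed)
    dt = 1.0 / 12.0
    rates = np.empty((n_sims, months + 1))
    rates[:, 0] = r0
    for t in range(1, months + 1):
        dw = rng.normal(0.0, np.sqrt(dt), n_sims)
        rates[:, t] = (rates[:, t - 1]
                       + kappa * (theta - rates[:, t - 1]) * dt
                       + sigma * dw)
    return rates

paths = simulate_short_rates()
print(paths.shape)   # 12 monthly steps plus the starting rate, per simulation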

 

These interest rate paths, and other market factor paths, are used to drive the contractual cash flow models, the behavioural cash flow models, growth models, and interest rate re-pricing models. These models interact to provide a picture of how the balance sheet ‘evolves’ over time. New ‘synthetic’ accounts are created to compensate for predicted growth value, and cash flows are, in turn, calculated for these new accounts. The replication of the balance sheet is an image of how the future balance sheet may look.

 

One method we use is to hold interbank activity conducted by the bank constant over the funding horizon, to measure the bank’s reliance on the interbank market in times of heightened demand for liquidity. With the foregoing assumptions and processes in place, the simulation is run. At each month during the horizon, it’s possible to measure the liquidity gap on a point-in-time basis. Simply, the cash inflows and outflows are considered in isolation, to assess whether that month’s liquidity gap is positive or negative.

 

At the end of the one-year horizon, for example, banks can calculate a distribution of possible one-year Net Cumulative Cash Flows (NCCF). This is the sum of all twelve point-in-time liquidity gaps over the year, representing an accumulation of cash flow shortages, or excesses, from the beginning to the end of the year. At the 99.97 percent confidence level, for example (consistent with an AA rating), LaR will be the 31st worst observation in this distribution.
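The final read-off can be sketched as follows, using a hypothetical simulated NCCF distribution: with 100 000 simulations and a 99.97% confidence level, 30 outcomes lie beyond the threshold, so the LaR is taken as the 31st worst observation, as described above.

# Sketch: read the LaR off the simulated one-year NCCF distribution.
import numpy as np

def liquidity_at_risk(nccf_horizon, confidence=0.9997):
    """nccf_horizon: simulated one-year NCCF values, one per simulation."""
    sorted_nccf = np.sort(nccf_horizon)                    # worst (most negative) first
    tail_count = int(round(len(sorted_nccf) * (1 - confidence)))
    return sorted_nccf[tail_count]                         # e.g. the 31st worst of 100 000

rng = np.random.default_rng(3)
nccf_one_year = rng.normal(loc=500.0, scale=400.0, size=100_000)  # hypothetical distribution
print(round(liquidity_at_risk(nccf_one_year), 1))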

 

LaR simulates the progression of a portfolio’s value through time – how cash flows should behave, how balance sheet values should change, and ultimately the cash inflows and outflows that result from these changes. It is known as a dynamic portfolio approach as the methodology assumes that the portfolio is constantly changing as loans mature and new loans are created. The goal of LaR is to produce 10 000 to 100 000 values for the NCCF at the desired horizon, which could be overnight (1 day), 7 days or monthly, from 1 month to 12 months.

 

What that does is make it possible to assess the maximum possible liquidity shortfall over a one-year period, as well as at different points through the year, given that interbank activity has been held constant. It may even serve to reduce the probability of a severe funding shortfall arising in the first place.

 

The LaR framework, essentially, is about bespoke models, methods, and techniques that assist banks in managing their liquidity risk. So that we may be able to predict, anticipate and prepare for the next time we’re in an extreme market crisis; instead of fumbling around desperate for liquidity, wondering why we didn’t see it coming.


Research

In 1994, airport officials in Rotterdam found it necessary to lodge a formal complaint against a company that they felt had begun to endanger the lives of their passengers and pilots. This company had amassed such a significant volume of aluminium in storage around the Rotterdam airport and sea port that the reflected light off the metal was blinding their pilots on take-off and landing. In investigations that followed, it was found that the actual owner of the aluminium, held through a series of offshore companies, was in fact a well-known, yet surprising entity: Goldman Sachs.

 

What – one may ask – would be the purpose of an investment bank, renowned for brokering corporate take-overs and mergers, and for underwriting equities and bonds, in owning such a tremendous volume of aluminium? The answer lay, of course, in the fact that Goldman Sachs had significantly ramped up its presence in the commodity trading markets.

 

Unlike equity and debt trading operations – in which there is no actual physical delivery of the underlying asset – in the case of commodities, should the contract expire, the commodity itself needs to be physically delivered and, at the very least, stored somewhere in the world. In this case, Rotterdam was the port which held the storage facilities that were required by Goldman.

 

This kind of activity – that of taking large volumes of commodities onto their own balance sheets – already a stretch of the concept of banking, had been growing steadily since banks began offering commodity hedging as part of their raft of services to their corporate clients.

 

Should Coca Cola, for example, wish to hedge the future cost of aluminium – a key component of its cost of production – in anticipation of price volatility in the metal, it may enter into a commodity futures or forward contract with Goldman Sachs for a fee. There would seem nothing untoward about this, it being one of the core purposes of financial services, at the service of industry and economic growth in general. In this regard, Goldman Sachs would be engaged simply in the process for which banking is meant: that of financial intermediation.

 

However, over time, the benefits to firms like Goldman Sachs and Morgan Stanley of trading commodities for their own balance sheets – what is known as proprietary trading – far outweighed the margins they were making by providing commodity price hedging services to their clients. In fact, the business of proprietary trading commodities became so attractive to Goldman Sachs that they bought a firm called Metro International Trade Services in 2010, although they had made extensive use of this firm for some time prior.

 

The sole purpose of Metro, now a wholly owned subsidiary of Goldman Sachs, is to store and distribute aluminium. In taking control of Metro, Goldman was able to slow the delivery of aluminium out of storage from an average of 40 days to an average, by 2014, of 674 days. In order to skirt around the rules that the London Metal Exchange (LME) had put into place to prevent precisely this kind of hoarding – which would have obvious price implications for the metal – Metro, instead of selling the aluminium to desperate end-users such as Coca Cola and Coors, would simply move the metal from one storage warehouse to another. They would do so on behalf of a trade that had been executed with a counterparty who did not wish to take delivery, and who may of course have been a trading counterparty of Goldman Sachs, if not Goldman Sachs themselves.

 

By dominating both the market for the spot and futures trading of aluminium, and the market for the storage and distribution of aluminium to end-users, Goldman Sachs was no longer acting as a financial intermediary. In fact, it was acting as a competitor against its own clients, with the advantage of having inside information in respect of the timing of demand as well as being the master of price determination. It should be noted that this form of market manipulation, in which investment banks were able to inveigle themselves into their clients’ businesses and from there be allowed to radically manipulate prices, was only made possible through regulations that were introduced under the Clinton administration.

 

In 1999, several market observers and regulators felt that the over-the-counter (OTC) derivatives market was sufficiently opaque that they lobbied for increased regulation of all OTC trading. Increased regulation would hopefully ensure that counterparties in trades would have to properly identify themselves, and that the purposes of the trades would have to be clearly understood. The derivatives market, after all, had already radically eclipsed the underlying market – betting on market prices had become a far larger business than the making of markets themselves.

 

The lobbying for increased regulation was met with a fierce and somewhat malicious backlash from the Clinton administration. Larry Summers – one of Clinton’s key advisors at the time – led the charge to introduce a new act, called the Gramm-Leach-Bliley Act, which was quickly shepherded through Congress and ratified into law. Among other things, the Act insisted that there should not be more regulation of the OTC market, but less.

 

This Act also effectively repealed the 1933 Glass-Steagall Act, which had been written out of the ashes of the Great Depression, and which had separated investment banking activities from commercial banking activities. It was precisely the cosy and unregulated relationship between the selling of equities, and the underwriting of equities that US regulators had clamped down on post-Depression – this conflict of interest being identified as one of the principal causes of the stock market Crash of 1929. In fact, it was the Glass-Steagall Act that forced JP Morgan to split into JP Morgan Co. and Morgan Stanley the investment bank.

 

In repealing Glass-Steagall, Clinton and Summers ushered in the years of the banking heyday, in which it was not uncommon for investment banks to make returns on equity of over 30 percent using leverage of over 30 times equity. This was, in the final analysis, perhaps the key underlying cause of the 2008 financial crisis. Investment banks could now take on deposits, sell directly to retail end users, and, most significantly in terms of the crisis, package mortgage-backed securities, as well as sell them to naïve buyers.

 

However, what is perhaps less well known is that the Gramm-Leach-Bliley Act also allowed in certain clauses that were lobbied for by the investment banks in the hope of aiding their desire for even greater domination of the commodities trading business. After all, Goldman Sachs and Morgan Stanley were now facing increasing competition from non-banking entities that were on a huge growth spurt, such as Cargill and Glencore. An amendment to the 1956 Bank Holdings Act was easily passed in 1999 that in essence allowed Goldman to do openly what it had for some time wanted to do, and possibly had been doing through broken Chinese walls: to take delivery. Essentially, Goldman could now, through entities such as Metro, control not only the trading of aluminium, but also its storage and release.

 

A further piece of legislative engineering was required, however, and this was the ratification of the Commodity Futures Modernization Act (CFMA) in 2000. It is particularly telling that at the dawn of the 21st century, a Democratic US president – the leader of the western world, at the helm of a country which sells its brand of liberal capitalism as the only form of political governance worth fighting for – completely deregulated the OTC derivatives market whilst simultaneously legislating that, under the CFMA, the OTC contracts themselves were legally enforceable in a court of law. This is really a case of helping firms such as Goldman Sachs to have their cake and eat it.

 

To explain clearly: investment banking players in the commodities OTC market wished to ensure that they be allowed to make markets, price contracts for clients, and delay delivery of the underlying, in a completely unregulated manner; but at the same time they wanted the law of contract on their side when collecting on debt. Recall that these investment banks had already successfully argued – in lobbying for the Gramm-Leach-Bliley Act – that they should be allowed to do so on the basis that they were engaged in hedging activities on behalf of clients, and that market regulation of complex derivatives would lead to an absence of liquidity and have deleterious effects on price. Essentially, they had argued for the market in OTC derivatives to be entirely free of oversight, even SEC oversight.

 

Naturally, in a world without regulation, authorities would have no way to test whether the trades in which investment banks or commodities houses were engaged were of a ‘hedging’ nature or of a pure ‘trading’ nature. The problem, of course, with having enormous values of pure trading – or betting – on underlying prices in economically critical commodities such as aluminium is that the price of the actual underlying commodity then becomes driven more by speculation – and specifically by highly leveraged speculation – than by supply and demand factors. By trusting that the free market would not overly speculate, the Clinton administration had not only ushered in the financial crisis, but had also opened the door to the commodities price boom of the early 2000s.

 

Just as it is not the job of the police to enforce the payment of bets made at underground blackjack tables, the investment banks could hardly insist on having contracts enforced should they go bad – especially if the purpose of the contracts was not to hedge, but rather to bet, just as one might take a bet on the outcome of a turn of a card on a table in an illegal casino.

 

However, in lobbying for particular clauses to be scripted into the CFMA, the investment banks achieved precisely this: not only could they bet to their hearts’ desire, they could also call on the authorities to enforce the contracts should their counterparties balk at paying up. The investment banks wanted – despite wishing to conduct themselves free of any oversight or rules – to be able to go to court and make use of the institutions of law where it best suited them. One would have to search hard for a more one-sided form of prudential oversight.

 

It is no coincidence that in 2008 the price of food reached its highest level since 1845 – that is, in over 150 years. For it was not only the price of aluminium that had been manipulated, or ‘financialised’ to use Rana Foroohar’s term in her remarkable book Makers and Takers – it was pretty much everything. The extent to which dominant market participants have entered into traditional businesses, obstructed them, and then altered them forever, beggars belief.

 

The sheer cynicism of the banking fraternity was on display when, interrogated by Congress in respect of the specific amendments made in 1999 and 2000 to the Bank Holdings Act of 1956 – which had originally been scripted specifically to prevent banks from competing against their own clients – they cited the notion that financial firms engaging in ‘complementary’ businesses could benefit their clients. Credit card services, they argued, could be enhanced by offering travel advice, as an example.

 

To cite travel advice as a reasonable example of complementary business, and to use it as justification for being allowed to take delivery of underlying commodities such as aluminium, is to deliberately miss the wood for the trees. Remarkably, and inexplicably, the congressional subcommittees investigating at the time did not take umbrage at the banality of the argument. It really is – in retrospect – impossible to accept the naivety of the lawmakers at the time. It is also impossible to accept the lack of accountability that has been taken by those same lawmakers and regulators to this present day.

 

More specifically, there has been virtually no accountability taken by western leaders or their advisors for helping usher in the dramatic run-up of events that led to the 2008 financial crisis. Alan Greenspan, in spite of the criticism that has been levelled against him, at least has the distinction of being perhaps the only economist in the history of western civilization who admitted – albeit in the face of overwhelming evidence – that he might have been wrong. No such speech has yet been forthcoming from Larry Summers, who went on after his years as one of Bill Clinton’s most trusted advisors, to become President of Harvard University.

 

The aggression of firms such as Goldman Sachs is symptomatic of their nature, one could argue. They are what they are – highly competitive players making use of each and every inch of advantage they can find on the field of play.

 

This is simply not an argument one can make in the case of Clinton and Summers. There is no rational argument for not insisting on regulating OTC derivatives, especially at a time when the face value of the derivatives market was doubling every three years.

 

To use a football metaphor: the fact that Maradona made use of the “Hand of God” to help his country win the World Cup, or the fact that Luis Suarez savagely bit Giorgio Chiellini’s shoulder during Uruguay’s game against Italy in 2014, are insignificant infringements in comparison to those of Sepp Blatter.

 

If FIFA is to football what the US and Europe’s governments are to free market capitalism, then surely it is the leaders who should be hauled over the coals, not the delinquent players. Sadly, it took many years of intense pressure before FIFA was finally exposed. One suspects it will take even longer in the case of the financial markets.

 

 

 


Research

It is generally agreed amongst economists, bankers, politicians and the man on the street that the 2007/8 financial crisis was the worst economic crisis to befall western economies since the Great Depression of the 1930s. Some call it the Great Recession and some call it the Financial Crisis – its impact has not yet been fully understood.

 

During the Great Depression, policy makers took far too long to respond adequately to the impact of the severe stock market decline of 1929. By contrast, in the case of the 2007/8 crisis, policy makers in western economies, G8 countries and across the world reacted extremely quickly, through extreme monetary interventions, as well as through some – most likely insufficient – fiscal interventions. This was of course entirely necessary, given that confidence, not only in the markets, but also in the very notion of capitalism itself, was under attack. Several commentators had even questioned the long-term viability of capitalism as a political economic system.

 

In order to save the world from itself, central banks pumped enormous amounts of liquidity into the market, and regulators wrote thousands of pages of legislation that would require banks to hold more capital and more liquid assets and to meet far more exacting standards than previously. To a large degree, these alterations to a previously laissez-faire economic playground have been successful. For one thing, the more extreme predictions that immediately followed the initial crisis in 2008 have not come to pass. Europe, although growing at a very slow pace, is still growing. Asian economies have not yet collapsed and the US is actually growing reasonably well.

 

What is masked, however, by these extreme acts of government intervention are the basic statistical realities of what fundamentally changed within the global banking system. By this is meant that some very simple statistical realities are not as transparent as one would like them to be from an analytical perspective. For example, if one were to ask how many of the world’s top 1000 banks failed in the period during and after the 2007/8 crisis, one would find what appears to be a relative absence of information on the subject. The main reason is that – following the repercussions of allowing Lehman to fail – policy makers used a variety of interventions to prevent further explicit failure. From these interventions emerged the phrase ‘Too Big to Fail’. Lehman’s failure, and the extreme market impact that followed, all but completely eroded market confidence, leading policy makers to react with measures never before contemplated. From a monetary perspective particularly, the extent of the interventions clouded over the breadth and depth of banking, as well as corporate, failure – and in fact to some extent continues to do so, not only within particular individual banks but across the banking fraternity itself.

 

Monocle Solutions, in its continued research efforts, became particularly interested in whether the new banking legislation written and codified by the Basel Committee on Banking Supervision (BCBS) post-crisis – in what was known as Basel 2.5 and then Basel III – will effectively address the main reasons for the crisis in the first place. As an example, we have noticed that there would appear to be far more regulation imposed upon the banks than on the peripheral corporations that helped to manufacture the crisis to the severity it reached. The credit rating agencies, for example S&P and Moody’s – those agencies that are paid by banks to issue credit ratings for the issuance of their own debt and debt instruments – seem vastly less affected by regulation than the banks themselves. Yet they issued many thousands of triple A-rated stamps of approval on collateralised debt obligation (CDO) structures that later exploded.

 

To further this point using a recent example: the US Justice Department fine of $14bn that is to be levied against Deutsche Bank, for its negligence in selling toxic mortgage-backed securities (MBS), could potentially put Deutsche Bank into a severely undercapitalised position and may even lead to its failure. This would affect not only the jobs of those who work at Deutsche Bank, but could also have a severe systemic impact. Note that these fines target Deutsche Bank specifically but do not target the credit rating agencies that rated these securities.

 

In essence, if one is to boil down the BCBS regulations, ex-post their imposition on the banking system, they effectively achieve three things. Firstly, they almost double the amount of core tier-1 equity capital that needs to be held against the risk-weighted asset loan book of a bank. Secondly, they raise the standard of that capital, limiting it to only the purest form of capital, i.e. unencumbered equity, and excluding instruments that are pseudo-capital in nature. Thirdly, the regulations introduce liquidity ratios with which banks must now comply. Specifically, the two critical ratios are the Liquidity Coverage Ratio (LCR) and the Net Stable Funding Ratio (NSFR), which have been constructed by the BCBS to address the ultimate cause of failure in the majority of banks that did fail.
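At a high level, the two ratios can be expressed as simple quotients, each required to be at least 100%. The sketch below shows only that structure; the detailed haircuts, outflow rates and stable-funding factors that produce the weighted inputs are prescribed by the Basel III texts and are not reproduced here, and the figures used are hypothetical.

# High-level structure of the two Basel III liquidity ratios.
def lcr(hqla, net_cash_outflows_30d):
    """Liquidity Coverage Ratio: high-quality liquid assets over
    total net cash outflows across a 30-day stress period."""
    return hqla / net_cash_outflows_30d

def nsfr(available_stable_funding, required_stable_funding):
    """Net Stable Funding Ratio: available over required stable funding."""
    return available_stable_funding / required_stable_funding

# Hypothetical weighted aggregates (in millions)
print(f"LCR : {lcr(120_000, 100_000):.0%}")
print(f"NSFR: {nsfr(310_000, 295_000):.0%}")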

 

To be clear, whilst most banks during the crisis experienced severe pressure on their loan books and the value of their assets, leading them to substantially increase provisions to absorb the forthcoming losses, their failure as institutions was primarily owing to an inability to meet immediate liability demands.

 

In fact, it is technically incorrect to say that banks failed from a credit crisis, since most of these credit losses were experienced by banks as revaluations of their asset books rather than as actual realised losses. The effect of the credit crisis, and the extreme devaluation of MBSs and CDOs, led market participants in the interbank market, i.e. the banks themselves, to cull from their own. We witnessed during the crisis the severe effects of banks hoarding cash, and an unwillingness by these same banks to take collateral from counterparty banks of anything that was of less than the highest quality. This meant that all MBS and CDO paper, even if it was triple A-rated, was not accepted by banks as collateral in repo-style transactions. This led particular banks to experience severe short-term liquidity shortfalls, which led in some cases to extreme government intervention and in other cases to failure.

 

It was, at the outset of this study, our supposition that banks whose liability structures were more reliant on the interbank market prior to the crisis would have been the banks most likely to fail. Several problems present themselves in attempting to perform such an analysis. The first is that there are significant problems with understanding and creating a stable definition of default, owing to the extreme interventions performed by governments in different manners across the world. It is very difficult to say, for example, that Royal Bank of Scotland (RBS) failed or did not fail. It was certainly bailed out. Certain banks definitely failed though, for example Landsbanki in Iceland.

 

Goldman Sachs, as a further example – which was not in fact a bank at the time but was forced post-crisis to become a bank holding group – was, according to its CEO Lloyd Blankfein, forced to take bailout money from the Troubled Asset Relief Program (TARP). This $700bn bailout fund was set up by Hank Paulson, then US Treasury Secretary and a previous CEO of Goldman Sachs.

 

From an analytical perspective, therefore, it was essential to settle upon a clear definition of default that we could use in our study of the frequency of bank default post-crisis, as well as of which ratios might have been indicative of failure pre-crisis. The second problem that presents itself, from an analytical perspective, is that the rules created by the BCBS for the LCR and NSFR were based on new underlying information that would form part of either the numerator or the denominator of those ratios. That information was not publicly available prior to the Financial Crisis and has not been publicly available since – in all the forms it would need to be in order to reconstruct those ratios for banks – unless one had insider information.

 

As such, it was decided that we would address these two issues for the study in the following manner. Firstly, through literature research, we came across a study by McKinsey – Working Papers on Risk, Number 15 (2009): Capital Ratios and Financial Distress: Lessons from the Crisis. This study had been conducted on the same basis on which we wished to conduct ours, i.e. to examine whether financial ratio analysis would have been a good indicator of financial distress post-crisis. The study examined a sample of 115 banks, their financial distress outcome post-crisis, and the ability of their ratios to predict this outcome. The criterion for the sample was a minimum asset size of $30 billion; together the sample represented $62.2 trillion in total assets – about 85 percent of developed-market banking assets, and 65 percent of total banking assets worldwide. Broker-dealers were specifically excluded from the analysis, as data on risk-weighted assets for such institutions in December 2007 was unavailable. Performance between 1 January 2008 and 1 November 2009 was used to identify banks that became distressed.

 

McKinsey defined banks as being financially distressed if any one of the following four criteria were met: where banks had declared bankruptcy, been placed into government receivership, been acquired under duress, or received bailout capital in excess of 30% of their tier-1 equity.
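Expressed as a simple rule, the classification reads as follows; the field names are hypothetical and stand in for whatever data source carries the bailout and receivership information.

# The four-criteria distress definition as an any-of check.
def is_financially_distressed(bank):
    return (bank.get("declared_bankruptcy", False)
            or bank.get("government_receivership", False)
            or bank.get("acquired_under_duress", False)
            or bank.get("bailout_capital", 0.0) > 0.30 * bank["tier1_equity"])

sample = {"tier1_equity": 10_000, "bailout_capital": 3_500}   # hypothetical bank
print(is_financially_distressed(sample))   # True: bailout exceeds 30% of tier-1 equity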

 

A range of financial and risk ratios were then calculated as of 31 December 2007 to determine their ability to predict financial distress. McKinsey found that the tangible common equity (TCE) to risk-weighted assets (RWA) ratio was the best predictor of future distress.

 

Whilst we believe this was a good starting point, we were concerned that McKinsey had not established a clear default frequency for the top 1000 banks. In fact, we could find no study of the frequency of default across the world's top 1000 banks following the largest financial crisis since the Great Depression. We therefore conducted initial analysis simply to calculate the default frequency of banks during and after the crisis, based on the McKinsey definition. We also decided to use a longer observation period, from 2008 through to 2010, and to cover the top 1000 banks.

 

For each of the 1000 banks on The Banker List (2007), an investigation was conducted into the status of the bank from 1 January 2008 through to 30 June 2010, identifying whether any of the McKinsey criteria for financial distress were met in the defined time period. Banks were identified as being either financially distressed (FD) or non-financially distressed (NFD). The results were surprising – of the top 1000 banks investigated, 106 met the definition of financial distress. This implies that the effective default frequency for banks during the 2007/8 Financial Crisis was in excess of a staggering 10%. To put this number into perspective, recall that a poorly performing mortgage book in a classic retail banking environment would have shown default rates of the order of 2-3% during the height of the crisis. The default frequencies that caused the ripple effect into the MBS and CDO markets were of the order of 10-15% in the worst performing areas of the United States, such as Las Vegas. Banks themselves, therefore, experienced previously unheard of default frequencies, comparable only to the defaults that occurred post 1929.
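As a minimal illustration of this calculation – with a hypothetical status table standing in for the underlying research data – the default frequency reduces to a simple proportion:

```python
# Minimal sketch: computing the post-crisis default frequency from a
# financial-distress classification of the top 1000 banks (McKinsey criteria).
# The `statuses` list below is a placeholder; the study classified each bank
# on The Banker List (2007) as "FD" or "NFD" over Jan 2008 - Jun 2010.

statuses = ["FD"] * 106 + ["NFD"] * 894  # placeholder for the real classification

distressed = sum(1 for s in statuses if s == "FD")
default_frequency = distressed / len(statuses)

print(f"Financially distressed banks: {distressed}")
print(f"Implied default frequency: {default_frequency:.1%}")  # ~10.6%
```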

 

The second area of interest was the ratios themselves. The Monocle Research Team decided to return to the fundamentals of ratio-based financial analysis. In 1966, William H Beaver used financial ratios with a univariate technique to predict financial distress. He classified financial distress as bankruptcy, insolvency, liquidation for the benefit of a creditor, default on loan obligations, or a missed preferred dividend payment.

 

Beaver's technique accurately classified 78 percent of the sample of distressed firms up to five years prior to failure. His research concluded that the cash flow to debt ratio was the single best indicator of bankruptcy. To overcome many of the inconsistencies found in Beaver's research, Edward I Altman improved on Beaver's univariate model in 1968 by introducing a multiple discriminant analysis approach. His results identified five financial ratios as significant predictors of financial distress: working capital to total assets, retained earnings to total assets, earnings before interest and taxes to total assets, market value of equity to book value of total debt, and sales to total assets.
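For reference, the commonly cited form of Altman's original Z-score combines these five ratios with fixed coefficients – the weights below are the widely quoted 1968 estimates, not figures produced by this study:

$$Z = 1.2\,X_1 + 1.4\,X_2 + 3.3\,X_3 + 0.6\,X_4 + 1.0\,X_5$$

where $X_1$ to $X_5$ are the five ratios in the order listed above, and lower Z-scores indicate higher predicted distress risk (Altman's original cut-off placed firms with a Z-score below roughly 1.8 in the distress zone).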

 

Banks and financial institutions in particular are generally more highly leveraged than the industrial institutions Beaver and Altman studied. We could not therefore make use of the same ratios, or the same coefficients on those ratios, to create anything like the Altman z-score. In order to create our own database and conduct the necessary analysis, the income statements and balance sheets of a sample of banks were extracted, constructed and normalised into a single common format. The sample consisted of 20 financially distressed banks selected from the top 1000 banks according to size and demographics, and 20 non-financially distressed banks randomly chosen from the top 60 non-financially distressed banks on the Banker List. It is essential to note that the selection process was not perfectly scientific and was based on our desire to observe outcomes in different demographic regions of the world. Our total sample of 40 banks was also small, owing to the time constraints involved in normalising banks' income statements and balance sheets into a single common format. Further research is currently underway within our research team, and the results presented here should be regarded as preliminary.

 

We found that, of all the ratios we examined – more than nine in total – by far the strongest was customer loans to deposits. This, of course, makes perfect sense because the ratio effectively captures reliance on the interbank market. Recall that a bank, in its liability structure, meets its asset demand either through deposits made by corporate or individual customers or through the interbank market. We can therefore deduce that the higher the ratio of total customer loans to customer deposits, the more reliant a bank is, in its liability structure, on the interbank market.
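A minimal sketch of the indicator, using hypothetical balance-sheet figures rather than data from the study:

```python
# Minimal sketch of the customer loans-to-deposits indicator.
# Figures are hypothetical; the study computed this ratio from
# normalised balance sheets as at 31 December 2007.

customer_loans = 120_000     # e.g. in millions
customer_deposits = 80_000   # e.g. in millions

loan_to_deposit_ratio = customer_loans / customer_deposits
wholesale_funding_gap = max(customer_loans - customer_deposits, 0)

print(f"Loans-to-deposits ratio: {loan_to_deposit_ratio:.0%}")  # 150%
print(f"Gap to be funded on the interbank/wholesale market: {wholesale_funding_gap}")
```

A ratio above 100% signals that the loan book cannot be funded by customer deposits alone, which is precisely the interbank reliance the study highlights.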

 

Based on the limited sample size, we are able to conclude that, had the extent to which banks were funded by interbank loans versus customer deposits been known pre-crisis, it would have been a powerful indicator of future financial distress, with an accuracy level of 80% as at 31 December 2007.

 

The ultimate finding of the study is that 10.6% of banks failed. This is an extremely high proportion, and one for which we have found no primary research. Secondly, we found that customer loans to deposits is significantly indicative of risk and would have been a very useful indicator prior to the Financial Crisis.

 

It goes without saying that the LCR and NSFR, the latter in particular, are constructed in a way that makes them extremely difficult for banks to comply with. To a large degree these ratios conflict with the original notion underlying the business of banking – that is, to use an upward-sloping, convex yield curve to arbitrage on tenor and earn interest rate differentials as profit.

 

Given the current environment of very low interest rates in advanced economies, we believe this study indicates that a simpler ratio could have been used by policy makers as early as 2008/9 instead of the LCR and NSFR. The use of such a ratio might have had a less deleterious effect on banks' funding structures. Ultimately, the combination of the NSFR and the radical levels of monetary intervention – which have led to very low interest rates – has had extreme effects on the profitability of banks. If these conditions persist for another 5 to 10 years, it is questionable whether banks will be able to attract sufficient equity to remain attractive to investors. A simple ratio such as customer loans to customer deposits could be a potential solution to avoid banking ultimately becoming a utility.

 

We wish to acknowledge that the primary research was conducted by a Master's student of North-West University's Business Mathematics and Informatics (BMI) faculty over a six-month period, working closely in conjunction with Monocle Solutions. We are grateful for all the support and assistance from the University's professors.


Research

The new Standardised Approach for calculating counterparty credit risk (SA-CCR) is set to replace the two current standardised approaches, namely the Current Exposure Method (CEM) and the Standardised Method (SM). The focus therefore shifts towards the effects of SA-CCR on banks' over-the-counter (OTC) derivative trading, comparing its associated burden with that of its only alternative, the Internal Model Method (IMM) – the only internal method prescribed under the Basel regulatory framework. Mandatory implementation of SA-CCR is scheduled for 1 January 2017 and, given that the vast majority of banks globally use the standardised CEM approach, the impact will be considerable. Moreover, only a handful of banks have obtained IMM approval from their national regulators, as the approach is difficult to implement. This paper begins with a brief outline of terms, formulas and relevant background on derivatives regulation, and then delves into SA-CCR and its impact on derivative trading within banks.

 

Understanding the CVA  

 

The Basel Committee on Banking Supervision (BCBS) defines the Credit Valuation Adjustment (CVA) as the difference between the value of a derivative assuming the counterparty is default risk-free and the value of a derivative reflecting the default risk of the counterparty. The CVA charge therefore reflects the market value of counterparty credit risk and market risk factors that influence the value of derivatives. The CVA effectively ensures that banks are compensated for the counterparty risk they take on by quantifying counterparty risk and including it in the price of all derivative contracts. The CVA reduces the mark-to-market value of an asset or liability and thus reduces the balance sheet risk banks face.
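As a point of reference – this is a common textbook approximation rather than the formula quoted in the BCBS text – a unilateral CVA can be written as the loss-given-default weighted sum of discounted expected exposures over the counterparty's marginal default probabilities:

$$\mathrm{CVA} \approx \mathrm{LGD} \sum_{i=1}^{n} \mathrm{EE}(t_i)\, \mathrm{DF}(t_i)\, \big[ S(t_{i-1}) - S(t_i) \big]$$

where $\mathrm{EE}(t_i)$ is the expected exposure at time $t_i$, $\mathrm{DF}(t_i)$ the risk-free discount factor, and $S(t)$ the counterparty's survival probability, so that $S(t_{i-1}) - S(t_i)$ is the probability of default in the interval.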

 

The Exposure at Default (EAD) is the main component of the CVA and of past capital charge formulas. A higher EAD leads directly to a higher CVA, which in turn leads to a larger reduction in the value of banks' derivative assets and liabilities. Historically there have been several methods for calculating the EAD component of the CVA capital charge under Basel regulations. The two relevant methods discussed here are the non-internal SA-CCR and the internal IMM, which are set to be the only two prescribed methods under the Basel framework by 2017.

 

Why the change?

 

Since its inception, the BCBS has constantly updated existing regulations and introduced new standards in an attempt to remain relevant and effective in regulating a fast-changing financial sector. This is challenging, and many of the latest Basel accords in the wake of the Financial Crisis have been reactive, aimed at preventing history from repeating itself. The Basel framework has struggled to keep derivative regulation relevant as derivative trading has evolved aggressively – increasing in volume, in the range of products traded and in the complexity inherent to these trades. Under Basel I (1988), derivative regulation simply comprised an add-on factor for interest rate derivatives. This was appropriate at the time, as derivative trading accounted for only a small fraction of business in a select few banks. Derivative trading subsequently evolved into a larger, more important part of banks' profits and of global financial trading, and derivative regulation accordingly grew in importance and complexity as the contagion risk to the financial sector became apparent.

 

During the financial crisis, banks suffered significant counterparty credit risk losses on their OTC derivative portfolios. According to the Basel Committee on Banking Supervision's (BCBS) CVA amendment, the majority of these losses resulted directly from fair value adjustments on derivatives. Basel II required firms to capitalise counterparty default risk, but did not require them to hold capital against the variability of the credit valuation adjustments on their trading book derivatives. The first response of the BCBS was to formally introduce the CVA capital charge as part of the Basel III Accord, aimed at increasing banks' resilience to potential mark-to-market losses following deterioration in the creditworthiness of counterparties. Secondly, the BCBS addressed the possible drawbacks of the methods prescribed by the Bank for International Settlements (BIS) for calculating the EAD component of the CVA. In March 2014 the BCBS published a final paper mandating the new standardised SA-CCR method for calculating EAD, a new, more risk-sensitive measure of counterparty exposure.

 

As mentioned, banks currently have a choice between two standardised methods, namely the CEM and the SM, and one internal method, the IMM, which requires supervisory approval. The SA-CCR intends to address some of the criticisms of the CEM and SM approaches and will replace these two standardised methods with effect from 1 January 2017. It should be noted that some regulatory derivative exposure metrics, such as the Leverage Ratio, will be required under the Basel framework to be calculated using SA-CCR, regardless of the CVA method used. The reality, therefore, is that all banks worldwide will be affected to some extent and will need to implement at least part of the SA-CCR method in their systems.

 

Calculating SA-CCR

 

The SA-CCR methodology is similar to that of its standardised predecessor, the CEM, and the formula below aids in the explanation.

$$\mathrm{EAD} = \alpha \times \left( \mathrm{RC} + \mathrm{PFE} \right), \qquad \alpha = 1.4$$

 

 

The SA-CCR EAD calculation includes the variables used in the CEM and an additional scaling factor (alpha), set to 1.4. The formula has two regulatory components: the replacement cost, denoted RC, and the potential future exposure, denoted PFE. The RC is the cost of replacing a position should the counterparty default today. The calculation under SA-CCR addresses previous drawbacks in that it accurately recognises the benefit of collateral given or received. It is also important to note that margin posted under a central clearing house is categorised as collateral under SA-CCR. The PFE has also been enhanced to include two new add-on factors that account for maturity and certain hedging positions. The maturity factor is a welcome addition and accounts for the fact that longer-maturity derivative positions are inherently more risky, ceteris paribus. The hedging factor allows for the full or partial offsetting of long and short transactions that share common attributes within an asset class.
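A minimal sketch of the headline calculation follows; the replacement cost and aggregate PFE figures are hypothetical inputs, whereas in practice each is built up from the detailed SA-CCR rules per asset class:

```python
# Minimal sketch of the SA-CCR exposure-at-default calculation.
# RC and the aggregate PFE are hypothetical inputs here; under the standard,
# RC reflects collateral/margining and the PFE add-on is aggregated across
# asset classes with maturity and hedging-set adjustments.

ALPHA = 1.4  # supervisory scaling factor, consistent with the IMM alpha

def sa_ccr_ead(replacement_cost: float, pfe: float) -> float:
    """EAD = alpha * (RC + PFE)."""
    return ALPHA * (replacement_cost + pfe)

# Example with hypothetical figures (in millions)
rc = 25.0
pfe = 40.0
print(f"EAD under SA-CCR: {sa_ccr_ead(rc, pfe):.1f}")  # 91.0
```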

 

No reason for this alpha multiple is provided in the BCBS's SA-CCR document, which indicates only that it is consistent with the alpha of 1.4 set by the Basel Committee for the IMM. The only explanation for its inclusion therefore lies in the BCBS's IMM documentation, which suggests that the alpha multiple of 1.4 is set conservatively and effectively accounts for a "bad state" of the economy. The alpha multiple has taken the brunt of SA-CCR's public scrutiny, and large banks such as UBS have publicly criticised the BCBS, requesting that it be removed from the EAD formula.

 

Operational Impact on Banks

 

The introduction of the CVA caused many firms to reconsider their current risk systems’ design which led to the implementation of new sophisticated CVA models. Firms incurred high costs developing complex CVA models based on the CEM or SM methods. Post implementation, CVA models are an ongoing drain on time and resources. The implementation of SA-CCR will inevitably require banks to improve their data infrastructure and invest more resources into either developing new or updating their existing CVA models.

 

Banks' risk management and information technology departments will need to understand the new calculation and the impact of the new variables on the EAD amount. For instance, the PFE component includes the two new add-on variables mentioned previously and requires each asset class to be considered individually. These teams will need to be aware of SA-CCR requirements such as the inclusion of sub-classes under the various existing asset classes, supervisory deltas, and correlation and volatility measurements. Banks will need to understand the improvements SA-CCR makes on the CEM in order to fully benefit from the compulsory regulatory change. For example, since the RC is calculated differently for margined and un-margined transactions, a bank can reduce the EAD on its margined transactions by recognising margin as collateral, as permitted under SA-CCR. All of the above translates into additional data input requirements and additional variables to consider when moving from the CEM to SA-CCR.

 

It is therefore necessary for banks to consider the full implementation effects of this new standard in order to ensure a successful transition. Banks will also need to judge whether their current IMM is fundamentally better than the proposed SA-CCR, given the cost implications and resource constraints. Specific attention should be given to data availability and sufficiency, data architecture, and management information systems.

 

The Way forward

 

SA-CCR is still viewed as a crude, conservative methodology that will continue to significantly overstate the true counterparty exposure in well diversified portfolios. It is still unclear whether any capital benefit will accrue when banks transition from the CEM to SA-CCR. Indeed, some banks may even be penalised under SA-CCR compared to the CEM, especially on unsecured, non-diversified counterparty portfolios. In terms of regulatory capital, the IMM might therefore be a better alternative for many banks. The mandatory move from the CEM to SA-CCR will force banks to redesign and re-implement their current counterparty exposure measurement systems, and the additional complexities of SA-CCR will result in significant implementation and running costs. Consequently, some banks may decide to request IMM approval from regulatory authorities if they are going to have to change their systems anyway. It should be noted, however, that the IMM approval process can be onerous and time consuming for banks. Moreover, some national regulators have become reluctant even to consider granting this approval, in order to avoid any dubious methodologies, especially following the financial crisis.

 

 

 


Research
Executive Summary

In the wake of the Global Financial Crisis, the G20 and the Basel Committee on Banking Supervision introduced a raft of new banking regulations, known collectively as Basel III.

 

Basel III has five key objectives: increased and better quality of capital, broader risk coverage, capital build up during credit booms, leverage, and liquidity.

 

The impacts of these five key objectives of Basel III on banks’ balance sheets imply significant changes to banks’ business models. As a result corporations will need to assess the impact of Basel III on the funding they access from banks.

 

Basel III has five key objectives

1. Introduction

At the time of its publication in June 2006, Basel II was widely regarded as transformational for the banking industry internationally. Basel II provided a framework which allowed banks to hold capital against a more granular measurement of risk exposures, both within and across asset classes.

 

Unfortunately, the timing of Basel II’s implementation at the beginning of 2008 proved to be particularly inauspicious, coinciding closely with the onset of the Global Financial Crisis (GFC).

 

Post the GFC, the G20 and the Basel Committee on Banking Supervision (BCBS) introduced a raft of new regulatory proposals and measures, collectively known as Basel III. While these measures will considerably bolster the safety not only of individual banks but of the financial system as a whole, broader economic implications inevitably flow from new constraints on banks' business models.

 

2. Global Financial Crisis (GFC)

It’s often noted that regulators are forever doomed to fight the last crisis. It’s clear that the financial crisis starting in 2007 has resulted in severe hardship in many countries. Jobs have been lost, inequality has grown, and the negative impact on economic growth will be felt for many years to come. Understanding the origins of financial crises is imperative, even if the next crisis is unlikely to share quite the same features as the last.

 

Global Financial Crisis key events

By their nature, crises tend to proceed from an unforeseen mix of inter-related contributory factors. Key events that can be identified in the build up to the GFC are:

 

  • Gramm-Leach-Bliley Act of 1999.
  • Easy credit and leveraging during the early and mid 2000s.
  • Community Reinvestment Act of 1977, which encouraged home ownership, and subprime lending after 2000.
  • Rating agencies’ ‘AAA’ stamp on securitisations.
  • Boom and bust of the shadow banking sector, fuelled by AAA stamps.
  • June 2007 collapse of two Bear Stearns hedge funds.
  • Failure of wholesale funding markets.
  • March 2008 sale of Bear Stearns to JP Morgan.
  • September 2008 bankruptcy of Lehman Brothers.

 

3. Basel III Impact on Banks

The severe losses on securities during the GFC prompted the G20 and the BCBS to introduce stiffer capital requirements, in terms of both increased capital requirements and far better quality of capital on banks’ balance sheets. Market risk capital requirements were ramped up by around 4 times with the introduction of Stressed VaR and the Incremental Risk Capital (IRC) charge, and counterparty credit risk addressed through the Credit Valuation Adjustment (CVA).

 

Further revisions to the framework were also introduced in respect of a non risk-based leverage ratio and two liquidity ratios. Banks will also be required to build up an additional capital buffer during credit booms to counter procyclicality.

 

The impacts of these key objectives of Basel III on the balance sheet mean that there will be major challenges arising for banks’ business models.

 

South Africa’s banking sector is well capitalised from a banking book point of view, with a capital adequacy ratio of 15.09% at December 2011, comfortably above the 10.5% -13% range banks internationally must ramp up to by 2018 (12% – 14.5% in SA). Trading book capital requirements however will result in ongoing assessment of capital allocation to these activities, particularly in respect of capital requirements and the new liquidity ratios.

 

The new non risk-based leverage ratio should not present a problem for SA banks. The financial leverage ratio for the SA banking sector amounted to 14.4 times at December 2011 (total banking-sector assets divided by total banking-sector equity attributable to equity holders). Basel III limits leverage by requiring tier 1 capital of at least 3% of total assets.
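As a rough, illustrative check (simple arithmetic rather than the full Basel III exposure measure), a leverage multiple of 14.4 times corresponds to an equity-to-assets ratio of roughly 6.9%, comfortably above the 3% floor:

$$\frac{\text{Equity}}{\text{Assets}} = \frac{1}{14.4} \approx 6.9\% > 3\%$$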

 

Meeting the liquidity ratios

 

The real challenge for SA banks lies in meeting the new liquidity ratios, the Liquidity Coverage Ratio (LCR) and Net Stable Funding Ratio (NSFR). The LCR seeks to ensure that a bank has enough liquid assets to meet 30 days’ stressed outflow, while the NSFR seeks to ensure that a bank has stable funding for assets with a duration greater than one year.
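In ratio form, the two standards are conventionally expressed as follows (the standard Basel III formulation, stated here for reference):

$$\mathrm{LCR} = \frac{\text{Stock of high-quality liquid assets}}{\text{Total net cash outflows over the next 30 calendar days}} \geq 100\%$$

$$\mathrm{NSFR} = \frac{\text{Available stable funding}}{\text{Required stable funding}} \geq 100\%$$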

 

Given the present structure of SA’s economy and financial system these ratios are currently extremely difficult for any SA banks to meet. The Bureau for Economic Research estimates that banks face a funding shortfall of around R240 billion on the LCR and R680 billion on the NSFR.

 

Banks should in future be able to meet the LCR however. The South African Reserve Bank (SARB) is to introduce a committed liquidity facility for this purpose. The facility will cost banks up to 40 basis points against collateral at the SARB.

 

Meeting the NSFR will be more difficult and will certainly require significant structural adjustments to banks' balance sheets. Lending rates will increase on longer maturity products, and banks' appetite for products such as mortgages will be curtailed.

 

Basel III also introduced the notion of Global Systemically Important Financial Institutions (G-SIFIs) and Domestic Systemically Important Banks (D-SIBs). The latter may become relevant in future given the concentration of SA's banking sector.

 

4. Basel III Impact on Corporations

From a capital perspective, Basel III will likely impact banks’ investments in, and loans to, private equity, hedge funds, and venture capital. Higher capital requirements, particularly the increase in required common equity tier 1 capital, will cause many banks to reassess whether exposure to such risky assets is economic on a risk-adjusted return basis.

 

For corporations more generally, the question that should now be asked by treasurers is what impact the liquidity ratios will have on the type of funding they access from banks. The proposals as currently envisaged will substantially increase lending costs on commercial paper, working capital, and longer dated funding. Treasuries will also need to factor in the increased costs of shorter dated funding, and increased reliance on capital markets in future.


Research
Executive Summary

IFRS 9 is the IASB's envisaged answer to the criticisms cast at its current financial instruments standard, IAS 39. As part of the three-phase project to bring it about, the IASB has been cooperating with the US FASB on key aspects so as to bring about the convergence and greater consistency that the G20 has stressed. In terms of IFRS 9, it is impairment methodology that remains at the heart of the boards' ongoing deliberations, and it is the focus of our discussion in this paper.

 

The foundations of the new impairment standard have been laid and spell the end of incurred loss provisioning. In its place comes the notion of expected loss provisioning, which seeks to estimate losses on a probabilistic, ‘expected value’ basis and recognise them in a timely manner. In doing so, the aim is to reflect the deterioration in the credit quality of financial assets.

 

Delays in finalising this project belie the fervour with which convergence has been pursued. Indeed, broad consultation with the IASB’s Expert Advisory Panel and supervisors and stakeholders in countries the world over has uncovered how divergent current practices are. At the heart of this endeavour, the urgency behind replacing divergent practices which have hurt entities during the financial crisis vies with the necessity for operational, informative, comparable, and consistently applied standards to supersede them.

 

Expected loss provisioning seeks to estimate losses on a probabilistic ‘expected value’ basis.

1. Introduction

The IASB and the FASB have been working toward aligning the IFRS and US GAAP standards they disseminate for close to ten years now. The Group of Twenty (G20) talks at the 2009 Pittsburgh Summit, however, urged the two boards to re-double their efforts in finalising a common set of high quality accounting standards.

 

Valuation practices for financial instruments played an important role in the run up to the financial crisis. The G20 believes that inadequate and divergent accounting standards served to mask impending trouble and exacerbate it once it arrived. Accounting standards relating to the classification and measurement of financial instruments have come under intense scrutiny as a result.

The G20’s charge to standard-setters in this regard was to:

 

  • Simplify the accounting standards for financial instruments; and
  • Place more emphasis on asset impairment measurement by integrating more credit-related information.

 

Currently, in IASB jurisdictions, IAS 39 prescribes the treatment of financial assets and liabilities in terms of recognition, measurement, impairment, and hedging. It is the intention of the IASB to replace IAS 39 entirely with a new standard, IFRS 9. Development has been divided into three phases:

 

  1. Classification and Measurement;
  2. Impairment methodology (working jointly with the FASB); and
  3. Hedge Accounting.

 

Despite completion of the project having been initially envisaged for the second half of 2011, only the first phase is complete. Consequently, in December 2011 the IASB issued a proposal to defer the mandatory date of IFRS 9 application to 1 January 2015, from 1 January 2013. However, early adoption is permitted and comparative-period financial statements need not be restated.

 

2. Current definition of Impairment

Under IAS 39 all financial assets must be measured at fair value at initial recognition. For subsequent measurement, assets are classified into four categories: financial assets at fair value through profit or loss; available for-sale financial assets; loans and receivables; and held-to-maturity investments. The latter two categories of assets are subsequently measured at amortised cost, and need to be tested for impairment at the end of each reporting period so that any gains or losses in value can be ascertained. For assets measured at fair value, gains and losses are naturally determined in updating their fair value.

 

Impairment can be generically defined as follows:

 

Impairment is the difference between the initial recorded value of an asset (carrying value) and the value of economic benefits (e.g. resale value, or contractual cash flows) that will conceivably accrue to the entity from that asset.

 

IAS 39 uses an incurred loss model to determine impairment of financial assets measured at amortised cost. The model is so called since it requires that there be objective evidence that an impairment loss has been incurred. The amount of the loss is measured as the difference between the asset’s carrying amount and the present value of estimated future cash flows (excluding future credit losses that have not been incurred) discounted at the financial asset’s original effective interest rate; i.e. the effective interest rate computed at initial recognition.
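Expressed as a formula – a restatement of the definition above rather than a quotation from IAS 39 – the impairment loss under the incurred loss model is:

$$\text{Impairment loss} = \text{Carrying amount} - \sum_{t} \frac{\mathbb{E}[\mathrm{CF}_t]}{(1 + r_{\mathrm{eff}})^{t}}$$

where $\mathrm{CF}_t$ are the estimated future cash flows (excluding future credit losses that have not been incurred) and $r_{\mathrm{eff}}$ is the effective interest rate computed at initial recognition.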

 

3. The problem with IAS 39

IAS 39 is a backward-looking impairment model based on financial assets’ actual behaviour, reliant on detection of loss events that have occurred. It effectively assumes that past behaviour is indicative of the future as it takes no account of perceived future losses and economic conditions. As a result allowances for credit losses have typically been found to be too low as the economy slows and greater losses are experienced.

 

That non-incurred future losses are ignored in determining the present value of estimated future cash flows means that, depending on current economic conditions and how much history is factored in, impairment allowances are too low. Inasmuch as incurred losses act as the triggers for impairment, allowance for loss is made too late.

 

The incurred loss framework leads to an inconsistency between the initial measurement of financial assets and their subsequent (amortised cost) measurement and impairment. The initial fair value of the asset takes into account its credit risk (the premium over the basic interest rate for credit risk), thus implicitly accounting for expected future credit losses. However, expected losses are expressly ignored when determining the effective interest rate with which to subsequently value the asset at amortised cost. The inflated effective interest rate means interest revenue is overstated in the periods before a loss event occurs and the resulting impairment losses are therefore partly adjustments of that inappropriate revenue recognition.

 

4. The changes IFRS 9 will bring

IFRS 9 (Phase 1) narrowed the four subsequent asset measurement categories of IAS 39 to two: assets measured at fair value; and those measured at amortised cost. Although the finer details of the new impairment model (Phase 2) have not been finalised, the major concepts have been laid down and agreed on by the two boards. The move is one away from incurred loss provisioning to expected loss provisioning. In this way, a statistical approach is implicitly incorporated in that forward-looking estimates of future cash flows are sought. The guiding principle in terms of impairment recognition is to reflect the deterioration in the credit quality of the financial assets.

 

To that end the boards have proposed a three-bucket structure into which assets will be placed depending on the deterioration of their credit quality since initial recognition. Bucket 1 is where financial assets would be placed at initial recognition. Here only a portion of lifetime expected losses is recognised as an impairment allowance. The transfer to Buckets 2 and 3 will depend on the ensuing deterioration in the credit quality of the assets. In these two buckets, the full amount of lifetime expected losses would be recognised as an allowance.

 

Bucket 1 impairment measurement and recognition

 

The period and recognition of expected lifetime losses is under review. For Bucket 1 assets, losses expected to materialise over either an emergence period or a set number of months will be recognised as a loss allowance.

 

When to recognise lifetime expected losses

 

This concept is tantamount to deciding when to transfer an asset to Bucket 2 or 3 – see “Differentiating between Bucket 2 and Bucket 3” below – since lifetime losses are then recognised. This decision will depend on the definition or guidance provided relating to ‘meaningful’ deterioration of credit quality.

 

Grouping of Assets

 

Assets are to be grouped into categories by which to evaluate whether transfer out of Bucket 1 is appropriate. The principles behind this categorisation are:

 

  • Assets are to be grouped on the basis of ‘shared risk characteristics’;
  • Assets may not be grouped at a more aggregated level if there are shared risk characteristics for a sub-group that would indicate that recognition of lifetime losses is appropriate;
  • If assets cannot be appropriately grouped, or are individually significant, then those assets are to be evaluated individually;
  • An entity may evaluate assets within a group of similar assets with shared risk characteristics, or individually.

 

Differentiating between Bucket 2 and Bucket 3

 

One approach suggests there is simply a "unit of evaluation" difference between Bucket 2 and Bucket 3, in that Bucket 2 only includes financial assets evaluated collectively and Bucket 3 only includes financial assets evaluated individually. Alternatively, the transfer point from Bucket 2 to Bucket 3 would be based on deterioration to a particular level of credit risk. In this case, that level would need to be defined.

 

Application of the credit deterioration model to publicly traded debt instruments and loans

 

Credit impairment is typically evaluated for debt securities (typically carried at amortised cost) on an instrument-by-instrument basis as in most cases the security represents a unique instrument. The boards do not expect the three-bucket approach to change this practice, but the “Grouping of Assets” guidance above still allows for any collective assessment. Although explored, the boards decided against the use of any explicit rules to trigger recognition of lifetime expected credit losses, such as if the fair value of the security (which should be available since the securities are publicly traded) is less than a certain percentage of its amortised cost over a specified time period.

 

Similarly, for commercial and consumer loans, the boards also rejected including a presumption of when recognition of lifetime losses is appropriate, such as reaching a particular delinquency status.

 

Estimating expected losses as an ‘expected value’

 

In March 2011, the boards suggested that expected losses should be estimated under the framework of an “expected value”. Methods by which to implement this framework include:

  • Estimating expected losses based on the probability-weighted mean of possible outcomes, starting with appropriate loss rates, say;
  • Estimating the amount of cash flows expected not to be recovered using supportable historical, current, and forecasted information;
  • Using probabilities of default (PD), loss given default (LGD), and exposure at default (EAD) estimates as in the Basel II regulatory capital framework (a minimal sketch of this approach follows the list).
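A minimal sketch of the 'expected value' idea, assuming the Basel-style PD/LGD/EAD decomposition mentioned in the last bullet (all figures are hypothetical):

```python
# Minimal sketch of estimating expected credit losses as an "expected value".
# Two common views: a probability-weighted mean over possible loss outcomes,
# and the Basel-style point estimate EL = PD * LGD * EAD. Inputs are hypothetical.

# View 1: probability-weighted mean of possible loss outcomes
scenarios = [
    (0.90, 0.0),      # (probability, loss) - no default
    (0.07, 20_000),   # partial loss
    (0.03, 60_000),   # severe loss
]
expected_loss_weighted = sum(p * loss for p, loss in scenarios)

# View 2: PD / LGD / EAD decomposition
pd_, lgd, ead = 0.10, 0.45, 100_000
expected_loss_basel = pd_ * lgd * ead

print(f"Probability-weighted expected loss: {expected_loss_weighted:,.0f}")
print(f"PD x LGD x EAD expected loss:       {expected_loss_basel:,.0f}")
```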

 

In future meetings, it might be decided that further application guidance is necessary as to other appropriate methods that could be used. On the other hand, the boards may conclude that certain methods actually are not justifiable means of achieving the expected value objective.

 

Other considerations

 

The boards plan to consider the principle behind recognition of lifetime losses being applied to assets that improve in credit quality so as to warrant a transfer from Bucket 2 to Bucket 1. Practical applications of the expected value objective will be further explored.

 

5. Conclusion

A simplified, more principles-based approach to asset impairment has been sought. Such a framework is more flexible and customisable, which promotes more meaningful results for an organisation. At the same time, robust disclosure requirements and appropriate implementation advice will need to be put in place in order to protect the interests of comparability and consistency of application.

 

It is within the framework of expected loss measurement that decisive guidance is essential as it forms the basis of impairment measurement. A Basel II expected loss approach, as described above, would be favourable in that impairment and credit risk management would bear closer ties and existing systems and data collections could be utilised. Regardless of the method, expected loss estimation and the need to revise these at each reporting period could prove operationally burdensome to implement.

 

Monocle Solutions has extensive international experience in data management, risk modelling and implementation, and business strategy. Monocle is able to identify challenges and pitfalls and how practicably to overcome them. Our know-how and unique approach, along with keeping abreast of the latest developments in risk and other arenas, ensures that we consistently deliver relevant, valuable solutions.


Research
Executive Summary

A model may be defined as a mathematical process, whether deterministic or stochastic in nature, which transforms raw data into a synthetic score upon which decisions can be made in the acquisition of new business, the mitigation of risk, and the planning of future scenarios.

 

A model should not be seen as a separate or distinct application within the organisation, but rather as a piece of intellectual property originating through specific underlying factors unique to that organisation. These factors include input and output data, statistical regressions and estimations (performance of the model), information for decision making (model output), personnel training and support (administrative information), and reporting (validation). Without proper management and monitoring of the model and all its underlying factors the model may produce inaccurate results, or become redundant in the organisational structure, resulting in a monetary loss.

 

This paper focuses on the threats to an organisation without proper model management, and the management of such threats across the model life-cycle.

 

Models are complex applications which affect the manner in which a corporation makes its day-to-day decisions, as well as the way in which the organisation reports to its regulator. Without proper model management and monitoring, a corporation may suffer monetary losses, either by basing decisions on incorrect information or through the impact on regulatory capital.

Model Related Threats to an Organisation

Various types of models are utilised within the day-to-day business of an organisation. In the context of the type of model, or the business motivation thereof, the output calculated or derived from the model will have an impact on management decisions. These decisions could involve introducing a new product, entering a new market, providing credit to a risky client, assessing the premium or interest charge on a product to a specific customer, or financial planning, among many others. Crucial business decisions based on inaccurate information may jeopardise the future of the organisation. Business models should be seen by the organisation as an investment of similar consequence to purchasing a new office building or even the acquisition of a new business. Investments such as these are under constant scrutiny to ensure that the required return is realised.

 

The initial drive to implement a business model in an organisation originates from those underlying factors which require a fast reaction to achieve full benefit. Some examples include the need to comply with regulatory requirements, financial planning for assets or a business venture, the monitoring of risky customers, and striving to outperform competitors in a particular business line. In the initial momentum behind such drives, best practice procedures with respect to the development, implementation, and maintenance of models are often an afterthought.

 

A prominent example is the immense range of challenges the banking industry has confronted in complying with regulatory standards. Three approaches exist within the Basel II directive for the calculation of a bank's regulatory capital. The Standardised Approach uses simple risk weights and hence requires no statistical models. The Foundation Internal Ratings Based Approach (FIRB) requires probability of default models, while the Advanced Internal Ratings Based Approach (AIRB) requires probability of default (PD), exposure at default (EAD), loss given default (LGD) and effective maturity (M) models across the relevant asset classes.
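For context, the Basel II corporate IRB risk-weight function shows how these model outputs feed the capital calculation – this is the published Accord formula, quoted here for reference rather than derived in this paper:

$$K = \left[ \mathrm{LGD} \cdot N\!\left( \frac{N^{-1}(\mathrm{PD}) + \sqrt{R}\, N^{-1}(0.999)}{\sqrt{1 - R}} \right) - \mathrm{PD} \cdot \mathrm{LGD} \right] \cdot \frac{1 + (M - 2.5)\, b}{1 - 1.5\, b}$$

where $N$ is the standard normal distribution function, $R$ the supervisory asset correlation, and $b = (0.11852 - 0.05478 \ln \mathrm{PD})^2$ the maturity adjustment; risk-weighted assets are then $K \times 12.5 \times \mathrm{EAD}$. Errors in any one of the underlying PD, LGD, EAD or M models therefore flow directly into regulatory capital.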

 

In order to derive the potential benefit of a reduced regulatory capital requirement using the AIRB calculation, numerous banks developed and implemented their models in an extremely short space of time. At that stage, credit modelling was a fairly new concept to banks, with the result that the necessary processes and structures were not put in place. Inevitably, this lack of processes and structures allowed inaccuracies, errors, and omissions to creep into the model environment over time, giving rise to potential capital penalties rather than the envisaged capital reduction. For an organisation to fully comprehend a model and its output, a few factors should be taken into account:

 

  • What was the initial purpose of the model?
  • At what stage is the model in its life cycle?
  • How was the model developed?
    • Who developed the model?
    • What methodologies / parameters does the model utilise to estimate its final output?
    • Is there a clear understanding of the data requirement of the model?
    • What data was utilised with the development and initial calibration of the model?
    • How was the model initially validated to the regulator? Could this validation be replicated to the regulator if required?
  • When last was the model calibrated? Could this calibration be replicated to determine its current parameters?
  • Who is responsible for the day-to-day operation of the model? Does this person have the necessary understanding / support to operate the model?
  • Is the data utilised by the model of the required quality? Is this data accessible in a controlled environment?
  • What is the change control procedure which a model should adhere to? Are these changes properly documented?
  • Is there any documentation available for the model?

 

This is merely some of the information which an organisation needs to fully comprehend to grasp the output and workings of a single model. If this information cannot be obtained, an extensive model analysis may be required. Poor model governance may result in a necessity to conduct a full scale analysis, whether by costly internal resources or external consultancy.

 

Many organisations operate numerous models across various divisions, each serving a different purpose, and with disparate mechanisms and integration lines into the organisation.

More alarming yet is the fact that many organisations operate numerous models across various divisions, each serving a different purpose, and with disparate mechanisms and integration lines into the organisation. If there are no structures and procedures according to which all these models are governed on a group consolidated level, an organisation will suffer the consequences of having implemented models which are or may become inaccurate and / or redundant.

 

With proper model management from the initial development of a model, all necessary information and data is immediately available to management to rectify any underlying issues, as well as providing the ability to comprehend the originating source of these issues. It should be performed in a structured manner through all the phases of the model’s lifecycle (Analysis, Development, Implementation, and Business As Usual (BAU)).

 

Management and Monitoring

In order to understand how a model is monitored and managed in a structured environment an understanding of the impacting factors is required. Four areas can be identified which need to be taken into consideration with the management of a model, namely:

 

  • Data Management
  • Model Performance Management
  • Model Monitoring and Reporting
  • Administrative Information Management

 

These areas should not be assessed and analysed separately, but rather as intertwined pieces which contribute to the final output of the model. If just one of these factors is neglected, model performance and model results may become questionable.

 

Data Management

The quality and availability of the relevant data determines the strength of a model. Without proper data the model will not be able to function at an optimal level. Data management contributes to the performance of a model. It should be performed throughout the model’s lifecycle and should include:

 

  • Development Data Information, which refers to the data used in the development of the model.
  • Calibration Data, which is used to calibrate the model.
  • Validation Data, which is used in the validation of the model to regulators.
  • Production Model Data, which is the data used in the day-to-day running of the model.

 

However, within each phase there are specific factors which need to be taken into account, as outlined below:

 

 

Data Requirement – A clear understanding should be available of the data fields / attributes applicable to a model for both its input as well as its output.

 

Data Availability – All sources from which the model derives the relevant data should be known to the organisation. These data fields should be monitored to ensure that data is not removed / manipulated by other business lines in the organisation.

 

Data Accessibility and Control – This refers purely to the transfer of data between source, model, and data warehouse. It is vital that the data pulls through to its destination in the desired format and that no undesired manipulation occurs.

 

Data Archiving – This involves the storage of data utilised by the model. It is important to have proper version / timestamp control on each data set / sample utilised in the model.

 

Data Quality – Data Quality management should be an ongoing objective of the organisation. It forms part of the data governance of the organisation and should be maintained by all data users, custodians, and data owners. The quality of data is correlated with the performance of a model.

 

Model Performance Management

Transformation in the organisation and industry in which an organisation operates has a huge impact on the requirements of a specific model. In an environment characterised by transformation a model may have a limited lifespan in terms of its functionality, predictive power, and ultimately its value to the organisation. It is for this reason that model performance and significance should be constantly monitored.

 

Model performance management ensures and encourages the accuracy, robustness, and timeliness of a model to ultimately produce meaningful and valuable output. In order for a model to operate at an optimal level, standards and controls should be implemented and monitored on a consistent basis. An organisation should assess model performance on an ongoing basis to identify whether the model should be:

 

  • Recalibrated – This may enhance the predictive power of the parameters in the model.
  • Redeveloped – If certain variables / parameters / methodologies have become obsolete, the model developer may decide to replace the model with a more robust solution.
  • Replaced – A better suited or more robust solution has been identified to replace the current model.
  • Decommissioned – The need for the model and its output has fallen away; the model is therefore decommissioned.

 

The performance assessment of a model is purely based on the statistical standards implemented, as well as the calibration thereof.

 

Statistical Standards

 

There should be a clear connection between the final result and the input data, assumptions and methodologies used to derive it. Statistical standards refer to the manner in which the model was developed as well as the overall functioning of the model. They mainly involve the statistical techniques the model is based on and the parameters utilised within these methodologies.

 

Calibration

 

Calibration refers to the adjustments made to the parameters utilised within the model using statistical techniques and sample data to enhance the model’s optimal performance. Management of the calibration process should include identifying and archiving the sample data utilised, the statistical techniques employed, and the results implemented in the model from the calibration. An organisation should be able to replicate the calibration of a model.

 

Monitoring and Reporting

The previous section touched on the monitoring of the statistical standards and calibration of the model. In fact, the input and output of the model should be subject to continual review. In particular, regular sanity checks should be conducted on the output of a model so that infeasible results can be responded to promptly. If the monitoring process complies with set standards, potential issues can be rapidly and easily identified through exception reporting.

 

The first instance of actual monitoring and reporting of a model is within the validation phase of the model. From a supervisory point of view validation is required to prove the robustness and suitability of the model in the overall framework of an organisation. Validation is purely the responsibility of the organisation, but must be reported to its supervisor or regulator. Although the validation is mainly required for regulatory purposes, an organisation should implement frequent validations to ensure that the model is functioning as per its initial assessment and purpose.

 

Information Management

Analysis of an issue requires a basic background understanding. This often proves impossible given the state of the administrative information typically available for a model. Administrative information refers to, amongst others, documentation stipulating the methodologies, business process documentation, user manuals, contact personnel, important dates, version control, and software and technology information. This information relates to the operation of the model within the organisation. All administrative information should be seen as a crucial part of the assessment and monitoring of a model at all phases of the model's lifecycle.

 

Model Environment

With the appropriate measures in place for the management of individual models, the question arises as to how an organisation can centrally consolidate all of its implemented models. It is recommended that this be achieved by creating a model information and assessment environment at group consolidation level. This environment should cater for access to each model, its current and historical data, the administrative information database, and the reporting framework.

 

 

The purpose of a model environment is to ease the use and maintenance of a model for all model users and owners. It should provide a single interface from which all activities and information pertinent to a model can be retrieved. A model-friendly framework will enable an organisation to monitor its models on a day-to-day basis. Ongoing monitoring assists users in gaining a better understanding of the models they utilise, identifying any unforeseen issues or constraints, and, if managed correctly, enhancing not only the performance of the models but also the quality of the data they utilise.

 

Conclusion

Implementing world class models in an organisation, whether for decision-making or regulatory purposes, is only the start of model management. Providing adequate support to each model through a structured framework, and thereafter centrally consolidating the management of models in a model environment, not only improves model performance, but also creates the capability to identify lurking problems which under normal circumstances would go overlooked.

 

Monocle has extensive experience in assisting organisations around the world in the areas of model development, model validation, and model management.


Research

We know that it makes little sense to hold derivative instruments at book value. A derivative can be extremely volatile in terms of its value at any given point in time, owing to small fluctuations in its underlying risk factors and market conditions, whereas the physical assets more traditionally held at book value are far less volatile. Until the financial crisis hit, an accounting treatment that seemed extremely sensible was that everything not held to maturity is assumed to be traded, for which it then makes sense to mark-to-market or mark-to-model. When the financial crisis hit and the value of those instruments tanked, market participants began to protest against writing unrealised losses to the income statement and the resulting impact on capital through the erosion of retained earnings. The rationale behind these protests was that market values were ‘wrong’. But as John Maynard Keynes once observed, “The market can stay irrational [far] longer than you… can stay solvent”.

 

It would of course be absurd to completely do away with fair value accounting. It is absurd to be allowed to trade in instruments without being required to write unrealised losses and gains to the income statement. One could indefinitely manipulate one’s income statement, capital, and solvency by simply not trading in those instruments that would realise a loss.

 

To address these concerns the IASB embarked upon the IFRS 9 project with the ultimate aim of reducing the complexity in reporting financial instruments. Phase one of this project culminated in November 2009 with the release of the first part of IFRS 9 Financial Instruments, which introduced new requirements for the classification and measurement of financial assets. Phase two was finalised in October 2010 with the reissuance of IFRS 9, incorporating new requirements for the accounting of financial liabilities, and also existing requirements for the derecognition of financial instruments. The third and final phase for the impairment of financial assets and hedge accounting was to have been completed by the end of June 2011.

 

In a nutshell, what the standard setters had realised was that it would be appropriate to make the accounting standards more straightforward to follow and less prone to abuse. At the same time, the standards should continue to penalise the use of very thinly traded instruments at book value, and also force firms to recognise unrealised losses and gains on instruments that could intrinsically be extremely risky.

 

IFRS 9 allows only two primary measurement categories for financial assets – fair value and amortised cost. The existing IAS 39 categories of “Held-to-maturity”, “Available-for-sale”, and “Loans and Receivables” have been eliminated. An asset is assigned to the amortised cost measurement category if the business model within which it is held has the objective of holding assets to collect contractual cash flows, and the terms of which give rise to cash flows comprised of principal and interest on specified dates. All other financial assets are designated fair value. Given that an asset is measured at fair value, all changes in fair value are recognised in profit and loss, with the exception of equity. Entities may choose, on a case by case basis, to recognise gains and losses on equity holdings not held for trading in other comprehensive income, with no recycling of gains or losses or recognition of impairments in profit and loss.
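A minimal sketch of this classification logic, assuming the two tests described above (the function name and boolean inputs are illustrative rather than IFRS 9 terminology):

```python
# Minimal sketch of the IFRS 9 classification decision described above.
# An asset is measured at amortised cost only if (1) the business model's
# objective is to hold assets to collect contractual cash flows, and
# (2) those cash flows are solely principal and interest on specified dates.
# Otherwise it is measured at fair value. Names here are illustrative.

def classify_financial_asset(held_to_collect: bool,
                             cash_flows_solely_principal_and_interest: bool) -> str:
    if held_to_collect and cash_flows_solely_principal_and_interest:
        return "amortised cost"
    return "fair value"

print(classify_financial_asset(True, True))    # amortised cost
print(classify_financial_asset(True, False))   # fair value
print(classify_financial_asset(False, True))   # fair value (trading business model)
```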

 

Another fascinating development, one that is in alignment with Solvency II in the insurance industry, is that IFRS 9 now not only restates financial liabilities in terms of an economic or total balance sheet approach, but is also based on an extremely statistical, actuarial view. The proposal takes a principles-based approach to accounting for financial liabilities on the premise that an insurance contract creates a bundle of rights and obligations that generate cash inflows (premiums) and cash outflows (claims, benefits, and expenses). The uncertainty inherent to those cash flows is reflected in a risk margin which is determined on the basis of standard deviation (or VaR), conditional tail expectation (or TVaR), or the cost of capital. The application of these statistical tools is in fact far more in keeping with the nature of the insurance industry, and the predominant role played by actuaries therein, in which the estimation of forward looking probabilities play such a crucial role.

 

The new provisioning standard also (implicitly) incorporates a statistical approach with the move away from incurred loss provisioning by the introduction of expected loss provisioning. Incurred loss provisioning can be seen as akin to backward looking provisioning based on financial assets’ actual behaviour. The treatment of impaired assets under IFRS 9 now consists of a dual impairment model driven off the credit characteristics of the loan book, which is also consistent with how banks manage credit risk, often referred to as a “good book” / “bad book” approach. Impairments on the bad book are recognised immediately, whereas impairments on the good book are the expected losses amortised according to a “time-proportional amount” or over a “foreseeable future period”. The IASB does not prescribe any particular statistical methodology to calculate expected losses.
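On one reading of the good book treatment described above – a sketch of the "time-proportional amount" idea with a floor at losses expected over a foreseeable future period, not the boards' prescribed mechanics – the allowance might be built up as follows:

```python
# Sketch of a "time-proportional amount" style allowance for the good book,
# as one possible reading of the proposal described above. All figures and
# the exact mechanics are illustrative assumptions.

def good_book_allowance(lifetime_expected_loss: float,
                        portfolio_age_years: float,
                        expected_life_years: float,
                        foreseeable_period_loss: float) -> float:
    # Lifetime expected losses recognised in proportion to the book's age...
    time_proportional = lifetime_expected_loss * min(
        portfolio_age_years / expected_life_years, 1.0
    )
    # ...but never less than losses expected over the foreseeable future period.
    return max(time_proportional, foreseeable_period_loss)

# Example: 3-year-old book with a 10-year expected life
print(good_book_allowance(
    lifetime_expected_loss=50_000,
    portfolio_age_years=3,
    expected_life_years=10,
    foreseeable_period_loss=12_000,
))  # 15000
```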

 

No less important than the reduction in complexity associated with the reporting of financial instruments is the move toward convergence between the IASB and FASB standards that IFRS 9 represents. The emergence of a truly international standard should substantially elevate the transparency and comparability of large international institutions’ financial reporting. This is of particular importance in respect of the size of banks’ balance sheets. US GAAP nets out derivative positions, unlike European IFRS which deals with gross exposures, resulting in higher reported assets under IFRS. The difference is non-trivial. For Deutsche Bank, which reports its balance sheet under both standards, 2008 assets on balance sheet were $2 trillion under IFRS and $1 trillion under US GAAP, but with the same reported equity under both standards. Such a difference makes international comparisons of leverage all but impossible, hampering investor risk assessments and supervisory assessments.

 

Our review of IFRS 9 suggests that the new standards hold the promise of accounting for financial instruments in future that is much more closely aligned to economic substance. While the apparent failure of the efficient markets hypothesis implies that the fair value debate will never be satisfactorily resolved, the IASB seems to have deftly side-stepped the issue. By implementing the business model approach it will at least be clear to accountants, investors, and regulators to which of either category fair value or amortised cost an asset belongs. Moreover the adoption of a provisioning standard based on expected losses means that the carrying value of assets will reflect actual, anticipated losses on a forward looking probabilistic basis.

