Markus Brunnermeier and Arvind Krishnamurthy have assembled contributions from leading academic researchers, central bankers, and other financial-market experts to explore the possibilities for advancing macroeconomic modeling in order to achieve more accurate economic measurement. Essays in this volume focus on the development of models capable of highlighting the vulnerabilities that leave the economy susceptible to adverse feedback loops and liquidity spirals. While these types of vulnerabilities have often been identified, they have not been consistently measured. In a financial world of increasing complexity and uncertainty, this volume is an invaluable resource for policymakers working to improve current measurement systems and for academics concerned with conceptualizing effective measurement.
Publisher: University of Chicago Press
Series: National Bureau of Economic Research Conference Report
Systemic Risk and Macro Modeling
By Markus Brunnermeier, Arvind Krishnamurthy
The University of Chicago Press. Copyright © 2014 National Bureau of Economic Research
All rights reserved.
Challenges in Identifying and Measuring Systemic Risk
Lars Peter Hansen
Discussions of public oversight of financial markets often make reference to "systemic risk" as a rationale for prudent policy making. For example, mitigating systemic risk is a common defense underlying the need for macroprudential policy initiatives. The term has become a grab bag, and its lack of specificity could undermine the assessment of alternative policies. At the outset of this essay I ask: should systemic risk be an explicit target of measurement, or should it be relegated to being a buzzword, a slogan, or a code word used to rationalize regulatory discretion?
I remind readers of the dictum attributed to Sir William Thomson (Lord Kelvin):
I often say that when you can measure something that you are speaking about, express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of the meagre and unsatisfactory kind: it may be the beginning of knowledge, but you have scarcely, in your thoughts, advanced to the stage of science, whatever the matter may be.
While Lord Kelvin's scientific background was in mathematical physics, discussion of his dictum has pervaded the social sciences. An abbreviated version appears on the Social Science Research building at the University of Chicago and was the topic of a published piece of detective work by Merton, Sills, and Stigler (1984). I will revisit this topic at the end of this essay. Right now I use this quote as a launching pad for discussing systemic risk by asking if we should use measurement or quantification as a barometer of our understanding of this concept.
One possibility is simply to concede that systemic risk is not something that is amenable to quantification. Instead it is something that becomes self-evident under casual observation. This is quite different from Kelvin's assertion about the importance of measurement as a precursor to some form of scientific understanding and discourse. Kelvin's view was that for measurement to have any meaning requires that (a) we formalize the concept that is to be measured, and (b) we acquire data to support the measurement.
The need to implement new laws with expanded regulation and oversight puts pressure on public sector research groups to develop quick ways to provide useful measurements of systemic risk. This requires shortcuts, and it can also breed superficial answers. These short-term research responses will be revealing along some dimensions, providing useful summaries from new data sources, or at least from data sources that have been largely ignored in the past. But stopping with short-term or quick answers can lead to bad policy advice and should be avoided. It is important for researchers to mount a broader and more ambitious attack on the problem of building quantitatively meaningful models with macroeconomic linkages to financial markets. Appropriately constructed, these models could provide a framework for the quantification of systemic risk.
In the short run, we may be limited in our ability to provide meaningful quantification. Perhaps we should defer and trust our governmental officials engaged in regulation and oversight to "know it when they see it." I have two concerns about leaving things vague, however. First, it opens the door to a substantial amount of regulatory discretion. In extreme circumstances that are not well guided by prior experience or supported by economic models that we have confidence in, some form of discretion may be necessary for prudent policy making. However, discretion can also lead to bad government policy, including the temptation to respond to political pressures. Second, it makes criticism of measurement and policy all the more challenging. When formal models are well constructed, they facilitate discussion and criticism. Delineating the assumptions required to justify conclusions disciplines the communication and commentary necessary to nurture improvements in models, methods, and measurements. This leads me to be sympathetic to a longer-term objective of exploring policy-relevant notions of the quantification of systemic risk. If we embark on this ambitious agenda, we should do so with open eyes and a realistic perspective on the measurement challenges. In what follows, I explore these challenges, in part, by drawing on the experience from other such research agendas within economics and elsewhere.
In the remainder of this essay: (a) I explore some conceptual modeling and measurement challenges, and (b) I examine these challenges as they relate to existing approaches to measuring systemic risk.
1.2 Measurement with and without Theory
Sparked in part by the ambition set out in the Dodd-Frank Act and similar measures in Europe, the Board of Governors of the Federal Reserve System and some of the constituent regional banks have assembled research groups charged with producing measurements of systemic risk. Such measurements are also part of the job of the newly created Office of Financial Research housed in the Treasury Department. Similar research groups have been assembled in Europe. While the need for legislative responses puts pressure on research departments to produce quick "answers," I believe it is also critical to take a longer-term perspective so that we can do more than just respond to the last crisis. By now, a multitude of proposed measures exist and many of these are summarized in Bisias et al. (2012), where thirty-one ways to measure systemic risk are identified. While the authors describe this catalog as an "embarrassment of riches," I find this plethora to be a bit disconcerting. In describing why, in the next section, I will discuss briefly some of these measures without providing a full-blown critique. Moreover, I will not embark on a commentary of all thirty-one listed in their valuable and extensive summary. Prior to taking up that task, I consider some basic conceptual issues.
I am reminded of Koopmans's discussion of the Burns and Mitchell (1946) book on measuring business cycles. The Koopmans (1947) review has the famous title "Measurement without Theory." It provides an extensive discussion and sums things up saying:
The book is unbendingly empiricist in outlook.... But the decision not to use theories of man's economic behavior, even hypothetically, limits the value to economic science and to the maker of policies, of the results obtained or obtainable by the methods developed. (172)
The measurements by Burns and Mitchell generated a lot of attention and renewed interest in quantifying business cycles. They served to motivate development of both formal economic and statistical models. An unabashedly empirical approach can most definitely be of considerable value, especially in the initial stages of a research agenda. What is less clear is how to use such an approach as a direct input into policy making without an economic model to provide guidance as to how this should be done. An important role for economic modeling is to provide an interpretable structure for using available data to explore the consequences of alternative policies in a meaningful way.
In the remainder of this section, I feature two measurement challenges that should be central to any systemic risk measurement agenda. How do we distinguish systemic from systematic risk? How do we conceptualize and quantify the uncertainty associated with systemic risk measurement?
1.2.1 Systematic or Systemic
The terms systematic and systemic risk are sometimes confused, but their distinction is critical for both measurement and interpretation. In sharp contrast with the latter concept, the former is well studied and supported by extensive modeling and measurement. "Systematic risks" are macroeconomic or aggregate risks that cannot be avoided through diversification. According to standard models of financial markets, investors who are exposed to these risks require compensation because there is no simple insurance scheme whereby exposure to these risks can be averaged out. This compensation is typically expressed as a risk adjustment to expected returns.
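As a textbook illustration of such a risk adjustment (the CAPM, a standard example not invoked in the essay itself), the expected excess return on an asset is proportional to its exposure to the nondiversifiable market factor:

```latex
% CAPM: expected-return compensation for systematic risk.
% \beta_i measures asset i's exposure to the market return R_m;
% r_f is the risk-free rate.
\mathbb{E}[R_i] - r_f \;=\; \beta_i \left( \mathbb{E}[R_m] - r_f \right),
\qquad
\beta_i \;=\; \frac{\operatorname{Cov}(R_i, R_m)}{\operatorname{Var}(R_m)}
```

Idiosyncratic risk, which averages out in a diversified portfolio, earns no such compensation; only the systematic component does.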
Empirical macroeconomics aims to identify aggregate "shocks" in time series data and to measure their consequences. Exposure to these shocks is the source of systematic risk priced in security markets. These may include shocks induced by macroeconomic policy, and some policy analyses explore how to reduce the impact of these shocks to the macroeconomy through changes in monetary or fiscal policy. Often, but not always, as a separate research enterprise, empirical finance explores econometric challenges associated with measuring both the exposure to the components of systematic risk that require compensation and the associated compensations to investors.
"Systemic risk" is meant to be a different construct. It pertains to risks of breakdown or major dysfunction in financial markets. The potential for such risks provides a rationale for financial market monitoring, intervention, or regulation. The systemic risk research agenda aims to provide guidance about the consequences of alternative policies and to help anticipate possible breakdowns in financial markets. The formal definition of systemic risk is much less clear than its counterpart systematic risk.
Here are three possible notions of systemic risk that have been suggested. Some consider systemic risk to be a modern-day counterpart to a bank run triggered by liquidity concerns. Measurement of that risk could be an essential input to the role of central banks as "lenders of last resort" to prevent failure of large financial institutions or groups of financial institutions. Others use systemic risk to describe the vulnerability of a financial network in which adverse consequences of internal shocks can spread and even magnify within the network. Here the measurement challenge is to identify when a financial network is potentially vulnerable and the nature of the disruptions that can trigger a problem. Still others use the term to include the potential insolvency of a major player in or component of the financial system. Thus systemic risk is basically a grab bag of scenarios that are supposed to rationalize intervention in financial markets. These interventions come under the heading of "macroprudential policies." Since the Great Recession was triggered by a financial crisis, it is not surprising that there were legislative calls for external monitoring, intervention, or regulation to reduce systemic risk. The outcome is legislation such as the rather cumbersome and still incomplete 2,319-page Dodd-Frank Wall Street Reform and Consumer Protection Act. The set of constructs for measurement to support prudent policy making remains a challenge for future research.
Embracing Koopmans's call for models is appealing as a longer-term research agenda. Important aspects of his critique are just as relevant to current systemic risk measurement as they were to Burns and Mitchell's business cycle measurement.
1.2.2 Systemic Risk or Uncertainty
There are important conceptual challenges that go along with using explicit dynamic economic models in formal ways. Paramount among these is how we confront risk and uncertainty. Economic models with explicit stochastic structures imply formal probability statements for a variety of questions related to implications and policy. Uncertainty also arises from limited data, unknown models, and misspecification of those models. Policy discussions too often lean toward ignoring the full scope of uncertainty, but abstracting from its measurement can result in flawed policy advice and implementation.
There are various approaches to uncertainty quantification. While there is well-known and extensive literature on using probability models to support statistical measurement, I expect special challenges to emerge when we impose dynamic economic structure onto the measurement challenge. The discussion that follows is motivated by this latter challenge. It reflects my own perspective, not necessarily one that is widely embraced. My perspective is consonant, however, with some of the views expressed by Haldane (2011, 2012) in his discussions of policy simplicity and robustness when applied to regulating financial institutions.
I find it useful to draw a distinction between risk and alternative concepts better designed to capture our struggles with constructing fully specified probability models. Motivated by the insights of Knight (1921), decision theorists use the terms uncertainty and ambiguity as distinguished from risk. See Gilboa and Schmeidler (1989) for an initial entrant to this literature and Gilboa, Postlewaite, and Schmeidler (2008) for a recent survey. Alternatively, we can think of statistical models as approximations and we use such models in sophisticated ways with conservative adjustments that reflect the potential for misspecification. This latter ambition is sometimes formulated as a concern for robustness. For instance, Petersen, James, and Dupuis (2000) and Hansen and Sargent (2001) confront a decision problem with a family of possible probability specifications and seek conservative responses.
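As a sketch of the decision-theoretic formulations cited above (the symbols here are generic illustrations, not drawn from the essay): the maxmin expected utility criterion associated with Gilboa and Schmeidler evaluates an action against the worst case over a set of priors, while the robustness formulations of Petersen, James, and Dupuis and of Hansen and Sargent penalize deviations from a benchmark model by relative entropy:

```latex
% Maxmin expected utility: action a evaluated under the worst-case
% probability P drawn from a set of priors \mathcal{P} (Gilboa-Schmeidler).
\max_{a} \; \min_{P \in \mathcal{P}} \; \mathbb{E}_P\!\left[ u(a, \omega) \right]

% Robust control: worst case over models P, penalized by relative entropy
% R(P \| P_0) to a benchmark model P_0; \theta > 0 indexes confidence in P_0.
\max_{a} \; \min_{P} \; \left\{ \mathbb{E}_P\!\left[ u(a, \omega) \right]
  + \theta \, R(P \,\|\, P_0) \right\}
```

In the penalized formulation, a large θ expresses near-full trust in the benchmark model, while a small θ expresses greater fear of misspecification and pushes decisions toward more conservative responses.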
To appreciate the consequences of Knight's distinction, consider the following. Suppose we happen to have full confidence in a model specification of the macroeconomy appropriately enriched with financial linkages needed to capture system-wide exposure to risk. Since the model specifies the underlying probabilities, we could use it both to quantify systemic risk and to compute so-called counterfactuals. While this would be an attractive situation, it seems not to fit many circumstances. As systemic risk remains a poorly understood concept, there is no "off-the-shelf" model that we can use to measure it. Any stab at building such models, at least in the near future, is likely to yield, at best, a coarse approximation. This leads directly to the question: how do we best express skepticism in our probabilistic measurement of systemic risk?
Continuing with a rather idealized approach, we could formally articulate an array of models and weight these models using historical inputs and subjective priors. In practice this articulation appears overly ambitious, but it is certainly a worthy aim. Subjective inputs may not be commonly agreed upon, and historical evidence distinguishing the models may be weak. Making this approach operational leads naturally to a sensitivity analysis for priors, including priors over parameters and alternative models.
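The model-weighting exercise described here can be written, under standard Bayesian assumptions (the notation is mine, not the essay's), as computing posterior probabilities over a finite set of candidate models given data and then averaging predictions across them:

```latex
% Posterior model probabilities: prior weights \pi(M_k) updated by the
% marginal likelihoods p(y | M_k) of the observed data y.
\pi(M_k \mid y) \;=\;
  \frac{p(y \mid M_k)\,\pi(M_k)}{\sum_{j=1}^{K} p(y \mid M_j)\,\pi(M_j)}

% Model-averaged prediction of a quantity of interest z.
p(z \mid y) \;=\; \sum_{k=1}^{K} p(z \mid y, M_k)\,\pi(M_k \mid y)
```

The sensitivity analysis called for in the text then amounts to varying the prior weights π(M_k), and the priors over each model's parameters, and examining how much the averaged conclusions move.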
A model by its very nature is wrong because it simplifies and abstracts. Including a formal probabilistic structure enriches predictions from a model, but we should not expect such an addition to magically fix or repair the model. It is often useful to throw other models "into the mix," so to speak. The same limitations are likely to carry over to each model we envision. Perhaps we could be lucky enough to delineate a big enough list of possible models to fill the gaps left by any specific model. In practice, I suspect we cannot achieve complete success, and certainly not in the short term. In some special circumstances, the gaps may be negligible. Probabilistic reasoning in conjunction with the use of models is a very valuable tool. But often we suspect the remaining gaps are not trivial, and the challenge in using the models is how to express the remaining skepticism. Simple models can contain powerful insights even if they are incomplete along some dimensions. As statisticians with incomplete knowledge, how do we embrace such models, or collections of them, while acknowledging the skepticism that should justifiably go along with them? This is an enduring problem in the use of dynamic stochastic equilibrium models, and it seems unavoidable as we confront the important task of building models designed to measure systemic risk. Even as we add modeling clarity, in my view we need to abandon the presumption that we can fully measure systemic risk and instead go after the conceptually more difficult notion of quantifying systemic uncertainty. See Haldane (2012) for further discussion of this point.
What is at stake here is more than just a task for statisticians. Even though policy challenges may appear to be complicated, it does not follow that policy design should be complicated. Acknowledging or confronting gaps in modeling has long been conjectured to have important implications for economic policy. As an analogy, I recall Friedman's (1960) argument for a simplified approach to the design of monetary policy. His policy prescription was premised on the notion of "long and variable lags" in a monetary transmission mechanism that was too poorly understood to exploit formally in the design of policy. His perspective was that the gaps in our knowledge of this mechanism were sufficient that premising activist monetary policy on incomplete models could be harmful. Relatedly, Cogley et al. (2008) show how alternative misspecification in modeling can be expressed in terms of the design of policy rules. Hansen and Sargent (2012) explore challenges for monetary policy based on alternative specifications of incomplete knowledge on the part of a so-called "Ramsey planner." The task of this planner is to design formal rules for implementation. It is evident from their analyses that the potential source of misspecification can matter in the design of a robust rule. These contributions do not explore the policy ramifications for system-wide problems with the functioning of financial markets, but such challenges should be on the radar screen of financial regulation. In fact, implementation concerns and the need for simple rules underlie some of the arguments for imposing equity requirements on banks. See, for instance, Admati et al. (2010). Part of policy implementation requires attaching numerical values to parameters in such rules. Thus concerns about systemic uncertainty would still seem to be a potential contributor to the implementation of even seemingly simple rules for financial regulation.
Excerpted from Risk Topography by Markus Brunnermeier, Arvind Krishnamurthy. Copyright © 2014 National Bureau of Economic Research. Excerpted by permission of The University of Chicago Press.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
Table of Contents

Acknowledgments

Introduction
Markus Brunnermeier and Arvind Krishnamurthy
I. Measurement and Disclosure
1. Challenges in Identifying and Measuring Systemic Risk
Lars Peter Hansen
2. Regulating Systemic Risk through Transparency: Trade-Offs in Making Data Public
Augustin Landier and David Thesmar
II. Risk Exposures
3. Systemic Risk Exposures: A 10-by-10-by-10 Approach
4. Remapping the Flow of Funds
Juliane Begenau, Monika Piazzesi, and Martin Schneider
5. Measuring Margin
Robert L. McDonald
6. A Transparency Standard for Derivatives
Viral V. Acharya
III. Liquidity and Leverage
7. Liquidity Mismatch Measurement
Markus Brunnermeier, Arvind Krishnamurthy, and Gary Gorton
8. Monitoring Leverage
John Geanakoplos and Lasse Heje Pedersen
IV. Financial Intermediation and Credit
9. Repo and Securities Lending
Tobias Adrian, Brian Begalle, Adam Copeland, and Antoine Martin
10. Improving Our Ability to Monitor Bank Lending
William F. Bassett, Simon Gilchrist, Gretchen C. Weinbach, and Egon Zakrajšek
11. The Case for a Credit Registry
V. Household Sector
12. Monitoring the Financial Condition and Expenditures of Households
Robert E. Hall
13. LEADS on Macroeconomic Risks to and from the Household Sector
Jonathan A. Parker
14. Detecting “Bad” Leverage
VI. Corporate Sector
15. A Macroeconomist’s Wish List of Financial Data
V. V. Chari
VII. International Sector
16. Systemic Risks in Global Banking: What Available Data Can Tell Us and What More Data Are Needed?
Eugenio Cerutti, Stijn Claessens, and Patrick McGuire