Central Bankers' Speeches
Collection
Collections of speeches from Central Bankers. Elaborations from the Bank of International Settlements (BIS) Central Bankers' Speeches Dataset.
•
2 items
•
Updated
date
timestamp[ns] | title
stringlengths 19
231
| description
stringlengths 32
418
| text
stringlengths 263
106k
| mistral_ocr
stringlengths 13
95.6k
| author
stringclasses 117
values | country
stringclasses 2
values | url
stringlengths 38
46
| clean_text
stringlengths 0
80.9k
|
---|---|---|---|---|---|---|---|---|
1996-12-19T00:00:00 | Ms. Rivlin discusses the prudential regulation of banks and how to improve it (Central Bank Articles and Speeches, 19 Dec 96) | Remarks by the Vice Chairman of the Board of Governors of the US Federal Reserve System, Ms. Alice M. Rivlin, at the The Brookings Institution National Issues Forum in Washington on 19/12/96. | Ms. Rivlin discusses the prudential regulation of banks and how to improve
it Remarks by the Vice Chairman of the Board of Governors of the US Federal Reserve
System, Ms. Alice M. Rivlin, at the The Brookings Institution National Issues Forum in
Washington on 19/12/96.
I discovered when I joined the Board of Governors of the Federal Reserve System
about six months ago that most of my friends -- including my sophisticated public policy
oriented friends -- had only a hazy notion what their central bank did. Many of them said,
enthusiastically, "Congratulations!" Then they asked with a bit of embarrassment, "Is it a
full-time job?" or "What will you find to do between meetings?" The meetings they were aware
of, of course, were those of the Federal Open Market Committee. They knew that the FOMC
meets every six weeks or so to "set interest rates." That sounds like real power, so the FOMC
gets a lot of press attention even when, as happened again this week, we meet and decide to do
absolutely nothing at all.
The group gathered here today, however, realizes that monetary policy, while
important, is not actually very time-consuming. If you cared enough to come to this conference,
you also have a strong conviction that the health and vigor of the American economy depends
not only on good macro-economic policy, although that certainly helps, but also on the safety,
soundness and efficiency of the banking system. We need a banking system that works well and
one in which citizens and businesses, foreign and domestic, have high and well placed
confidence.
So I want to talk today, as seems appropriate on the fifth anniversary of FDICIA,
about the subject that occupies much of our attention at the Federal Reserve: the prudential
regulation of banks and how to improve it. Indeed, I want to focus today, not so much on what
Congress needs to do to ensure the safety and soundness of the bank system in this rapidly
changing world -- there are others on the program to take on that task -- but more narrowly on
how bank regulators should go about their jobs of supervising bank risk-taking.
The evolving search for policies that would guarantee a safe, sound and efficient
banking system has featured learning from experience. In the 1930s, Americans learned,
expensively, about the hazards of not having a safety net in a crisis that almost wiped out the
banking system. In the 1980s, they learned a lot about the hazards of having a safety net,
especially about the moral hazard associated with deposit insurance.
Deposit insurance, which had seemed so benign and so successful in building
confidence and preventing runs on banks, suddenly revealed its downside for all to see. Some
insured institutions, mostly thrifts, but also savings banks, and not a few commercial banks,
were taking on risks with a "heads I win, tails you lose" attitude -- sometimes collecting on high
stakes bets but often leaving deposit insurance funds to pick up the pieces. At the same time,
some regulators, especially the old FSLIC, which was notably strapped for funds, were
compounding the problem -- and greatly increasing the ultimate cost of its resolution -- by
engaging in regulatory "forbearance" when faced with technically insolvent institutions.
The lessons were costly, but Americans do learn from their mistakes. The
advocates of banking reform, many of them participants in this conference, saw the problems
posed by moral hazard in the context of ineffectual supervision and set out to design a better
system.
Essentially, the reform agenda had two main components:
* First, expanded powers for depository institutions that would permit them to
diversify in ways that might reduce risks and improve operating efficiency;
* Second, improving the effectiveness of regulation and supervision by
instructing regulators, in effect, to act more like the market itself when
conducting prudential regulation.
FDICIA was a first step toward meeting the second challenge -- how to make
regulators act more like the market. It called for a reduction in the potential for regulatory
"forbearance" by laying down the conditions under which conservatorship and receivership
should be initiated. It called for supervisory sanctions based on measurable performance (in
particular, the Prompt Corrective Action provisions that based supervisory action on a bank's
risk-based capital ratio). The Act required the FDIC and RTC to resolve failed institutions on a
least-cost basis. In other words, the Act required the depository receivers to act as if the
insurance funds were private insurers, rather than continue the past policy of protecting
uninsured depositors and other bank creditors. Finally, FDICIA placed limitations on the
doctrine of "Too Big To Fail," by requiring agency consensus and administration concurrence in
order to prop up any large, failing bank. In a few places, however, FDICIA went too far. The
provisions of the Act that dealt with micro management by regulators were immediately seen to
be "over the top," and were later repealed. The Act provided a framework for regulators to
invoke market-like discipline. It left room for them to move their own regulatory techniques in
this direction -- a subject to which I will return in a minute.
The other objective of reform -- diversification of bank activities through an
expansion of bank powers -- has not yet resulted in legislation and is still very much an on-going
debate. In part, this failure to take legislative action reflected the long-running ability of the
nonbank competition to use its political muscle to forestall increased powers for banks. But the
inaction on expanded powers also reflected a Congressional concern that additional powers
might be used to take on additional risk, which, on the heels of the banking collapse of the late
1980s, represented poor timing, to say the least. There was also some Congressional disposition
to punish "greedy bankers," who were seen as the reason for the collapse and the diversion of
taxpayer funds to pay for thrift insolvencies. Whatever the reasons, not only did the 102nd
Congress fail to enact expanded bank powers, but so did the next two Congresses. We are
hopeful that the 105th Congress will succeed where its predecessors have failed. Meanwhile, the
regulatory agencies have acted to expand bank powers within the limits of existing law.
The Federal Reserve has proposed both liberalization of Section 20 activities and
expedited procedures for processing applications under Regulation Y. The OCC has acted to
liberalize banks' insurance agency powers and, most recently, to liberalize procedures for
operating subsidiaries of national banks. Of course, I would have to turn in my Federal Reserve
badge and give up my parking pass if I did not mention that we at the Fed believe that some
activities are best carried out in a subsidiary of the holding company rather than a subsidiary of
the bank. We believe that the more distance between the bank and its new, nonbank operations,
the more likely that we can separate one from the other and avoid the spreading of the subsidy
associated with the safety net.
While the regulators can move in the right direction, it is still imperative that
Congress act. Artificial barriers between and among various forms of financial activity are
harmful to the best interests of the consumers of financial services, to the providers of those
services, and to the general stability and well-being of our financial system, most broadly
defined. Congress should consider this issue and take the next steps.
Let me turn now to what I consider to be one of the most critical issues facing
regulators, especially in a future in which financial markets likely will dictate significant further
increases in the scope and complexity of banking activities. I am referring to the issue of how to
conduct optimal supervision of banks. Fortunately, there appears actually to be an evolving
consensus at least on the general principle. Regulators, including the Federal Reserve, strongly
support the basic approach embodied in FDICIA; namely that regulators should place limits on
depository institutions in such a way as to replicate, as closely as possible, the discipline that
would be imposed by a marketplace consisting of informed participants and devoid of the moral
hazard associated with the safety net.
Unfortunately, as always, the devil is in the details. The difficult question is how
should a regulator use "market-based" or "performance-based" measures in determining which,
if any, supervisory sanctions or limits to place on a bank. FDICIA's approach was
straightforward. Supervisory sanctions under Prompt Corrective Action were to be based on the
bank's risk performance as measured by its levels of regulatory capital, in particular its leverage
ratio and total risk-based capital ratio under the Basle capital standards. These standards now
seem well-intended but rather outdated. Certainly, the Basle capital standards did the job for
which they were designed, namely stopping the secular decline in bank capital levels that, by the
late 1980s, threatened general safety and soundness. But the scope and complexity of banking
activities has proceeded apace during the last two decades or so, and standard capital measures,
at least for our very largest and most complex organizations, are no longer adequate measures on
which to base supervisory action for several reasons:
* The regulatory capital standards apportion capital only for credit risk and, most
recently, for market risk of trading activities. Interest rate risk is dealt with
subjectively, and other forms of risk, including operating risk, are not treated
within the standards.
* Also, the capital standards are, despite the appellation "risk-based," very much
a "one-size-fits-all" rule. For example, all non-mortgage loans to corporations
and households receive the same arbitrary 8 percent capital requirement. A
secured loan to a triple-A rated company receives the same treatment as an
unsecured loan to a junk-rated company. In other words, the capital standards
don't measure credit risk although they represent a crude proxy for such risk
within broad categories of banking assets.
* Finally, the capital standards give insufficient consideration to hedging or
mitigating risk through the use of credit derivatives or effective portfolio
diversification.
These shortcomings of the regulatory capital standards were beginning to be
understood even as they were being implemented, but no consistent, consensus technology
existed at that time for invoking a more sophisticated standard than the Basle norms. To be sure,
more sophisticated standards were being used by bank supervisors, during the examination
process, to determine the adequacy of capital at any individual institution. These supervisory
determinations of capital adequacy on a bank-by-bank basis, reflected in the CAMEL ratings
given to banks and the BOPEC ratings given to bank holding companies, are much more
inclusive than the Basle standards. Research shows that CAMEL ratings are much better
predictors of bank insolvency than "risk-based" capital ratios. But, a bank-by-bank supervision,
of course, is not the same thing as the writing of regulations that apply to all banks.
It is now evident that the simple regulatory capital standards that apply to all
banks can be quite misleading. Nominally high regulatory capital ratios -- even risk-based
capital ratios that are 50 or 100 percent higher than the minimums -- are no longer indicators of
bank soundness.
Meanwhile, however, some of our largest and most sophisticated banks have been
getting ahead of the regulators and doing the two things one must do in order to properly
manage risk and determine capital adequacy. First, they are statistically quantifying risk by
estimating the shape of loss probability distributions associated with their risk positions. These
quantitative measures of risk are calculated by asset type, by product line, and, in some cases,
even down to the individual customer level. Second, the more sophisticated banks are
calculating economic capital, or "risk capital," to be allocated to each asset, each line of
business, and even to each customer, in order to determine risk-adjusted profitability of each
type of bank activity. In making these risk capital allocations, banks are defining and meeting
internal corporate standards for safety and soundness. For example, a banker might desire to
achieve at least a single-A rating on his own corporate debt. He sees that, over history, single-A
debt has a default probability of less than one-tenth of one percent over a one year time horizon.
So the banker sets an internal corporate goal to allocate enough capital so that the probability of
losses exceeding capital is less than 0.1 percent. In the language of statistics, this means that
allocated capital must "cover" 99.9 percent of the estimated loss probability distribution.
Once the banker estimates risk and allocates capital to that risk, the internal
capital allocations can be used in a variety of ways -- for example, in so-called RAROC or
risk-adjusted return on capital models that measure the relative profitability of bank activities. If
a particular bank product generates a return to allocated capital that is too low, the bank can seek
to cut expenses, reprice the product, or focus its efforts on other, more profitable ventures. These
profitability analyses, moreover, are conducted on an "apples-to-apples" basis, since the
profitability of each business line is adjusted to reflect the riskiness of the business line.
What these bankers have actually done themselves, in calculating these internal
capital requirements, is something regulators have never done -- defined a bank soundness
target. What regulator, for example, has said that he wants capital to be high enough to reduce to
0.1 percent the probability of insolvency? Regulators have said only that capital ratios should be
no lower than some number (8 percent in the case of the Basle standards). But as we should all
be aware, a high capital ratio, if it is accompanied by a highly risky portfolio composition, can
result in a bank with a high probability of insolvency. The question should not be how high is
the bank's capital ratio, but how low is its failure probability.
In sharp contrast to our 8 percent one-size-fits-all capital standard, the internal
risk-capital calculations of banks result in a very wide range of capital allocations, even within a
particular category of credit instrument. For example, for an unsecured commercial credit line,
typical internal capital allocations might range from less than 1 percent for a triple-A or
double-A rated obligor, to well over 20 percent for an obligor in one of the lowest rating
categories. The range of internal capital allocations widens even more when we look at capital
calculations for complex risk positions such as various forms of credit derivatives. This great
diversity in economic capital allocations, as compared to regulatory capital allocations, creates at
least two types of problem.
* When the regulatory capital requirement is higher than the economic capital
allocation, the bank must either engage in costly regulatory arbitrage to evade
the regulatory requirement or change its portfolio, possibly leading to
suboptimal resource allocation.
* When the regulatory requirement is lower than the economic capital
requirement, the bank may choose to hold capital above the regulatory
requirement but below the economic requirement; in this case, the bank's
nominally high capital ratio may mask the true nature of its risk position.
Measuring bank soundness and overall bank performance is becoming more
critical as the risk activities of banks become more complex. This condition is especially evident
in the various nontraditional activities of banks. In fact, "nontraditional" is no longer a very
good adjective to describe much of what goes on at our larger institutions. Take asset
securitization, for example. No longer do our largest banks simply take in deposit funds and lend
out the money to borrowers. Currently, well over $200 billion in assets that, in times past, have
resided on the books of banks, now are owned by remote securitization conduits sponsored by
banks. Sponsorship of securitization, which is now almost solely a large bank phenomenon,
holds the potential for completely transforming the traditional paradigm of "banking." Now,
loans are made directly by the conduits, or are made by the banks and then immediately sold to
the conduits. To finance the origination or purchase of the loans, a conduit issues several classes
of asset-backed securities collateralized by the loans. Most of the conduit's debt is issued to
investors who require that the senior securities be highly rated, generally double-A and triple-A.
In order to achieve these ratings, the conduit obtains credit enhancements insulating the senior
security holders from defaults on the underlying loans. Generally, it is the bank sponsor that
provides these credit enhancements, which can be in the form of standby letters of credit to the
conduit, or via the purchase of the most junior/subordinated securities issued by the conduit. In
return for providing the credit protection, as well as the loan origination and servicing functions,
the bank lays claim to all residual spreads between the yields on the loans and the interest and
non-interest cost of the conduit's securities, net of any loan losses. In other words, securitization
results in banks taking on almost identically the same risks as if the loans were kept on the books
of the bank the old-fashioned way.
But while the credit risk of a securitized loan pool may be the same as the credit
risk of holding that loan pool on the books, our capital standards do not always recognize this
fact. For example, by supplying a standby letter of credit covering so-called "first-dollar" losses
for the conduit, a bank might be able to reduce its regulatory capital requirement, for some of its
activities, by 90 percent or more compared with what would be required if the bank held the
loans directly on its own books. The question, of course, is whether the bank's internal capital
allocation systems recognize the similarity in risk between, on the one hand, owning the whole
loans and, on the other hand, providing a credit enhancement to a securitization conduit.
If the risk measurement and management systems of the bank are faulty, then
holding a nominally high capital ratio -- say, 10 percent -- is little consolation. In fact, nominally
high capital ratios can be deceiving to market participants. If, for example, the bank's balance
sheet is less than transparent, potential investors or creditors, seeing the nominally high
10 percent capital, but not recognizing that the economic risk capital allocation should, in
percentage terms, be much higher, could direct an inappropriately high level of scarce resources
toward the bank.
Credit derivatives are another example of the evolution. The bottom line is that,
as we move into the 21st century, traditional notions of "capital adequacy" will become less
useful in determining the safety and soundness of our largest, most sophisticated, banking
organizations. This growing discrepancy is important because "performance-based" solutions
likely will continue to be touted as the basis for expanded bank powers or reductions in
burdensome regulation. For example, the Federal Reserve's recent proposed liberalization of
procedures for Regulation Y activities applies to banking companies that are "well-capitalized"
and "well-managed." Similarly, the OCC's recent proposed liberalization of rules for bank
operating subsidiaries applies to "well-capitalized" institutions. Also, industry participants
continue to call for expanded powers and/or reduced regulatory burden based on "market tests"
of good management and adequate capital.
It will not be easy reaching consensus on how to measure bank soundness and
overall bank performance. It cannot simply be done by observing market indicators. For
example, we cannot easily use the public ratings of holding company debt. The ratings, after all,
are achieved given the existence of the safety net. The ratings are biased, therefore, from the
perspective of achieving our stated goal -- to impose prudential limits on banks as if there were
no net. In addition, I am sure that there would be disagreement between market participants and
regulators over what should be acceptable debt ratings. The solution may be for the regulators to
use the analytical tools developed by the market participants themselves for risk and
performance assessment. Regulators already have begun to move in this direction. For example,
beginning in January 1998, qualifying large multinational banks will be able to use their internal
Value-at-Risk models to help set capital requirements for the market risk inherent in their
trading activities. The Federal Reserve is also conducting a pilot test of the pre-commitment
approach to capital for market risk. In this approach, banks can choose their own capital
allocations, but would be sanctioned heavily if cumulative trading losses during a quarter were
to exceed their chosen capital allocations. These new and innovative methods for treating the
age-old problem of capital adequacy are likely to be followed by an unending, evolutionary flow
of improvements in the prudential supervisory process. As the industry makes technological
advances in risk measurement, these advances will become imbedded in the supervisory process.
For example, the banking agencies have announced programs to place an increased emphasis on
banks' internal risk measurement and management processes within the assessment of overall
management quality -- that is, how well a bank employs modern technology to manage risk will
be reflected in the "M" portion of the bank's CAMEL rating. In a similar vein, now that VaR
models are being used to assess regulatory capital for market risk, it is easy to envision that,
down the road, banks' internal credit risk models and associated internal capital allocations will
also be used to help set regulatory capital requirements.
Regulation and supervision, like industry practices themselves, are continually
evolving processes. As supervisors, our goal must be to stay abreast of best practices,
incorporate these practices into our own procedures where appropriate, and do so in a way that
allows banks to remain sufficiently flexible and competitive. In conducting prudential regulation
we should always remember that the optimal number of bank failures is not zero. Indeed,
"market-based" performance means that some institutions, either through poor management
choices, or just because of plain old bad luck, will fail. As regulators, we must carefully balance
these market-like results with concerns over systemic risk. And, as regulators of banks, we must
always remember that we do not operate in a vacuum -- the activities of nonbank financial
institutions are also important to the general well-being of our financial system and the macro
economy.
Regulators, of course, can only work with the framework laid down by Congress.
Let me conclude with the hope that this Congress will build on the experience of the last few
years, including the experience with FDICIA, and take the next steps toward creating a structural
and regulatory framework appropriate to the 21st century. |
---[PAGE_BREAK]---
# Ms. Rivlin discusses the prudential regulation of banks and how to improve
it Remarks by the Vice Chairman of the Board of Governors of the US Federal Reserve System, Ms. Alice M. Rivlin, at the The Brookings Institution National Issues Forum in Washington on 19/12/96.
I discovered when I joined the Board of Governors of the Federal Reserve System about six months ago that most of my friends -- including my sophisticated public policy oriented friends -- had only a hazy notion what their central bank did. Many of them said, enthusiastically, "Congratulations!" Then they asked with a bit of embarrassment, "Is it a full-time job?" or "What will you find to do between meetings?" The meetings they were aware of, of course, were those of the Federal Open Market Committee. They knew that the FOMC meets every six weeks or so to "set interest rates." That sounds like real power, so the FOMC gets a lot of press attention even when, as happened again this week, we meet and decide to do absolutely nothing at all.
The group gathered here today, however, realizes that monetary policy, while important, is not actually very time-consuming. If you cared enough to come to this conference, you also have a strong conviction that the health and vigor of the American economy depends not only on good macro-economic policy, although that certainly helps, but also on the safety, soundness and efficiency of the banking system. We need a banking system that works well and one in which citizens and businesses, foreign and domestic, have high and well placed confidence.
So I want to talk today, as seems appropriate on the fifth anniversary of FDICIA, about the subject that occupies much of our attention at the Federal Reserve: the prudential regulation of banks and how to improve it. Indeed, I want to focus today, not so much on what Congress needs to do to ensure the safety and soundness of the bank system in this rapidly changing world -- there are others on the program to take on that task -- but more narrowly on how bank regulators should go about their jobs of supervising bank risk-taking.
The evolving search for policies that would guarantee a safe, sound and efficient banking system has featured learning from experience. In the 1930s, Americans learned, expensively, about the hazards of not having a safety net in a crisis that almost wiped out the banking system. In the 1980s, they learned a lot about the hazards of having a safety net, especially about the moral hazard associated with deposit insurance.
Deposit insurance, which had seemed so benign and so successful in building confidence and preventing runs on banks, suddenly revealed its downside for all to see. Some insured institutions, mostly thrifts, but also savings banks, and not a few commercial banks, were taking on risks with a "heads I win, tails you lose" attitude -- sometimes collecting on high stakes bets but often leaving deposit insurance funds to pick up the pieces. At the same time, some regulators, especially the old FSLIC, which was notably strapped for funds, were compounding the problem -- and greatly increasing the ultimate cost of its resolution -- by engaging in regulatory "forbearance" when faced with technically insolvent institutions.
The lessons were costly, but Americans do learn from their mistakes. The advocates of banking reform, many of them participants in this conference, saw the problems posed by moral hazard in the context of ineffectual supervision and set out to design a better system.
---[PAGE_BREAK]---
Essentially, the reform agenda had two main components:
* First, expanded powers for depository institutions that would permit them to diversify in ways that might reduce risks and improve operating efficiency;
* Second, improving the effectiveness of regulation and supervision by instructing regulators, in effect, to act more like the market itself when conducting prudential regulation.
FDICIA was a first step toward meeting the second challenge -- how to make regulators act more like the market. It called for a reduction in the potential for regulatory "forbearance" by laying down the conditions under which conservatorship and receivership should be initiated. It called for supervisory sanctions based on measurable performance (in particular, the Prompt Corrective Action provisions that based supervisory action on a bank's risk-based capital ratio). The Act required the FDIC and RTC to resolve failed institutions on a least-cost basis. In other words, the Act required the depository receivers to act as if the insurance funds were private insurers, rather than continue the past policy of protecting uninsured depositors and other bank creditors. Finally, FDICIA placed limitations on the doctrine of "Too Big To Fail," by requiring agency consensus and administration concurrence in order to prop up any large, failing bank. In a few places, however, FDICIA went too far. The provisions of the Act that dealt with micro management by regulators were immediately seen to be "over the top," and were later repealed. The Act provided a framework for regulators to invoke market-like discipline. It left room for them to move their own regulatory techniques in this direction -- a subject to which I will return in a minute.
The other objective of reform -- diversification of bank activities through an expansion of bank powers -- has not yet resulted in legislation and is still very much an on-going debate. In part, this failure to take legislative action reflected the long-running ability of the nonbank competition to use its political muscle to forestall increased powers for banks. But the inaction on expanded powers also reflected a Congressional concern that additional powers might be used to take on additional risk, which, on the heels of the banking collapse of the late 1980s, represented poor timing, to say the least. There was also some Congressional disposition to punish "greedy bankers," who were seen as the reason for the collapse and the diversion of taxpayer funds to pay for thrift insolvencies. Whatever the reasons, not only did the 102nd Congress fail to enact expanded bank powers, but so did the next two Congresses. We are hopeful that the 105th Congress will succeed where its predecessors have failed. Meanwhile, the regulatory agencies have acted to expand bank powers within the limits of existing law.
The Federal Reserve has proposed both liberalization of Section 20 activities and expedited procedures for processing applications under Regulation Y. The OCC has acted to liberalize banks' insurance agency powers and, most recently, to liberalize procedures for operating subsidiaries of national banks. Of course, I would have to turn in my Federal Reserve badge and give up my parking pass if I did not mention that we at the Fed believe that some activities are best carried out in a subsidiary of the holding company rather than a subsidiary of the bank. We believe that the more distance between the bank and its new, nonbank operations, the more likely that we can separate one from the other and avoid the spreading of the subsidy associated with the safety net.
While the regulators can move in the right direction, it is still imperative that Congress act. Artificial barriers between and among various forms of financial activity are harmful to the best interests of the consumers of financial services, to the providers of those
---[PAGE_BREAK]---
services, and to the general stability and well-being of our financial system, most broadly defined. Congress should consider this issue and take the next steps.
Let me turn now to what I consider to be one of the most critical issues facing regulators, especially in a future in which financial markets likely will dictate significant further increases in the scope and complexity of banking activities. I am referring to the issue of how to conduct optimal supervision of banks. Fortunately, there appears actually to be an evolving consensus at least on the general principle. Regulators, including the Federal Reserve, strongly support the basic approach embodied in FDICIA; namely that regulators should place limits on depository institutions in such a way as to replicate, as closely as possible, the discipline that would be imposed by a marketplace consisting of informed participants and devoid of the moral hazard associated with the safety net.
Unfortunately, as always, the devil is in the details. The difficult question is how should a regulator use "market-based" or "performance-based" measures in determining which, if any, supervisory sanctions or limits to place on a bank. FDICIA's approach was straightforward. Supervisory sanctions under Prompt Corrective Action were to be based on the bank's risk performance as measured by its levels of regulatory capital, in particular its leverage ratio and total risk-based capital ratio under the Basle capital standards. These standards now seem well-intended but rather outdated. Certainly, the Basle capital standards did the job for which they were designed, namely stopping the secular decline in bank capital levels that, by the late 1980s, threatened general safety and soundness. But the scope and complexity of banking activities has proceeded apace during the last two decades or so, and standard capital measures, at least for our very largest and most complex organizations, are no longer adequate measures on which to base supervisory action for several reasons:
* The regulatory capital standards apportion capital only for credit risk and, most recently, for market risk of trading activities. Interest rate risk is dealt with subjectively, and other forms of risk, including operating risk, are not treated within the standards.
* Also, the capital standards are, despite the appellation "risk-based," very much a "one-size-fits-all" rule. For example, all non-mortgage loans to corporations and households receive the same arbitrary 8 percent capital requirement. A secured loan to a triple-A rated company receives the same treatment as an unsecured loan to a junk-rated company. In other words, the capital standards don't measure credit risk although they represent a crude proxy for such risk within broad categories of banking assets.
* Finally, the capital standards give insufficient consideration to hedging or mitigating risk through the use of credit derivatives or effective portfolio diversification.
These shortcomings of the regulatory capital standards were beginning to be understood even as they were being implemented, but no consistent, consensus technology existed at that time for invoking a more sophisticated standard than the Basle norms. To be sure, more sophisticated standards were being used by bank supervisors, during the examination process, to determine the adequacy of capital at any individual institution. These supervisory determinations of capital adequacy on a bank-by-bank basis, reflected in the CAMEL ratings given to banks and the BOPEC ratings given to bank holding companies, are much more inclusive than the Basle standards. Research shows that CAMEL ratings are much better predictors of bank insolvency than "risk-based" capital ratios. But, a bank-by-bank supervision, of course, is not the same thing as the writing of regulations that apply to all banks.
---[PAGE_BREAK]---
It is now evident that the simple regulatory capital standards that apply to all banks can be quite misleading. Nominally high regulatory capital ratios -- even risk-based capital ratios that are 50 or 100 percent higher than the minimums -- are no longer indicators of bank soundness.
Meanwhile, however, some of our largest and most sophisticated banks have been getting ahead of the regulators and doing the two things one must do in order to properly manage risk and determine capital adequacy. First, they are statistically quantifying risk by estimating the shape of loss probability distributions associated with their risk positions. These quantitative measures of risk are calculated by asset type, by product line, and, in some cases, even down to the individual customer level. Second, the more sophisticated banks are calculating economic capital, or "risk capital," to be allocated to each asset, each line of business, and even to each customer, in order to determine risk-adjusted profitability of each type of bank activity. In making these risk capital allocations, banks are defining and meeting internal corporate standards for safety and soundness. For example, a banker might desire to achieve at least a single-A rating on his own corporate debt. He sees that, over history, single-A debt has a default probability of less than one-tenth of one percent over a one year time horizon. So the banker sets an internal corporate goal to allocate enough capital so that the probability of losses exceeding capital is less than 0.1 percent. In the language of statistics, this means that allocated capital must "cover" 99.9 percent of the estimated loss probability distribution.
Once the banker estimates risk and allocates capital to that risk, the internal capital allocations can be used in a variety of ways -- for example, in so-called RAROC or risk-adjusted return on capital models that measure the relative profitability of bank activities. If a particular bank product generates a return to allocated capital that is too low, the bank can seek to cut expenses, reprice the product, or focus its efforts on other, more profitable ventures. These profitability analyses, moreover, are conducted on an "apples-to-apples" basis, since the profitability of each business line is adjusted to reflect the riskiness of the business line.
What these bankers have actually done themselves, in calculating these internal capital requirements, is something regulators have never done -- defined a bank soundness target. What regulator, for example, has said that he wants capital to be high enough to reduce to 0.1 percent the probability of insolvency? Regulators have said only that capital ratios should be no lower than some number ( 8 percent in the case of the Basle standards). But as we should all be aware, a high capital ratio, if it is accompanied by a highly risky portfolio composition, can result in a bank with a high probability of insolvency. The question should not be how high is the bank's capital ratio, but how low is its failure probability.
In sharp contrast to our 8 percent one-size-fits-all capital standard, the internal risk-capital calculations of banks result in a very wide range of capital allocations, even within a particular category of credit instrument. For example, for an unsecured commercial credit line, typical internal capital allocations might range from less than 1 percent for a triple-A or double-A rated obligor, to well over 20 percent for an obligor in one of the lowest rating categories. The range of internal capital allocations widens even more when we look at capital calculations for complex risk positions such as various forms of credit derivatives. This great diversity in economic capital allocations, as compared to regulatory capital allocations, creates at least two types of problem.
* When the regulatory capital requirement is higher than the economic capital allocation, the bank must either engage in costly regulatory arbitrage to evade
---[PAGE_BREAK]---
the regulatory requirement or change its portfolio, possibly leading to suboptimal resource allocation.
* When the regulatory requirement is lower than the economic capital requirement, the bank may choose to hold capital above the regulatory requirement but below the economic requirement; in this case, the bank's nominally high capital ratio may mask the true nature of its risk position.
Measuring bank soundness and overall bank performance is becoming more critical as the risk activities of banks become more complex. This condition is especially evident in the various nontraditional activities of banks. In fact, "nontraditional" is no longer a very good adjective to describe much of what goes on at our larger institutions. Take asset securitization, for example. No longer do our largest banks simply take in deposit funds and lend out the money to borrowers. Currently, well over $\$ 200$ billion in assets that, in times past, have resided on the books of banks, now are owned by remote securitization conduits sponsored by banks. Sponsorship of securitization, which is now almost solely a large bank phenomenon, holds the potential for completely transforming the traditional paradigm of "banking." Now, loans are made directly by the conduits, or are made by the banks and then immediately sold to the conduits. To finance the origination or purchase of the loans, a conduit issues several classes of asset-backed securities collateralized by the loans. Most of the conduit's debt is issued to investors who require that the senior securities be highly rated, generally double-A and triple-A. In order to achieve these ratings, the conduit obtains credit enhancements insulating the senior security holders from defaults on the underlying loans. Generally, it is the bank sponsor that provides these credit enhancements, which can be in the form of standby letters of credit to the conduit, or via the purchase of the most junior/subordinated securities issued by the conduit. In return for providing the credit protection, as well as the loan origination and servicing functions, the bank lays claim to all residual spreads between the yields on the loans and the interest and non-interest cost of the conduit's securities, net of any loan losses. In other words, securitization results in banks taking on almost identically the same risks as if the loans were kept on the books of the bank the old-fashioned way.
But while the credit risk of a securitized loan pool may be the same as the credit risk of holding that loan pool on the books, our capital standards do not always recognize this fact. For example, by supplying a standby letter of credit covering so-called "first-dollar" losses for the conduit, a bank might be able to reduce its regulatory capital requirement, for some of its activities, by 90 percent or more compared with what would be required if the bank held the loans directly on its own books. The question, of course, is whether the bank's internal capital allocation systems recognize the similarity in risk between, on the one hand, owning the whole loans and, on the other hand, providing a credit enhancement to a securitization conduit.
If the risk measurement and management systems of the bank are faulty, then holding a nominally high capital ratio -- say, 10 percent -- is little consolation. In fact, nominally high capital ratios can be deceiving to market participants. If, for example, the bank's balance sheet is less than transparent, potential investors or creditors, seeing the nominally high 10 percent capital, but not recognizing that the economic risk capital allocation should, in percentage terms, be much higher, could direct an inappropriately high level of scarce resources toward the bank.
Credit derivatives are another example of the evolution. The bottom line is that, as we move into the 21 st century, traditional notions of "capital adequacy" will become less useful in determining the safety and soundness of our largest, most sophisticated, banking organizations. This growing discrepancy is important because "performance-based" solutions
---[PAGE_BREAK]---
likely will continue to be touted as the basis for expanded bank powers or reductions in burdensome regulation. For example, the Federal Reserve's recent proposed liberalization of procedures for Regulation Y activities applies to banking companies that are "well-capitalized" and "well-managed." Similarly, the OCC's recent proposed liberalization of rules for bank operating subsidiaries applies to "well-capitalized" institutions. Also, industry participants continue to call for expanded powers and/or reduced regulatory burden based on "market tests" of good management and adequate capital.
It will not be easy reaching consensus on how to measure bank soundness and overall bank performance. It cannot simply be done by observing market indicators. For example, we cannot easily use the public ratings of holding company debt. The ratings, after all, are achieved given the existence of the safety net. The ratings are biased, therefore, from the perspective of achieving our stated goal -- to impose prudential limits on banks as if there were no net. In addition, I am sure that there would be disagreement between market participants and regulators over what should be acceptable debt ratings. The solution may be for the regulators to use the analytical tools developed by the market participants themselves for risk and performance assessment. Regulators already have begun to move in this direction. For example, beginning in January 1998, qualifying large multinational banks will be able to use their internal Value-at-Risk models to help set capital requirements for the market risk inherent in their trading activities. The Federal Reserve is also conducting a pilot test of the pre-commitment approach to capital for market risk. In this approach, banks can choose their own capital allocations, but would be sanctioned heavily if cumulative trading losses during a quarter were to exceed their chosen capital allocations. These new and innovative methods for treating the age-old problem of capital adequacy are likely to be followed by an unending, evolutionary flow of improvements in the prudential supervisory process. As the industry makes technological advances in risk measurement, these advances will become imbedded in the supervisory process. For example, the banking agencies have announced programs to place an increased emphasis on banks' internal risk measurement and management processes within the assessment of overall management quality -- that is, how well a bank employs modern technology to manage risk will be reflected in the "M" portion of the bank's CAMEL rating. In a similar vein, now that VaR models are being used to assess regulatory capital for market risk, it is easy to envision that, down the road, banks' internal credit risk models and associated internal capital allocations will also be used to help set regulatory capital requirements.
Regulation and supervision, like industry practices themselves, are continually evolving processes. As supervisors, our goal must be to stay abreast of best practices, incorporate these practices into our own procedures where appropriate, and do so in a way that allows banks to remain sufficiently flexible and competitive. In conducting prudential regulation we should always remember that the optimal number of bank failures is not zero. Indeed, "market-based" performance means that some institutions, either through poor management choices, or just because of plain old bad luck, will fail. As regulators, we must carefully balance these market-like results with concerns over systemic risk. And, as regulators of banks, we must always remember that we do not operate in a vacuum -- the activities of nonbank financial institutions are also important to the general well-being of our financial system and the macro economy.
Regulators, of course, can only work with the framework laid down by Congress. Let me conclude with the hope that this Congress will build on the experience of the last few years, including the experience with FDICIA, and take the next steps toward creating a structural and regulatory framework appropriate to the 21 st century. | Alice M Rivlin | United States | https://www.bis.org/review/r970108b.pdf | it Remarks by the Vice Chairman of the Board of Governors of the US Federal Reserve System, Ms. Alice M. Rivlin, at the The Brookings Institution National Issues Forum in Washington on 19/12/96. I discovered when I joined the Board of Governors of the Federal Reserve System about six months ago that most of my friends -- including my sophisticated public policy oriented friends -- had only a hazy notion what their central bank did. Many of them said, enthusiastically, "Congratulations!" Then they asked with a bit of embarrassment, "Is it a full-time job?" or "What will you find to do between meetings?" The meetings they were aware of, of course, were those of the Federal Open Market Committee. They knew that the FOMC meets every six weeks or so to "set interest rates." That sounds like real power, so the FOMC gets a lot of press attention even when, as happened again this week, we meet and decide to do absolutely nothing at all. The group gathered here today, however, realizes that monetary policy, while important, is not actually very time-consuming. If you cared enough to come to this conference, you also have a strong conviction that the health and vigor of the American economy depends not only on good macro-economic policy, although that certainly helps, but also on the safety, soundness and efficiency of the banking system. We need a banking system that works well and one in which citizens and businesses, foreign and domestic, have high and well placed confidence. So I want to talk today, as seems appropriate on the fifth anniversary of FDICIA, about the subject that occupies much of our attention at the Federal Reserve: the prudential regulation of banks and how to improve it. Indeed, I want to focus today, not so much on what Congress needs to do to ensure the safety and soundness of the bank system in this rapidly changing world -- there are others on the program to take on that task -- but more narrowly on how bank regulators should go about their jobs of supervising bank risk-taking. The evolving search for policies that would guarantee a safe, sound and efficient banking system has featured learning from experience. In the 1930s, Americans learned, expensively, about the hazards of not having a safety net in a crisis that almost wiped out the banking system. In the 1980s, they learned a lot about the hazards of having a safety net, especially about the moral hazard associated with deposit insurance. Deposit insurance, which had seemed so benign and so successful in building confidence and preventing runs on banks, suddenly revealed its downside for all to see. Some insured institutions, mostly thrifts, but also savings banks, and not a few commercial banks, were taking on risks with a "heads I win, tails you lose" attitude -- sometimes collecting on high stakes bets but often leaving deposit insurance funds to pick up the pieces. At the same time, some regulators, especially the old FSLIC, which was notably strapped for funds, were compounding the problem -- and greatly increasing the ultimate cost of its resolution -- by engaging in regulatory "forbearance" when faced with technically insolvent institutions. The lessons were costly, but Americans do learn from their mistakes. The advocates of banking reform, many of them participants in this conference, saw the problems posed by moral hazard in the context of ineffectual supervision and set out to design a better system. Essentially, the reform agenda had two main components: First, expanded powers for depository institutions that would permit them to diversify in ways that might reduce risks and improve operating efficiency;. Second, improving the effectiveness of regulation and supervision by instructing regulators, in effect, to act more like the market itself when conducting prudential regulation. FDICIA was a first step toward meeting the second challenge -- how to make regulators act more like the market. It called for a reduction in the potential for regulatory "forbearance" by laying down the conditions under which conservatorship and receivership should be initiated. It called for supervisory sanctions based on measurable performance (in particular, the Prompt Corrective Action provisions that based supervisory action on a bank's risk-based capital ratio). The Act required the FDIC and RTC to resolve failed institutions on a least-cost basis. In other words, the Act required the depository receivers to act as if the insurance funds were private insurers, rather than continue the past policy of protecting uninsured depositors and other bank creditors. Finally, FDICIA placed limitations on the doctrine of "Too Big To Fail," by requiring agency consensus and administration concurrence in order to prop up any large, failing bank. In a few places, however, FDICIA went too far. The provisions of the Act that dealt with micro management by regulators were immediately seen to be "over the top," and were later repealed. The Act provided a framework for regulators to invoke market-like discipline. It left room for them to move their own regulatory techniques in this direction -- a subject to which I will return in a minute. The other objective of reform -- diversification of bank activities through an expansion of bank powers -- has not yet resulted in legislation and is still very much an on-going debate. In part, this failure to take legislative action reflected the long-running ability of the nonbank competition to use its political muscle to forestall increased powers for banks. But the inaction on expanded powers also reflected a Congressional concern that additional powers might be used to take on additional risk, which, on the heels of the banking collapse of the late 1980s, represented poor timing, to say the least. There was also some Congressional disposition to punish "greedy bankers," who were seen as the reason for the collapse and the diversion of taxpayer funds to pay for thrift insolvencies. Whatever the reasons, not only did the 102nd Congress fail to enact expanded bank powers, but so did the next two Congresses. We are hopeful that the 105th Congress will succeed where its predecessors have failed. Meanwhile, the regulatory agencies have acted to expand bank powers within the limits of existing law. The Federal Reserve has proposed both liberalization of Section 20 activities and expedited procedures for processing applications under Regulation Y. The OCC has acted to liberalize banks' insurance agency powers and, most recently, to liberalize procedures for operating subsidiaries of national banks. Of course, I would have to turn in my Federal Reserve badge and give up my parking pass if I did not mention that we at the Fed believe that some activities are best carried out in a subsidiary of the holding company rather than a subsidiary of the bank. We believe that the more distance between the bank and its new, nonbank operations, the more likely that we can separate one from the other and avoid the spreading of the subsidy associated with the safety net. While the regulators can move in the right direction, it is still imperative that Congress act. Artificial barriers between and among various forms of financial activity are harmful to the best interests of the consumers of financial services, to the providers of those services, and to the general stability and well-being of our financial system, most broadly defined. Congress should consider this issue and take the next steps. Let me turn now to what I consider to be one of the most critical issues facing regulators, especially in a future in which financial markets likely will dictate significant further increases in the scope and complexity of banking activities. I am referring to the issue of how to conduct optimal supervision of banks. Fortunately, there appears actually to be an evolving consensus at least on the general principle. Regulators, including the Federal Reserve, strongly support the basic approach embodied in FDICIA; namely that regulators should place limits on depository institutions in such a way as to replicate, as closely as possible, the discipline that would be imposed by a marketplace consisting of informed participants and devoid of the moral hazard associated with the safety net. Unfortunately, as always, the devil is in the details. The difficult question is how should a regulator use "market-based" or "performance-based" measures in determining which, if any, supervisory sanctions or limits to place on a bank. FDICIA's approach was straightforward. Supervisory sanctions under Prompt Corrective Action were to be based on the bank's risk performance as measured by its levels of regulatory capital, in particular its leverage ratio and total risk-based capital ratio under the Basle capital standards. These standards now seem well-intended but rather outdated. Certainly, the Basle capital standards did the job for which they were designed, namely stopping the secular decline in bank capital levels that, by the late 1980s, threatened general safety and soundness. But the scope and complexity of banking activities has proceeded apace during the last two decades or so, and standard capital measures, at least for our very largest and most complex organizations, are no longer adequate measures on which to base supervisory action for several reasons: The regulatory capital standards apportion capital only for credit risk and, most recently, for market risk of trading activities. Interest rate risk is dealt with subjectively, and other forms of risk, including operating risk, are not treated within the standards. Also, the capital standards are, despite the appellation "risk-based," very much a "one-size-fits-all" rule. For example, all non-mortgage loans to corporations and households receive the same arbitrary 8 percent capital requirement. A secured loan to a triple-A rated company receives the same treatment as an unsecured loan to a junk-rated company. In other words, the capital standards don't measure credit risk although they represent a crude proxy for such risk within broad categories of banking assets. Finally, the capital standards give insufficient consideration to hedging or mitigating risk through the use of credit derivatives or effective portfolio diversification. These shortcomings of the regulatory capital standards were beginning to be understood even as they were being implemented, but no consistent, consensus technology existed at that time for invoking a more sophisticated standard than the Basle norms. To be sure, more sophisticated standards were being used by bank supervisors, during the examination process, to determine the adequacy of capital at any individual institution. These supervisory determinations of capital adequacy on a bank-by-bank basis, reflected in the CAMEL ratings given to banks and the BOPEC ratings given to bank holding companies, are much more inclusive than the Basle standards. Research shows that CAMEL ratings are much better predictors of bank insolvency than "risk-based" capital ratios. But, a bank-by-bank supervision, of course, is not the same thing as the writing of regulations that apply to all banks. It is now evident that the simple regulatory capital standards that apply to all banks can be quite misleading. Nominally high regulatory capital ratios -- even risk-based capital ratios that are 50 or 100 percent higher than the minimums -- are no longer indicators of bank soundness. Meanwhile, however, some of our largest and most sophisticated banks have been getting ahead of the regulators and doing the two things one must do in order to properly manage risk and determine capital adequacy. First, they are statistically quantifying risk by estimating the shape of loss probability distributions associated with their risk positions. These quantitative measures of risk are calculated by asset type, by product line, and, in some cases, even down to the individual customer level. Second, the more sophisticated banks are calculating economic capital, or "risk capital," to be allocated to each asset, each line of business, and even to each customer, in order to determine risk-adjusted profitability of each type of bank activity. In making these risk capital allocations, banks are defining and meeting internal corporate standards for safety and soundness. For example, a banker might desire to achieve at least a single-A rating on his own corporate debt. He sees that, over history, single-A debt has a default probability of less than one-tenth of one percent over a one year time horizon. So the banker sets an internal corporate goal to allocate enough capital so that the probability of losses exceeding capital is less than 0.1 percent. In the language of statistics, this means that allocated capital must "cover" 99.9 percent of the estimated loss probability distribution. Once the banker estimates risk and allocates capital to that risk, the internal capital allocations can be used in a variety of ways -- for example, in so-called RAROC or risk-adjusted return on capital models that measure the relative profitability of bank activities. If a particular bank product generates a return to allocated capital that is too low, the bank can seek to cut expenses, reprice the product, or focus its efforts on other, more profitable ventures. These profitability analyses, moreover, are conducted on an "apples-to-apples" basis, since the profitability of each business line is adjusted to reflect the riskiness of the business line. What these bankers have actually done themselves, in calculating these internal capital requirements, is something regulators have never done -- defined a bank soundness target. What regulator, for example, has said that he wants capital to be high enough to reduce to 0.1 percent the probability of insolvency? Regulators have said only that capital ratios should be no lower than some number ( 8 percent in the case of the Basle standards). But as we should all be aware, a high capital ratio, if it is accompanied by a highly risky portfolio composition, can result in a bank with a high probability of insolvency. The question should not be how high is the bank's capital ratio, but how low is its failure probability. In sharp contrast to our 8 percent one-size-fits-all capital standard, the internal risk-capital calculations of banks result in a very wide range of capital allocations, even within a particular category of credit instrument. For example, for an unsecured commercial credit line, typical internal capital allocations might range from less than 1 percent for a triple-A or double-A rated obligor, to well over 20 percent for an obligor in one of the lowest rating categories. The range of internal capital allocations widens even more when we look at capital calculations for complex risk positions such as various forms of credit derivatives. This great diversity in economic capital allocations, as compared to regulatory capital allocations, creates at least two types of problem. When the regulatory capital requirement is higher than the economic capital allocation, the bank must either engage in costly regulatory arbitrage to evade the regulatory requirement or change its portfolio, possibly leading to suboptimal resource allocation. When the regulatory requirement is lower than the economic capital requirement, the bank may choose to hold capital above the regulatory requirement but below the economic requirement; in this case, the bank's nominally high capital ratio may mask the true nature of its risk position. Measuring bank soundness and overall bank performance is becoming more critical as the risk activities of banks become more complex. This condition is especially evident in the various nontraditional activities of banks. In fact, "nontraditional" is no longer a very good adjective to describe much of what goes on at our larger institutions. Take asset securitization, for example. No longer do our largest banks simply take in deposit funds and lend out the money to borrowers. Currently, well over $\$ 200$ billion in assets that, in times past, have resided on the books of banks, now are owned by remote securitization conduits sponsored by banks. Sponsorship of securitization, which is now almost solely a large bank phenomenon, holds the potential for completely transforming the traditional paradigm of "banking." Now, loans are made directly by the conduits, or are made by the banks and then immediately sold to the conduits. To finance the origination or purchase of the loans, a conduit issues several classes of asset-backed securities collateralized by the loans. Most of the conduit's debt is issued to investors who require that the senior securities be highly rated, generally double-A and triple-A. In order to achieve these ratings, the conduit obtains credit enhancements insulating the senior security holders from defaults on the underlying loans. Generally, it is the bank sponsor that provides these credit enhancements, which can be in the form of standby letters of credit to the conduit, or via the purchase of the most junior/subordinated securities issued by the conduit. In return for providing the credit protection, as well as the loan origination and servicing functions, the bank lays claim to all residual spreads between the yields on the loans and the interest and non-interest cost of the conduit's securities, net of any loan losses. In other words, securitization results in banks taking on almost identically the same risks as if the loans were kept on the books of the bank the old-fashioned way. But while the credit risk of a securitized loan pool may be the same as the credit risk of holding that loan pool on the books, our capital standards do not always recognize this fact. For example, by supplying a standby letter of credit covering so-called "first-dollar" losses for the conduit, a bank might be able to reduce its regulatory capital requirement, for some of its activities, by 90 percent or more compared with what would be required if the bank held the loans directly on its own books. The question, of course, is whether the bank's internal capital allocation systems recognize the similarity in risk between, on the one hand, owning the whole loans and, on the other hand, providing a credit enhancement to a securitization conduit. If the risk measurement and management systems of the bank are faulty, then holding a nominally high capital ratio -- say, 10 percent -- is little consolation. In fact, nominally high capital ratios can be deceiving to market participants. If, for example, the bank's balance sheet is less than transparent, potential investors or creditors, seeing the nominally high 10 percent capital, but not recognizing that the economic risk capital allocation should, in percentage terms, be much higher, could direct an inappropriately high level of scarce resources toward the bank. Credit derivatives are another example of the evolution. The bottom line is that, as we move into the 21 st century, traditional notions of "capital adequacy" will become less useful in determining the safety and soundness of our largest, most sophisticated, banking organizations. This growing discrepancy is important because "performance-based" solutions likely will continue to be touted as the basis for expanded bank powers or reductions in burdensome regulation. For example, the Federal Reserve's recent proposed liberalization of procedures for Regulation Y activities applies to banking companies that are "well-capitalized" and "well-managed." Similarly, the OCC's recent proposed liberalization of rules for bank operating subsidiaries applies to "well-capitalized" institutions. Also, industry participants continue to call for expanded powers and/or reduced regulatory burden based on "market tests" of good management and adequate capital. It will not be easy reaching consensus on how to measure bank soundness and overall bank performance. It cannot simply be done by observing market indicators. For example, we cannot easily use the public ratings of holding company debt. The ratings, after all, are achieved given the existence of the safety net. The ratings are biased, therefore, from the perspective of achieving our stated goal -- to impose prudential limits on banks as if there were no net. In addition, I am sure that there would be disagreement between market participants and regulators over what should be acceptable debt ratings. The solution may be for the regulators to use the analytical tools developed by the market participants themselves for risk and performance assessment. Regulators already have begun to move in this direction. For example, beginning in January 1998, qualifying large multinational banks will be able to use their internal Value-at-Risk models to help set capital requirements for the market risk inherent in their trading activities. The Federal Reserve is also conducting a pilot test of the pre-commitment approach to capital for market risk. In this approach, banks can choose their own capital allocations, but would be sanctioned heavily if cumulative trading losses during a quarter were to exceed their chosen capital allocations. These new and innovative methods for treating the age-old problem of capital adequacy are likely to be followed by an unending, evolutionary flow of improvements in the prudential supervisory process. As the industry makes technological advances in risk measurement, these advances will become imbedded in the supervisory process. For example, the banking agencies have announced programs to place an increased emphasis on banks' internal risk measurement and management processes within the assessment of overall management quality -- that is, how well a bank employs modern technology to manage risk will be reflected in the "M" portion of the bank's CAMEL rating. In a similar vein, now that VaR models are being used to assess regulatory capital for market risk, it is easy to envision that, down the road, banks' internal credit risk models and associated internal capital allocations will also be used to help set regulatory capital requirements. Regulation and supervision, like industry practices themselves, are continually evolving processes. As supervisors, our goal must be to stay abreast of best practices, incorporate these practices into our own procedures where appropriate, and do so in a way that allows banks to remain sufficiently flexible and competitive. In conducting prudential regulation we should always remember that the optimal number of bank failures is not zero. Indeed, "market-based" performance means that some institutions, either through poor management choices, or just because of plain old bad luck, will fail. As regulators, we must carefully balance these market-like results with concerns over systemic risk. And, as regulators of banks, we must always remember that we do not operate in a vacuum -- the activities of nonbank financial institutions are also important to the general well-being of our financial system and the macro economy. Regulators, of course, can only work with the framework laid down by Congress. Let me conclude with the hope that this Congress will build on the experience of the last few years, including the experience with FDICIA, and take the next steps toward creating a structural and regulatory framework appropriate to the 21 st century. |
1997-01-05T00:00:00 | Mr. Meyer examines the role for structural macroeconomic models in the monetary policy process (Central Bank Articles and Speeches, 5 Jan 97) | Remarks by Mr. Laurence H. Meyer, a member of the Board of Governors of the US Federal Reserve System, at the AEA Panel on Monetary and Fiscal Policy held in New Orleans on 5/1/97. | Mr. Meyer examines the role for structural macroeconomic models in the
monetary policy process Remarks by Mr. Laurence H. Meyer, a member of the Board of
Governors of the US Federal Reserve System, at the AEA Panel on Monetary and Fiscal Policy
held in New Orleans on 5/1/97.
The Role for Structural Macroeconomic Models
I am in the middle of my third interesting and active encounter with the
development and/or use of macroeconometric models for forecasting and policy analysis. My
journey began at MIT as a research assistant to Professors Franco Modigiliani and Albert Ando
during the period of development of the MPS model, continued at Laurence H. Meyer &
Associates with the development of The Washington University Macro Model under the
direction of my partner, Joel Prakken, and the use of that model for both forecasting and policy
analysis, and now has taken me to the Board of Governors where macro models have long
played an important role in forecasting and policy analysis and the MPS model has recently been
replaced by the FRB-US model.
I bring to this panel a perspective shaped by both my earlier experience and my
new responsibilities. I will focus my presentation on the role of structural macro models in the
monetary policy process, compare the use of models at the Board with their use at Laurence H.
Meyer & Associates, and discuss how the recently introduced innovations in the Federal Reserve
model might further advance the usefulness of models in the monetary policy process.
I. Structural Models and Monetary Policy Analysis
I want to focus on three contributions of models to the monetary policy process:
as an input to the forecast process; as a vehicle for analyzing alternative scenarios; and a vehicle
for developing a strategy for implementing monetary policy that disciplines the juggling of
multiple objectives and ensures a bridge from short-run policy to long-run objectives.
1. The forecast context for monetary policy decisions
Because monetary policy has the ability to adjust quickly to changing economic
conditions and because lags in the response to monetary policy make it important that monetary
policy be forward-looking, monetary policy is very much influenced both by incoming data and
by forecasts of spending and price developments. Forecasts are central to monetary policy
setting. Models make a valuable contribution to forecasting. Therefore, models can make an
important contribution to the setting of monetary policy.
Models capture historical regularities, identify key assumptions that must be made
to condition the forecast, embody estimates of the effects of past and future policy actions on the
economy, and provide a disciplined approach to learning from past errors. I attribute much of
the forecasting success of myself and my partners at LHM&A to the way in which we allowed
our model to discipline our judgment in making the forecast. A model also helps to defend and
communicate the forecast, by providing a coherent story that ties together the details of the
forecast. It also helps to isolate the source of disagreements about the forecast, helping to
separate differences in assumptions (oil prices, fiscal policy, etc.) from disagreements about the
structure of the economy or judgments about special factors that the model may not fully
capture.
At the Board, the staff forecast, presented in the Green Book prior to each of the
eight FOMC meetings each year, is fundamentally judgmental. It is developed by a team of
sector specialists who consult, but are not bound by, a number of structural econometric
equations describing their sectors, and further armed, in some cases, with reduced-form
equations and atheoretical time series models. The team develops the forecast within the context
of agreed-upon conditioning assumptions, including, for example, a path for short-term interest
rates, fiscal policy, oil prices, and foreign economic policies. They begin with an income
constraint and then participate in an interactive process of revisions to ensure that the
aggregation of sector forecasts is consistent with the evolving forecast for the overall level of
output.
Models play an important supporting role in the development of the staff forecast.
A separate model group uses a formal structural macroeconometric model, the FRB-US Model,
to make a "pure model" forecast which is also available to the FOMC and is an input to the
judgmental forecast process. The model forecast is conditioned by the same set of assumptions
as the judgmental forecast and statistical models are used to generate the path of adjustment
factors, avoiding any role for judgment in the forecast. The members of the model group also
actively participate in the discussions as the judgmental forecast evolves, focusing in particular
on the consistency between the adjustment factors that would be required to impose the
judgmental forecast on the model and the pattern of adjustment factors in the "pure" model
forecast.
There are two important differences from the private sector use of models for
forecasting, at least based on my experience at LHM&A. First, the staff is not truly making a
forecast of economic activity, prices, etc., because the staff forecast is usually conditioned on an
unchanged path of the funds rate. Thus the staff is projecting how the economy would evolve if
there were no change in the federal funds rate (which does not even always translate cleanly into
no change in monetary policy). The rationale for this procedure is to separate the forecast
process from the policy-making process, and therefore avoid appearing to prejudge the FOMC's
decisions. This procedure may be modified when there is a strong presumption that conditions
will unambiguously call for significant action if the Committee is to achieve its objectives. But it
does, nevertheless, make the forecast process at the Board fundamentally different from that in
the private sector where one of the key decisions in the forecast is the direction of monetary
policy. It is ironic that, at the Board, where the staff is presumably more knowledgeable about
the direction of policy than in the private sector, forecasting is constrained from using that
information in developing the forecast. On the other hand, the practice at the Board may be very
well suited to the process of making policy by forcing the FOMC to confront the implications of
maintaining an unchanged path for the funds rate.
A second difference relative to my experience in the private sector has to do with
the way in which judgment and model interact in the development of the forecast. My first
impression of the process at the Board was that the judgmental team made its forecast without a
model and the model team made its forecast without judgment, leaving the blending of model
and judgment to be worked out in the process of discussion and iteration as the judgmental
group looks at the model output and the model group joins the discussion of the forecast. The
process is, I have come to appreciate, more complicated and subtle than this caricature. For
example, when there have been important shocks (e.g., unexpected rise in oil prices or an
increase in the minimum wage), model simulations of the effect of the shocks will provide a
point of departure for the initial judgmental forecast. But it is, nevertheless, a different way of
combining model and judgment than we used at LHM&A where the model played a more
central role in the forecast process. An advantage of the Board's approach is that it makes the
forecast less dependent on a single model (perhaps desirable given the diversity of views on the
FOMC) and forces recognition of uncertainties in the outlook when alternative sector models
yield very different forecasts.
2. Policy alternatives and alternative scenarios to support FOMC policy decisions
A second valuable contribution of models is to provide alternative scenarios
around a base forecast. I will focus on three examples of this use of models at the Board, though
there is also, of course, widespread use of alternative model-based scenario analysis in the
private sector.
First, the staff regularly provides alternative forecasts roughly corresponding to
the policy options that will be considered at the upcoming FOMC meeting. The staff first
imposes the judgmental forecast on the FRB-US model and then uses the model to provide
alternative scenarios for a policy of rising rates and a policy of declining rates, bracketing the
staff forecast which assumes an unchanged federal funds rate. While this is the most direct use
of the model in the forecast process, it is recognized that it has become a problematical one,
especially given the structure of the new FRB-US model that otherwise treats policy as
determined by a rule, a prerequisite to the forward-looking approach to expectations formation
that is a major innovation in the new model. Indeed, it might well be that the presentation of a
forecast that incorporates a simple monetary policy rule might be a more useful complement to
the staff's judgmental forecast than the mechanical bracketing of the judgmental forecast with
pre-determined paths of rising or falling rates.
Second, the staff, on occasion, uses the model to provide information about the
projected effects of significant contingencies: e.g., the return of Iraq to oil exporting under the
U.N. agreement for humanitarian aid or the effect of an increase in the minimum wage. Models
are particularly well suited to providing this information.
Third, the model can be used to evaluate the consistency of alternative policies
with the Federal Reserve's long-run objective of price stability. One of the challenges of
monetary policy making is to ensure that the meeting-to-meeting policy deliberations maintain a
disciplined focus on the Federal Reserve's long-term price stability objective. To facilitate this
focus, five-year simulations under alternative policy assumptions are generally run
semi-annually, to coincide with the FOMC meetings preceding the preparation of the
Humphrey-Hawkins report and the Chairman's testimony on monetary policy before Congress.
These simulations have recently focused on policy options allowing for more gradual or more
rapid convergence over time to long-run inflation targets, allowing the FOMC to focus on both
the different time-paths to achieve the long-run objective and the alternative paths of output and
employment during the transition to the long-run target.
3. Policy rules to inform discretionary monetary policy
A third contribution of models to the monetary policy process is through
simulations with alternative rules for Federal Reserve action. At LHM&A we designed our
model to offer users four policy regimes: setting paths for the money supply, nonborrowed
reserves or the federal funds rate or turning on a reaction function according to which the federal
funds rate responds to developments in output, unemployment and inflation. While we
increasingly used the reaction function in our analysis of alternative fiscal policies, we did not
routinely take advantage of the reaction function to forecast monetary policy. Another irony is
that there is a much more active interest in the implications of monetary policy rules at the
Board, where discretionary policy is made, than in the private sector, where estimated rules
might be effectively used to forecast monetary policy.
The staff has examined a number of alternative rules, including those based on
monetary aggregates, commodity prices, exchange rates, nominal income, and, most recently,
Taylor-type rules. These rules, in effect, adjust the real federal funds rate relative to some
long-run equilibrium level in response to the gaps between actual and potential output and
between inflation and some long-run inflation target.
Such a rule can be interpreted as either a descriptive or normative guide to policy.
If the parameters of the policy rule are estimated over some recent sample period, the rule may
describe the average response of the FOMC over the period. Alternatively, parameters can be
derived from some optimizing framework, dependent on a specific objective function and model
of the economy. Stochastic simulations with such a rule can provide some confidence that
following the rule will contribute to both short-run stabilization and long-term inflation goals in
response to historical shocks to the economy and the rule, in turn, can provide discipline to
discretionary policy by providing guidance on when and how aggressively to move interest rates
in response to movements in output and inflation.
The focus on rules is much more important under an interest rate operating
procedure than under an operating procedure focused directly on monetary aggregate targets and
is also more important under an interest rate operating procedure when the monetary aggregates,
as has been the case for some time, do not bear a stable relationship to overall economic
performance and therefore do not provide useful information about when and how aggressively
to change interest rates. Taylor-type rules, in this environment, provide a disciplined approach to
varying interest rates in response to economic developments that both ensures a pro-cyclical
response of interest rates to demand shocks and imposes a nominal anchor in much the same
way as would be the case under a monetary aggregate strategy with a stable money demand
function. For this reason, I like to refer to the strategy implicit in such rules as "monetarism
without money."
This should not suggest that we can write a rule that is appropriate, in all
circumstances, to all varieties of shocks, and to all the varieties of cyclical experience. Rules, at
best, can discipline judgment rather than replace judgment. A particular problem with
Taylor-type rules is that we do not know the equilibrium real federal funds rate and, whatever it
might be at one point in time, it likely varies over time. There is considerable research under
way at the Board in an effort to find specifications and parameters for rules which achieve an
efficient balancing of inflation and output variability and provide guidance about patterns and
aggressiveness of interest rate adjustments consistent with the stabilizing properties of
high-performing rules.
II. The FRB-US Model: Rational Expectations in a Sticky-Price Model
The newly redesigned model at the Board, the FRB-US model, replaces the MPS
model. The MPS model, developed in the mid to late-1960s, revolutionized macroeconometric
modeling and set the standard for a considerable period of time. The Board participated in the
development of the MPS model and then became its home and the Board staff kept the faith
alive during the lean years when such models lost respectability in academic circles, even as
their usefulness and value in forecasting and practical policy analysis was growing in the "real"
world. The FRB-US model retains much of the underlying structure in terms of equilibrium
relationships and even more of the fundamental simulation properties of the MPS model, but
significantly modernizes the estimation of the model and the treatment of expectations.
The vision in the new work is to separate macro-dynamics into adjustment cost
and expectations formation components, with adjustment costs imposing a degree of inertia and
expectations introducing a forward-looking element into the dynamics. The net result is a
structure that integrates rational expectations into a sticky-price model. In this respect, the new
model follows closely the approach pioneered by John Taylor. Finally, the estimation technique
makes use of co-integration and an error-correction framework.
Financial and exchange rate relationships are based on arbitrage equations, with
no adjustment costs but with explicitly forward-looking expectations. The specification of
nonfinancial equations, in contrast, incorporates both adjustment costs and rational expectations.
Rational expectations are implemented in two alternative ways. First, expectations
can be specified as "model-consistent" expectations; that is, the expectations about future
inflation can be set to equal future inflation (perfect foresight) through iterative solutions of the
model. Model-consistent expectations may, but need not, assume that the private sector has
complete knowledge of the policy rule being followed by the Federal Reserve. In the second
approach, expectations are also viewed as being model-consistent, but in this case the model
relevant to expectations is not precisely the same as the FRB-US model. Instead, expectations
are formed based on a simpler VAR model of the economy. The VAR model always includes
three variables -- the output gap, a short-term interest rate, and inflation. When expectations of
additional sector-specific variables are required, the system is expanded to include the additional
variable. A unique aspect of the VAR expectations is that these equations also incorporate
explicit forward-looking information through an error-correction specification. For example, the
VAR equations include a term for the gap between actual inflation and the public's "long-run"
expectations of inflation, based on survey measures of long-run inflation expectations which, in
turn, might be viewed as based on a combination of the public's perception of the Federal
Reserve's reaction function, including its tolerance of inflation over the long run. The equations
also include the gap between actual short-term interest rates and the public's long-run
expectations of short-term rates, gleaned from the yield curve.
The model retains the neo-Classical synthesis vision of the MPS model
-short-run output dynamics based on sticky prices and long-run Classical properties associated
with price flexibility -- and therefore produces multiplier results, both in the short and longer
runs, that are very similar to those produced by the MPS model. The result is that the model
produces, for the most part, what may be the best of two worlds - a modern form and traditional
results! But the better articulated role of expectations in the new model also allows a richer
analysis of the response to those policy actions which might have immediate impacts on inflation
and/or interest rate expectations.
The model has several advantages. The first is it may be more credible to a wider
audience because of its modernization in terms of cointegration and error-learning specification
on the one hand and explicit use of rational expectations on the other hand. Second, the model is
much more flexible in terms of research potential. It allows one to study in particular how the
response to monetary or fiscal policies depends on features of the expectation formation process.
Third, the model forces the user to make assumptions explicitly about expectations formation
that otherwise could be avoided or hidden.
Let me give two examples of policy options that can be analyzed more effectively
in the new model. First, consider a deficit reduction package that is credible and promises to
lower interest rates in the future. In models like MPS and WUMM, the mechanical fiscal policy
simulation would ignore any "bond market effect" associated with changed expectations about
future short-term rates. One could, of course, add-factor downward the long-term bond rate in
the term structure equation to impose a bond market effect, but the structure of the model neither
immediately points you in this direction nor provides any guidance about how to intervene. In
FRB-US, in contrast, one cannot avoid making an explicit assumption about the credibility of
such a policy (through assumptions about future short-term interest rates in the VAR
expectations or in the context of model-consistent expectations) and the assumption made about
credibility will importantly affect the short-run dynamics though not the long-run effects of the
policy.
Second, consider the transitional costs of reducing inflation. The transitional
effects on output depend importantly on the assumptions made about the credibility of the
inflation commitment. Note, however, that there are significant transitional output costs of
disinflation even under full credibility and the model-consistent specification of rational
expectations, arising from the sticky price implication of the adjustment cost specification. For
my part, I prefer the FRB-US simulations based on limited rather than perfect credibility,
because I do not believe that credibility effects significantly diminish the transition costs of
lowering inflation. But I also value having a disciplined approach to showing how the costs of
disinflation would vary with the differing degrees of credibility.
___________________
References
Brayton, F., A. Levin, R. Tryon, and J. Williams. "The Evolution of Macro
Models at the Federal Reserve Board." mimeo. Board of Governors of the Federal Reserve
System, November 1996.
Brayton, F. and P. Tinsley. "A Guide to FRB/US: A Macroeconometric Model of
the United States." FEDS 96-42, 1996.
Reifschneider, D., D. Stockton, and D. Wilcox. "Econometric Models and the
Monetary Policy Process." mimeo. Board of Governors of the Federal Reserve System,
November 1996.
Taylor, J. "Discretion versus Policy Rules in Practice." Carnegie Rochester
Conference Series on Public Policy, vol. 39, 1993. |
---[PAGE_BREAK]---
# Mr. Meyer examines the role for structural macroeconomic models in the
monetary policy process Remarks by Mr. Laurence H. Meyer, a member of the Board of Governors of the US Federal Reserve System, at the AEA Panel on Monetary and Fiscal Policy held in New Orleans on 5/1/97.
## The Role for Structural Macroeconomic Models
I am in the middle of my third interesting and active encounter with the development and/or use of macroeconometric models for forecasting and policy analysis. My journey began at MIT as a research assistant to Professors Franco Modigiliani and Albert Ando during the period of development of the MPS model, continued at Laurence H. Meyer \& Associates with the development of The Washington University Macro Model under the direction of my partner, Joel Prakken, and the use of that model for both forecasting and policy analysis, and now has taken me to the Board of Governors where macro models have long played an important role in forecasting and policy analysis and the MPS model has recently been replaced by the FRB-US model.
I bring to this panel a perspective shaped by both my earlier experience and my new responsibilities. I will focus my presentation on the role of structural macro models in the monetary policy process, compare the use of models at the Board with their use at Laurence H. Meyer \& Associates, and discuss how the recently introduced innovations in the Federal Reserve model might further advance the usefulness of models in the monetary policy process.
## I. Structural Models and Monetary Policy Analysis
I want to focus on three contributions of models to the monetary policy process: as an input to the forecast process; as a vehicle for analyzing alternative scenarios; and a vehicle for developing a strategy for implementing monetary policy that disciplines the juggling of multiple objectives and ensures a bridge from short-run policy to long-run objectives.
## 1. The forecast context for monetary policy decisions
Because monetary policy has the ability to adjust quickly to changing economic conditions and because lags in the response to monetary policy make it important that monetary policy be forward-looking, monetary policy is very much influenced both by incoming data and by forecasts of spending and price developments. Forecasts are central to monetary policy setting. Models make a valuable contribution to forecasting. Therefore, models can make an important contribution to the setting of monetary policy.
Models capture historical regularities, identify key assumptions that must be made to condition the forecast, embody estimates of the effects of past and future policy actions on the economy, and provide a disciplined approach to learning from past errors. I attribute much of the forecasting success of myself and my partners at LHM\&A to the way in which we allowed our model to discipline our judgment in making the forecast. A model also helps to defend and communicate the forecast, by providing a coherent story that ties together the details of the forecast. It also helps to isolate the source of disagreements about the forecast, helping to separate differences in assumptions (oil prices, fiscal policy, etc.) from disagreements about the structure of the economy or judgments about special factors that the model may not fully capture.
---[PAGE_BREAK]---
At the Board, the staff forecast, presented in the Green Book prior to each of the eight FOMC meetings each year, is fundamentally judgmental. It is developed by a team of sector specialists who consult, but are not bound by, a number of structural econometric equations describing their sectors, and further armed, in some cases, with reduced-form equations and atheoretical time series models. The team develops the forecast within the context of agreed-upon conditioning assumptions, including, for example, a path for short-term interest rates, fiscal policy, oil prices, and foreign economic policies. They begin with an income constraint and then participate in an interactive process of revisions to ensure that the aggregation of sector forecasts is consistent with the evolving forecast for the overall level of output.
Models play an important supporting role in the development of the staff forecast. A separate model group uses a formal structural macroeconometric model, the FRB-US Model, to make a "pure model" forecast which is also available to the FOMC and is an input to the judgmental forecast process. The model forecast is conditioned by the same set of assumptions as the judgmental forecast and statistical models are used to generate the path of adjustment factors, avoiding any role for judgment in the forecast. The members of the model group also actively participate in the discussions as the judgmental forecast evolves, focusing in particular on the consistency between the adjustment factors that would be required to impose the judgmental forecast on the model and the pattern of adjustment factors in the "pure" model forecast.
There are two important differences from the private sector use of models for forecasting, at least based on my experience at LHM\&A. First, the staff is not truly making a forecast of economic activity, prices, etc., because the staff forecast is usually conditioned on an unchanged path of the funds rate. Thus the staff is projecting how the economy would evolve if there were no change in the federal funds rate (which does not even always translate cleanly into no change in monetary policy). The rationale for this procedure is to separate the forecast process from the policy-making process, and therefore avoid appearing to prejudge the FOMC's decisions. This procedure may be modified when there is a strong presumption that conditions will unambiguously call for significant action if the Committee is to achieve its objectives. But it does, nevertheless, make the forecast process at the Board fundamentally different from that in the private sector where one of the key decisions in the forecast is the direction of monetary policy. It is ironic that, at the Board, where the staff is presumably more knowledgeable about the direction of policy than in the private sector, forecasting is constrained from using that information in developing the forecast. On the other hand, the practice at the Board may be very well suited to the process of making policy by forcing the FOMC to confront the implications of maintaining an unchanged path for the funds rate.
A second difference relative to my experience in the private sector has to do with the way in which judgment and model interact in the development of the forecast. My first impression of the process at the Board was that the judgmental team made its forecast without a model and the model team made its forecast without judgment, leaving the blending of model and judgment to be worked out in the process of discussion and iteration as the judgmental group looks at the model output and the model group joins the discussion of the forecast. The process is, I have come to appreciate, more complicated and subtle than this caricature. For example, when there have been important shocks (e.g., unexpected rise in oil prices or an increase in the minimum wage), model simulations of the effect of the shocks will provide a point of departure for the initial judgmental forecast. But it is, nevertheless, a different way of combining model and judgment than we used at LHM\&A where the model played a more central role in the forecast process. An advantage of the Board's approach is that it makes the
---[PAGE_BREAK]---
forecast less dependent on a single model (perhaps desirable given the diversity of views on the FOMC) and forces recognition of uncertainties in the outlook when alternative sector models yield very different forecasts.
# 2. Policy alternatives and alternative scenarios to support FOMC policy decisions
A second valuable contribution of models is to provide alternative scenarios around a base forecast. I will focus on three examples of this use of models at the Board, though there is also, of course, widespread use of alternative model-based scenario analysis in the private sector.
First, the staff regularly provides alternative forecasts roughly corresponding to the policy options that will be considered at the upcoming FOMC meeting. The staff first imposes the judgmental forecast on the FRB-US model and then uses the model to provide alternative scenarios for a policy of rising rates and a policy of declining rates, bracketing the staff forecast which assumes an unchanged federal funds rate. While this is the most direct use of the model in the forecast process, it is recognized that it has become a problematical one, especially given the structure of the new FRB-US model that otherwise treats policy as determined by a rule, a prerequisite to the forward-looking approach to expectations formation that is a major innovation in the new model. Indeed, it might well be that the presentation of a forecast that incorporates a simple monetary policy rule might be a more useful complement to the staff's judgmental forecast than the mechanical bracketing of the judgmental forecast with pre-determined paths of rising or falling rates.
Second, the staff, on occasion, uses the model to provide information about the projected effects of significant contingencies: e.g., the return of Iraq to oil exporting under the U.N. agreement for humanitarian aid or the effect of an increase in the minimum wage. Models are particularly well suited to providing this information.
Third, the model can be used to evaluate the consistency of alternative policies with the Federal Reserve's long-run objective of price stability. One of the challenges of monetary policy making is to ensure that the meeting-to-meeting policy deliberations maintain a disciplined focus on the Federal Reserve's long-term price stability objective. To facilitate this focus, five-year simulations under alternative policy assumptions are generally run semi-annually, to coincide with the FOMC meetings preceding the preparation of the Humphrey-Hawkins report and the Chairman's testimony on monetary policy before Congress. These simulations have recently focused on policy options allowing for more gradual or more rapid convergence over time to long-run inflation targets, allowing the FOMC to focus on both the different time-paths to achieve the long-run objective and the alternative paths of output and employment during the transition to the long-run target.
## 3. Policy rules to inform discretionary monetary policy
A third contribution of models to the monetary policy process is through simulations with alternative rules for Federal Reserve action. At LHM\&A we designed our model to offer users four policy regimes: setting paths for the money supply, nonborrowed reserves or the federal funds rate or turning on a reaction function according to which the federal funds rate responds to developments in output, unemployment and inflation. While we increasingly used the reaction function in our analysis of alternative fiscal policies, we did not routinely take advantage of the reaction function to forecast monetary policy. Another irony is that there is a much more active interest in the implications of monetary policy rules at the
---[PAGE_BREAK]---
Board, where discretionary policy is made, than in the private sector, where estimated rules might be effectively used to forecast monetary policy.
The staff has examined a number of alternative rules, including those based on monetary aggregates, commodity prices, exchange rates, nominal income, and, most recently, Taylor-type rules. These rules, in effect, adjust the real federal funds rate relative to some long-run equilibrium level in response to the gaps between actual and potential output and between inflation and some long-run inflation target.
Such a rule can be interpreted as either a descriptive or normative guide to policy. If the parameters of the policy rule are estimated over some recent sample period, the rule may describe the average response of the FOMC over the period. Alternatively, parameters can be derived from some optimizing framework, dependent on a specific objective function and model of the economy. Stochastic simulations with such a rule can provide some confidence that following the rule will contribute to both short-run stabilization and long-term inflation goals in response to historical shocks to the economy and the rule, in turn, can provide discipline to discretionary policy by providing guidance on when and how aggressively to move interest rates in response to movements in output and inflation.
The focus on rules is much more important under an interest rate operating procedure than under an operating procedure focused directly on monetary aggregate targets and is also more important under an interest rate operating procedure when the monetary aggregates, as has been the case for some time, do not bear a stable relationship to overall economic performance and therefore do not provide useful information about when and how aggressively to change interest rates. Taylor-type rules, in this environment, provide a disciplined approach to varying interest rates in response to economic developments that both ensures a pro-cyclical response of interest rates to demand shocks and imposes a nominal anchor in much the same way as would be the case under a monetary aggregate strategy with a stable money demand function. For this reason, I like to refer to the strategy implicit in such rules as "monetarism without money."
This should not suggest that we can write a rule that is appropriate, in all circumstances, to all varieties of shocks, and to all the varieties of cyclical experience. Rules, at best, can discipline judgment rather than replace judgment. A particular problem with Taylor-type rules is that we do not know the equilibrium real federal funds rate and, whatever it might be at one point in time, it likely varies over time. There is considerable research under way at the Board in an effort to find specifications and parameters for rules which achieve an efficient balancing of inflation and output variability and provide guidance about patterns and aggressiveness of interest rate adjustments consistent with the stabilizing properties of high-performing rules.
# II. The FRB-US Model: Rational Expectations in a Sticky-Price Model
The newly redesigned model at the Board, the FRB-US model, replaces the MPS model. The MPS model, developed in the mid to late-1960s, revolutionized macroeconometric modeling and set the standard for a considerable period of time. The Board participated in the development of the MPS model and then became its home and the Board staff kept the faith alive during the lean years when such models lost respectability in academic circles, even as their usefulness and value in forecasting and practical policy analysis was growing in the "real" world. The FRB-US model retains much of the underlying structure in terms of equilibrium
---[PAGE_BREAK]---
relationships and even more of the fundamental simulation properties of the MPS model, but significantly modernizes the estimation of the model and the treatment of expectations.
The vision in the new work is to separate macro-dynamics into adjustment cost and expectations formation components, with adjustment costs imposing a degree of inertia and expectations introducing a forward-looking element into the dynamics. The net result is a structure that integrates rational expectations into a sticky-price model. In this respect, the new model follows closely the approach pioneered by John Taylor. Finally, the estimation technique makes use of co-integration and an error-correction framework.
Financial and exchange rate relationships are based on arbitrage equations, with no adjustment costs but with explicitly forward-looking expectations. The specification of nonfinancial equations, in contrast, incorporates both adjustment costs and rational expectations.
Rational expectations are implemented in two alternative ways. First, expectations can be specified as "model-consistent" expectations; that is, the expectations about future inflation can be set to equal future inflation (perfect foresight) through iterative solutions of the model. Model-consistent expectations may, but need not, assume that the private sector has complete knowledge of the policy rule being followed by the Federal Reserve. In the second approach, expectations are also viewed as being model-consistent, but in this case the model relevant to expectations is not precisely the same as the FRB-US model. Instead, expectations are formed based on a simpler VAR model of the economy. The VAR model always includes three variables -- the output gap, a short-term interest rate, and inflation. When expectations of additional sector-specific variables are required, the system is expanded to include the additional variable. A unique aspect of the VAR expectations is that these equations also incorporate explicit forward-looking information through an error-correction specification. For example, the VAR equations include a term for the gap between actual inflation and the public's "long-run" expectations of inflation, based on survey measures of long-run inflation expectations which, in turn, might be viewed as based on a combination of the public's perception of the Federal Reserve's reaction function, including its tolerance of inflation over the long run. The equations also include the gap between actual short-term interest rates and the public's long-run expectations of short-term rates, gleaned from the yield curve.
The model retains the neo-Classical synthesis vision of the MPS model -short-run output dynamics based on sticky prices and long-run Classical properties associated with price flexibility -- and therefore produces multiplier results, both in the short and longer runs, that are very similar to those produced by the MPS model. The result is that the model produces, for the most part, what may be the best of two worlds - a modern form and traditional results! But the better articulated role of expectations in the new model also allows a richer analysis of the response to those policy actions which might have immediate impacts on inflation and/or interest rate expectations.
The model has several advantages. The first is it may be more credible to a wider audience because of its modernization in terms of cointegration and error-learning specification on the one hand and explicit use of rational expectations on the other hand. Second, the model is much more flexible in terms of research potential. It allows one to study in particular how the response to monetary or fiscal policies depends on features of the expectation formation process. Third, the model forces the user to make assumptions explicitly about expectations formation that otherwise could be avoided or hidden.
---[PAGE_BREAK]---
Let me give two examples of policy options that can be analyzed more effectively in the new model. First, consider a deficit reduction package that is credible and promises to lower interest rates in the future. In models like MPS and WUMM, the mechanical fiscal policy simulation would ignore any "bond market effect" associated with changed expectations about future short-term rates. One could, of course, add-factor downward the long-term bond rate in the term structure equation to impose a bond market effect, but the structure of the model neither immediately points you in this direction nor provides any guidance about how to intervene. In FRB-US, in contrast, one cannot avoid making an explicit assumption about the credibility of such a policy (through assumptions about future short-term interest rates in the VAR expectations or in the context of model-consistent expectations) and the assumption made about credibility will importantly affect the short-run dynamics though not the long-run effects of the policy.
Second, consider the transitional costs of reducing inflation. The transitional effects on output depend importantly on the assumptions made about the credibility of the inflation commitment. Note, however, that there are significant transitional output costs of disinflation even under full credibility and the model-consistent specification of rational expectations, arising from the sticky price implication of the adjustment cost specification. For my part, I prefer the FRB-US simulations based on limited rather than perfect credibility, because I do not believe that credibility effects significantly diminish the transition costs of lowering inflation. But I also value having a disciplined approach to showing how the costs of disinflation would vary with the differing degrees of credibility.
# References
Brayton, F., A. Levin, R. Tryon, and J. Williams. "The Evolution of Macro Models at the Federal Reserve Board." mimeo. Board of Governors of the Federal Reserve System, November 1996.
Brayton, F. and P. Tinsley. "A Guide to FRB/US: A Macroeconometric Model of the United States." FEDS 96-42, 1996.
Reifschneider, D., D. Stockton, and D. Wilcox. "Econometric Models and the Monetary Policy Process." mimeo. Board of Governors of the Federal Reserve System, November 1996.
Taylor, J. "Discretion versus Policy Rules in Practice." Carnegie Rochester Conference Series on Public Policy, vol. 39, 1993. | Laurence H Meyer | United States | https://www.bis.org/review/r970108a.pdf | monetary policy process Remarks by Mr. Laurence H. Meyer, a member of the Board of Governors of the US Federal Reserve System, at the AEA Panel on Monetary and Fiscal Policy held in New Orleans on 5/1/97. I am in the middle of my third interesting and active encounter with the development and/or use of macroeconometric models for forecasting and policy analysis. My journey began at MIT as a research assistant to Professors Franco Modigiliani and Albert Ando during the period of development of the MPS model, continued at Laurence H. Meyer \& Associates with the development of The Washington University Macro Model under the direction of my partner, Joel Prakken, and the use of that model for both forecasting and policy analysis, and now has taken me to the Board of Governors where macro models have long played an important role in forecasting and policy analysis and the MPS model has recently been replaced by the FRB-US model. I bring to this panel a perspective shaped by both my earlier experience and my new responsibilities. I will focus my presentation on the role of structural macro models in the monetary policy process, compare the use of models at the Board with their use at Laurence H. Meyer \& Associates, and discuss how the recently introduced innovations in the Federal Reserve model might further advance the usefulness of models in the monetary policy process. I want to focus on three contributions of models to the monetary policy process: as an input to the forecast process; as a vehicle for analyzing alternative scenarios; and a vehicle for developing a strategy for implementing monetary policy that disciplines the juggling of multiple objectives and ensures a bridge from short-run policy to long-run objectives. Because monetary policy has the ability to adjust quickly to changing economic conditions and because lags in the response to monetary policy make it important that monetary policy be forward-looking, monetary policy is very much influenced both by incoming data and by forecasts of spending and price developments. Forecasts are central to monetary policy setting. Models make a valuable contribution to forecasting. Therefore, models can make an important contribution to the setting of monetary policy. Models capture historical regularities, identify key assumptions that must be made to condition the forecast, embody estimates of the effects of past and future policy actions on the economy, and provide a disciplined approach to learning from past errors. I attribute much of the forecasting success of myself and my partners at LHM\&A to the way in which we allowed our model to discipline our judgment in making the forecast. A model also helps to defend and communicate the forecast, by providing a coherent story that ties together the details of the forecast. It also helps to isolate the source of disagreements about the forecast, helping to separate differences in assumptions (oil prices, fiscal policy, etc.) from disagreements about the structure of the economy or judgments about special factors that the model may not fully capture. At the Board, the staff forecast, presented in the Green Book prior to each of the eight FOMC meetings each year, is fundamentally judgmental. It is developed by a team of sector specialists who consult, but are not bound by, a number of structural econometric equations describing their sectors, and further armed, in some cases, with reduced-form equations and atheoretical time series models. The team develops the forecast within the context of agreed-upon conditioning assumptions, including, for example, a path for short-term interest rates, fiscal policy, oil prices, and foreign economic policies. They begin with an income constraint and then participate in an interactive process of revisions to ensure that the aggregation of sector forecasts is consistent with the evolving forecast for the overall level of output. Models play an important supporting role in the development of the staff forecast. A separate model group uses a formal structural macroeconometric model, the FRB-US Model, to make a "pure model" forecast which is also available to the FOMC and is an input to the judgmental forecast process. The model forecast is conditioned by the same set of assumptions as the judgmental forecast and statistical models are used to generate the path of adjustment factors, avoiding any role for judgment in the forecast. The members of the model group also actively participate in the discussions as the judgmental forecast evolves, focusing in particular on the consistency between the adjustment factors that would be required to impose the judgmental forecast on the model and the pattern of adjustment factors in the "pure" model forecast. There are two important differences from the private sector use of models for forecasting, at least based on my experience at LHM\&A. First, the staff is not truly making a forecast of economic activity, prices, etc., because the staff forecast is usually conditioned on an unchanged path of the funds rate. Thus the staff is projecting how the economy would evolve if there were no change in the federal funds rate (which does not even always translate cleanly into no change in monetary policy). The rationale for this procedure is to separate the forecast process from the policy-making process, and therefore avoid appearing to prejudge the FOMC's decisions. This procedure may be modified when there is a strong presumption that conditions will unambiguously call for significant action if the Committee is to achieve its objectives. But it does, nevertheless, make the forecast process at the Board fundamentally different from that in the private sector where one of the key decisions in the forecast is the direction of monetary policy. It is ironic that, at the Board, where the staff is presumably more knowledgeable about the direction of policy than in the private sector, forecasting is constrained from using that information in developing the forecast. On the other hand, the practice at the Board may be very well suited to the process of making policy by forcing the FOMC to confront the implications of maintaining an unchanged path for the funds rate. A second difference relative to my experience in the private sector has to do with the way in which judgment and model interact in the development of the forecast. My first impression of the process at the Board was that the judgmental team made its forecast without a model and the model team made its forecast without judgment, leaving the blending of model and judgment to be worked out in the process of discussion and iteration as the judgmental group looks at the model output and the model group joins the discussion of the forecast. The process is, I have come to appreciate, more complicated and subtle than this caricature. For example, when there have been important shocks (e.g., unexpected rise in oil prices or an increase in the minimum wage), model simulations of the effect of the shocks will provide a point of departure for the initial judgmental forecast. But it is, nevertheless, a different way of combining model and judgment than we used at LHM\&A where the model played a more central role in the forecast process. An advantage of the Board's approach is that it makes the forecast less dependent on a single model (perhaps desirable given the diversity of views on the FOMC) and forces recognition of uncertainties in the outlook when alternative sector models yield very different forecasts. A second valuable contribution of models is to provide alternative scenarios around a base forecast. I will focus on three examples of this use of models at the Board, though there is also, of course, widespread use of alternative model-based scenario analysis in the private sector. First, the staff regularly provides alternative forecasts roughly corresponding to the policy options that will be considered at the upcoming FOMC meeting. The staff first imposes the judgmental forecast on the FRB-US model and then uses the model to provide alternative scenarios for a policy of rising rates and a policy of declining rates, bracketing the staff forecast which assumes an unchanged federal funds rate. While this is the most direct use of the model in the forecast process, it is recognized that it has become a problematical one, especially given the structure of the new FRB-US model that otherwise treats policy as determined by a rule, a prerequisite to the forward-looking approach to expectations formation that is a major innovation in the new model. Indeed, it might well be that the presentation of a forecast that incorporates a simple monetary policy rule might be a more useful complement to the staff's judgmental forecast than the mechanical bracketing of the judgmental forecast with pre-determined paths of rising or falling rates. Second, the staff, on occasion, uses the model to provide information about the projected effects of significant contingencies: e.g., the return of Iraq to oil exporting under the U.N. agreement for humanitarian aid or the effect of an increase in the minimum wage. Models are particularly well suited to providing this information. Third, the model can be used to evaluate the consistency of alternative policies with the Federal Reserve's long-run objective of price stability. One of the challenges of monetary policy making is to ensure that the meeting-to-meeting policy deliberations maintain a disciplined focus on the Federal Reserve's long-term price stability objective. To facilitate this focus, five-year simulations under alternative policy assumptions are generally run semi-annually, to coincide with the FOMC meetings preceding the preparation of the Humphrey-Hawkins report and the Chairman's testimony on monetary policy before Congress. These simulations have recently focused on policy options allowing for more gradual or more rapid convergence over time to long-run inflation targets, allowing the FOMC to focus on both the different time-paths to achieve the long-run objective and the alternative paths of output and employment during the transition to the long-run target. A third contribution of models to the monetary policy process is through simulations with alternative rules for Federal Reserve action. At LHM\&A we designed our model to offer users four policy regimes: setting paths for the money supply, nonborrowed reserves or the federal funds rate or turning on a reaction function according to which the federal funds rate responds to developments in output, unemployment and inflation. While we increasingly used the reaction function in our analysis of alternative fiscal policies, we did not routinely take advantage of the reaction function to forecast monetary policy. Another irony is that there is a much more active interest in the implications of monetary policy rules at the Board, where discretionary policy is made, than in the private sector, where estimated rules might be effectively used to forecast monetary policy. The staff has examined a number of alternative rules, including those based on monetary aggregates, commodity prices, exchange rates, nominal income, and, most recently, Taylor-type rules. These rules, in effect, adjust the real federal funds rate relative to some long-run equilibrium level in response to the gaps between actual and potential output and between inflation and some long-run inflation target. Such a rule can be interpreted as either a descriptive or normative guide to policy. If the parameters of the policy rule are estimated over some recent sample period, the rule may describe the average response of the FOMC over the period. Alternatively, parameters can be derived from some optimizing framework, dependent on a specific objective function and model of the economy. Stochastic simulations with such a rule can provide some confidence that following the rule will contribute to both short-run stabilization and long-term inflation goals in response to historical shocks to the economy and the rule, in turn, can provide discipline to discretionary policy by providing guidance on when and how aggressively to move interest rates in response to movements in output and inflation. The focus on rules is much more important under an interest rate operating procedure than under an operating procedure focused directly on monetary aggregate targets and is also more important under an interest rate operating procedure when the monetary aggregates, as has been the case for some time, do not bear a stable relationship to overall economic performance and therefore do not provide useful information about when and how aggressively to change interest rates. Taylor-type rules, in this environment, provide a disciplined approach to varying interest rates in response to economic developments that both ensures a pro-cyclical response of interest rates to demand shocks and imposes a nominal anchor in much the same way as would be the case under a monetary aggregate strategy with a stable money demand function. For this reason, I like to refer to the strategy implicit in such rules as "monetarism without money." This should not suggest that we can write a rule that is appropriate, in all circumstances, to all varieties of shocks, and to all the varieties of cyclical experience. Rules, at best, can discipline judgment rather than replace judgment. A particular problem with Taylor-type rules is that we do not know the equilibrium real federal funds rate and, whatever it might be at one point in time, it likely varies over time. There is considerable research under way at the Board in an effort to find specifications and parameters for rules which achieve an efficient balancing of inflation and output variability and provide guidance about patterns and aggressiveness of interest rate adjustments consistent with the stabilizing properties of high-performing rules. The newly redesigned model at the Board, the FRB-US model, replaces the MPS model. The MPS model, developed in the mid to late-1960s, revolutionized macroeconometric modeling and set the standard for a considerable period of time. The Board participated in the development of the MPS model and then became its home and the Board staff kept the faith alive during the lean years when such models lost respectability in academic circles, even as their usefulness and value in forecasting and practical policy analysis was growing in the "real" world. The FRB-US model retains much of the underlying structure in terms of equilibrium relationships and even more of the fundamental simulation properties of the MPS model, but significantly modernizes the estimation of the model and the treatment of expectations. The vision in the new work is to separate macro-dynamics into adjustment cost and expectations formation components, with adjustment costs imposing a degree of inertia and expectations introducing a forward-looking element into the dynamics. The net result is a structure that integrates rational expectations into a sticky-price model. In this respect, the new model follows closely the approach pioneered by John Taylor. Finally, the estimation technique makes use of co-integration and an error-correction framework. Financial and exchange rate relationships are based on arbitrage equations, with no adjustment costs but with explicitly forward-looking expectations. The specification of nonfinancial equations, in contrast, incorporates both adjustment costs and rational expectations. Rational expectations are implemented in two alternative ways. First, expectations can be specified as "model-consistent" expectations; that is, the expectations about future inflation can be set to equal future inflation (perfect foresight) through iterative solutions of the model. Model-consistent expectations may, but need not, assume that the private sector has complete knowledge of the policy rule being followed by the Federal Reserve. In the second approach, expectations are also viewed as being model-consistent, but in this case the model relevant to expectations is not precisely the same as the FRB-US model. Instead, expectations are formed based on a simpler VAR model of the economy. The VAR model always includes three variables -- the output gap, a short-term interest rate, and inflation. When expectations of additional sector-specific variables are required, the system is expanded to include the additional variable. A unique aspect of the VAR expectations is that these equations also incorporate explicit forward-looking information through an error-correction specification. For example, the VAR equations include a term for the gap between actual inflation and the public's "long-run" expectations of inflation, based on survey measures of long-run inflation expectations which, in turn, might be viewed as based on a combination of the public's perception of the Federal Reserve's reaction function, including its tolerance of inflation over the long run. The equations also include the gap between actual short-term interest rates and the public's long-run expectations of short-term rates, gleaned from the yield curve. The model retains the neo-Classical synthesis vision of the MPS model -short-run output dynamics based on sticky prices and long-run Classical properties associated with price flexibility -- and therefore produces multiplier results, both in the short and longer runs, that are very similar to those produced by the MPS model. The result is that the model produces, for the most part, what may be the best of two worlds - a modern form and traditional results! But the better articulated role of expectations in the new model also allows a richer analysis of the response to those policy actions which might have immediate impacts on inflation and/or interest rate expectations. The model has several advantages. The first is it may be more credible to a wider audience because of its modernization in terms of cointegration and error-learning specification on the one hand and explicit use of rational expectations on the other hand. Second, the model is much more flexible in terms of research potential. It allows one to study in particular how the response to monetary or fiscal policies depends on features of the expectation formation process. Third, the model forces the user to make assumptions explicitly about expectations formation that otherwise could be avoided or hidden. Let me give two examples of policy options that can be analyzed more effectively in the new model. First, consider a deficit reduction package that is credible and promises to lower interest rates in the future. In models like MPS and WUMM, the mechanical fiscal policy simulation would ignore any "bond market effect" associated with changed expectations about future short-term rates. One could, of course, add-factor downward the long-term bond rate in the term structure equation to impose a bond market effect, but the structure of the model neither immediately points you in this direction nor provides any guidance about how to intervene. In FRB-US, in contrast, one cannot avoid making an explicit assumption about the credibility of such a policy (through assumptions about future short-term interest rates in the VAR expectations or in the context of model-consistent expectations) and the assumption made about credibility will importantly affect the short-run dynamics though not the long-run effects of the policy. Second, consider the transitional costs of reducing inflation. The transitional effects on output depend importantly on the assumptions made about the credibility of the inflation commitment. Note, however, that there are significant transitional output costs of disinflation even under full credibility and the model-consistent specification of rational expectations, arising from the sticky price implication of the adjustment cost specification. For my part, I prefer the FRB-US simulations based on limited rather than perfect credibility, because I do not believe that credibility effects significantly diminish the transition costs of lowering inflation. But I also value having a disciplined approach to showing how the costs of disinflation would vary with the differing degrees of credibility. Brayton, F., A. Levin, R. Tryon, and J. Williams. "The Evolution of Macro Models at the Federal Reserve Board." mimeo. Board of Governors of the Federal Reserve System, November 1996. Brayton, F. and P. Tinsley. "A Guide to FRB/US: A Macroeconometric Model of the United States." FEDS 96-42, 1996. Reifschneider, D., D. Stockton, and D. Wilcox. "Econometric Models and the Monetary Policy Process." mimeo. Board of Governors of the Federal Reserve System, November 1996. Taylor, J. "Discretion versus Policy Rules in Practice." Carnegie Rochester Conference Series on Public Policy, vol. 39, 1993. |
1997-01-14T00:00:00 | "Mr. Greenspan addresses some key roles of a central bank (Central Bank Articles and Speeches, 14 J(...TRUNCATED) | "Remarks by the Chairman of the Board of Governors of the US Federal Reserve System, Mr. Alan Greens(...TRUNCATED) | "Remarks by the\nMr. Greenspan addresses some key roles of a central bank\nChairman of the Board of (...TRUNCATED) | "\n\n---[PAGE_BREAK]---\n\nMr. Greenspan addresses some key roles of a central bank Remarks by the C(...TRUNCATED) | Alan Greenspan | United States | https://www.bis.org/review/r970116.pdf | "Mr. Greenspan addresses some key roles of a central bank Remarks by the Chairman of the Board of Go(...TRUNCATED) |
1997-01-16T00:00:00 | "Mr. Meyer reviews the economic outlook and challenges for monetary policy in the United States (Ce(...TRUNCATED) | "Remarks by Mr. Laurence H. Meyer, a member of the Board of Governors of the US Federal Reserve Syst(...TRUNCATED) | "Mr. Meyer reviews the economic outlook and challenges for monetary policy\nin the United States Rem(...TRUNCATED) | "\n\n---[PAGE_BREAK]---\n\n# Mr. Meyer reviews the economic outlook and challenges for monetary poli(...TRUNCATED) | Laurence H Meyer | United States | https://www.bis.org/review/r970121a.pdf | "in the United States Remarks by Mr. Laurence H. Meyer, a member of the Board of Governors of the US(...TRUNCATED) |
1997-01-21T00:00:00 | "Mr. Greenspan gives some personal perspectives on the current economic situation in the United Stat(...TRUNCATED) | "Testimony of the Chairman of the Board of Governors of the US Federal Reserve System, Mr. Alan Gree(...TRUNCATED) | "Mr. Greenspan gives some personal perspectives on the current economic\nsituation in the United Sta(...TRUNCATED) | "\n\n---[PAGE_BREAK]---\n\n# Mr. Greenspan gives some personal perspectives on the current economic (...TRUNCATED) | Alan Greenspan | United States | https://www.bis.org/review/r970122a.pdf | "situation in the United States Testimony of the Chairman of the Board of Governors of the US Federa(...TRUNCATED) |
1997-01-24T00:00:00 | "Mr. Meyer looks at the need to rationalize the structure of the financial services industry in the (...TRUNCATED) | "Remarks by Mr. Laurence H. Meyer, a member of the Board of Governors of the US Federal Reserve Syst(...TRUNCATED) | "Mr. Meyer looks at the need to rationalize the structure of the financial\nservices industry in the(...TRUNCATED) | "\n\n---[PAGE_BREAK]---\n\n# Mr. Meyer looks at the need to rationalize the structure of the financi(...TRUNCATED) | Laurence H Meyer | United States | https://www.bis.org/review/r970129.pdf | "services industry in the United States Remarks by Mr. Laurence H. Meyer, a member of the Board of G(...TRUNCATED) |
1997-01-24T00:00:00 | Mr. Patrikis discusses the monetary policy and regulatory implications of banking on the Internet | "Address by the Vice-President of the Federal Reserve Bank of New York, Mr. Ernest T. Patrikis, at t(...TRUNCATED) | "Mr. Patrikis discusses the monetary policy and regulatory implications of\nbanking on the Internet (...TRUNCATED) | "\n\n---[PAGE_BREAK]---\n\n# Mr. Patrikis discusses the monetary policy and regulatory implications (...TRUNCATED) | Ernest T Patrikis | United States | https://www.bis.org/review/r970227e.pdf | "I want to share with you today some observations and speculations gathered this and last year about(...TRUNCATED) |
1997-01-28T00:00:00 | "Ms. Phillips examines whether national financial market regulatory systems should be harmonised in (...TRUNCATED) | "Remarks by Ms. Susan M. Phillips, a member of the Board of Governors of the US Federal Reserve Syst(...TRUNCATED) | "Ms. Phillips examines whether national financial market regulatory systems\nshould be harmonised in(...TRUNCATED) | "\n\n---[PAGE_BREAK]---\n\n# Ms. Phillips examines whether national financial market regulatory syst(...TRUNCATED) | Susan M Phillips | United States | https://www.bis.org/review/r970207b.pdf | "should be harmonised in the light of international competition Remarks by Ms. Susan M. Phillips, a (...TRUNCATED) |
1997-01-29T00:00:00 | "Mr. Kelley looks at the extent to which banks are still special (Central Bank Articles and Speeche(...TRUNCATED) | "Remarks by Mr. Edward W. Kelley, Jr., a member of the Board of Governors of the US Federal Reserve (...TRUNCATED) | "Mr. Kelley looks at the extent to which banks are still special Remarks by\nMr. Edward W. Kelley, J(...TRUNCATED) | "\n\n---[PAGE_BREAK]---\n\nMr. Kelley looks at the extent to which banks are still special Remarks b(...TRUNCATED) | Edward W Kelley, Jr | United States | https://www.bis.org/review/r970214d.pdf | "Mr. Kelley looks at the extent to which banks are still special Remarks by Mr. Edward W. Kelley, Jr(...TRUNCATED) |
1997-01-30T00:00:00 | "Mr. Greenspan dicusses the accuracy of the US consumer price index (Central Bank Articles and Spee(...TRUNCATED) | "Testimony of the Chairman of the Board of the US Federal Reserve System, Mr. Alan Greenspan, before(...TRUNCATED) | "Mr. Greenspan dicusses the accuracy of the US consumer price index\nTestimony of the Chairman of th(...TRUNCATED) | "\n\n---[PAGE_BREAK]---\n\n# Mr. Greenspan discusses the accuracy of the US consumer price index \n\(...TRUNCATED) | Alan Greenspan | United States | https://www.bis.org/review/r970205a.pdf | "Testimony of the Chairman of the Board of the US Federal Reserve System, Mr. Alan Greenspan, before(...TRUNCATED) |
This data contains speeches from European Central Bank (ECB) and Federal Reserve (FED) executives, from 1996 to 2025.
In addition to the text provided by the Bank of International Settlements (BIS), we also added a new textual column derived extracting information from the source PDF files using Mistral's OCR API. Page breaks are identified with the \n\n---[PAGE_BREAK]---\n\n
string.