
1.      December 16th 2015: the Fed awakens

“Given the economic outlook, and recognizing the time it takes for policy actions to affect future economic conditions, the committee decided to raise the target range for the federal funds rate to ¼ to ½ percent”

On December 16th, Chair Janet Yellen announced that the Federal Open Market Committee (FOMC) had unanimously approved a quarter-point increase in the target range for the federal funds rate, to 0.25 – 0.50%. This marked the first hike since June 29th 2006 and the first move away from the Zero Lower Bound (ZLB), reached on December 16th 2008 and never left since. Despite the importance of this first step, monetary policy as a whole remains accommodative (i.e. “the funds rate is likely to remain, for some time, below levels that are expected to prevail in the longer run”), as FOMC officials specified that the pace of rate hikes will be slow, gradual and dependent on the quality of economic data. Additionally, the Fed is not planning to start unwinding its gigantic $4.5tn balance sheet any time soon. Finally, as of December 16th, the median of FOMC members’ expectations called for around four hikes during 2016 (1.25 – 1.50% projected at the end of the year) and four more during 2017 (2.25 – 2.50% projected).

In what follows, after a brief introduction to the main concepts behind central banking (2), we are going to set up a Taylor-rule-based model (3-5), estimate it (6), check whether the ZLB was a binding constraint from 2009 onwards (7) and simulate our expectations for the Fed’s hiking path (8). Finally, we are going to cross-check our results against the market-implied expectations (9).

We propose this topic because it is the first time in our careers that we face a rising-rates environment. Additionally, forecasting monetary policy is today necessary to understand any financial market: central banks have grown stronger in their influence on asset prices, as they have been politically allowed to take a very powerful and active role in the economy (see, for example, the role of the ECB in the EU as opposed to national governments). Finally, we are going to show basic econometrics at work in a simple way.

 2.      Introduction: foundations of monetary policy

Central banks (i.e. the institutions that print and govern money in an economy) are nowadays usually set up as independent bodies, meaning that they make their decisions based on their own judgement and should ignore any political pressure. However, they operate under a mandate that almost everywhere in the world prescribes positive, low and stable inflation. Other goals can also be included in the mandate, but they are usually subordinated to inflation targeting.

In order to get a solid understanding of this premise, it is useful to analyse it in a little more detail.

(i) Why independent?
Independence is generally considered an optimal feature (and a solution to the agency problems arising when a government establishes a central bank), as shown by many theoretical models (see the Barro-Gordon time-inconsistency model). Empirical observation supports this choice: on average, the more independent the central bank, the lower and more stable the inflation rate.

(ii) Why inflation targeting?
Inflation targeting gives the currency the central bank prints a nominal anchor, exactly as the gold standard or fixed exchange rates did in the past. In this way, central banks fulfil their key task of safeguarding the value of the home currency.

(iii) Why low and stable inflation?
Since monetary policy has no real effect in the long run, the best contribution to growth and welfare a central bank can provide is positive, low and stable inflation. A 2% target is usually chosen as an optimal equilibrium between too-low and too-high inflation. In fact, high and unstable inflation is costly because: (a) it generates uncertainty about the future price level (and agents are risk-averse); (b) it creates relative price distortions (and thus inefficiencies); (c) tax brackets are usually defined in nominal terms, so inflation exacerbates distortions in the labour market; (d) it generates “shoe-leather” costs, as it forces individuals to put time and effort into managing their own cash balances. Too-low inflation is also costly because: (a) nominal wages are rigid downwards (employers are not able to reduce wages efficiently); (b) nominal interest rates have a lower bound, while the real interest rate may need to turn negative during bad times (i.e. given that i = r + E(π), if inflation is too low and the required real rate falls, the nominal rate is pushed against its lower bound); (c) debt contracts are usually written in nominal terms, so deflation raises the real burden of debt; (d) deflation may lead to recessions (as a consequence of the inefficiencies caused by the previous points).

In the Fed’s case, the mandate is laid down as operating to reach “the goals of maximum employment, stable prices, and moderate long-term interest rates”. It looks like a triple mandate but it is actually dual, as stable prices and moderate long-term interest rates can be treated as a single goal (i.e. low and stable inflation implies moderate long-term rates, given once again that i = r + E(π)). On one hand, the Fed should pursue a 2% inflation target; on the other hand, it should act in a way that pushes the unemployment rate close to its natural rate (i.e. the lowest rate the economy can sustain over the long run). The second part of the mandate can be re-written – using Okun’s Law – as delivering output close to its potential level (i.e. what an economy can produce when all its resources, such as workforce, equipment and technology, are fully utilized).

As we will see shortly, the primary tool used by the Fed in order to pursue its mandate is the federal funds rate. Assuming the FOMC strictly follows its mandate, the interest rate decision should then be well explained by inflation and output dynamics.

In principle, the Fed should not be concerned about other things happening in the markets. What we see in practice, though, is that financial instability also receives a great deal of attention. However, we can easily reconcile financial instability with the mandate by viewing a potential crisis as a cause of large and sustained deviations of both output/unemployment (which fall well below their potential/natural levels) and inflation (prices often fall during a recession) from their respective targets. As such, the probability of a crisis (triggered, for example, by a bubble bursting) would enter the central bank’s loss function, perfectly in line with the mandate.

As a consequence of the above, we expect the following empirical estimation of the Fed’s interest rate decision rule (the Taylor rule) to result in a very good fit (i.e. a high R-squared).


This would not be a surprise, as the Fed is positively interested in being predictable. Indeed, being predictable allows the central bank’s decisions to affect the economy not only in the current period, but also down the road. This means that the Fed is able to manage agents’ expectations over many future periods and, assuming that agents are forward-looking (as they are in any multi-period setting), to greatly increase the effect of monetary policy today.

3.      Setting up the model: the data series

We used quarterly data, retrieved from the Federal Reserve Bank of St. Louis research database (FRED). For the 2016 and 2017 forecasts of the explanatory variables, we used OECD estimates.
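For readers who want to replicate the data construction, a minimal sketch of the retrieval step is shown below. It assumes the standard FRED mnemonics (DFF for the daily effective federal funds rate, GDP and GDPC1 for nominal and real GDP, GDPPOT for the CBO potential output estimate) and the pandas_datareader library; it is an illustration rather than the exact pipeline we used.

```python
# Minimal sketch: pull the raw series from FRED and aggregate to quarterly frequency.
# The series codes below are the standard FRED mnemonics (to be double-checked).
import pandas as pd
from pandas_datareader import data as web

start, end = "1959-01-01", "2015-12-31"

# Daily effective federal funds rate, averaged within each quarter (our R series)
dff = web.DataReader("DFF", "fred", start, end)
r = dff["DFF"].resample("QS").mean()

# Quarterly national accounts: nominal GDP, real GDP and CBO real potential GDP
gdp = web.DataReader(["GDP", "GDPC1", "GDPPOT"], "fred", start, end)

# Align everything on quarter-start dates
data = pd.concat([r.rename("R"), gdp], axis=1).dropna()
print(data.tail())
```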

Dependent Variable – R – effective federal funds rate

It is the interest rate at which depository institutions (i.e. banks) trade federal funds (i.e. balances held at Federal Reserve banks) with each other overnight. In practice, it is the rate that banks with excess cash charge other banks that need to raise liquidity quickly.

It is the central interest rate in the US financial market and it influences all other rates. It is therefore the primary monetary policy transmission tool for the Fed. It is essentially market-determined, but the Fed influences it through open market operations aimed at reaching the target rate decided at the previous FOMC meeting. In an open market operation, the Fed buys (sells) bonds from (to) banks in exchange for cash, causing the banks to have more (less) cash on average on their balances, increasing (reducing) the net supply of federal funds and leading to a decrease (increase) in its price (i.e. R).

Our data were measured as the quarterly average of daily figures.

Explanatory Variable – x – the output gap

It represents the log-deviation of the real output measurement from the CBO potential level of output.
It is calculated in percentage terms as:

x_t = 100 \times \left( \ln Y_t - \ln Y_t^{pot} \right)

where Y_t is real GDP and Y_t^{pot} is the CBO estimate of potential output.

Explanatory Variable – π – the yearly domestic inflation rate

It represents the log-change of the price deflator over 4 quarters in percentage terms.

\pi_t = 100 \times \left( \ln P_t - \ln P_{t-4} \right)

The price deflator (P) is calculated in the usual way as the ratio between nominal and real GDP.

P_t = \frac{\text{Nominal GDP}_t}{\text{Real GDP}_t}
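The three definitions above translate directly into code. The sketch below continues from the hypothetical quarterly DataFrame `data` of the previous sketch; the column names are our own choice.

```python
import numpy as np

# Price deflator: ratio of nominal to real GDP
data["P"] = data["GDP"] / data["GDPC1"]

# Output gap: percentage log-deviation of real GDP from CBO potential output
data["x"] = 100 * (np.log(data["GDPC1"]) - np.log(data["GDPPOT"]))

# Inflation: percentage log-change of the deflator over 4 quarters (year-on-year)
data["pi"] = 100 * (np.log(data["P"]) - np.log(data["P"].shift(4)))
```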

The plotted data series look as follows:

In blue the fed funds rate, in red the output gap and in green the inflation rate over the period 1960Q1 – 2017Q4.

As for the OECD forecasts, they show a steadily increasing output gap that turns positive in 2017, while inflation slowly declines, staying below the 2% target for the entire period.

4.      Setting up the model: the estimation and simulation windows

Our model estimation window (1984Q1 – 2008Q3) is highlighted in orange in the previous figure. The lower bound of this window is chosen as the historical moment in which inflation could officially be considered under control again. Indeed, during the 70s, the Fed lost control of the inflation rate in the well-known stagflation period, a situation caused by the central bank’s too-soft response to shocks (i.e. cost-push shocks in commodities, the oil crises). This behaviour changed when Paul Volcker was appointed Chairman (1979), and the policy response to high inflation became much more aggressive (at the price of a very negative output gap). By the beginning of 1984, inflation was moving in the range we are used to seeing in recent times. The upper bound is the beginning of the “liquidity trap” in the midst of the 2007-2008 financial crisis. Starting from the end of 2008, as noted before, the fed funds rate was flat at the zero lower bound. This can be considered artificial behaviour of the dependent variable at the constraint and, as such, not representative of the usual behaviour of the Fed.

Our simulation window 1 (2008Q4 – 2015Q4) is highlighted in red. In this window, we are going to see whether the zero lower bound was actually a binding constraint for the Fed.

Our simulation window 2 (2016Q1 – 2017Q4) is highlighted in blue. In this window, we are going to forecast the hiking path over the next 2 years.

 5.      Setting up the model: the equation

Starting from the simplest linear model with constant:

R_t = c + a_\pi \pi_t + a_x x_t + \varepsilon_t

We add smoothing (i.e. an AR(1) term):

R_t = c + \rho R_{t-1} + a_\pi \pi_t + a_x x_t + \varepsilon_t

And then we rewrite it in deviations from steady state (it is easy to show the correspondence between parameters of the previous equation and the next one):

R_t - \bar{R} = \rho \left( R_{t-1} - \bar{R} \right) + (1 - \rho) \left[ \beta_\pi \left( \pi_t - \bar{\pi} \right) + \beta_x \left( x_t - \bar{x} \right) \right] + \varepsilon_t
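Indeed, expanding the deviations form and matching terms with the smoothed linear model above gives (our own algebra, using the notation just introduced):

R_t = \underbrace{(1 - \rho)\left( \bar{R} - \beta_\pi \bar{\pi} - \beta_x \bar{x} \right)}_{c} + \rho R_{t-1} + \underbrace{(1 - \rho)\beta_\pi}_{a_\pi} \pi_t + \underbrace{(1 - \rho)\beta_x}_{a_x} x_t + \varepsilon_t

so that c = (1 - \rho)(\bar{R} - \beta_\pi \bar{\pi} - \beta_x \bar{x}), a_\pi = (1 - \rho)\beta_\pi and a_x = (1 - \rho)\beta_x.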

With steady state values implied by the mandate (for inflation and output) or to be estimated (for fed funds rate) of:

\bar{\pi} = 2\%, \qquad \bar{x} = 0, \qquad \bar{R} \text{ to be estimated}

This last form of the equation makes the coefficients easier to interpret.

Three comments on the model equation:

(i) Why do we assume the Fed reacts simultaneously to the explanatory variables (i.e. why do R, x and π have the same time subscript)?
Of course, the FOMC cannot observe these values precisely in real time, as they are published with a lag. However, monetary policy decisions are based on forward estimates of inflation and output, not on past values. Therefore, the true information set is the estimate of the future values of those two variables, available at time t. Such estimates are complex, blended from many different models and not always publicly available. They are also based on real-time indicators and tend to be monotonic over time. We therefore approximate them with the same-quarter realization, which we consider, on average, a good proxy for the central bank’s information set.

(ii) Why do we introduce the AR(1) term (smoothing)?
From the plot of the federal funds rate we can first of all see that the series never jumps and always moves gradually. This is a graphical indication that an AR(1) process may actually be at work. Indeed, central banks change their policy in small steps for at least three reasons: (a) the data they use to make their choices are measured with a lot of noise, so central bankers cannot be completely sure of the decision to take, and by adjusting gradually they reduce the cost of mistakes; (b) credibility is probably a central bank’s most important asset, and it becomes even more relevant during difficult times and when unconventional monetary policy is in use; the central bank would therefore avoid at any cost being forced to reverse a communicated decision, and gradual movements reduce this risk; (c) despite being delegated to technical bodies, monetary policy can still be considered a political decision in principle, and as such it follows some of the patterns of any political change; political changes are easier for voters to accept when made gradually.

(iii) Why don’t we include any feedback effect from the dependent variable to the independent ones?
Although the mere existence of monetary policy implies that a change in R affects the real economy (in the short term), it can be shown that monetary policy acts with a lag (i.e. at least two quarters pass between the change in policy and the first real effects). Furthermore, setting up a more complete model may very well result only in added complexity, without improving our understanding.

6.      Estimation of the model

R_t - 4.75 = 0.90 \left( R_{t-1} - 4.75 \right) + 0.10 \left[ 1.93 \left( \pi_t - 2 \right) + 1.50 \, x_t \right] + \hat{\varepsilon}_t

[Chart: historical vs. fitted federal funds rate over the estimation window 1984Q1 – 2008Q3]

Source: BSIC, Fed of St. Louis research database (data)

In blue the historical fed funds rate, in cyan the fitted series. The R-squared is very high (as expected) and all coefficients are statistically significant.
The AR(1) term has a coefficient of 0.9, meaning that 90% of each fitted value is simply the previous period’s value.
The steady-state fed funds rate is estimated at 4.75%, implying a long-term real rate of 2.75%, which, by no-arbitrage arguments, should equal the long-term real growth rate of the US economy.
Focusing on the innovation term (keeping in mind that everything is pre-multiplied by 1-ρ = 0.1), the Fed’s reaction (i.e. the change in the innovation term, not in R(t)) to a one-percentage-point deviation of inflation from its target is, on average, a 1.93-point movement in the same direction as the deviation. A one-percentage-point deviation of the output gap from zero results instead in an average 1.50-point reaction, still in the same direction. The higher coefficient on inflation means that an inflation deviation has historically been less tolerated than an output deviation, implying a hierarchy between the two parts of the mandate.
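As an illustration of how the estimation can be replicated, the sketch below runs OLS on the linear smoothing form over 1984Q1 – 2008Q3 and then backs out the steady-state parameters. It reuses the hypothetical `data` DataFrame and column names from the earlier sketches, and the exact figures will depend on the data vintage.

```python
import statsmodels.api as sm

# Estimation window: 1984Q1 - 2008Q3
est = data.loc["1984-01-01":"2008-07-01"].copy()
est["R_lag"] = data["R"].shift(1).loc[est.index]

# OLS on the linear form: R_t = c + rho*R_{t-1} + a_pi*pi_t + a_x*x_t + eps_t
X = sm.add_constant(est[["R_lag", "pi", "x"]])
res = sm.OLS(est["R"], X).fit()
print(res.summary())

# Back out the deviations-from-steady-state parameters
rho = res.params["R_lag"]                                    # smoothing, ~0.9 in our estimate
beta_pi = res.params["pi"] / (1 - rho)                       # inflation response, ~1.93
beta_x = res.params["x"] / (1 - rho)                         # output-gap response, ~1.50
pi_bar = 2.0                                                 # inflation target
r_bar = res.params["const"] / (1 - rho) + beta_pi * pi_bar   # steady-state rate, ~4.75%
```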

7. Simulation 1 (2008Q4 – 2015Q4): was the ZLB a binding constraint?

[Chart: actual vs. simulated (unconstrained) fed funds rate, 2008Q4 – 2015Q4]

Source: BSIC, Fed of St. Louis research database (data)

As is clearly visible from the red dashed line (the simulation), the FOMC would have liked to go well below the ZLB in 2009. Therefore, the constraint was binding.
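For completeness, here is a minimal sketch of how this unconstrained counterfactual can be generated: we simply iterate the estimated rule forward from the last pre-ZLB quarter, feeding in realized inflation and output gaps and ignoring the lower bound. Parameter and column names follow the estimation sketch above.

```python
def simulate_rule(r_start, pi_path, x_path, rho, beta_pi, beta_x, r_bar, pi_bar=2.0):
    """Iterate R_t = R_bar + rho*(R_{t-1} - R_bar)
                   + (1 - rho)*(beta_pi*(pi_t - pi_bar) + beta_x*x_t) forward."""
    r_prev, path = r_start, []
    for pi_t, x_t in zip(pi_path, x_path):
        r_t = (r_bar + rho * (r_prev - r_bar)
               + (1 - rho) * (beta_pi * (pi_t - pi_bar) + beta_x * x_t))
        path.append(r_t)
        r_prev = r_t
    return path

# Simulation window 1: start from the actual 2008Q3 rate, no lower bound imposed
sim1 = data.loc["2008-10-01":"2015-10-01"]
unconstrained = simulate_rule(data.loc["2008-07-01", "R"], sim1["pi"], sim1["x"],
                              rho, beta_pi, beta_x, r_bar)
```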

Obviously, the values reached by the simulation are purely theoretical. First of all, they do not take into account any parallel effect of the non-conventional monetary policy (in particular the concurrent expansion of the Fed’s balance sheet) deployed over that period. Most importantly, they are not realistic, as a -3% rate could not be obtained in the markets.

Indeed, since cash is a 0%-yielding asset and banks can keep cash in their vaults, there is a no-arbitrage relation between cash and reserves held at the Fed, which gives rise to a lower bound for the fed funds rate. However, the lower bound is not zero, as we must take into account the unavoidable costs and inefficiencies of storing enormous sums of cash. Additionally, commercial banks are generally not able to pass on negative rates to households, so a big discrepancy between the fed funds rate and the 0% rate offered to households would hurt banks’ profitability. There is not yet a consensus on how deep into negative territory the rate can go, but several central banks around the world have gone a few tenths of a percentage point below zero over the last few years. The Fed has never gone below zero and has started considering the option only recently (Chair Yellen even said they are not sure they have the legal power to do so).

Our theoretical simulation into negative territory would have shown positive rates for the first time in 2017Q2.

[Chart: theoretical simulation extended through 2017Q4, with the unconstrained rate turning positive in 2017Q2]

Source: BSIC, Fed of St. Louis research database (data)

Once again, we specify that these are theoretical values and ignore any unconventional monetary policy, as well as any effective lower bound.

8.      Simulation 2 (2016Q1 – 2017Q4): our expectations on the hiking path

However, starting from the previous simulation, we can rebase the 2015Q4 rate at its actual value and then proceed with the same simulation for the remaining two years. This is equivalent to assuming that the effects of non-conventional monetary policy can be summarized as a positive delta on R, constant over the next two years. It is a strong assumption, but it fits the rule-of-thumb nature of the Taylor rule.
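In code, the rebasing simply means restarting the recursion from the actual 2015Q4 rate instead of the simulated one and feeding in the OECD forecasts. A minimal sketch, reusing the hypothetical simulate_rule helper above and an assumed oecd DataFrame holding the 2016Q1 – 2017Q4 forecasts of pi and x:

```python
# Rebase at the actual 2015Q4 effective rate (not the simulated value) and roll the
# rule forward over the assumed OECD forecast DataFrame `oecd` (columns "pi" and "x")
r_2015q4 = data.loc["2015-10-01", "R"]
hiking_path = simulate_rule(r_2015q4, oecd["pi"], oecd["x"],
                            rho, beta_pi, beta_x, r_bar)
```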

[Chart: simulated hiking path, 2016Q1 – 2017Q4, rebased at the actual 2015Q4 rate]

Source: BSIC, Fed of St. Louis research database (data)

Assuming no balance sheet reduction over the period, assuming that the OECD forecasts are reliable, and disregarding any political and credibility considerations (i.e. the FOMC delivering what it has already promised), we can derive some conclusions.

Indeed, OECD forecasts, coupled with our estimate of the historical behaviour of the Fed, suggest:

(i)   The December hike was probably at least a quarter premature

(ii)  In 2016 we should not see any additional hike

(iii)  In 2017 we should see one hike per quarter, four in total

(iv)  The hiking path will be less steep than in the past

Obviously, our view depends on the assumptions specified above. For example, we are well aware that credibility and political considerations cannot truly be disregarded.

9.      What the markets expect: CME Group FedWatch

By looking at options on 30-day Fed Fund futures, it is possible to extract the probability the market prices in (i.e. implies) for a hike at each of the next FOMC meetings. In this way, we can check how our view is positioned relative to the market.
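As a rough illustration of the idea behind such tools, the sketch below backs out a hike probability from a single 30-day Fed Funds futures price, under the simplifying assumptions of one FOMC meeting in the contract month, a 25bp hike size and an effective rate that only changes on the meeting day (the actual CME methodology is more refined). The input numbers in the example are purely hypothetical.

```python
def implied_hike_probability(futures_price, r_pre, meeting_day, days_in_month,
                             hike_size=0.25):
    """Back out the market-implied probability of a single 25bp hike from a
    30-day Fed Funds futures price, assuming one FOMC meeting in the contract
    month and an effective rate that only changes on the meeting day."""
    implied_avg = 100.0 - futures_price   # futures settle at 100 minus the monthly average rate
    # Monthly average = pre-meeting rate for `meeting_day` days, post-meeting rate afterwards
    r_post = ((implied_avg - r_pre * meeting_day / days_in_month)
              * days_in_month / (days_in_month - meeting_day))
    return (r_post - r_pre) / hike_size

# Purely hypothetical inputs: contract quoted at 99.62, pre-meeting effective rate 0.36%,
# meeting on day 16 of a 31-day month
print(implied_hike_probability(99.62, 0.36, 16, 31))   # ~0.17, i.e. a ~17% implied probability
```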

As of February 14th, the market discounts a 61.4% probability of no hikes during the next 12 months, which is consistent with our analysis.

When compared with the December 16th FOMC projections, both the market and our analysis suggest a slower-than-signalled return to the steady state.

