R: [datascience] Empirical Macroeconomics and DSGE Modeling in Statistical Perspective


Chronological conversation thread
  • From: "Gianluca Cubadda" < >
  • To: "'Franco Peracchi'" < >, "'Alessandro Casini'" < >
  • Cc: < >, "'Pasquale Scaramozzino'" < >
  • Subject: R: [datascience] Empirical Macroeconomics and DSGE Modeling in Statistical Perspective
  • Date: Fri, 6 Jan 2023 14:51:24 +0100

I agree that the reduced form model does not generally encompass the data generating process, but I believe that this does not imply that a structural model is necessarily a good approximation of reality.

Statistical theory just says that under misspecification the MLE is the best approximation according to the KL metric, not that it is a good one!
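To make this point concrete, here is a minimal Python sketch (mine, not from the thread; all numbers are illustrative): fitting a Gaussian by (quasi-)ML to skewed data recovers the KL-best Gaussian, which can still be a poor description of the DGP.

```python
# Quasi-MLE under misspecification: fit a Gaussian to data from an
# exponential DGP. The MLE converges to the KL-minimizing ("pseudo-true")
# parameters -- the DGP's mean and variance -- yet the fit stays poor.
import math

import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=100_000)  # true DGP: exponential with mean 2

# Gaussian MLE = sample mean and sample variance.
mu_hat, sigma2_hat = x.mean(), x.var()
print(mu_hat, sigma2_hat)  # near the pseudo-true values (2, 4)

# The KL-best Gaussian still puts roughly 16% of its mass below zero, a
# region the true DGP never visits: "best approximation" need not mean "good".
z = -mu_hat / math.sqrt(sigma2_hat)
print(0.5 * (1.0 + math.erf(z / math.sqrt(2.0))))
```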

In the specific case of DSGEs, first, they are derived as a low-order approximation of the underlying theoretical model around the steady state. So, by construction, they only approximate the dynamics of the theoretical system.

Second, and even more importantly, DSGEs do not consider many relevant features of economic and financial time series. A very partial list would include phenomena such as: large disequilibrium errors (e.g. those connected to events such as pandemics, financial crises, etc.), issues connected to aggregation (over agents and over time), errors in variables (macroeconomic data are typically estimated, not simply observed), various forms of instability over time (e.g. time-varying volatilities and structural breaks), pretreatment of the data (e.g. seasonal adjustment), and discrepancies between the theoretical notion of variables and their definitions in national accounts (e.g. consumption and investment).

Thus, even granting the (likely heroic) assumption that the underlying theoretical model is a reasonable explanation of the actual macroeconomic dynamics around the steady state, there are still many reasons why an estimated DSGE may be a rather poor empirical model.

Matters are even more complicated by the fact that DSGEs are typically stochastically singular models (more variables than errors), hence the classical diagnostics that are used in reduced form models cannot be applied.
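A toy numerical illustration of stochastic singularity (my sketch, not from the thread): with three observables driven by only two shocks, the model-implied covariance matrix is rank-deficient, so the Gaussian likelihood, and with it the usual residual diagnostics, is not even defined.

```python
# Stochastic singularity: 3 observables, 2 structural shocks.
import numpy as np

rng = np.random.default_rng(5)
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, -1.0]])         # loadings: third observable = first - second
eps = rng.normal(size=(2, 10_000))  # two shocks
y = B @ eps                         # three "observables"

S = np.cov(y)                       # 3x3 covariance matrix, but rank 2
print(np.linalg.matrix_rank(S))     # 2: one exact linear dependence
# The Gaussian log-likelihood needs S^{-1} and log det(S); with det(S) = 0
# neither exists, so classical residual-based diagnostics cannot be applied.
print(np.linalg.det(S))             # numerically zero
```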

For instance, I would never buy an econometric model estimated with time series data whose residuals display significant linear dependence, whereas the component of the data that DSGEs cannot explain typically displays large autocorrelations (regarding the seminal paper by Ireland, it’s well known that a command in the attached Gauss routine forced some first-order autoregressive coefficients to equal 0.99 when their ML estimates were larger than 1…)
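The kind of residual diagnostic meant here can be sketched as a Ljung-Box test for leftover linear dependence; the `ljung_box` function below is a hand-rolled illustration, not a library routine, and the AR(1) series stands in for the autocorrelated component a DSGE leaves unexplained.

```python
# Ljung-Box Q statistic: tests the first n_lags residual autocorrelations.
import numpy as np

def ljung_box(resid, n_lags=10):
    """Q = n(n+2) * sum_k acf_k^2/(n-k); ~ chi^2(n_lags) under no autocorrelation."""
    n = len(resid)
    r = resid - resid.mean()
    denom = np.sum(r ** 2)
    acf = np.array([np.sum(r[k:] * r[:-k]) / denom for k in range(1, n_lags + 1)])
    return n * (n + 2) * np.sum(acf ** 2 / (n - np.arange(1, n_lags + 1)))

rng = np.random.default_rng(1)
white = rng.normal(size=500)   # residuals of a well-specified model
ar1 = np.empty(500)            # residuals with AR(1) dependence left in
ar1[0] = rng.normal()
for t in range(1, 500):
    ar1[t] = 0.8 * ar1[t - 1] + rng.normal()

# The 95% critical value of chi^2(10) is about 18.3.
print(ljung_box(white))  # small: looks like noise, the model passes
print(ljung_box(ar1))    # very large: significant linear dependence, the model fails
```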

All the above to say that I believe that it’s never a good idea to estimate DSGEs with ML.

Instead, I have no objections to Bayesian approaches that aim to shrink a reduced form model towards a DSGE. That’s fine with me because the DSGE then simply represents a prior in a Bayesian setting, not the “true” model as in the ML framework.

Ciao,

Gianluca

  

 

From: Franco Peracchi < >
Sent: Thursday, January 5, 2023 8:01 PM
To: Alessandro Casini < >
Cc: ; Gianluca Cubadda < >; Pasquale Scaramozzino < >
Subject: Re: [datascience] Empirical Macroeconomics and DSGE Modeling in Statistical Perspective

 

I agree that weak identification takes many forms but it essentially boils down to some asymptotic objective function (the average likelihood or minus the IV/GMM criterion) being “too flat” near its maximum over the relevant parameter space.
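As a concrete sketch of "too flat" (my illustration, not from the thread; all numbers are made up): the just-identified IV criterion Q(b) = [n⁻¹ Σ zᵢ(yᵢ − b·xᵢ)]² is sharply curved in b when the first stage is strong, and almost flat when it is weak.

```python
# How flat is the IV criterion Q(b) over a grid of candidate betas?
import numpy as np

rng = np.random.default_rng(2)
n, beta = 5_000, 1.0

def criterion_range(pi):
    """Simulate one sample with first-stage coefficient pi and return
    max Q - min Q over a grid of b (a crude measure of curvature)."""
    z = rng.normal(size=n)
    u = rng.normal(size=n)
    x = pi * z + rng.normal(size=n) + 0.5 * u  # endogenous regressor
    y = beta * x + u
    grid = np.linspace(0.0, 2.0, 81)
    q = np.array([np.mean(z * (y - b * x)) ** 2 for b in grid])
    return q.max() - q.min()

print(criterion_range(pi=1.0))   # strong IV: Q moves a lot, sharp maximum
print(criterion_range(pi=0.02))  # weak IV: Q nearly flat, many b fit equally well
```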

 

I personally think that the set of models we are willing to consider (the "model space”) does not contain the true DGP, except in some very special and very rare cases, so we need to understand better what we can learn about what we care about from a given misspecified model or, perhaps more commonly, from a set of alternative misspecified models. We also need to avoid the standard practice of selecting one model in the model space based on the data, and then using the selected model as if we had known it a priori, ignoring the data-based model selection step. This typically leads to wrong inferences.

 

Happy Befana/Epiphany to everyone!

 

Franco

 

 



On Jan 5, 2023, at 19:20, Alessandro Casini < > wrote:

 

Ok, I see your point. Consider the case of incorrect specification that you mentioned, i.e., correlation between the instrument and the errors in the structural equation. Suppose this correlation is a finite number, say c. I can define a form of weak IV by specifying that correlation as a sequence starting from zero and converging to c as the sample size increases.
In that case, the incorrect specification would be the limiting case of my weak-IV condition.
What I am trying to say is that a given type of incorrect specification can be recast as the limiting case of some appropriately chosen weak-IV condition. It depends on how one defines weak identification.
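That construction can be sketched numerically (my illustration; names and numbers are hypothetical): let the instrument-error correlation ρ move from 0 toward c and watch the just-identified IV estimator's bias grow toward its invalid-instrument limit.

```python
# IV with an instrument-error correlation rho drifting from 0 toward c.
# The plim of the IV estimator is beta + cov(z, u)/cov(z, x), so the bias
# grows continuously with rho: instrument invalidity as the limit of the sequence.
import numpy as np

rng = np.random.default_rng(3)
beta, pi, c, n = 1.0, 1.0, 0.5, 200_000

def iv_estimate(rho):
    """Just-identified IV estimator with corr(z, u) = rho (valid IV: rho = 0)."""
    z = rng.normal(size=n)
    u = rho * z + np.sqrt(1.0 - rho ** 2) * rng.normal(size=n)
    x = pi * z + 0.5 * u + rng.normal(size=n)  # endogenous regressor
    y = beta * x + u
    return np.sum(z * y) / np.sum(z * x)

for rho in (0.0, 0.25, c):
    print(rho, iv_estimate(rho))  # bias rises from 0 toward c / (pi + 0.5 * c)
```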

Regarding White's (1982) result, the discussion above was based on Johansen's paper, where he considered a statistical model that contains the economic model as a submodel, but not the data generating process. This corresponds to massive misspecification. I do not expect ML to work well in this case. It may yield nonsensical results.
White (1982) does not say that ML will return economically meaningful estimates if the model is misspecified. It just says, as you precisely explained, that ML minimizes the KL distance from the true DGP. However, it is possible that this distance is too large for the estimates to be economically meaningful.
I think this is what Johansen was trying to say.
Of course, as you say, under mild misspecification ML will return useful estimates.
It depends on how far one is from the true model (or on how misspecified the model is).

Alessandro

On 1/5/2023 6:41 PM, Franco Peracchi wrote:

Lack of correlation between the instruments and the endogenous regressors is just one of the many forms in which an IV model can be misspecified. Another form of misspecification is correlation between the instruments and the error terms in the "structural equation” of interest, even when the instruments are “strong" (in the sense that they strongly predict the endogenous regressor in the "first stage” regression). Yet another form is incorrect specification of the “structural equation” itself. This is why I don’t agree that incorrect specification is an extreme form of weak identification. 

 

I also don't agree about what we should conclude from White (1982). In some cases the assumed model may still do fine (i.e., it’s a good approximation given our purposes), but it may also not do fine (i.e., it’s not a good approximation given our purposes). It all depends on the model you assume and on your goals. This is why I said that the assumed model `better be “not too simplistic" given the problem at hand’. 

 

But I agree with you that the results by McDonald and Shalizi have little to do with DSGE in general and a lot to do instead with the use of the “canonical model” from Smets and Wouters (2007).

 

I hope this helps.

 

Franco

 



 

Franco, think about IV. Weak IV means the first-stage coefficient approaches zero. Incorrect specification means that there is no first stage at all (the instrument is not an instrument, i.e., the first-stage coefficient is zero). So you can see an invalid instrument as an extreme case of weak IV.

Yes, your point on Kullback-Leibler (KL) follows White (1982). That paper told economists that even if we misspecify the model we may still do fine. In theory. Unfortunately, if you misspecify a highly nonlinear, complex model like a DSGE, then you get weird results with ML. So ML is theoretically useful even when models are misspecified, but not so practically useful in certain circumstances. The types of weird results in McDonald and Shalizi are just due to losing identification. Things can go very wrong no matter what White's (1982) results say.

Alessandro

On 1/5/2023 3:38 PM, Franco Peracchi wrote:

I don’t agree that incorrect specification is an extreme form of weak identification, or that ML is useless when the assumed model (a family of probability distributions indexed by a parameter that may well be infinite-dimensional) is incorrectly specified, which is the typical case. If the assumed model is incorrectly specified, ML estimates the best approximation, in the Kullback-Leibler (KL) sense, to the DGP. In my view, this is the best we can do given a model. Of course, the assumed model better be “not too simplistic" given the problem at hand. Further, it better be identifiable, that is, there must be a unique element in the assumed parametric family that is at the shortest KL “distance” from the DGP.

 

Buon anno!

Franco

 



 

Thank you Gianluca. Yes, Johansen's point is formally very clear. If one chooses a statistical model that contains the economic model as a submodel, but not the DGP, then yes ML is useless. That would mean no identification, which is an extreme case of weak identification (I am a nice guy, I did not want to be too negative😁). 

I think a problem with identification can give rise to the results in McDonald and Shalizi. However, identification is a big deal in Macroeconomics, or in Economics more generally. DSGEs may suffer more from it, but other approaches have to deal with identification too.

I have been thinking often about what the sources of weak identification/no identification could be. My conclusion about identification problems in Macroeconomics is that no model can achieve (strong) identification when fitted to 20+ years of data. The parameters of the DGP have changed many times, while economic theory yields time-invariant restrictions between economic variables (i.e., parameters are implied to be constant). So no model can fit these data well. At best, we can only hope to get strong identification in sub-samples.
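A tiny simulation of the sub-sample point (my illustration; numbers are made up): with a mid-sample break, the full-sample estimate blends two regimes and fits neither, while sub-sample estimates recover each regime's parameter.

```python
# Parameter instability: OLS slope with a structural break at mid-sample.
import numpy as np

rng = np.random.default_rng(4)
n = 2_000
x = rng.normal(size=n)
beta = np.where(np.arange(n) < n // 2, 0.2, 1.8)  # slope breaks halfway
y = beta * x + rng.normal(size=n)

def ols_slope(xx, yy):
    return np.sum(xx * yy) / np.sum(xx * xx)

print(ols_slope(x, y))                    # full sample: near 1.0, fits neither regime
print(ols_slope(x[:n // 2], y[:n // 2]))  # first regime: near 0.2
print(ols_slope(x[n // 2:], y[n // 2:]))  # second regime: near 1.8
```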

Alessandro

 

On 1/5/2023 2:34 PM, Gianluca Cubadda wrote:

Hi Alessandro,

 

please have a look at the paper by Soren Johansen that I mentioned in my reply to Pasquale.

Shortly, Soren argues that it is fundamentally wrong to use ML methods when the statistical model is not a credible (and testable) representation of the (statistical) data generating process. If one takes this point, then the problem with DSGEs is even more serious than weak identification…

 

Ciao,

 

Gianluca

 

 

From: Alessandro Casini < > 
Sent: Thursday, January 5, 2023 2:11 PM
To: < >; Pasquale Scaramozzino < >; Gianluca Cubadda < >
Subject: Re: [datascience] Empirical Macroeconomics and DSGE Modeling in Statistical Perspective

 

Hi, I only had a glance at the Introduction. I think their results might be credible. But isn't this known in Econometrics as weak identification? Weak identification is a long-studied problem in Macroeconometrics and also in DSGE modeling. It might be that their results are just consequences of weak identification.

The paper does not mention weak identification. I was expecting a discussion of this.

For example, Fernández-Villaverde (2010) in his survey of DSGE estimation writes: "likelihoods of DSGE models are full of local maxima and minima and of nearly flat surfaces... the standard errors of the estimates are notoriously difficult to compute and their asymptotic distribution a poor approximation to the small sample one." 

So it is possible to get weird results if one does not have strong identification.

Best,

Alessandro

 

On 1/5/2023 1:14 PM, Pasquale Scaramozzino wrote:

Dear Gianluca, 

Happy New Year to you too! Thanks for pointing out the paper: it looks very interesting and I will read it carefully. From a first look, I have the impression that it confirms what I already thought about these models. 

Best regards, 

Pasquale 



Quoting Gianluca Cubadda < >: 



First of all, happy new year to you all! 

Here is another paper that is raising a lot of discussion: 

https://arxiv.org/abs/2210.16224 

I don't know if those results have already been formally checked. 

Ciao, 

Gianluca 




Pasquale Scaramozzino 

Professore di Economia Politica 
Dipartimento di Economia e Finanza 
Facoltà di Economia 
Università di Roma "Tor Vergata" 
via Columbia, 2 
00133 Roma 
Italy 

pho. 0039 06 7259 5727 
fax  0039 06 2020 500 

http://economia.uniroma2.it/def/faculty/229/scaramozzino-pasquale 
http://ideas.repec.org/e/psc186.html

-- 
Alessandro Casini < >
Department of Economics and Finance 
University of Rome Tor Vergata
Via Columbia 2
00133 Rome, Italy
http://alessandro-casini.com

 


 


 



