Learning generates Long Memory

Abstract: We consider a prototypical representative-agent forward-looking model, and study the low frequency variability of the data when the agent's beliefs about the model are updated through linear learning algorithms. We find that learning in this context can generate strong persistence. The degree of persistence depends on the weights agents place on past observations when they update their beliefs, and on the magnitude of the feedback from expectations to the endogenous variable. When the learning algorithm is recursive least squares, long memory arises when the coefficient on expectations is sufficiently large. In algorithms with discounting, long memory provides a very good approximation to the low-frequency variability of the data. Hence long memory arises endogenously, due to the self-referential nature of the model, without any persistence in the exogenous shocks. This is distinctly different from the case of rational expectations, where the memory of the endogenous variable is determined exogenously. Finally, this property of learning is used to shed light on some well-known empirical puzzles.
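The mechanism described in the abstract can be sketched numerically. The following is a minimal illustration, not the authors' model: it assumes a simple self-referential process y_t = β·a_{t−1} + ε_t, where a_{t−1} is the agent's belief about the mean of y, updated with a constant (discounted) gain γ. All parameter values and function names here are illustrative.

```python
import numpy as np

def simulate(beta, gain, T=20000, seed=0):
    """Simulate y_t = beta * a_{t-1} + eps_t, where the belief a is
    updated by constant-gain learning: a_t = a_{t-1} + gain * (y_t - a_{t-1}).
    The shocks eps_t are i.i.d. standard normal (no exogenous persistence)."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(T)
    y = np.empty(T)
    a = 0.0  # agent's current belief about the mean of y
    for t in range(T):
        y[t] = beta * a + eps[t]   # expectations feed back into the outcome
        a += gain * (y[t] - a)     # discounted belief update
    return y

def acf1(x):
    """Lag-1 sample autocorrelation."""
    x = x - x.mean()
    return float(np.dot(x[1:], x[:-1]) / np.dot(x, x))
```

With strong feedback (β close to 1) the belief evolves very slowly, so y inherits pronounced low-frequency variability even though the shocks are white noise; setting β = 0 switches the feedback off and y is serially uncorrelated, consistent with the abstract's claim that persistence is endogenous.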
Document type: Other publications

Cited literature: 68 references

https://hal-essec.archives-ouvertes.fr/hal-00661012
Contributor: Michel Demoura
Submitted on: Tuesday, October 15, 2013 - 3:53:08 PM
Last modification on: Monday, February 12, 2018 - 3:36:01 PM
Long-term archiving on: Friday, April 7, 2017 - 11:16:57 AM

File

WP1113_update.pdf (publisher files allowed on an open archive)

Identifiers

  • HAL Id: hal-00661012, version 2

Citation

Guillaume Chevillon, Sophocles Mavroeidis. Learning generates Long Memory. 2013, pp.49. ⟨hal-00661012v2⟩
