# The Minimum Description Length Principle

## By Peter D. Grünwald, with a foreword by Jorma Rissanen. MIT Press, June 2007.

This book provides a comprehensive introduction and reference guide to the minimum description length (MDL) principle, a powerful method of inductive inference which holds that the best explanation, given a limited set of observed data, is the one that permits the greatest compression of the data. The central concepts of the theory are explained in detail. The book should be accessible to researchers dealing with inductive inference in diverse areas including statistics, machine learning, data mining, biology, econometrics, and experimental psychology, as well as to philosophers interested in the foundations of statistics.

The book consists of four parts. Part I provides a basic introduction to MDL and an overview of the concepts in statistics and information theory needed to understand it. Part II treats universal coding, the information-theoretic notion on which MDL is built. Part III gives a formal treatment of MDL as a theory of inductive inference based on universal coding. Part IV provides a comprehensive overview of the statistical theory of exponential families, with an emphasis on their information-theoretic properties. The book also contains some new results that have not been published elsewhere.
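The compression idea takes its simplest shape in the two-part form of MDL (treated in Chapter 5, "Crude Two-Part Code MDL"): among candidate hypotheses, pick the one minimizing L(H) + L(D|H), the bits needed to describe the hypothesis plus the bits needed to describe the data with its help. The following toy Python sketch is not from the book; it uses the standard crude approximation of (1/2) log2 n bits per free parameter for L(H), and compares a Bernoulli model with a first-order Markov model on a binary sequence:

```python
import math
from collections import Counter

def multinomial_nll(counts):
    """Negative log2-likelihood (in bits) of counts under their own MLE."""
    n = sum(counts)
    return -sum(c * math.log2(c / n) for c in counts if c > 0)

def two_part_codelength(x, order):
    """Crude two-part codelength (bits) for a binary sequence x under a
    Markov model of the given order (order 0 = Bernoulli).

    L(H): (1/2) log2 n bits per free parameter (one per context) -- a
    standard crude approximation, not the book's exact discretization.
    L(D|H): negative log2-likelihood of the data at the maximum-likelihood
    parameters.
    """
    n = len(x)
    counts = Counter()
    for i in range(order, n):
        context = tuple(x[i - order:i])
        counts[(context, x[i])] += 1
    data_bits = sum(
        multinomial_nll([counts[(ctx, 0)], counts[(ctx, 1)]])
        for ctx in {c for (c, _) in counts}
    )
    param_bits = (2 ** order) * 0.5 * math.log2(n)
    return param_bits + data_bits

x = [0, 1] * 50                       # strongly patterned binary data
print(two_part_codelength(x, 0))      # ≈ 103.3 bits: Bernoulli cannot compress it
print(two_part_codelength(x, 1))      # ≈ 6.6 bits: the order-1 model wins
```

On the alternating sequence, the order-1 model fits the data perfectly, so its codelength is almost entirely parameter cost; the Bernoulli model pays one full bit per symbol. MDL thus selects the richer model exactly when the extra parameters pay for themselves in compression.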

### General Information:

• Preface: an extensive overview of the book, including a list of new results.
• Full Table of Contents (for a brief chapter outline, see below), References, and Index.
• Chapter 1: Learning, Regularity, and Compression (sample chapter).
• Chapter 17: MDL in Context (sample chapter), containing an extensive comparison between MDL and other statistical paradigms and methods (Bayes, frequentist, learning theory, PAC-Bayes, AIC/BIC/cross-validation model selection, universal prediction, maximum entropy, MML, ...).
• Errata.

### Chapter Outline:

• Part I: Introductory Material
1. Learning, Regularity, and Compression (Sample Chapter).
2. Probabilistic and Statistical Preliminaries
3. Information-Theoretic Preliminaries
4. Information-Theoretic Properties of Statistical Models
5. Crude Two-Part Code MDL
• Part II: Universal Coding
6. Universal Coding with Countable Models
7. Parametric Models: Normalized Maximum Likelihood
8. Parametric Models: Bayes
9. Parametric Models: Prequential Plug-in
10. Parametric Models: Two-Part
11. NML With Infinite Complexity
12. Linear Regression
13. Beyond Parametrics
• Part III: Refined MDL
14. MDL Model Selection
15. MDL Prediction and Estimation
16. MDL Consistency and Convergence
17. MDL in Context (Sample Chapter)
• Part IV: Additional Background
18. The Exponential or "Maximum Entropy" Families
19. Information-Theoretic Properties of Exponential Families
• References and Index.
570 pages. ISBN-13: 978-0-262-07281-6

Dr. Peter Grünwald
CWI
P.O. Box 94079
NL-1090 GB Amsterdam, The Netherlands
Telephone +31-20-5924115
Telefax +31-20-5924312
E-mail pdg@cwi.nl
URL www.grunwald.nl

Last updated: August 2007.