Advances in Probability and Mathematical Statistics: Conjugate Predictive Distributions and Generalized Entropies

Part of the Progress in Probability Book Series (volume 79)
Editors: Hernández‐Hernández, Daniel; Leonardi, Florencia; Mena, Ramsés H.; Pardo Millán, Juan Carlos



Publisher
Springer International Publishing
Copyright
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
ISBN
978-3-030-85324-2
Pages
93–102
DOI
10.1007/978-3-030-85325-9_6

Abstract

It is well-known that maximizing the Shannon entropy gives rise to an exponential family of distributions. On the other hand, some Bayesian predictive distributions, derived from exponential family sampling models with standard conjugate priors on the canonical parameter, maximize a generalized entropy indexed by a parameter α. As α → ∞, this generalized entropy converges to the usual Shannon entropy, while the predictive distribution converges to its corresponding sampling model. The aim of this paper is to study this type of connection between generalized entropies based on a certain family of α-divergences and the class of predictive distributions mentioned above. We discuss two important examples in some detail, and argue that similar results must also hold for other exponential families.
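For concreteness, the following sketch illustrates the kind of limit described in the abstract using the standard Gamma–Poisson conjugate pair; this is a textbook computation given only as an illustration, and is not taken from the chapter, whose two worked examples are not reproduced here.

Sampling model and conjugate prior:
\[ p(x \mid \theta) = \frac{e^{-\theta}\,\theta^{x}}{x!}, \qquad \pi(\theta) = \frac{b^{a}}{\Gamma(a)}\,\theta^{a-1}e^{-b\theta}, \qquad \theta > 0. \]
The prior predictive distribution is negative binomial:
\[ p(x) = \int_{0}^{\infty} p(x \mid \theta)\,\pi(\theta)\,d\theta = \frac{\Gamma(x+a)}{\Gamma(a)\,x!}\left(\frac{b}{b+1}\right)^{a}\left(\frac{1}{b+1}\right)^{x}, \qquad x = 0,1,2,\dots \]
Letting a, b → ∞ with a/b = λ held fixed, \((b/(b+1))^{a} \to e^{-\lambda}\) and \(\Gamma(x+a)/\bigl(\Gamma(a)\,(b+1)^{x}\bigr) \to \lambda^{x}\), so \(p(x) \to e^{-\lambda}\lambda^{x}/x!\): the predictive distribution converges to its Poisson sampling model, mirroring the α → ∞ limit described above.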

Published: Aug 5, 2021

Keywords: Conjugate distribution; Deformed exponential function; Deformed logarithm; Exponential family; Shannon entropy
