It is well known that maximizing the Shannon entropy gives rise to an exponential family of distributions. On the other hand, some Bayesian predictive distributions, derived from exponential-family sampling models with standard conjugate priors on the canonical parameter, maximize a generalized entropy indexed by a parameter α. As α → ∞, this generalized entropy converges to the usual Shannon entropy, while the predictive distribution converges to its corresponding sampling model. The aim of this paper is to study this type of connection between generalized entropies based on a certain family of α-divergences and the class of predictive distributions mentioned above. We discuss two important examples in some detail and argue that similar results must also hold for other exponential families.
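For concreteness, the first claim of the abstract can be sketched as the standard maximum-entropy derivation (a textbook result, not specific to this paper; the sufficient statistic \(T\) and constraint value \(t\) are generic placeholders):

```latex
\[
\max_{p}\; H(p) = -\int p(x)\,\log p(x)\,dx
\quad\text{subject to}\quad
\int p(x)\,dx = 1, \qquad \int T(x)\,p(x)\,dx = t .
\]
Setting the functional derivative of the Lagrangian to zero,
\[
-\log p(x) - 1 + \lambda_0 + \theta^{\top} T(x) = 0,
\]
yields the exponential-family form
\[
p_\theta(x) = \exp\!\bigl(\theta^{\top} T(x) - \psi(\theta)\bigr),
\]
where \(\psi(\theta)\) is the log-partition function enforcing normalization.
```

The paper's generalized setting replaces the logarithm above by a deformed logarithm (cf. the keywords), whose maximizers form a deformed exponential family; the abstract's statement that the generalized entropy recovers the Shannon entropy as α → ∞ corresponds to the deformed logarithm reducing to the ordinary one in that limit.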
Published: Aug 5, 2021
Keywords: Conjugate distribution; Deformed exponential function; Deformed logarithm; Exponential family; Shannon entropy