J. Wellner (1978)
Limit theorems for the ratio of the empirical distribution function to the true distribution function. Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete, 45
J. Hiriart-Urruty, C. Lemaréchal (1993)
Convex analysis and minimization algorithms
A. van der Vaart, J. Wellner (2010)
A local maximal inequality under uniform entropy. Electronic Journal of Statistics, 5, 2011
D. Cordero-Erausquin, M. Fradelizi, G. Paouris, P. Pivovarov (2013)
Volume of the polar of random sets and shadow systems. Mathematische Annalen, 362
Jiange Li, M. Fradelizi, M. Madiman (2016)
Information concentration for convex measures. 2016 IEEE International Symposium on Information Theory (ISIT)
François Bolley, I. Gentil, A. Guillin (2015)
Dimensional improvements of the logarithmic Sobolev, Talagrand and Brascamp-Lieb inequalities. arXiv: Probability
C. Borell (1973)
Complements of Lyapunov's inequality. Mathematische Annalen, 205
G. Bennett (1962)
Probability Inequalities for the Sum of Independent Random Variables. Journal of the American Statistical Association, 57
V. Nguyen (2013)
Dimensional variance inequalities of Brascamp–Lieb type and a local approach to dimensional Prékopa's theorem. Journal of Functional Analysis, 266
Van Nguyen (2013)
Inégalités fonctionnelles et convexité
Matthieu Fradelizi, Université Paris-Est Marne-la-Vallée, Laboratoire d'Analyse et de Mathématiques Appliquées, UMR 8050, Cedex 2, France. e-mail: matthieu.fradelizi@univ-mlv
Liyao Wang. e-mail: njuwangliyao@gmail
S. Bobkov, M. Madiman (2011)
Reverse Brunn–Minkowski and reverse entropy power inequalities for convex measures. Journal of Functional Analysis, 262
M. Fradelizi (1997)
Sections of convex bodies through their centroid. Archiv der Mathematik, 69
B. Klartag, V. Milman (2005)
Geometry of Log-concave Functions and Measures. Geometriae Dedicata, 112
E. Lieb, M. Loss (2001)
Analysis, second edition, volume 14 of Graduate Studies in Mathematics
Ronen Eldan (2012)
Thin Shell Implies Spectral Gap Up to Polylog via a Stochastic Localization Scheme. Geometric and Functional Analysis, 23
Ronen Eldan, B. Klartag (2010)
Approximately gaussian marginals and the hyperplane conjecture. arXiv: Metric Geometry
B. Klartag, A. Kolesnikov (2014)
Eigenvalue distribution of optimal transportation. Analysis & PDE, 8
S. Bobkov, M. Madiman (2010)
The Entropy Per Coordinate of a Random Vector is Highly Constrained Under Convexity Conditions. IEEE Transactions on Information Theory, 57
B. Klartag (2006)
A central limit theorem for convex sets. Inventiones mathematicae, 168
S. Bobkov, M. Madiman (2011)
Dimensional behaviour of entropy and information. ArXiv, abs/1101.3352
(2015)
When can one invert Hölder’s inequality? (and why one may want to)
G. Lugosi (2008)
Concentration Inequalities
S. Bobkov, M. Madiman (2010)
Concentration of the information in data with log-concave distributions. Annals of Probability, 39
T. Suzuki (1985)
On the Institute for Mathematics and Its Applications (IMA), 37
Liyao Wang, M. Madiman (2013)
Beyond the Entropy Power Inequality, via Rearrangements. IEEE Transactions on Information Theory, 60
Liyao Wang (2014)
Heat Capacity Bound, Energy Fluctuations and Convexity
S. Boucheron, G. Lugosi, P. Massart (2013)
Concentration Inequalities - A Nonasymptotic Theory of Independence
Gilles Hargé (2008)
Reinforcement of an inequality due to Brascamp and Lieb. Journal of Functional Analysis, 254
Ronen Eldan, J. Lehec (2013)
Bounding the Norm of a Log-Concave Vector Via Thin-Shell Estimates. arXiv: Functional Analysis
O. Guédon, E. Milman (2010)
Interpolating Thin-Shell and Sharp Large-Deviation Estimates for Isotropic Log-Concave Measures. Geometric and Functional Analysis, 21
A. Prékopa (1973)
On logarithmic concave measures and functions
M. Fradelizi, M. Meyer (2008)
Increasing functions and inverse Santaló inequality for unconditional functions. Positivity, 12
Stephen Boyd, L. Vandenberghe (2005)
Convex Optimization. Journal of the American Statistical Association, 100
An elementary proof is provided of sharp bounds for the varentropy of random vectors with log-concave densities, as well as for deviations of the information content from its mean. These bounds significantly improve on the bounds obtained by Bobkov and Madiman (Ann Probab 39(4):1528–1543, 2011).
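As a quick orientation for the abstract's terminology, the display below sketches the standard definitions of information content, entropy, and varentropy for a random vector X with density f on R^n; the specific sharp constant shown is an assumption about how results of this kind are usually stated and is not quoted from this excerpt.

% Sketch of standard definitions; the bound V(X) <= n below is an assumption, not a quotation from the article.
\[
  \widetilde{h}(X) = -\log f(X), \qquad
  h(X) = \mathbb{E}\,\widetilde{h}(X), \qquad
  V(X) = \operatorname{Var}\bigl(\widetilde{h}(X)\bigr).
\]
For a log-concave density $f$ on $\mathbb{R}^n$, a sharp bound of this kind takes the purely dimensional form
\[
  V(X) \le n,
\]
with corresponding exponential bounds on the deviations of the information content $\widetilde{h}(X)$ from its mean $h(X)$.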
Published: Sep 22, 2016
Keywords: Concentration; Information; Log-concave; Varentropy