Tong Zhang (2003)
Leave-One-Out Bounds for Kernel Methods. Neural Computation, 15
Qiang Wu, Ding-Xuan Zhou (2005)
SVM Soft Margin Classifiers: Linear Programming versus Quadratic Programming. Neural Computation, 17
M. Anthony, P. Bartlett (1999)
Neural Network Learning: Theoretical Foundations
S. Jaeger (2005)
Generalization Bounds and Complexities Based on Sparsity and Clustering for Convex Combinations of Functions from Random Classes. J. Mach. Learn. Res., 6
S. Smale, Ding-Xuan Zhou (2005)
Shannon sampling II: Connections to learning theory. Applied and Computational Harmonic Analysis, 19
Ding-Xuan Zhou (2003)
Capacity of reproducing kernel spaces in learning theory. IEEE Transactions on Information Theory, 49
(1990)
Complexity regularization with applications to artificial neural networks
S. Smale, Ding-Xuan Zhou (2003)
Estimating the approximation error in learning theory. Analysis and Applications, 1
T. Evgeniou, M. Pontil, T. Poggio (2000)
Regularization Networks and Support Vector Machines. Advances in Computational Mathematics, 13
G. Wahba (1990)
Spline Models for Observational Data
Yuhai Wu (2021)
Statistical Learning Theory. Technometrics, 41
J. Shawe-Taylor, P. Bartlett, R. Williamson, M. Anthony (1998)
Structural Risk Minimization Over Data-Dependent Hierarchies. IEEE Trans. Inf. Theory, 44
Yiming Ying, Ding-Xuan Zhou (2007)
Learnability of Gaussians with Flexible Variances. J. Mach. Learn. Res., 8
Ingo Steinwart, C. Scovel (2005)
Fast Rates for Support Vector Machines
G. Lugosi, N. Vayatis (2003)
On the Bayes-risk consistency of regularized boosting methods. Annals of Statistics, 32
E. Vito, A. Caponnetto, L. Rosasco (2005)
Model Selection for Regularized Least-Squares Algorithm in Learning Theory. Foundations of Computational Mathematics, 5
F. Cucker, Stephen Smale (2002)
Best Choices for Regularization Parameters in Learning Theory: On the Bias-Variance Problem. Foundations of Computational Mathematics, 2
O. Bousquet, A. Elisseeff (2002)
Stability and Generalization. J. Mach. Learn. Res., 2
N. Aronszajn (1950)
Theory of Reproducing Kernels. Transactions of the American Mathematical Society, 68
V. Koltchinskii, D. Panchenko (2004)
Rademacher Processes and Bounding the Risk of Function Learning. arXiv: Probability
(2004)
Analysis of support vector machine classification, submitted
M. Anthony, P. Bartlett (1999)
Learning in Neural Networks: Theoretical Foundations
Ding-Xuan Zhou (2002)
The covering number in learning theory. J. Complex., 18
F. Cucker, S. Smale (2001)
On the mathematical foundations of learning. Bulletin of the American Mathematical Society, 39
S. Smale, Ding-Xuan Zhou (2004)
Shannon sampling and function reconstruction from point values. Bulletin of the American Mathematical Society, 41
Ding-Xuan Zhou, K. Jetter (2006)
Approximation with polynomial kernels and SVM classifiers. Advances in Computational Mathematics, 25
T. Mikosch, A. Vaart, J. Wellner (1996)
Weak Convergence and Empirical Processes: With Applications to Statistics
Wee Lee, P. Bartlett, R. Williamson (1998)
The importance of convexity in learning with squared loss. IEEE Trans. Inf. Theory, 44
P. Bartlett (1998)
The Sample Complexity of Pattern Classification with Neural Networks: The Size of the Weights is More Important than the Size of the Network. IEEE Trans. Inf. Theory, 44
Dirong Chen, Qiang Wu, Yiming Ying, Ding-Xuan Zhou (2004)
Support Vector Machine Soft Margin Classifiers: Error Analysis. J. Mach. Learn. Res., 5
F. Cucker, Ding-Xuan Zhou (2007)
Learning Theory: An Approximation Theory Viewpoint
Qiang Wu, Yiming Ying, Ding-Xuan Zhou (2007)
Multi-kernel regularized classifiers. J. Complex., 23
This paper considers the regularized learning algorithm associated with the least-squares loss and reproducing kernel Hilbert spaces. The target is the error analysis for the regression problem in learning theory. A novel regularization approach is presented, which yields satisfactory learning rates. The rates depend on the approximation property and on the capacity of the reproducing kernel Hilbert space, measured by covering numbers. When the kernel is C∞ and the regression function lies in the corresponding reproducing kernel Hilbert space, the rate is m^(-ζ) with ζ arbitrarily close to 1, regardless of the variance of the bounded probability distribution.
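The algorithm the abstract describes, regularized least squares in a reproducing kernel Hilbert space, is commonly implemented as kernel ridge regression: by the representer theorem the minimizer is a kernel expansion over the sample, and the coefficients solve a regularized linear system. The sketch below is an illustration under assumed choices (a Gaussian kernel, toy data, and the parameter values shown), not an implementation taken from the paper.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Gram matrix K[i, j] = exp(-||x_i - y_j||^2 / (2 sigma^2))
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def krr_fit(X, y, lam=1e-3, sigma=1.0):
    # Representer theorem: f(x) = sum_i alpha_i K(x, x_i),
    # with alpha solving (K + lam * m * I) alpha = y.
    m = len(X)
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + lam * m * np.eye(m), y)

def krr_predict(X_train, alpha, X_new, sigma=1.0):
    return gaussian_kernel(X_new, X_train, sigma) @ alpha

# Toy usage: noisy samples of a smooth regression function.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(50, 1))
y = np.sin(3.0 * X[:, 0]) + 0.05 * rng.normal(size=50)
alpha = krr_fit(X, y, lam=1e-3, sigma=1.0)
y_hat = krr_predict(X, alpha, X)
```

The regularization parameter `lam` trades off the approximation error against the sample error, which is exactly the balance the paper's learning rates quantify.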
Foundations of Computational Mathematics – Springer Journals
Published: Sep 23, 2005