This paper was recommended for publication by A.A. Lazarev, a member of the Editorial Board.
We consider a new method for improving the quality of training in gradient boosting, as well as for increasing its generalization performance, based on the use of modified loss functions. Computational experiments on real data demonstrate the applicability of this method to improving the quality of gradient boosting on various classification and regression problems.
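The abstract describes replacing the standard loss function in gradient boosting with a modified one, but does not specify the modification itself. The sketch below is therefore only an illustration of the general mechanism, not the paper's method: a minimal from-scratch boosting loop in which the negative gradient of the loss is pluggable, shown here with a pseudo-Huber gradient as one hypothetical "modified" loss (the names `fit_gbm` and `pseudo_huber_grad` are ours).

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def pseudo_huber_grad(y, f, delta=1.0):
    """Negative gradient of the pseudo-Huber loss w.r.t. the prediction f.

    L(y, f) = delta^2 * (sqrt(1 + ((y - f)/delta)^2) - 1); its negative
    gradient behaves like the residual for small errors and saturates for
    large ones, which makes the fit more robust to outliers.
    """
    r = y - f
    return r / np.sqrt(1.0 + (r / delta) ** 2)

def fit_gbm(X, y, loss_grad, n_trees=50, lr=0.1, depth=2):
    """Gradient boosting: each tree fits the current pseudo-residuals."""
    f0 = y.mean()                         # constant initial model
    f = np.full(len(y), f0)
    trees = []
    for _ in range(n_trees):
        residual = loss_grad(y, f)        # pseudo-residuals from the loss
        tree = DecisionTreeRegressor(max_depth=depth).fit(X, residual)
        f += lr * tree.predict(X)         # shrunken additive update
        trees.append(tree)
    return f0, trees

def predict_gbm(model, X, lr=0.1):
    f0, trees = model
    return f0 + lr * sum(t.predict(X) for t in trees)

# Demo on synthetic data: swapping loss_grad changes the loss being optimized.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X[:, 0] + 0.1 * rng.normal(size=200)
model = fit_gbm(X, y, pseudo_huber_grad)
pred = predict_gbm(model, X)
```

Any differentiable loss can be substituted by passing a different `loss_grad`; using `lambda y, f: y - f` recovers ordinary least-squares boosting.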
Automation and Remote Control – Springer Journals
Published: Dec 1, 2022
Keywords: gradient boosting; decision tree; loss function; machine learning; data analysis