On the Automatic Assessment of Computational Thinking Skills: A Comparison with Human Experts

Jesús Moreno-León, Programamos & Universidad Rey Juan Carlos, Seville, Spain (jesus.moreno@programamos.es)
Marcos Román-González, Universidad Nacional de Educación a Distancia
Casper Harteveld, Northeastern University, Boston, MA, USA (c.harteveld@neu.edu)

Association for Computing Machinery. CHI 2017 Late-Breaking Work, May 6–11, 2017, Denver, CO, USA.



Datasource: Association for Computing Machinery
Copyright: © 2017 ACM Inc.
ISBN: 978-1-4503-4656-6
DOI: 10.1145/3027063.3053216

Abstract

Programming and computational thinking skills are promoted in schools worldwide. However, there is still a lack of tools that assist learners and educators in assessing these skills. We have implemented an assessment tool, called Dr. Scratch, that analyzes Scratch projects to assess the level of development of several aspects of computational thinking. One step in establishing its validity is to compare the (automatic) evaluations provided by the tool with the (manual) evaluations given by (human) experts. In this paper we compare the assessments provided by Dr. Scratch with over 450 evaluations of Scratch projects given by 16 experts in computer science education. Our results show strong correlations between automatic and manual evaluations. As there is ample debate among educators on the use of this type of tool, we discuss the implications and limitations, and provide recommendations for further research.
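
To make the abstract's central claim concrete: validating a tool like Dr. Scratch amounts to correlating its scores with expert ratings over the same set of projects. Below is a minimal sketch of such an analysis in Python, assuming hypothetical paired scores on Dr. Scratch's 0–21 mastery scale; the paper's actual data and statistical procedure are not reproduced here.

    # Sketch: correlate automatic (Dr. Scratch) scores with expert ratings.
    # All score values are hypothetical; Dr. Scratch scores projects on a
    # 0-21 scale (seven CT dimensions, 0-3 points each), and the experts
    # are assumed here to rate on the same scale.
    from scipy.stats import spearmanr

    dr_scratch_scores = [14, 9, 18, 7, 12, 16, 5, 20]   # hypothetical tool scores
    expert_scores     = [13, 10, 17, 6, 11, 18, 4, 19]  # hypothetical expert ratings

    rho, p_value = spearmanr(dr_scratch_scores, expert_scores)
    print(f"Spearman rho = {rho:.2f} (p = {p_value:.4f})")

A rank correlation (rather than Pearson) is a reasonable choice for ordinal mastery scores, which is why it appears in this sketch; whether the paper used Spearman, Pearson, or another statistic is not stated in this excerpt.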
