Addressing the assessment challenge with an online system that tutors as it assesses




Publisher: Springer Journals
Copyright: © 2009 Springer Science+Business Media B.V.
Subjects: Computer Science; User Interfaces and Human Computer Interaction; Multimedia Information Systems; Management of Computing and Information Systems
ISSN: 0924-1868
eISSN: 1573-1391
DOI: 10.1007/s11257-009-9063-7

Abstract

Secondary teachers across the United States are being asked to use formative assessment data (Black and Wiliam 1998a,b; Roediger and Karpicke 2006) to inform their classroom instruction. At the same time, critics of the US government's No Child Left Behind legislation are calling the bill "No Child Left Untested". Among other things, critics point out that every hour spent assessing students is an hour lost from instruction. But does it have to be? What if we better integrated assessment into classroom instruction and allowed students to learn during the test? We developed an approach that provides immediate tutoring on practice assessment items that students cannot solve on their own. Our hypothesis is that we can achieve more accurate assessment not only by using data on whether students get test items right or wrong, but also by using data on the effort required for students to solve a test item with instructional assistance. We have integrated assistance and assessment in the ASSISTment system. The system helps teachers make better use of their time by offering instruction to students while providing teachers with a more detailed evaluation of student abilities than is possible under current approaches. Our approach to assessing student math proficiency is to use the data that our system collects through its interactions with students to estimate their performance on an end-of-year high-stakes state test. Our results show that we can do a reliably better job of predicting students' end-of-year exam scores by leveraging the interaction data: the model based only on the interaction information makes better predictions than the traditional assessment model that uses only information about correctness on the test items.

Journal

User Modeling and User-Adapted Interaction, Springer Journals

Published: Feb 4, 2009
