Abstract
Consider a large database of questions that assess the knowledge of learners on a range of different concepts. In this paper, we study the problem of maximizing the estimation accuracy of each learner’s knowledge about a concept while minimizing the number of questions each learner must answer. We refer to this problem as test-size reduction (TeSR). Using the SPARse Factor Analysis (SPARFA) framework, we propose two novel TeSR algorithms. The first algorithm is nonadaptive and uses graded responses from a prior set of learners. This algorithm is appropriate when the instructor has access to only the learners’ responses after all questions have been solved. The second algorithm adaptively selects the “next best question” for each learner based on their graded responses to date. We demonstrate the efficacy of our TeSR methods using synthetic and educational data.
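The abstract does not spell out the selection criterion used by the adaptive algorithm, but the idea of picking the "next best question" under a SPARFA-style response model can be illustrated with a short, hedged sketch. The snippet below assumes a logistic-link model in which `W` holds question-concept loadings, `mu` intrinsic difficulties, and `c_hat` the current estimate of a learner's concept knowledge; `next_best_question` is a hypothetical helper that greedily picks the unanswered question with the largest expected Fisher information. This is an illustrative heuristic under those assumptions, not the algorithm proposed in the paper.

```python
import numpy as np

# Hypothetical SPARFA-style setup: W[i] holds question i's nonnegative
# concept loadings, mu[i] its intrinsic difficulty, and c_hat the current
# estimate of the learner's concept-knowledge vector.
rng = np.random.default_rng(0)
num_questions, num_concepts = 50, 4
W = np.abs(rng.normal(size=(num_questions, num_concepts)))  # question-concept associations
mu = rng.normal(size=num_questions)                         # intrinsic difficulties
c_hat = np.zeros(num_concepts)                              # current knowledge estimate

def p_correct(i, c):
    """Logistic-link probability that the learner answers question i correctly."""
    return 1.0 / (1.0 + np.exp(-(W[i] @ c + mu[i])))

def next_best_question(c, answered):
    """Greedy choice: return the unanswered question whose graded response is
    expected to be most informative about c (largest Fisher information)."""
    best_i, best_score = None, -np.inf
    for i in range(num_questions):
        if i in answered:
            continue
        p = p_correct(i, c)
        # Fisher information of a Bernoulli response under the logistic link
        # is p*(1-p)*W[i]W[i]^T; its trace is p*(1-p)*||W[i]||^2.
        score = p * (1.0 - p) * (W[i] @ W[i])
        if score > best_score:
            best_i, best_score = i, score
    return best_i

# Example: select the first question to pose to a new learner.
print(next_best_question(c_hat, answered=set()))
```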
Original language | English (US) |
---|---|
Title of host publication | Proceedings of the 6th International Conference on Educational Data Mining, EDM 2013 |
Publisher | International Educational Data Mining Society
ISBN (Print) | 9780983952527 |
State | Published - Jan 1 2013 |
Externally published | Yes |