Testing Reasoning Software: A Bayesian Way
Keywords:
software, reasoning, education, test, efficacy
Abstract
Is it possible to supply strong empirical evidence for or against the efficacy of reasoning software? There is a paradox concerning tests of reasoning software. On the one hand, acceptance of such software is slow, although overwhelming arguments speak for its use; there seems to be room for skepticism among decision makers and stakeholders concerning its efficacy. On the other hand, teachers and developers of such software (the present author being one of them) consider its effects obvious. In this paper, I will show that both positions – skepticism vs. belief in efficacy – can be compatible with the evidence. This is the case if (1) the testing methods differ, (2) the facilities of observation differ, and (3) the tests rely on contextual assumptions. In particular, I will show that developers of reasoning software can, in principle, know the efficacy of certain design solutions (cf. van Gelder, 2000b; Suthers et al., 2003), whereas other decision makers may be unable to establish evidence for efficacy.