{"id":513,"date":"2009-11-11T10:32:04","date_gmt":"2009-11-11T14:32:04","guid":{"rendered":"http:\/\/edutechdebate.org\/?p=513"},"modified":"2012-09-27T10:37:34","modified_gmt":"2012-09-27T14:37:34","slug":"ict-in-education-assessments-are-biased-and-inaccurate","status":"publish","type":"post","link":"https:\/\/edutechdebate.org\/assessing-ict4e-evaluations\/ict-in-education-assessments-are-biased-and-inaccurate\/","title":{"rendered":"ICT in Education Assessments are Biased and Inaccurate"},"content":{"rendered":"
Would accurate ICT4E assessment be great? Definitely. The more we know about education and teaching, the better we can educate.<\/p>\n
However, the most remarkable thing about any ICT4E assessment used to decide on the introduction of ICT in education would be its uniqueness in history. One reason such assessments are so scarce is that there are few (if any) historical examples of assessments of any kind done before the introduction of an educational reform. There are even fewer examples where the outcomes of the assessments really mattered in decision making.<\/p>\n
In my country alone, the Netherlands, we have just evaluated decades of sweeping educational reforms. Dutch results<\/a> (sadly, only in Dutch).<\/p>\n One of the conclusions was that large reforms (e.g., “Het nieuwe leren”, or “the new learning”) were indeed imposed without scientific support. Another was that political prejudices, not any kind of data, were the main motivating factor behind the reforms.<\/p>\n Just last week, the results came out of an assessment of yet another reform, in the teaching of arithmetic in primary education, implemented without “proper” assessment before its introduction. Performance in arithmetic had declined, and the fight was on over which method was better: the “realistic” instruction currently in use, or the classical, practice-based method. The conclusion was that declining standards were caused by the teachers themselves having sub-standard arithmetic skills. So now the teachers will get remedial courses. I am sure every reader can add examples from their own country where sweeping reforms were assessed only long after they had been implemented. The alternative, assessing educational reforms well before their introduction, is a form of social engineering. Social engineering always seems to be more difficult than you think, and history has shown that education is no exception in this respect.<\/p>\n ICT4E Assessments are always biased<\/b><\/p>\n Historically, educational policies have been completely determined by the political and religious beliefs of the parents and, by extension, the politicians and teachers. Scientific “facts” are never appreciated unless they completely align with the preconceptions of the “stake-holders” (minus the children). We might lament it, but such is the world. This is made worse by the fact that few parents actually understand what their children are learning. 
So the parents are likely to try to improve upon a schooling model of thirty years ago, preparing children for a world that no longer exists.<\/p>\n On the ICT side, those who are old enough to have experienced the introduction of personal computers in the workplace will remember that the introduction was definitely not the result of an assessment of productivity. Accountants and secretaries were trained on computers because everybody understood the usefulness of WordPerfect and Lotus 1-2-3. No questions asked. The same goes for the introduction of faxes and email. This led to some weird discussions in economic circles: “You can see the computer age everywhere but in the productivity statistics<\/a>”.<\/p>\n So why would ICT4E assessments be different? They are not. But they are beautiful handles for political fights.<\/p>\n ICT4E Assessments are inaccurate<\/b><\/p>\n But let\u2019s suppose we do such an assessment for ICT4E. What will be tested is very simple: does this ICT4E solution improve scores on existing tests? The outcome can be predicted quite accurately:<\/p>\n The problem here is that the tests have been adapted to the curriculum, and vice versa. An illuminating example is mathematics education in the USA. Read A Mathematician\u2019s Lament<\/a> (PDF) by Paul Lockhart. From page 15:<\/p>\n In place of a natural problem context in which students can make decisions about what they want their words to mean, and what notions they wish to codify, they are instead subjected to an endless sequence of unmotivated and a priori \u201cdefinitions.\u201d The curriculum is obsessed with jargon and nomenclature, seemingly for no other purpose than to provide teachers with something to test the students on<\/i>.<\/p>\n No mathematician in the world would bother making these senseless distinctions: 2 1\/2 is a \u201cmixed number,\u201d while 5\/2 is an \u201cimproper fraction.\u201d They\u2019re equal for crying out loud. 
They are the same exact numbers, and have the same exact properties. Who uses such words outside of fourth grade?<\/p><\/blockquote>\n Students must learn completely useless factoids simply so that there is something to test, and they learn them only because they are tested on them. Examples are the distinction between mixed numbers and improper fractions above, or (lower on the same page) the equally useless definition of sec x as 1\/cos x.<\/p>\n The ultimate bad example of the parasitic relationship between teaching and testing was English language teaching at Japanese schools<\/a>. Testing was done exclusively with multiple-choice tests. On paper, Japanese students earned high grades on English tests, but in reality they were at the rock bottom of proficiency in the world (see The Enigma of Japanese Power<\/a> by Karel van Wolferen).<\/p>\n Assessments and the Chinese exams syndrome<\/b> <\/p>\n
\n(see Evaluation of arithmetic teaching<\/a>, again only in Dutch).<\/p>\n\n