Improving ICT Assessment in Education
In this debate there appears to be a lot of consensus on both sides of the motion, and even a spill-over in commentary from one side to the other. Perhaps this is because the motion was more of a question than a statement. We have ended up not quite arguing for and against, but rather questioning the status of assessment in education in general, and its impact (or lack thereof) on ICT policy and practice in particular. Therein lies most of our consensus.
We recognize the inadequacies of evaluating the use of a tool and its potential for transformational innovation in education systems that are intent on simply harnessing it for maintaining the status quo. As Rob (on the other side) observed ‘any real assessment of educational reform requires a new reflection on what skills and knowledge the children are supposed to acquire at school’.
And so in my response I would like to revisit the question presented by Wayan and reflect a little more on its parameters. I would also like to draw on commentary from both sides of the discussion (quite a lot of comments on your side Rob) to tease out some of the issues.
Do we need to assess ICT4E initiatives?
Rob notes that most reforms have historically been imposed on the basis of political prejudice rather than scientific support. However, the fatigue induced by repeated failures of education reform is perhaps lifting as we move into a 21st-century information age. As we do so, we are witnessing a growing discrepancy between school and the ‘outside world’ – where information, knowledge, innovation and creativity are replacing the traditional sectors of commerce and industry – and where new technologies are changing the way we interact, communicate, socialize and network.
It is a world where mobile connectivity is becoming commonplace and where digital literacy is a critical tool for social interaction, knowledge exchange and knowledge construction. If schooling fails to transform itself, it may be transformed, albeit haphazardly, by the technological transformation outside its gates – and perhaps in a way that is detrimental to learning.
There is also the challenge of digital divides, both between societies and within societies – with access denied to the poorest and most marginalized. ICT is seen as bridging such major divides. There is thus a renewed sense of urgency, despite the fatigue, for systemic ICT investment and reform to provide all learners with the skills they will need for meaningful participation in the economic, social and cultural life of new knowledge-based economies and societies.
In such scenarios of massive large-scale investment and reform, assessments are needed to hold systems accountable. Assessments can also provide policymakers with the gateway they need to direct systemic change. As Clayton observes in his comments, ‘evaluations are necessary to demonstrate to the local officials and national policy makers that ICTs are worth the investment’. They can help them to identify the factors that best influence ICT impact (changes in curriculum, pedagogy, assessment, teacher training) as well as the barriers to ICT use, such as lack of skilled support and adequate infrastructure.
If we assess, how do we do it?
Juan, in his commentary, describes his skepticism about the relevance of some of the ICT evaluations he has come across over the years – in particular, studies of proprietary software where the emphasis is on evaluating the technology rather than the learning. He notes the lack of comparison with alternative activities and of analysis of cost effectiveness. John also comments that effective assessments have yet to be designed. On cost effectiveness, John finds it shocking that a US study illustrates a lack of empirical research in an area where billions of dollars have been invested. What is particularly ‘unsettling’ is the notion that politicians don’t seem to care.
Yet I wonder, John, whether it is that politicians don’t care, or that there is a sense of exasperation at the lack of defined mechanisms for informing decision making on such a massive scale of investment and change. Scheuermann, Kikis and Villalba (2009) discuss the lack of clear information in most studies about the multifaceted effects and impact of ICT on the learner and learning. It is a situation that is ‘especially unsatisfying for policy-making stakeholders that aim at defining evidence-based strategies and regulatory measures for effective ICT implementation and efficient use of resources’ (ibid. p. 1).
There are calls for more widely accepted indicators and methodological approaches to assess the inputs, utilization and outcomes/impact of ICT integration initiatives in order to address this gap (Trucano, 2003; Balanskat et al., 2006; cited in ibid.). Yet these approaches still have limitations in measuring the impact of ICT use, as they often represent a snapshot – a one-time, one-level view. Ian comments on the ‘imperfection’ of data collection in such evaluations, which are more often conducted to appease funders’ insistence on seeing ‘educational’ results. He also draws attention to the difficulty of attributing the said results to the ICT intervention.
A more powerful approach is the use of indicators within development models of ICT integration in education – models that study the progressive phases through which teachers and students adopt and use ICT. Morel’s Matrix is an instrument that can be used to evaluate the degree to which ICTs have been integrated into an educational system, through four distinct successive phases: a) emerging, b) applying, c) integrating, and d) transforming.
In GeSCI we have developed an ICT-Education matrix to assist our partners in focusing on what teachers and learners actually do when they use ICTs in schools and institutions through each of the four successive phases (Figure 1). Such models when used to guide Planning, Monitoring & Evaluation (PME) in combination with the indicators approach can offer clearer outcomes on what the integration of ICTs in education should look like at each development stage.
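Such a matrix can be read as a simple rubric: each dimension of practice is rated against the four phases, and the ratings are then summarized. As a purely illustrative sketch – the phase names come from the matrix above, but the dimensions, ratings and the conservative "least-developed dimension" aggregation rule below are hypothetical examples, not GeSCI's actual instrument – a self-assessment might be tallied like this:

```python
# Illustrative sketch of a four-phase ICT-integration rubric.
# Phase names follow the matrix above; the dimensions and the
# aggregation rule are hypothetical examples for illustration only.

PHASES = ["emerging", "applying", "integrating", "transforming"]

def overall_phase(ratings: dict) -> str:
    """Summarize per-dimension phase ratings into one overall phase.

    Uses a deliberately conservative rule: the system is only as far
    along as its least-developed dimension.
    """
    indices = [PHASES.index(phase) for phase in ratings.values()]
    return PHASES[min(indices)]

# Hypothetical self-assessment for one school:
ratings = {
    "teacher practice": "applying",
    "learner activity": "integrating",
    "infrastructure": "applying",
}
print(overall_phase(ratings))  # -> applying
```

The conservative aggregation rule is one design choice among several; an evaluator might equally report the full profile of per-dimension ratings, which preserves more information for planning purposes.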
Figure 1: GeSCI ICT-Education Matrix
I like Mark Beckford’s observation that the key to assessment is to keep it ‘simple but useful’. In developing PME tools such as the ICT-Education matrix for our partners, we hope to do just that.
Scheuermann, F., Kikis, K. & Villalba, E. (2009). A Framework for Understanding and Evaluating the Impact of Information and Communication Technology in Education. Available online; accessed 23 November 2009.