Where is the Focus of OLPC in Peru and ICT4E in General?
A fascinating discussion has emerged around one-laptop-per-child programs: much has been said in the previous articles, and many posts over the past several days have addressed OLPC in Peru and similar efforts elsewhere. Research has been cited, and personal experiences and opinions shared. Where do we go from here?
I want to specifically address the broader ICT for Education (ICT4E) question of whether or not a single intervention can have an impact on student achievement.
In looking at the discussions and research to date, I suggest an ICT4E intervention can have (and should have!) an impact, but only through focus. Specifically, focus on:
- Specific problems and explicit objectives
- Student achievement
- Monitoring and evaluation
To start, let me clarify what I mean by ICT4E initiatives for the purpose of this article: I am talking about comprehensive ICT interventions at the classroom/student level such as OLPC (but not only one-to-one programs). Such efforts would at a minimum combine providing electronic teaching and learning resources (content), devices to access the content, and related teacher training. Under the paradigm of focus, I also propose that we get more precise when we talk about ICT4E, as one intervention type or implementation model is not necessarily comparable to another.
Focus ICT4E interventions on specific problems and formulate explicit objectives
The OLPC Peru program aims “To improve the quality of public primary education, especially that of children in the remotest places and in extreme poverty, prioritizing multi-grade schools with only one teacher”. The program description does not mention specific subjects or explicit skills in any of its objectives, and there are plenty of programs out there with similarly vague statements.
I think we need to get more evidence-based, specific, and realistic when talking about anticipated impacts and avoid overgeneralization in our program descriptions and general ICT4E rhetoric.
In a focused approach to ICT4E intervention planning and program design, we would unpack the broader goal, identify the key problem area(s) at the classroom level, and address them in a systematic manner. Children’s early reading skills, for example, are low: “…most children read badly, with poor fluency and limited comprehension”. Oscar Becerra in his contribution also mentions the challenges of teachers’ reading levels. Further, the effort to add 200 e-Books to the other content provided on the XOs, and the fact that the section on how to use them offers the most descriptive and detailed pedagogical guidance in the OLPC manuals, indicate that reading has already been singled out for specific attention, although not explicitly and systematically.
Continuing with early reading as an example, a focused ICT intervention could draw on research to date on how to improve reading in the early grades, even in low-resource environments. From this work we can learn that a focus on the classroom level and on the very specific underlying skills of early reading (such as phonemic awareness and letter recognition), combined with simple-to-implement, direct approaches to teaching (and teacher training), low-cost materials, and continuous assessment to monitor progress, can make a significant difference in students’ early reading outcomes within a relatively short period of time.
A role for an ICT intervention in this scenario could then be to reach such results in a more effective and efficient manner, and/or to serve specific groups of students whose needs are not yet fully addressed by existing approaches, even where those approaches work.
In an effort to respond to the (very valid) sustainability and scale-up requirements, to adhere to best practices in institutionalization and sector-wide approaches, and to deal with the significant challenges related to procurement – of equipment and content – ICT4E initiatives get caught up in spreading resources and efforts across too many fronts from the very beginning, before they have had a chance to even reach the classroom and validate their impact where it should matter most: at the student level.
Focus on student achievement
A lot of discussion related to this month’s topic raised the question of whether or not ICT4E initiatives should be assessed in regards to their contribution to improved student learning outcomes and specifically academic achievement.
In environments where resources are extremely limited, the choice to implement a certain program is de facto a decision against an alternative approach. I question how we justify not expecting comprehensive classroom-level ICT initiatives to focus on learning outcomes when even the most foundational skills critical for future learning, such as early reading, are not being acquired by large proportions of children.
Clearly, other skills – including cognitive skills, critical thinking skills, etc. – are important for learning and child development. Thus it is important to move the body of knowledge on ICT for Education further into such new domains. However, should this really be done at the expense of education systems that already struggle with basic service delivery and with ensuring universal learning of the most foundational skills? As many of us remember, there has already been much discussion around the added value of technology in good schools versus challenged schools (and systems).
I propose to go back to the drawing board and try to get more specific about where we expect ICT can make an impact, define impacts more narrowly, and develop a roadmap of the steps including outputs and outcomes to get there.
Under such an approach, a revised ICT4E initiative objective could read: “… aims to improve early reading skills, specifically fluency, among students in grades 1 and 2 of public primary schools” within the average timeline of a donor-funded project of some 5 years. The OLPC Peru program sets a good example in regard to focusing on a specific target group: “…children in the remotest places and in extreme poverty, prioritizing multi-grade schools with only one teacher.”
While ICT is still not a magic bullet to resolve this issue, such an objective is at least measurable as there are internationally validated tools, and – as we can learn from reading curricula around the world – results should be detectable within a reasonable timeframe of 1-2 years for most languages and scripts. Other objectives may take longer to achieve and may need to be broken down into intermediate results and steps to find the balance between being ambitious and realistic.
Establishing priorities and validating impact on specific aspects of larger objectives and expanding on these over time will go a long way to building credibility for the ICT4E and the larger ICT for Development practice area, and rallying the national support needed for a program.
With such a focused objective in hand, program implementers can then align activities (provision of content, equipment, training, etc.) and design monitoring and evaluation frameworks that provide relevant information to the project internally and to the larger body of knowledge.
Focus on rigorous Monitoring and Evaluation in ICT4E interventions
Implementing good M&E is difficult. It is hard to design relevant and appropriate frameworks with coherent indicators, realistic data collection methods, and validated tools. M&E also costs money – more or less depending on the complexity of the program and the objectives and questions driving implementation. And it is even harder to focus on M&E when, at the same time, available resources and the people engaged in the program are tied up with the enormous logistics of getting hardware devices into children’s hands.
In spite of this, however, the absence of systematic M&E deprives any program of the opportunity to tell a story much beyond anecdotal evidence when it comes to evaluation.
Furthermore, without systematic monitoring, there is no opportunity to apply corrective measures to program parameters and to continually check whether activities and resources are moving the initiative towards its objectives before scale-up and institutionalization. Monitoring can take many forms, and each of them can be strategically employed at different points in a project life-cycle to help improve the program. Monitoring may involve low-effort usability tests with target-group users on the ease of use of equipment, user acceptance tests for new content and approaches, pilots for new curricula, or comprehensive randomized controlled trial methodologies that allow a more precise look at the effect of the equipment versus that of the content or the teacher training on the overall project objective before scale-up.
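To give a sense of why rigorous designs such as randomized controlled trials cost real money, here is an illustrative back-of-the-envelope sample-size sketch. All the numbers below (effect size, cluster size, intra-cluster correlation) are hypothetical values chosen for illustration, not figures from the OLPC Peru study:

```python
from math import ceil
from statistics import NormalDist

def pupils_per_arm(effect_size, alpha=0.05, power=0.8,
                   cluster_size=1, icc=0.0):
    """Approximate pupils needed per arm of a two-arm trial comparing
    mean reading scores, using the standard normal-approximation formula,
    with an optional design-effect correction for school-level clustering."""
    z = NormalDist()
    # n = 2 * (z_{1-alpha/2} + z_{power})^2 / d^2
    n = 2 * (z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)) ** 2 / effect_size ** 2
    # Design effect: randomizing whole schools inflates the required sample.
    deff = 1 + (cluster_size - 1) * icc
    return ceil(n * deff)

# A "small" standardized effect (d = 0.2), pupils randomized individually:
print(pupils_per_arm(0.2))                             # -> 393

# Same effect, but randomizing whole classrooms of 20 pupils (ICC = 0.2):
print(pupils_per_arm(0.2, cluster_size=20, icc=0.2))   # -> 1884
```

The point of the arithmetic is simply that detecting a modest learning effect under realistic school-level clustering can require several thousand tested pupils per arm, which is why such studies need to be budgeted and planned for from the start.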
Although no written M&E framework could be found on the OLPC Peru website, some monitoring-type activities have taken place. From an outsider’s perspective it seems, however, that these have not yet been fully capitalized on to improve the implementation model.
In 2007, in advance of the main roll-out, a pilot was conducted in Arahuay with 48 students for several months. The pilot unearthed a number of hardware design and technical installation issues that were subsequently fixed. However, there seems to be little documentation of any change in the pedagogical model related to the educational resources or teacher training provided as a result of the pilot.
Further, the first IDB study, conducted in 2009, found that “Concerning use, it is worth making note of what looks like a decreasing utilization of computers in the classroom, which could be a reflection of the need for more technical and pedagogical support for the teachers, as well as of the lack of planning sessions, activities and digital resources appropriate for educational use.” Although other sources describe an iterative process of improving the roll-out, the IDB 2012 report does not indicate implementation design adjustments towards more pedagogical support or more resources appropriate for educational use between 2009 and 2010.
The OLPC Peru program has been given a potentially huge opportunity through the present IDB study, which, given the rigor of its methodology, should be taken very seriously. For this, however, the community would need to agree that the current study is conceptualized as a monitoring effort, providing relevant feedback at a strategic point in time for the program, rather than as an “end of program” type of evaluation. On the side of project implementers, it will require some potentially hard strategic decisions about the need for, and focus of, future efforts and resources.
In general for the ICT4E practice area, I propose we adopt a more focused approach to establish the added value ICT provides in low-income countries and:
- Avoid generalizing results from studies done on specific models/intervention types (and their particular choices for equipment, content and implementation) to “ICT for Education” as a whole.
- Focus first of all on very explicit problems and measurable objectives for ICT interventions, and avoid general, overambitious objective statements.
- Get more systematic and rigorous in monitoring ICT for Education initiatives, to find out first, on a small scale, what works (or not) in specific contexts before scale-up.
- Increase expectations on comprehensive classroom-level ICT initiatives to contribute to measurable results – specifically in key learning areas – instead of applying different standards to ICT for Education than to other educational improvement efforts.