What Do OLPC Peru Results Mean for ICT in Education?
One Laptop Per Child (OLPC) has been a part of a larger ICT4E discussion, which has included ongoing debate over the effectiveness of the XO and its various deployments. Since its inception, OLPC has relied mainly on aspirations, visions, and projections to support investment from various partners across the globe. Pilot programs were conducted at various levels of deployment, programming, and stakeholder engagement. More recently, larger, longer-standing deployments have reached a point where assessments are coming to fruition.
Concurrently, as the OLPC offering developed and evolved, so too did a variety of education technology initiatives and device platforms. More specifically, the presence of similar programs and other form factors (tablets, mobile phones) increased the channels through which a variety of activities for education, instruction, communications, and business could be conducted.
These processes contribute to shifting expectations which, regardless of whether one assessment or another is supported or critiqued, highlight the importance of setting clear objectives that are cognizant of both the array of available tools and the surrounding systems that have an indirect, but no less significant, effect on final outcomes.
A recent Technology Salon in Washington, D.C. touched on various topics, including the current landscape of ICT developments around the world and the findings and characteristics of the IDB's 2011 randomized controlled experiment on Peru's XO deployment.
ICT Landscape
When the XO concept first hit the market in 2005, laptops had just passed desktop machines in overall retail sales of personal computers. The personal computing landscape at the time was characterized by the mobility of laptops and the power of the desktop computer. Netbooks would surface several years later, accentuating the value of mobility over power.
Seven years later, that landscape has changed with the introduction of tablet PCs, not to mention the prevalence of smartphones. Netbooks have displaced laptops (sort of), tablets are displacing both, and smartphones provide prevalent and complementary channels of information, resulting in a more diverse market than the one the XO first entered in 2005.
What are the implications of this mixed learning environment for ICT4E? The answer will no doubt vary. In higher disposable income environments, it is completely feasible for a primary school child to check out an iPad at school, own a smartphone, and return to a home equipped with a tablet, netbook, and/or laptop. In less developed areas, on the other hand, this is not necessarily the reality for individuals, NGOs, or state education authorities on the ground.
For the education sector, the result is an increased variety of tools with overlapping abilities to address learning goals and objectives. Whether the tablet proves to be an effective tool in ICT4E remains to be seen in projects like the Aakash tablet and OLPC’s more recent release of the XO 3.0.
Assessment in Context
For the XO deployment in Peru, the IDB’s randomized controlled experiment sampled five students per grade per school across 320 schools (two-thirds of which were in the treatment group receiving the intervention) at intervals of 3 and 15 months. A 2010 assessment brief was released based on the 3-month data, and last Thursday’s assessment brief covered the 15-month data. The findings showed little effect on national math and language test results, with slight advancement in cognitive abilities based on IDB methodologies. A natural question follows: Was 15 months a long enough period to observe a significant effect on child learning?
Part of the answer may rest in the level of technical and pedagogical support reported in both briefings. In the 2010 assessment, only 10.5% of teachers reported receiving technical support (with 7% receiving pedagogical support). Thursday’s discussion revealed that only one-third of the schools had received pedagogical support and only one-third had actually used the technical training and manuals provided to them. Further discussion revealed that training consisted of a total of 40 hours, while the percentage of teachers having received training dropped from 80% in 2010 to 71% by the 2011 assessment.
Considering the role teachers play in traditional systems of instruction and assessment, a lack of technical and pedagogical support could, at the very least, delay the integration of the XO as a significant tool in those systems. It can be argued that the use of XOs outside the classroom could produce other benefits to complement classroom learning. However, given that almost half the students were prohibited from taking the XOs home and that half of all teachers didn’t use the XO in the classroom, would a larger assessment time frame produce a significant difference in findings?
Outputs vs. Outcomes
Core to program logic is a program theory that identifies the problem a program is designed to address, complete with inputs, activities, and outputs. In assessing a program’s effectiveness, it is important to distinguish between outputs and outcomes, as well as to ensure alignment with measurement and evaluation criteria. The two largest XO deployments in absolute numbers and percent of population are Peru and Uruguay.
Thursday’s discussion pointed out Uruguay’s clear objective of social inclusion, which produced a near 100% primary school penetration rate through a national 1-to-1 program. The Uruguayan assessment focused on access, use, and experience, reflecting a focus on social inclusion as an outcome. In the case of Peru’s assessment, math, language, and cognitive test results showed outputs, but no clear connection to Peru’s 2007 stated objectives, which targeted pedagogical training and application. If objectives and outcomes are not clearly aligned with assessment criteria, can “effectiveness” be appropriately measured?
Objects vs. Processes
It is important to be clear about what is being measured before measurement begins. Is it the insertion of an object or introduction of a process? Thursday’s discussion touched on the lack of clarity over this question. Was it the sheer presence of laptops that would dramatically “empower children to learn?” Placing a laptop next to one’s head to demonstrate expectations of the XO, of course, is an exaggeration of the self-learning model.
An analogy that better demonstrates the situation would be a teacher being given a chalkboard and chalk at the beginning of the school year. There is an inherent assumption that the teacher knows how to write on the board and has been trained in the curriculum content to write on it. The point is that the intervention is much more complex than the introduction of a singular object, be it a chalkboard or an XO. In some sense, this is recognized by the stated objectives of the Peruvian program, which target both training and pedagogical application surrounding the XOs. Based on the numbers, however, that realization seems distant.
The statement “Computers will happen” brought up an interesting idea of shifting focus away from the question of “whether the XO’s presence had an effect” and towards evaluating the effect of the content and overall experience. Assessing the experience, rather than the hardware, would more specifically target the processes designed to support pedagogical use and curriculum development. The XO does not stand alone, but rather depends on an ecosystem of processes that affect child learning. More specifically, outcomes related to XO use would be more appropriately measured not only by comparing groups that received the XO with those that did not, but by looking at which surrounding processes have contributed to child learning.
Walter Bender spoke at a mEducation event where, despite the XO 3.0 being the theme of the event, he emphasized the capabilities of the Sugar platform and the applications surrounding it. Perhaps focusing on the evaluation of training, content, and support surrounding XO deployments can provide more insight into the measured effects of the processes that sit on top of the object?
One-to-What Computing
The name “One Laptop Per Child” explicitly commits the program to a “one-to-one” computing model as an education solution. The discussion over instructional format is not new and reflects the complex reality of program design. The right “solution” will take into account not only training and curriculum, but also the resource and capacity constraints of school programs and the communities surrounding them. Curriculum can be designed for individual or collaborative learning environments. In my three years of experience teaching computer studies in Western Samoa, I used different classroom formats, employing both a one-to-one model and a one-to-many model. Admittedly, applying the one-to-many model was a compromise between limited resources (20 computers) and overwhelming demand (300 students).
However, I observed just as much “learning” happening in both models, in conjunction with adjusted lesson and activity plans. The value of collaboration cannot be discounted, especially when students in a one-to-one model often still talk amongst themselves. The issue then becomes a matter of how much “face-time” the student has with the computer and how much value that provides for their overall learning process. Is one hour of use a day necessarily less valuable than 24 hours a day? The reality is that each student will have different circumstances that affect their ability to use the device.
This variability in circumstances highlights how classroom designs, in which the primary influencer (at least for the class period) is the teacher, differ from designs for home use. The debate between one-to-one and one-to-many computing formats is important and will no doubt be expanded by studies in the more measurable and controllable classroom environment. To be clear, I definitely value having a computer at home (as a child, I was fortunate enough to have a Tandy 1000EX), but I am unsure how the effect of such exposure can be accurately measured when the “home” environment can vary so greatly.
Shifting a Paradigm
In 2007, Uruguay’s Plan Ceibal and Peru’s XO program designs looked very similar, but Plan Ceibal evolved with input from the community that helped shape the activities, enabling Uruguay to achieve its objectives. Perhaps larger geographical, socio-economic, or political factors held Peru back from achieving its goals? Or perhaps, as one attendee asked, Uruguay’s success was a reflection of a social structure that existed prior to OLPC.
The reality is that the technological landscape is rapidly evolving, challenging both the “form” (netbook) and “fashion” (1-to-1 deployment) of OLPC deployments. Moving forward, ICT4E projects will need to focus on alignment between a growing array of tools, clear objectives, and assessment criteria in order to ensure measurable effectiveness and consider cost-effectiveness in the presence of alternative measures. A focus on the curriculum and pedagogical experience may provide a better understanding of process interventions rather than object insertions.
The cases of Uruguay and Peru could serve as first steps in appreciating that the success of the program does not lie solely in a single machine, but rather in engaging the stakeholders and conditions surrounding it.
In 2007 it was clear to Miguel Brechner and myself that our plans were very different because the realities we faced were completely different.
Don’t jump to the conclusion that we (Peru) failed. Just consider the following facts:
1. We succeeded in giving access to technology to 100% of the children and teachers (220,000) in one-teacher schools, who otherwise would not have had any opportunity at all to get in touch with ICT. Most of them had the option to take the machines home with them.
2. We succeeded in connecting over 5,000 schools and almost 3 million children to the Internet.
3. We succeeded in training over 25,000 teachers (a 40-hour course, which we know was enough for well-educated teachers but not for all).
4. We succeeded in designing an offline application giving 100% of Peruvian school children access to thousands of educational webpages.
Etc., etc. So, we may have failed on some issues, but I don’t agree with the statement “the program failed.”
I agree with you on several points:
1) The OLPC program in Peru, or any other place, has to be evaluated according to its initial goal. "math, language, and cognitive test results showed outputs, but no clear connection to Peru’s 2007 stated objectives which targeted pedagogical training and application. If objectives and outcomes are not clearly aligned with assessment criteria, can “effectiveness” be appropriately measured?"
2) Why do we place the effect in the technology alone, and not in the people? Of course, some technologies are more conducive to powerful learning than others, but it is really what people do with technology that matters.
And I do disagree with you on your final point about Uruguay and Peru. I don't know how much information and detail you have about the programs, but they were extremely different from the beginning, and continue to be. I am sure there are many lessons to be learned from those two programs, as well as many others, among the great ecology of OLPC and technology programs around the world.