Building the Knowledge Base in Education and Technology
During the past few weeks, a remarkable discussion has arisen on the results and implications of the OLPC Peru trial. As members of the team that produced this study, we are honored and grateful for all the comments and suggestions. We would also like to emphasize that this study is the result of a close collaboration among many individuals at the Ministry of Education in Peru, the think tank GRADE and the Inter-American Development Bank. In particular, the project would not have occurred without the support and commitment of Oscar Becerra, the director of the program at the time of the study.
Having responded to the points raised in previous posts in the corresponding comment sections, we will not address them here. Instead, we wish to address the implications of this discussion going forward. One of the main points of our paper, and our guiding principle for future work, is that a pedagogical plan is needed for the incorporation of computers into educational activities so that computers can be a useful tool.
In this post we wish to offer the personal view of Julian Cristia (IDB), Santiago Cueto (GRADE) and Eugenio Severín (IDB) regarding why solid evidence is needed in the area of education and technology—and how such knowledge can be produced. Specifically, we will argue the following:
- We have little solid evidence about what works in education and technology.
- This lack of knowledge is very costly.
- We can and should produce this knowledge, but
- We need to convince decision-makers to support efforts in this area.
We now lay out our arguments.
Little solid evidence about what works
We know little about what works in education and technology. For example, even though some research so far has focused on one-to-one programs, the main questions regarding the effects of such programs remain largely unanswered. (Although we hope our study has contributed to this literature, its results cannot be directly extrapolated to other contexts or other implementation models). This paucity of evidence is surprising, given that many countries around the world are embracing programs of this type, as Christoph Derndorfer discussed in the previous post. It is also particularly surprising when compared with what is known about other social programs. For example, conditional cash transfer programs represent another intervention that has been adopted in many developing countries. However, there are hundreds of studies analyzing those programs, compared with the handful of existing studies on one-to-one programs.
This lack of knowledge explains why technology and education is an area of heated debate. There are no other corresponding popular blogs called “agriculture and technology debate” or “health and technology debate,” even though technological advances have significantly aided striking improvements in both areas. Then again, because so much has been learned in these two fields, there may not remain much debate regarding their central issues. We do recognize that education involves more complex social processes than those involved in technology’s relationships with agriculture and health. Still, those processes and related outcomes can be analyzed, and significant knowledge can be generated regarding the effectiveness of particular interventions in specific environments. (Robert Slavin, from Johns Hopkins University, makes a compelling case for adopting the evidence-based approach in education as it has already been adopted in other fields such as agriculture and health).
The use of technology could be expected to translate into improvements in the educational process and thus in learning outcomes. However, it remains a challenge to know what technologies are most appropriate for a given setting, what complementary interventions should be added, and what educational areas may benefit the most from using technology.
This lack of knowledge is very costly
While researchers may relish the opportunity to find an important topic in which large and answerable questions remain open, policymakers find such an environment distressing if not outright painful. Being forced to make policy decisions in a vacuum of information is problematic because the risk of not choosing the right option is high, and this is especially true for public programs that involve spending significant resources. This is definitely the case for one-to-one laptop programs, which require important investments, and governments with limited budgets face tough decisions on whether to launch them or not. The risks associated with such decisions are considerable. If evidence later arises showing that those programs were effective, governments that did not implement them can be criticized for failing to seize an opportunity. But if the evidence suggests lower returns for laptop programs than other policy options, then governments that implemented those programs could be attacked for misallocating resources.
We can and should produce this knowledge
There are many researchers around the world who are willing and able to help determine the most effective education and technology programs for various contexts, and the basic methodology for this research has already been developed. Moreover, the most powerful method for producing solid quantitative evidence (the randomized controlled trial) is straightforward to carry out, and the resulting data are quite simple to analyze. A substantial part of the research effort needed involves designing promising technology-in-education interventions and clearly defining their components (as Carmen Strigel detailed in a previous post). Only after these models are defined and tested on a small scale should large (and expensive) randomized controlled trials be implemented.
The question, then, is whether we should set aside enough resources to produce this knowledge. As mentioned before, making policy decisions under uncertainty can be very costly, but to underscore this point we will present some back-of-the-envelope calculations.
First, suppose that in the roughly 36 countries where OLPC has been implemented, an accompanying randomized controlled trial (RCT) had been undertaken. Assuming that a high-quality RCT costing $1 million (roughly what was spent on the OLPC Peru trial) had been implemented in each of these countries, the total cost of research would have been $36 million. By comparison, the estimated cost of OLPC programs around the world could be on the order of 800 million dollars (2 million laptops x $200 per laptop x 2 for the ratio of total to laptop costs). This means that, if less than 5% of OLPC expenditures had been devoted to RCTs, we would now have a wealth of data to further our understanding of expected impacts across different contexts and implementation models. This percentage further shrinks to about 1% when compared with an estimate of how much countries in Latin America alone have spent or will spend under current plans on one-to-one programs (about $2.8 billion = 7 million laptops x $200 x 2).
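The arithmetic above can be reproduced with a short script; all figures come directly from the text, and the script simply checks the stated percentages.

```python
# Back-of-the-envelope comparison of RCT research costs vs. one-to-one
# program costs. All inputs are the figures stated in the post.

rct_cost = 36 * 1_000_000            # one $1M RCT in each of ~36 OLPC countries

# Worldwide OLPC cost: 2M laptops x $200 x 2 (ratio of total to laptop costs)
olpc_world = 2_000_000 * 200 * 2     # $800 million

# Latin America one-to-one spending: 7M laptops x $200 x 2
olpc_latam = 7_000_000 * 200 * 2     # $2.8 billion

print(f"Research as share of worldwide OLPC cost: {rct_cost / olpc_world:.1%}")
print(f"Research as share of Latin America spending: {rct_cost / olpc_latam:.1%}")
```

Running this gives roughly 4.5% and 1.3%, matching the “less than 5%” and “about 1%” figures in the text.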
This argument can be made even stronger if we consider a plausible scenario in which these programs are indeed effective. To make a rough estimate, suppose that implementing one-to-one programs could produce a gain in human capital that generates a net present value benefit of about $900 per student more than other policy interventions. (Assumptions: increase in permanent salary of one percent, average monthly wage $440, individual works between ages 20 and 55, discount rate 3%). Now, if we consider implementing this intervention for 50% of primary school children in Latin America and the Caribbean, the intervention will produce an aggregate increase in present value earnings of about $30 billion, an amount that makes the cost calculated above pale in comparison.
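The $900-per-student figure follows from a standard discounted-earnings calculation. The sketch below uses the stated assumptions (a 1% permanent salary increase, a $440 monthly wage, work between ages 20 and 55, a 3% discount rate); the age at which the NPV is evaluated (age 12, a primary school pupil) and the cohort size (~33 million, about half of primary enrollment in the region) are our own assumptions, chosen only to illustrate how the rounded figures arise.

```python
# Sketch of the net-present-value calculation behind the ~$900-per-student
# figure. Wage, salary gain, working ages, and discount rate come from the
# text; the evaluation age and cohort size are illustrative assumptions.

monthly_wage = 440
annual_gain = 0.01 * monthly_wage * 12   # 1% of annual salary = $52.80/year
r = 0.03                                 # discount rate
eval_age = 12                            # assumed: NPV evaluated at primary school age

# Discount each year's gain (received at ages 21 through 55) back to eval_age.
npv = sum(annual_gain / (1 + r) ** (age - eval_age) for age in range(21, 56))
print(f"NPV per student: ${npv:,.0f}")   # on the order of $900

cohort = 33_000_000                      # assumed: ~50% of primary pupils in LAC
print(f"Aggregate benefit: ${npv * cohort / 1e9:,.1f} billion")
```

With these inputs the per-student NPV comes out near $900 and the aggregate near $30 billion; different (equally defensible) timing assumptions shift the numbers somewhat, which is why the text treats them as rough estimates.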
Finally, we can put these R&D expenses in perspective by comparing them with other fields. (These numbers are indicative because it is difficult to obtain estimates of specific investments related to technology). In the health sector, about $95 billion is spent annually on R&D worldwide by governments and pharmaceutical companies. Given that these companies generate large revenues in developing countries, it is reasonable to say that these countries are already contributing substantial resources to R&D indirectly (on top of direct public funding to research centers). In agriculture, the amounts spent on R&D are astonishing; Argentina alone devotes about $120 million annually to advancing knowledge in this area.
But we need to convince decision-makers
Even if the preceding arguments make sense, nothing will change until decision-makers, who can provide funding for R&D, are convinced that this is money well spent. To change the status quo, a significant communication effort is needed to make clear that, while a great opportunity lies ahead, we will need to invest resources to reap its potential benefits.
Global organizations and NGOs are in the process of realizing that investing in generating greater knowledge in education and technology could have high returns. Clearly showing which types of programs work best in specific environments could induce governments to scale up those programs. This kind of research can be conducted by devoting resources to the design of promising interventions and their later implementation (and evaluation) on a small scale.
Nonetheless, because global organizations and NGOs’ combined budgets equal only a small percentage of public outlays of developing countries, governments in these countries will need to play a prominent role. Hence, a special effort should be undertaken to convey to governments around the world that R&D investment in the area of technology in education has large returns. Clearly, evaluating large public programs in a country involves political risks, especially if the results are not carefully analyzed or if they raise questions regarding the government’s decisions. But implementing small and promising pilot programs together with rigorous evaluations could provide significant benefits (and even political ones) if positive results create pressure for subsequent governments to keep or even scale up the evaluated programs.
Following examples in other fields such as health and agriculture, we believe that independent, well-funded, long-term research and development centers will need to be established in order to generate significant improvements in knowledge. Careful consideration should be given to the governance structure of such centers, which will need to focus on generating relevant knowledge that could increase learning through technology as well as participate in the system-wide adoption of this knowledge. A significant discussion will need to take place in order to determine whether and how investments of this type should be made.
Producing useful research will require the participation of a variety of stakeholders and professionals in order to maintain academic rigor as well as derive policy implications directly from the studies that are produced. All of these actions will be needed to improve our efforts in helping children learn.
Thanks for this great article!
I totally agree that as a field and a community we should move towards an evidence-based approach rather than relying heavily on faith, as is commonly done these days. This would also go a long way in reducing the number of long and painful faith-based discussions many of us are having on a regular basis:-)
Establishing independent, well-funded long-term research and development centers would be a great step in that direction. Now the question is: how do we make it happen?
We propose establishing R&D centers in Education and Technology and @random_musings asks a great question: how do we make it happen? I think this question could be a nice topic for discussion.
Some thoughts on this issue. I imagine there could be two lines of action. First, it will be useful to develop and refine this idea further. What would be the goals of these centers? How will they be structured? What about governance? How could they identify and leverage effective educational entrepreneurs in the field? How will the research agenda be set? How will it be designed so that there is a clear focus on identifying relevant solutions to be implemented in practice, and not just doing research for the sake of publication in journals? Second, there should be an effort to engage policy-makers in this topic. Part of this will include communicating the benefits of establishing such centers, but an honest discussion should also take place to really assess the merits of this proposal. It will be interesting to pose this question in high-level policy dialogues to hear reactions and arguments.