{"id":2092,"date":"2011-10-06T08:45:09","date_gmt":"2011-10-06T12:45:09","guid":{"rendered":"https:\/\/edutechdebate.org\/?p=2092"},"modified":"2012-09-27T10:39:02","modified_gmt":"2012-09-27T14:39:02","slug":"ict-and-the-early-grade-reading-assessment-from-testing-to-teaching","status":"publish","type":"post","link":"https:\/\/edutechdebate.org\/reading-skills-in-primary-schools\/ict-and-the-early-grade-reading-assessment-from-testing-to-teaching\/","title":{"rendered":"ICT and the Early Grade Reading Assessment: From Testing to Teaching"},"content":{"rendered":"
The science of early literacy acquisition and proven techniques for teaching reading are backed by years of experimental research as well as practical experience implementing programs to improve reading.
Experts agree that measuring reading progress early offers several benefits: it informs remediation, provides a snapshot of children's reading abilities at a point in time or shows their progress over time, and tells stakeholders and policy makers which programs or methods work.
Frequent diagnostic testing at national or classroom levels can serve to establish benchmarks, and monitoring progress against these benchmarks can be a key factor in motivating schools, teachers, students, and families (Davidson, Korda, & Collins, 2011).
The Education for All Fast Track Initiative recently set two indicators related to reading skills. These indicators are considered an effective measure of a school system's overall health as well as a specific diagnosis of reading performance that can inform policy and the implementation of curriculum and teacher training, among other things. According to Gove and Wetterberg (2011),

"The Early Grade Reading Assessment (EGRA) is one tool used to measure students' progress toward learning to read. It is a test that is administered orally, one student at a time. In about 15 minutes, it examines a student's ability to perform fundamental prereading and reading skills" (p. 2).

Over the past five years, we at RTI International, various donors, and experts in the field of early reading have worked to "develop, pilot, and implement EGRA in more than 50 countries and 70 languages" (p. 2). Assessments like EGRA help teachers focus on results, by describing what children know or do not know, and where instruction must focus in order to change that. For example, in Egypt, the first Arabic EGRA survey showed very clearly that children who knew letter sounds performed better on reading a short passage than children who only knew letter names; yet 50% of the children tested could not identify a single letter sound. These findings signaled that a fundamental shift in instructional methods was required, and after schools adopted a phonics-based approach using letter sounds, performance one year later had increased nearly 200% over baseline (Cvelich, 2011).

That said, to measure for results, teachers and their supervisors must find the tools accessible and easy to use to inform their own instruction. It also helps if the results underpin communication with parents and communities, as well as national politicians (Crouch, 2011). Too often, results from national standardized tests remain at the national level, with teachers rarely getting feedback on performance, much less feedback that is more specific than classroom averages. Furthermore, it can sometimes be months, if not years, before the results of large national assessments are made available, at which time it is too late to change instructional practices, at least for that set of children.

How can ICT play a role?

Systematic use of mobile devices to assess early literacy and numeracy, especially in developing countries, remains limited to date, for a variety of reasons.

As we state elsewhere (Pouezevara & Strigel, 2011), there are several ways in which information and communication technologies (ICT) may be applied to the assessment process to make implementation and use of the results more accessible. Among these, the greatest added value is in using electronic devices for data collection and rapid analysis in place of paper-based assessments.

What solutions are available?

In theory, there are many potential ways to transform paper assessments into an electronic equivalent, but a custom solution is required because of differences between oral reading assessments like EGRA and other standard surveys. For example, data have to be entered at the child's pace on the subtasks, not that of the assessor. Survey data collection applications on the market for phones, PDAs, or portable computers are therefore typically not appropriate.
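To make the pacing requirement concrete, here is a minimal sketch, in Python, of the kind of capture loop an oral subtask needs: the assessor marks each item correct or incorrect as the child attempts it, every mark is timestamped, and a timer bounds the subtask. All names here (SubtaskResult, run_timed_subtask, the 60-second default) are illustrative assumptions for this sketch, not Tangerine's actual data model or API.

```python
import time
from dataclasses import dataclass, field

@dataclass
class SubtaskResult:
    """Per-item marks for one timed oral subtask (hypothetical structure)."""
    subtask: str
    marks: list = field(default_factory=list)  # (item_index, correct, elapsed_s)
    time_limit_s: int = 60

    def fluency_per_minute(self) -> float:
        # Correct items per minute, prorated if the child finished early.
        correct = sum(1 for _, ok, _ in self.marks if ok)
        elapsed = self.marks[-1][2] if self.marks else self.time_limit_s
        return 60.0 * correct / max(elapsed, 1e-9)

def run_timed_subtask(items, subtask="letter_sounds", time_limit_s=60):
    """Record marks at the child's pace: the assessor taps correct or
    incorrect as each item is attempted; the subtask ends when the
    timer expires or the child stops."""
    result = SubtaskResult(subtask, time_limit_s=time_limit_s)
    start = time.monotonic()
    for i, item in enumerate(items):
        if time.monotonic() - start >= time_limit_s:
            break  # autostop: score only what was attempted in time
        mark = input(f"{item}  [c]orrect / [i]ncorrect / [q]uit: ").strip().lower()
        if mark == "q":
            break
        result.marks.append((i, mark == "c", time.monotonic() - start))
    return result
```

A generic survey form cannot express this flow: the instrument, not the enumerator, decides when the subtask ends and how each response is timestamped.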
After investigating a wide range of potential hardware and software platforms, we developed Tangerine™, a digital assessment interface for touch-screen tablet computers running the Android operating system. It can be used for the standard EGRA approach, or customized for other types of surveys, such as early math diagnostics or school information surveys.

Other organizations are also exploring a variety of solutions. Prodigy Systems, an organization that has partnered with RTI in Yemen, successfully developed iProSurveyor for use with Arabic assessments on the iPad. Its first large-scale implementation in Yemen in early 2011 confirmed many of the benefits of the digital approach.

Cost-Benefit Analysis

At RTI we recently conducted a preliminary cost-benefit analysis using approximate costs from recent EGRA implementations in four different African countries. The analysis aimed to identify the point of cost recovery at which the digital approach would actually yield cost savings. We modeled not one but three data collection rounds for each country, because it is common to repeat assessments, for example for a program baseline, midterm, and post-intervention evaluation, or for annual monitoring of student outcomes.

For the first digital data collection (e.g., the baseline), we assumed hardware costs of USD 300 per enumerator, plus a 10% contingency for spares and accessories, such as a wireless access point for field-based data backup. For the cost of a second digital data collection, we assumed re-use of the tablets from the first data collection, but factored in a 15% contingency in case replacements are needed.

To calculate the cost of a second paper-based data collection, we multiplied the paper-related costs by two, as the same costs for printing, data entry, and data cleaning would be incurred again. We followed the same process for adding a third data collection to the calculation (assuming baseline, midterm, and post-intervention assessments).
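Those assumptions reduce to a simple cumulative cost model. The Python sketch below encodes them directly: USD 300 per enumerator for tablets plus a 10% contingency in the first round, a 15% replacement contingency in later rounds, and paper-related costs (printing, data entry, cleaning) that recur every round. The enumerator count and the per-round paper figure are hypothetical placeholders, not numbers from our analysis.

```python
def digital_cost(rounds: int, enumerators: int, tablet_usd: float = 300.0) -> float:
    """Cumulative digital cost: tablets bought once, contingencies per round."""
    hardware = enumerators * tablet_usd
    total = hardware * 1.10          # round 1: tablets + 10% spares/accessories
    for _ in range(rounds - 1):
        total += hardware * 0.15     # later rounds: re-used tablets + 15% replacements
    return total

def paper_cost(rounds: int, per_round_usd: float) -> float:
    """Printing, manual data entry, and data cleaning recur every round."""
    return rounds * per_round_usd

if __name__ == "__main__":
    for r in (1, 2, 3):
        d = digital_cost(rounds=r, enumerators=25)          # hypothetical team size
        p = paper_cost(rounds=r, per_round_usd=6000.0)      # hypothetical figure
        print(f"round {r}: digital ~ ${d:,.0f}, paper ~ ${p:,.0f}")
```

Comparing the two cumulative totals across rounds locates the break-even point for a given setting.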
As shown in Exhibit 1, for most small-sample data collections or one-time assessments, the cost of the hardware may not be offset by the eliminated paper-related costs. The return on investment in repeated implementations, however, is clear in terms of cumulative costs.

Exhibit 1: Cost of EGRA implementation, paper vs. electronic, for three administrations

In addition to making large national assessments more efficient, the same devices can be adapted for use as classroom-based continuous assessment tools, or as data entry interfaces for situations that still require paper-based tests. With such devices in their hands, teachers or school supervisors can do regular mastery checks more frequently, and capture the results at student and classroom levels.

The resulting data set is a rich one, and if it is supported by built-in computer-based analytics, it can be analyzed in multiple ways to indicate not only whether the methods in place are improving reading ability, but also which areas of the curriculum need more attention and which children or groups of children are falling behind. For example, detailed item analysis at the classroom or individual level might show a recurring problem with vowel sounds or decoding, which in turn provides clear instructional recommendations to focus on.
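As a sketch of what such built-in analytics might do, the Python below aggregates item-level results for a classroom by skill category and flags categories whose error rate crosses a threshold. The category map, threshold, and function names are all hypothetical, not part of any shipped tool.

```python
from collections import defaultdict

def error_rates_by_category(responses, item_category):
    """responses: iterable of (item_id, correct) pairs pooled across students.
    item_category: maps each item_id to a skill category."""
    attempts = defaultdict(int)
    errors = defaultdict(int)
    for item_id, correct in responses:
        cat = item_category[item_id]
        attempts[cat] += 1
        if not correct:
            errors[cat] += 1
    return {cat: errors[cat] / attempts[cat] for cat in attempts}

def flag_weak_areas(rates, threshold=0.4):
    """Categories with error rates at or above the threshold, worst first."""
    return sorted((cat for cat, r in rates.items() if r >= threshold),
                  key=lambda c: -rates[c])

# Example: a recurring problem with vowel sounds would surface like this.
rates = error_rates_by_category(
    [("a", False), ("a", False), ("b", True), ("sh", True), ("e", False)],
    {"a": "vowel_sounds", "e": "vowel_sounds",
     "b": "consonant_sounds", "sh": "decoding"},
)
print(flag_weak_areas(rates))   # ['vowel_sounds']
```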
Limitations and pitfalls

However, electronic administration is not necessarily a cure-all:

"Obviously, using electronic data collection at either national or classroom levels does not solve all the limitations of print-based testing; indeed, doing so might introduce new challenges. For example, although a digital solution would eliminate the risk of environmental damage to paper forms during difficult transport situations, it might pose a great risk that all assessment data could be lost at once through loss, damage, or theft of a single device, if proper backup procedures were not in place. Likewise, handling of the new device might prove to be more challenging than handling the timer and all associated materials. […] Thus, strong electronic quality control and supportive supervision during data collection would be crucial." (Pouezevara & Strigel, 2011, p. 188)

Furthermore, the EGRA approach is intended to be a simple solution that can be adopted by countries with minimal technical assistance. An electronic solution should be flexible enough that users do not depend on software programmers or hardware technicians to change test items and configuration as needed.

In terms of costs, the initial investment in specialized hardware may clearly be prohibitive in some situations, but our preliminary cost-benefit analysis indicated that over time the investment will pay off if used for multiple large-scale implementations. Additionally, implementers can leverage the initial investment by choosing tools that can be used for other purposes when not in use for assessment, for example by loading tablet computers with other instructional materials, training resources, or literacy materials.

We can also foresee assessment software being linked not only to automatically generated analysis of results, but also to suggested instructional resources tailored to those results and a record of day-to-day time on task. It is also possible, using the same technologies that power Tangerine™, to adapt the assessment methodology to more common and less expensive handheld devices, such as mobile phones. These smaller devices might be particularly useful for the most rapid types of literacy assessments, such as Pratham's yearly literacy and numeracy surveys, which involve fewer subtasks than EGRA and fewer items per test.

Another potential pitfall of making national or continuous assessments more readily accessible is that they could encourage excessive assessment and a focus on "teaching to the test" at the expense of other higher-order or student-centered activities. Too much focus on averages or aggregated results can draw attention away from the achievement of specific subgroups. Additionally, care must be taken that classroom-level results are not misused by aggregating small samples and reporting them up to the national level, or by attempting to generalize from them.

This is a rapidly evolving field, with new technologies arriving on the market almost daily and prices falling significantly, so we expect it will become increasingly feasible to implement electronic methods for literacy assessments in developing countries. Meanwhile, we are piloting various solutions and collaborating with other institutions that have similar goals. Further interest and ideas from the international development community are welcome.

References

Crouch, L. (2011). Motivating early grade instruction and learning: Institutional issues. Ch. 7 in A. Gove & A. Wetterberg, The Early Grade Reading Assessment: Applications and interventions to improve basic literacy (pp. 227–250). Research Triangle Park, NC: RTI Press. Available from http://www.rti.org/pubs/bk-0007-1109-wetterberg.pdf

Cvelich, P. (2011, September/October). Egypt shakes up the classroom. Frontlines. Washington, DC: United States Agency for International Development (USAID). Available from http://www.usaid.gov/press/frontlines/fl_sep11/FL_sep11_EDU_EGYPT.html

Davidson, M., Korda, M., & White Collins, O. (2011). Teachers' use of EGRA for continuous assessment: The case of EGRA Plus: Liberia. Ch. 4 in A. Gove & A. Wetterberg, The Early Grade Reading Assessment: Applications and interventions to improve basic literacy (pp. 113–138). Research Triangle Park, NC: RTI Press. Available from http://www.rti.org/pubs/bk-0007-1109-wetterberg.pdf

Gove, A., & Wetterberg, A. (2011). The Early Grade Reading Assessment: An introduction. Ch. 1 in A. Gove & A. Wetterberg, The Early Grade Reading Assessment: Applications and interventions to improve basic literacy (pp. 1–38). Research Triangle Park, NC: RTI Press. Available from http://www.rti.org/pubs/bk-0007-1109-wetterberg.pdf

Pouezevara, S., & Strigel, C. (2011). Using information and communication technologies to support EGRA. Ch. 6 in A. Gove & A. Wetterberg, The Early Grade Reading Assessment: Applications and interventions to improve basic literacy (pp. 183–226). Research Triangle Park, NC: RTI Press. Available from http://www.rti.org/pubs/bk-0007-1109-wetterberg.pdf