Do We Really Need to Assess ICT4E Initiatives? And If So, How?
Back when One Laptop Per Child started, they made an interesting point around evaluations of computer usage in schools. Their core belief was that all evaluations were flawed because we don’t have the right tools to assess the impact of ICT in education, and therefore talking about testing the efficacy of 1:1 computing was wasted effort.
I’ve heard this refrain repeated often since then, and not just by those promoting technology in schools. It’s an equally common view among those who feel geek lust is clouding our judgment and that we should focus on teachers, not technology. It’s also promoted by those who point out that changes to educational methodologies have often happened by force of will, not empirical results.
Now, Nicholas Negroponte is putting forth the idea that one computer per child is like electricity – such an accepted benefit for society that we’ve moved on from discussing its impact to just looking for the right models to fund it.
While we may have differing opinions on OLPC or its benefits, the basic questioning of ICT4E evaluations is compelling. Starting with the simple question “Do we need assessments?” we can branch into related questions that examine the basic assumptions we hold dear, like:
- Are ICT4E assessments effective in measuring outcomes?
- Do we even have the tools to tell if they are effective?
- What tools are those?
- Are we really using these assessment tools correctly?
- And regardless of the outcomes, should we really wait for long-term results, or should we implement ICT4E deployments now, as the case is compelling enough already?
For November, the Educational Technology Debate will focus on assessments of ICT initiatives in education – how we can both validate them and use them correctly to improve ICT4E overall. For discussants we’ll be joined by the following experts:
Mary Hooker
Mary Hooker is an education specialist with over 30 years’ experience working in the educational sector in Ireland and Africa. Since 2007 Mary has been working with the Global eSchools and Communities Initiative. Mary is currently engaged in studies for a Doctorate in Education at Queen’s University Belfast, Northern Ireland.
Rob van Son
Rob van Son was a subject in an early Computer Supported Education experiment in the 1980s, and has since worked on everything from small 8088 PCs and the first Mac to modern multi-core file and web servers. Rob holds a PhD in linguistics and focuses on integrating information in spoken communication at the Universiteit van Amsterdam.
Please join us for what we all expect to be a lively and informative conversation exploring assessment validity and tools for ICT4E. Your input can start right now in the comments below, and Mary and Rob will post their opening remarks beginning Monday, November 9.
Assessment has been very much on my mind, lately. And I've been thinking that a high-level "More, Better, Faster, Cheaper" framework could potentially be a very simple yet effective means of evaluating ICT4D across all sectors, including education (and I wrote a little bit about it recently @ http://linearityofexpectation.blogspot.com/2009/1… I look forward to what this month's discussants have to share!
Assessment is important as it provides a snapshot of the current state of affairs and guidance for future action. Just because we don’t have a perfect assessment tool doesn’t mean we shouldn’t employ them. With experience, they will be refined to meet the needs of all stakeholders, including learners, instructors/facilitators, parents, government officials, employers, and commercial interests.
The case is compelling for us to use and modify existing tools to assess the effective implementation of ICT4D. These assessment tools may need to be contextualized to ensure that they take into account the needs and circumstances of learners, particularly those in rural areas or in less-developed countries whose life experiences differ from the urban, more developed settings in which many of the current assessment tools were created. Measuring long-term outcomes is a challenge for all educators, not just those involved in international development work.
I look forward to following this discussion.
As Negroponte makes clear in his talk, olpc is not about ICT, it is about giving laptops to children who will use them in the classroom, but also the rest of their lives. The laptops are not education computers, they are learning computers, family computers, village computers.
The key mistake people make is to think of olpc as an educational initiative. It is really a development project. As Negroponte said at one point, "It still works, it still makes economic sense, it still makes development sense."
Evaluation? YES! Tests, NO!
Many people believe that Quality Control is the same as Quality Assurance. The first merely measures parameters and conditions at a certain point in a process — a single test — while the second sets up those measurements as a continuous, iterative system of ongoing examination to ensure that a pre-defined level of quality is achieved. In other words, the only way to implement Total Quality Assurance (TQA) is through continuous Quality Control: executing Quality Control (testing) alone does not guarantee Quality Assurance. One test is not enough.
Most of the links related to quality in education talk about achieving “quality” based on the scores of a standardized test (FCAT, SAT, PISA, etc.). I hope the authorities know that measuring quality does not, by itself, improve it. Students who have already failed the test will not improve unless a teacher takes responsibility for reviewing the results and helping them in the areas where they failed.
Some years ago, W. Edwards Deming (one of the fathers of TQA theory) proposed that in any continuous process or industry, including education, the manager’s role must change in order to achieve better quality products and services: “The people work IN the system; the job of the manager is to work ON the system, to improve it continuously with their help.” Paraphrased for education: “The students learn IN the system; the job of the teacher is to work ON the system, to improve it continuously with their help.” In other words, teachers need to become managers of the learning/teaching process, analyzing data and supervising students to achieve the proposed quality levels.
Of course, the word “education” is too broad, so I would like to circumscribe Total Quality Assurance to the cognitive/constructive steps of the learning/teaching process. I believe that if teachers knew the weaknesses of each student, they could easily help them overcome those weaknesses and complete these steps faster and better. Completing these steps faster frees time for additional steps in the educational process. The main problem is that teachers do not know individual performance until the exams; they lack the data to act as managers during the process.
Deming's P-D-C-A Cycle (Plan-Do-Check-Act) is a well-known standard method in industry for implementing quality assurance programs. Deming proposed it as a cycle: a continuous, iterative process that never ends. The cycle induces corrections during the “Act” phase, so the process or product can be improved.
ICT tools would allow teachers to collect the necessary data from homework books and assignments, which are completed continuously throughout the school year — every day, every week, every month, not just in a single test. If these tools are well designed, teachers will be able to become managers of the process. Simultaneously, administrators and department heads could compare groups, classes, and schools simply by reviewing students’ performance at any time, well before the standardized tests take place.
The essence of this concept is that the accumulated data will allow teachers (and administrators) to act before students fail a test, guaranteeing that they will achieve the proposed “quality” of learning. Total Quality Management must work on metrics to evaluate alternatives for reducing cost and maximizing resources.
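To make the idea concrete, here is a minimal sketch of the kind of analysis such an ICT tool might perform — flagging students who are struggling in a topic from continuously collected assignment scores, before any standardized test. All names, data, and the 60% threshold are illustrative assumptions, not part of any real system.

```python
# Hypothetical sketch: spot weak topics per student from ongoing
# assignment data, so a teacher can intervene during the "Act" phase
# of the PDCA cycle rather than after a final exam.
from collections import defaultdict

def flag_students(scores, threshold=0.6):
    """scores: list of (student, topic, fraction_correct) tuples
    gathered from homework and assignments over the term.
    Returns {student: [topics below threshold]}."""
    by_topic = defaultdict(list)
    for student, topic, frac in scores:
        by_topic[(student, topic)].append(frac)

    weak = defaultdict(list)
    for (student, topic), fracs in by_topic.items():
        if sum(fracs) / len(fracs) < threshold:  # average over all assignments
            weak[student].append(topic)
    return dict(weak)

# Illustrative records from two students across two topics
records = [
    ("Ana", "fractions", 0.4), ("Ana", "fractions", 0.5),
    ("Ana", "geometry", 0.9),
    ("Ben", "fractions", 0.8), ("Ben", "geometry", 0.7),
]
print(flag_students(records))  # {'Ana': ['fractions']}
```

Run weekly over accumulated data, a report like this gives teachers (and administrators comparing classes) the continuous Quality Control the comment argues for, instead of a single end-of-term test.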