Evaluation Models
In reading about the five evaluation models in our text, and then following up with research online, I found that most models developed since Kirkpatrick's refer back to his four-level framework. One example is Brinkerhoff's Success Case Method, which the text compares to Kirkpatrick's model. However, the two additional models I will summarize do not draw any such comparisons. They are the CIAO! Framework by Eileen Scanlon et al. of The Open University in the United Kingdom, and the Data-Driven Decision Making framework by Ellen B. Mandinach and associates.
The CIAO! Framework evolved over a 25-year period from the collaboration of five women who developed a comprehensive set of working principles and practices for evaluating learning technology. The abstract of their 2000 article details the model they developed in 1996 (see table below) and concludes that the framework's aim, to encourage the use of a variety of methods rather than a single approach to evaluation, has served them well. The columns represent three dimensions of a learning program that must be evaluated. Context covers the aims of the technology and where and how it is used in the course. Interactions focuses on the learning process and the ways students interact with each other and with the technology. Outcomes refers to changes in students resulting from use of the technology. The Rationale row gives the basis for evaluating each dimension, and the remaining two rows identify the type of data to be collected and the methods to be employed.
I like this framework because it is a general model that can be customized for individual needs, as the authors suggest. I would use it to evaluate student projects that use technology.
CIAO! Framework

|  | Context | Interactions | Outcomes |
| --- | --- | --- | --- |
| Rationale | In order to evaluate technology, we need to know about its aims and the context of its use. | Observing students and gathering process data helps us to understand whether or not some elements work and why and/or how they work. | Attributing learning outcomes to technology when it is one part of a multifaceted course is very difficult. It is important to try to assess both cognitive and affective learning outcomes such as changes in perceptions and attitudes. |
| Data | Designers' and course teams' aims; policy documents and meeting records | Records of student interactions; student diaries; on-line logs | Measures of learning; changes in students' attitudes and perceptions |
| Methods | Interview technology designers and course team members; analyze policy documents | Observation; diaries; video/audio and computer recording | Interviews; questionnaires; tests |

Adapted from: Scanlon, E., Jones, A., Barnard, J., Thompson, J., & Calder, J. (2000). Evaluating information and communication technologies for learning. Educational Technology & Society, 3(4), 101-107.
A Theoretical Framework for Data-Driven Decision Making, presented at the 2006 annual meeting of the American Educational Research Association, assumes that informed decisions can only be made from accurate data, and the model depicts decisions made within local school districts. This model also evolved over time from the collaboration of the authors and was informed by the work of their colleagues, including R. L. Ackoff's earlier work. The 2006 paper states, "According to Ackoff (1989), data, information, and knowledge form a continuum in which data are transformed to information, and ultimately to knowledge that can be applied to make decisions." The district, building, and classroom each use different data in different ways to make decisions, and technology tools facilitate the stakeholders' decision making at each level. For example, a classroom teacher (stakeholder) might give students an assignment that highlights a particular learning problem; the teacher collects, organizes, and analyzes the results from the lesson. A principal may examine results across classes for a particular grade level, and a district administrator may analyze performance trends for various student groups, possibly to predict whether the district will reach adequate yearly progress (AYP) for state accountability. It is vital to synthesize the information into concise, targeted summaries of usable knowledge and prioritize it--the final stage of the continuum.
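To make the three levels concrete, here is a minimal Python sketch of how the same assignment data might be summarized differently at the classroom, building, and district levels. This is my own illustration, not code from Mandinach et al.; the campus names, scores, and the passing threshold of 70 are all hypothetical.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical assignment results; in practice these would come from a
# district data system. All names and scores are invented for illustration.
records = [
    {"campus": "North HS", "grade": 9, "section": "A", "score": 72},
    {"campus": "North HS", "grade": 9, "section": "A", "score": 65},
    {"campus": "North HS", "grade": 9, "section": "B", "score": 88},
    {"campus": "South HS", "grade": 9, "section": "A", "score": 59},
]

# Classroom level: a teacher examines one section's results.
section_a = [r["score"] for r in records
             if r["campus"] == "North HS" and r["section"] == "A"]
print("North HS section A mean:", mean(section_a))

# Building level: a principal compares sections within a grade.
by_section = defaultdict(list)
for r in records:
    if r["campus"] == "North HS" and r["grade"] == 9:
        by_section[r["section"]].append(r["score"])
for section, scores in sorted(by_section.items()):
    print(f"North HS grade 9, section {section}: mean {mean(scores):.1f}")

# District level: an administrator tracks the share of students passing
# against an accountability threshold (70 here, purely illustrative).
passing_rate = sum(r["score"] >= 70 for r in records) / len(records)
print(f"District passing rate: {passing_rate:.0%}")
```

The same raw scores feed every level; only the grouping and the question asked of the data change, which is the point the continuum makes about transforming data into usable knowledge.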
This model is also very adaptable for various needs and various types of users. I would use it like the example above of the classroom teacher who evaluates an assignment that was given specifically to target a learning problem, such as a TEKS objective students had trouble mastering on a test.
Mandinach, E. B., Honey, M., & Light, D. (2006, April). A theoretical framework for data-driven decision making (EDC Center for Children and Technology). Paper presented at the Annual Meeting of the American Educational Research Association (AERA), San Francisco, CA.
Scanlon, E., Jones, A., Barnard, J., Thompson, J., & Calder, J. (2000). Evaluating information and communication technologies for learning. Educational Technology & Society, 3(4), 101-107.
Other Questions for Evaluation
In Chapter 11, our text discusses treating evaluation of learning programs in schools as it is treated in business, with value determined by results--a "Show Me the Money" approach stressing the program's return on investment (ROI). Any time a school expends money and resources, they must be accounted for, and that should be a consideration when evaluating learning programs. Whatever evaluation method is used should show how well the program performed, including its ROI, so the benefits of the program are readily apparent. The worth of the program should also include intangible benefits, such as student satisfaction, teamwork, and increased student engagement.
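As a simple illustration of the ROI idea, here is a minimal Python sketch using the standard formula, ROI = (benefits - costs) / costs, expressed as a percentage. The dollar figures are hypothetical, not from the text.

```python
# Hypothetical figures for illustration only -- not taken from the text.
program_cost = 12_000.00        # licenses, training, and support for one year
monetary_benefits = 18_000.00   # e.g., savings from reduced remediation time

# Standard ROI formula: net benefits divided by costs, as a percentage.
net_benefits = monetary_benefits - program_cost
roi_percent = net_benefits / program_cost * 100
print(f"ROI: {roi_percent:.1f}%")  # prints "ROI: 50.0%"

# Note: intangible benefits (engagement, satisfaction, teamwork) are not
# captured by this figure and should be reported alongside it.
```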
Performance Problem/Non-Instructional Solution
A performance problem for many freshmen is the lack of self-discipline typical of students that age. It is common for students to have gaps in their instruction due to poor attendance, incomplete homework, or general apathy. My department has used Moodle to give students opportunities to fill in those gaps. They have access to PowerPoint presentations of class discussion topics, including ones that focus on major learning concepts students often find difficult to master. There are also interactive games, video clips, and links to helpful study aids.
I like that you researched your information online, where you found that many of the newer evaluation models are based on the four levels of Kirkpatrick's framework. I chose to do the two models in the book, and I think I could have found better models of evaluation for education if I had searched the internet. The models in the book were more geared toward business; they were training models. However, they can definitely be adapted to education. It is interesting that you found a model that evaluates learning technology. I would like to find ways to evaluate technology use at our school, since we are always trying to add technology pieces to our geometry curriculum. This class has made me think strongly about evaluation. We evaluate students with quizzes, tests, and projects, but our curriculum does not have an evaluative process.
I liked the way you described the CIAO! Framework model; it was concise and to the point. Your graphic is excellent, and I liked that you cited your reference. Your idea for using the model to evaluate student projects is a good one. With your second model, A Theoretical Framework for Data-Driven Decision Making, you said that the model evolved over time. I believe that would be true of any model; any model would need tweaking over time to fit your needs, like a budget that gets better over time. Your graphic for the second model is also excellent.
When you discussed "other questions for evaluation," you stressed the book's point that evaluation of learning programs in schools should be treated as it is in business, where value is determined by results. I do believe that ROI is going to be important, because the results have to be weighed against the costs; schools do not have a lot of money to throw toward programs. Evaluation would be key so that schools do not waste their time on programs that are too costly or ineffective.
The CIAO! framework is very interesting. More districts are pushing the use of technology, and this would be a good way to measure and critique it. I want to hold students accountable, and much of the work they complete using technology is of good quality. Customizing this framework to each student as needed would let me measure how much of the lesson they have processed.