Thursday, June 27, 2013

Week 4: Trends & Issues in IDT


Similarities and Differences--Health Care Education, P-12 Technology Integration, & Post-Secondary Education

I found many similarities in IDT trends and issues across the health care industry, P-12 technology integration, and the roles of instructional designers in university settings. The trend in all three contexts is toward problem-based learning (PBL). In health care education, a 1986 report called for using technology and information systems for problem solving, for keeping physicians current, and for fostering lifelong learning. P-12 education saw the development of the NTeQ model for technology integration in 1998, which is problem-based and centers on student use of technology as a tool. The university instructional designers mentioned using IDT to shift teaching methods toward more problem-based approaches in distance learning courses.

All three areas face similar issues, too, such as the importance of research, though research is probably most vital in the health care arena, where it makes the technology all the more important. Research has been slower in P-12 technology integration, and one of the Australian university designers stated that funding priorities in that country have shifted from research to teaching and learning. The issues of professionalism and service to the professional community are common to all three contexts. Health care education has the added issues of risk and sensory perception in designing instruction for students, which necessitates life-like simulations and multimedia lessons for instruction and practice.

Of these three contexts, I expected P-12 technology integration to be the most like my own work situation, since it is the closest apples-to-apples comparison, but I was a little surprised that the other two share so many trends and issues. My district has not overtly pushed PBL, though some workshops I have attended lean that way. I have also seen some IDT trends reflected in professional development workshops.

Global Trends and Issues
How can we prepare our youth to address the problems of living in a world of 9 billion people when the earth's resources cannot sustain that many?

This is a question I discuss with my freshmen in World Geography, and an overarching concept that is spiraled throughout the course. I think the answer lies in creating problem solvers, and students absolutely must have practice with lessons that promote problem solving. They need to learn about global efforts toward sustainability and to carry out efforts of their own to find solutions in simulated cases. Since technology changes so quickly, working with it is a sensible way to prepare students for changes we can't really anticipate. Related to this question, I have shown a video called "Did You Know?" about technological shift in education at the beginning of the year in my geography classes, and it makes an impact on the kids. It has been around since 2006, but if you've never seen it, it is entertaining and well worth your time to check it out: http://www.youtube.com/watch?v=C1LiJuUGpyY


Video shared under a Creative Commons license by Karl Fisch and Scott McLeod

Do our current education system, curriculum, and instructional practices help learners develop the complex problem-solving skills necessary to tackle these issues?

I think we are moving in the right direction to tackle these issues in this country, but we have a lot of work to do before most classrooms in most schools fit that bill. In my experience, the trend toward high-stakes testing has hurt progress toward producing more higher-order thinking. Teachers are so focused on covering the material for the test that more time-consuming but engaging activities are eliminated. I've reached the conclusion that projects and student-centered activities are what help students make connections and grasp the important concepts of a subject.

Are there methods and practices used in European and Asian countries that we should use here in the US? Why or why not? 

I love the Japanese tradition of older teachers sharing their wisdom with younger ones in creating subject-area-specific instructional design (p. 242). Though it has been practiced for decades, it seems right on target for IDT that uses technology to enhance the curriculum. I was surprised at how negative the depiction of IDT in Europe was, and that it suffers from a "skills and credibility gap" (p. 252). It sounds like the U.S. has been more successful, at least in gaining support and credibility for technology integration and good IDT.


Reiser, R. A., & Dempsey, J. V. (2012). Trends and issues in instructional design and technology (3rd ed.). Boston, MA: Pearson.

Friday, June 21, 2013

Week 3: Evaluating Programs & Human Performance Technology


Evaluation Models

In reading about the five evaluation models in our text, and then following up with research online, I found that most models developed since Kirkpatrick's refer to his four-level framework. One example is Brinkerhoff's Success Case Method, which the text compares to Kirkpatrick's model. However, the two additional models I will summarize do not draw any such comparisons. They are the CIAO! Framework by Eileen Scanlon et al. of The Open University in the United Kingdom, and Ellen B. Mandinach and associates' Data-Driven Decision Making framework.

The CIAO! Framework evolved over a 25-year period from the collaboration of five women who developed a comprehensive set of working principles and practices for evaluating learning technology. Their 2000 article details the model they developed in 1996 (see table below) and concludes that the aim of the framework, to encourage the use of a variety of methods rather than a single approach to evaluation, has served them well. The columns represent three dimensions of a learning program that must be evaluated. Context covers the aims of the technology and where and how it is used in the course. Interactions focuses on the learning process and the ways students interact with each other and with the technology. Outcomes refers to changes in students resulting from the technology use. The Rationale row gives the basis for evaluating each dimension, and the remaining two rows highlight the type of data to be collected and the methods to be employed.

I like this framework because it is a general model that can be customized for individual needs, as the authors suggest. I would use it to evaluate student projects that use technology.

CIAO! Framework

|  | Context | Interactions | Outcomes |
| --- | --- | --- | --- |
| Rationale | In order to evaluate technology, we need to know about its aims and the context of its use. | Observing students and gathering process data helps us to understand whether or not some elements work and why and/or how they work. | Attributing learning outcomes to technology when it is one part of a multifaceted course is very difficult. It is important to try to assess both cognitive and affective learning outcomes, such as changes in perceptions and attitudes. |
| Data | Designers' and course teams' aims; policy documents and meeting records | Records of student interactions; student diaries; on-line logs | Measures of learning; changes in students' attitudes and perceptions |
| Methods | Interview technology designers and course team members; analyze policy documents | Observation; diaries; video/audio and computer recording | Interviews; questionnaires; tests |

Adapted from: Scanlon, E., Jones, A., Barnard, J., Thompson, J., & Calder, J. (2000). Evaluating information and communication technologies for learning. Educational Technology & Society, 3(4), 101-107.

A Theoretical Framework for Data-Driven Decision Making, presented at the 2006 Annual Meeting of the American Educational Research Association, assumes that informed decisions can only be made from accurate data, and the model depicts decisions made within local school districts. This model also evolved over time from the collaboration of its authors, and it was informed by the work of their colleagues, including R. L. Ackoff's earlier work. The paper states, "According to Ackoff (1989), data, information, and knowledge form a continuum in which data are transformed to information, and ultimately to knowledge that can be applied to make decisions." The district, building, and classroom each use different data in different ways to make decisions, and technology tools facilitate decision making by the stakeholders in the model. For example, a classroom teacher (stakeholder) might give students an assignment that highlights a particular learning problem; the teacher collects and organizes the results from the lesson and then analyzes them. A principal may examine results across classes for a particular grade level, and a district administrator may analyze performance trends for various student groups, possibly to predict progress toward Adequate Yearly Progress (AYP) for state accountability. At the final stage of the continuum, it is vital to synthesize the information into concise, targeted summaries of usable knowledge and to prioritize it.

This model is also very adaptable to various needs and types of users. I would use it like the classroom teacher in the example above, evaluating an assignment given specifically to target a learning problem, such as a TEKS (Texas Essential Knowledge and Skills) objective students had trouble mastering on a test.
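To make the data-to-information-to-knowledge continuum concrete, here is a minimal sketch in Python of how a teacher's raw scores might be organized into information and then synthesized into a prioritized reteaching decision, in the spirit of the framework. This is not code from Mandinach et al.; the student names, objective labels, scores, and mastery threshold are all hypothetical, invented for illustration.

```python
# A toy illustration of the data -> information -> knowledge continuum
# (Ackoff, 1989) as described in the Mandinach et al. (2006) framework.
# All names, scores, and thresholds below are hypothetical.

# Data: raw scores a teacher collects from one assignment,
# keyed by (student, objective).
raw_scores = {
    ("Ana", "TEKS-7.1"): 0.55,
    ("Ben", "TEKS-7.1"): 0.40,
    ("Cam", "TEKS-7.1"): 0.65,
    ("Ana", "TEKS-7.2"): 0.90,
    ("Ben", "TEKS-7.2"): 0.85,
    ("Cam", "TEKS-7.2"): 0.80,
}

# Information: organize and analyze -- average mastery per objective.
def mastery_by_objective(scores):
    totals, counts = {}, {}
    for (_, objective), score in scores.items():
        totals[objective] = totals.get(objective, 0.0) + score
        counts[objective] = counts.get(objective, 0) + 1
    return {obj: totals[obj] / counts[obj] for obj in totals}

# Knowledge: synthesize and prioritize -- flag the weakest objectives
# so the teacher can decide where to reteach first.
def prioritize(mastery, threshold=0.70):
    flagged = [(obj, avg) for obj, avg in mastery.items() if avg < threshold]
    return sorted(flagged, key=lambda pair: pair[1])  # weakest first

if __name__ == "__main__":
    mastery = mastery_by_objective(raw_scores)
    for objective, avg in prioritize(mastery):
        print(f"Reteach {objective}: average mastery {avg:.0%}")
    # Prints: Reteach TEKS-7.1: average mastery 53%
```

In practice, a principal or district administrator could run the same kind of aggregation across many classes or campuses, which is exactly the cross-level use of data the framework describes.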



Mandinach, E. B., Honey, M., & Light, D. (2006, April). A theoretical framework for data-driven decision making. Paper presented at the Annual Meeting of the American Educational Research Association (AERA), San Francisco, CA.