The Problem with Learning Objectives

Introduction

At some point, thinking about learning objectives can become tiresome. We want to do exciting things and make learning happen in our online courses, but we also have good housekeeping to attend to. “Inform the learner of the objective.” In too many cases, this information just takes up space and earns the instructor a meaningless check on a quality rubric.

It doesn’t have to be that way. We can look at learning objectives in a different light.

In fact, we’ve muddied the water with our use of learning objectives. We’ve conflated the objective’s role as a design tool with its role as a communication tool. The learning objective can be both – but we must be intentional about it.

Principles of Instructional Design

The textbook Principles of Instructional Design was once a staple in university curricula.

Instructors who have had any pedagogical training will likely have been introduced to Robert Gagné’s Nine Events of Instruction. The events include gaining the learners’ attention, informing them of the objectives, stimulating recall of prior learning, and so forth. All too often, following the nine events like a recipe card leads to the obligatory screen that dutifully lists all of the course’s objectives.

We see it all of the time in online learning – the topic that lists all of the objectives, sometimes well written, sometimes not.

A study conducted by Florence Martin, James Klein, and Howard Sullivan (Martin, Klein, & Sullivan, 2007) and published in the British Journal of Educational Technology looked at a computer literacy course delivered in several treatments, each with one key element of instruction removed. The elements included the statement of objectives, examples, review and practice. The treatment that removed the statement of objectives showed no drop in scores; it didn’t matter whether the objectives were left in or out. So why the obsession with objectives? By comparison, the treatment that removed practice showed a significant drop. (1)

The treatment that included objectives presented one objectives screen per section. Those screens, the researchers tell us, ranged from 79 to 82 words, not dissimilar to how we use objectives in online courses today.

In an attempt to improve on the use of objectives, we remind ourselves that objectives should specify audience, behavior, condition and degree. The advice is good, but only in the context of design. When objectives are used as a design tool, it makes sense to think about audience, behavior, condition and degree. But when objectives are a communication tool, we need to question whether learners want to read a technical objective rather than a statement that excites and motivates them to engage in the online course. In short, technically correct instructional objectives are the tool of the designer – what students need is quite different.

In at least a couple of popular online course evaluation processes, as reviewers, we look for objectives, judge whether they are well written according to something like R.F. Mager’s Preparing Instructional Objectives, and evaluate whether or not they are aligned to the course’s assessments and activities.

Clearly, the practice of writing objectives is important to the design of a course – but what should we communicate to students?

Research points us in the right direction – and the research uncovers a very clear problem with objectives. To explore objectives further, let’s separate our concerns. We’ll look at them from the designer’s point of view and then from the student’s.

From the designer’s point of view

Well-written objectives help instructors design courses well. They spell out the type of knowledge and the level of learning. We know that we need a very different type of learning activity to teach how to perform an angioplasty versus how to choose the type of coronary stent to use in a given situation. To state the obvious, the first objective requires observation, practice in a non-life-threatening situation (e.g. on a mannequin), and repeated practice under the observation of an experienced physician. The second requires knowledge of the critical patient attributes that favor one type of stent and procedure over another and practice with decision-making in increasingly complex situations.

The design of instruction also improves when we specify the audience, the condition and the degree. Are these first-year students with little surgical experience or quite a different group? Are the conditions optimum, or do they simulate a more stressed setting? Related to degree, what measurement do we need to indicate that the student is performing the task well enough? Is an outcome of four out of five successful procedures or decisions good enough?

Specificity is important to the designer. Establishing specific objectives helps us choose the right assessments and activities. The principle of backward design requires us to start with well written outcomes and work backwards to activities.

As an aside, I would concede that, in the ‘wild’, instructors often start in the middle. We collect content; create activities – all in the process of discovering what we really want to do and what is important. Designing eLearning can be a discovery process and in that process we refine objectives, write new ones…toss out a few. This may be heresy to many – but it is an admission that designing instruction is a creative process. In the end, however, it is important for us to arrive at the objectives and then shine their light on everything in the course. In other words, review whether or not activities and assessments belong in the course and how strongly aligned they are to the outcomes.

Flavia Vieira, in her blog “Learning to Teaching,” underscores the importance of objectives:

The way you choose to define them affects all that you do as a teacher, because objectives stand for what you believe is the goal of your and your students’ actions; they show your personal perception of the teaching-learning situation; they reflect your teaching and testing priorities; they determine your choice of activities and materials; they influence your teaching procedures, your attitude towards learner errors, even your teaching pace; ultimately, they determine the kind of learning that occurs in your classroom.  (3)

From the learner’s point of view

Some of the research shows that stating learning objectives does make a difference – but only when objectives are used correctly by both the instructor and the student. One interesting source is the Debunker Club, which takes aim at several targets (common misunderstandings) and uses research to expose them.

The Debunker Club is curated by Wil Thalheimer and Paul Kirschner. Wil Thalheimer reviews educational research and distills its findings for the benefit of practitioners who either don’t know how to digest research or simply do not have the time.

Thalheimer states that:

The research that has been done on learning objectives has shown that presenting learners with learning objectives produces benefits because it helps learners focus attention on the targeted aspects of the learning material (Rothkopf & Billington, 1979). To be more specific, if a learning objective targets Concept X, then learners are more likely to pay attention to aspects of the learning material that are relevant to Concept X, and are less likely to pay attention to aspects of the learning material not relevant to Concept X.

Simply put, if learning objectives are to be useful at all to the learner, they must be written in a straightforward manner that communicates what he or she should pay attention to. The learner should know clearly that the intention of the course is not, for example, to memorize historical dates but to state the significance of a specific event in history. Flashcards with dates won’t help the student. Remembering the details of a military campaign won’t help the student. Understanding the root cause of an event and its effect on the socio-political environment of the time may be of paramount importance. The student should concern herself with analysis and not sweat the small details.

Sal Khan, in his videos on permutations and combinations, stresses that memorizing the formulas may impede understanding. Rather than memorizing the formula for a permutation, students should be able to reconstruct the formula from their understanding of how it works.

In short, the statement of objectives helps us focus students on what is important. Thalheimer goes on to recommend against generally worded objectives: the more specific, the better. And he recommends against the multi-part objective (audience, behavior, condition and degree) when communicating to students. Thalheimer summarizes research that underscores the importance of learning objectives in helping students set goals, focus on relevant information and evaluate their learning against the stated objectives – all important metacognitive activities.

If we don’t communicate objectives to focus students’ attention on what is important, then we should, at least, excite students about the subject. In corporate training, we often see the WIIFM replace the listing of objectives. The WIIFM or ‘What’s in it for me’ stresses the relevance of the learning to the learner. Research does support the role of motivation in learning.

Finally, instructional objectives can be dangerous. If we get complacent with good test results and declare ‘mission accomplished’ on our objectives, we have missed the whole darn point. The purpose of training and education is the transfer of learning. In training, we want to see business results. In education, we want to see the online course contribute to the development of the student and to success in future courses and beyond. Just as the ill-conceived learning objective takes up space on the page, so too does the badly designed course take up space in the student’s life. Such a course is part of a meaningless exchange of dollars for credits.

In contrast, the meaningful objective contributes to student learning and plays a part in a well-articulated curriculum that promotes students’ growth in the course and beyond. We need to be intentional about our use of objectives. And then move on to more interesting stuff.

Resources

  1.  Martin, F., Klein, J. D., & Sullivan, H. (2007). The impact of instructional elements in computer-based instruction. British Journal of Educational Technology, 38(4), 623-636. doi:10.1111/j.1467-8535.2006.00670.x
  2. http://www.debunker.club/
  3. http://aliancistatlv.blogspot.com/2012/08/language-learning-objectives-do-make.html

Interactive Case Studies

Introduction

The limited research on interactive case studies supports their use in higher education. The use of interactive case studies contributes to student motivation, a sense of relevance, higher course grades and overall satisfaction. One research study, “A Usability Study of Interactive Web-based Modules,” looked at the use of interactive case modules in a Principles of Marketing course. In their literature review, the authors observed that:

Case studies are typically used by marketing educators to help students gain real world knowledge and learn marketing concepts (O’Connor and Girard 2006) and are important tools for students to develop their analytical thinking and problem-solving skills through applied construction of reality (Henson, Kennett, and Kennedy 2003).

But developing an interactive case study may seem daunting. Instructors might feel the need to master all of the nuances of this genre before attempting to make one of their own. The interactive case study (as distinguished from the face-to-face experience) adds the complexity of the technology. There are, however, small steps one can take and templates that make interactive case studies easier to generate.

An example

Dr. Debra Eardley, a nursing professor at Metropolitan State University, recently completed an interactive case study in support of nursing informatics and a standardized classification system.  She storyboarded the case study in PowerPoint and received help from the university’s Center for Online Learning to make it interactive.

She started with the basics.  The objective of the case was to help students ‘experience’ the role of a standardized classification system in documenting the problem, intervention and outcomes of a patient diagnosed with an infection.   The case followed a public health nurse as she interviewed a patient and followed the procedures of Directly Observed Therapy (DOT) and the administration of medication.  The student participant in the case study observes the interview, makes notes and then charts the problems, intervention and outcomes, as would a public health nurse using a standard classification system and an electronic health record system.

The case study was a simple one…with one set of right answers and not many gray areas. The case study was a stepping stone to more sophisticated cases that will follow. But despite its simplicity, the case study introduced knowledge that public health nurses need to know. It introduced the concepts of latent tuberculosis infection (LTBI), Directly Observed Therapy, the role of the public health nurse, and the role of a standardized system and its relationship to evidence-based practice. Rather than simply being told about these things, the student observes a public health nurse in action and practices charting using the Omaha Classification System; the charting is evaluated with immediate feedback.

Interactive case studies, of course, can be more complicated – but that should not deter any instructor from getting started with simpler cases. The key is recognizing some of the basic benefits of the case study approach. For example, Harvard Business School (HBS) case studies involve students in reading the case, discussing the findings with classmates, reflecting on alternative approaches, answering the professor’s questions and deciding on a course of action based on the case. These basic case study attributes make them far more compelling than the text-laden pages all too common in typical learning management systems.

The benefits of case studies existed long before the use of electronic media.  Again, in the area of health informatics, university pathology departments across the United States implemented interactive case studies with little electronic help – simply text and discussion.   The designers of The Healthcare Pathology Informatics Fellowship Training program patterned their case studies on the business case study method with the following attributes:

  • The scenario is based on a real-life situation.
  • The participants must analyze the situation, decide on one or more courses of action and provide evidence to support their decisions.
  • Participants must read the case beforehand, understand the issues involved, and come prepared to provide answers for whatever the facilitator might ask.
  • At the conclusion, a narrative describes what actually happened in the real-life situation.

The electronic interactive case borrows a lot from the traditional case study approach.   First, the case study scenario places the learner in a role and a setting.


Interactive Case Study Based on Instructional Design Methodology

In the screenshot above, the learner is placed in the role of a faculty member asked by her dean to design an online course. The interactive case study challenges the learner to pick the right questions in the right sequence to model the backward design approach to online course design. In short, the learner selects questions that probe the situational factors that define the context of the training, selects appropriate outcomes, designs assessments aligned to the outcomes and then develops activities that will help students fare well on the assessments. This is the backward design approach. A simple case study, represented by the screenshot above, could assess whether or not the learner understands the backward design approach. A more sophisticated case study might lead to several options that can be equally right but that require the learner to explain the choice and back it up with data, citations, and/or evidence.

In Dr. Eardley’s latent tuberculosis infection (LTBI) case study, the learner observes the public health nurse and her patient and must take notes for a clinical summary. The instructions for the Clinical Summary Exercise are a click away. A tool for taking notes is also a click away.


Interactive Case Study related to Healthcare

The learner’s clinical summary is assessed in two ways. The learner must submit the clinical summary, which is evaluated by an instructor, and must answer a list of questions related to the clinical summary, which are machine-scored. The exemplar clinical summary is shown only to the student who has made the effort and correctly answered the questions about the patient.

The Design of a Template

In another project, we’re reflecting on the necessary functions to build into a generalized case study template.


A Proposed Interactive Case Study Template

The screenshot above labels some of the key functions. In violation of Richard Mayer’s principles of multimedia design, the explanation for each label appears below rather than beside the image. A rough sketch of how such a template might represent these functions follows the list.

  1. A content area that will define the role of the learner and goal of the study, introduce background information and present key decision points in the case.
  2. A set of tools that enable the learner to take notes and review a transcript of all key decision points and feedback.
  3. Resources and tips that are context-based. As the decision points change, so do the resources. Some resources persist; others appear and disappear as needed.
  4. Not pictured, the template supports branching.  Optionally, content can be shown based on user preference and user performance.  Again, optionally, learners can be taken down different paths based on how the story unfolds and the choices the learner makes.
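Because the template is still a proposal, the shape of its data is an open question. The TypeScript sketch below is purely illustrative (the type names are hypothetical and not part of LodeStar), but it suggests how the four functions above might be represented.

```typescript
// Hypothetical sketch of a case study template's data model.
// None of these type names come from LodeStar; they simply mirror the
// four functions listed above.
interface Resource {
  title: string;
  url: string;
  persistent: boolean;          // persists across decision points, or appears as needed
}

interface DecisionOption {
  text: string;
  feedback: string;
  nextPageId?: string;          // optional branching target
}

interface DecisionPoint {
  prompt: string;
  options: DecisionOption[];
}

interface CaseStudyPage {
  id: string;
  content: string;              // role, goal, background information
  decisionPoint?: DecisionPoint;
  resources: Resource[];        // context-based resources and tips
}

interface CaseStudy {
  title: string;
  pages: CaseStudyPage[];
  notesEnabled: boolean;        // learner note-taking tool
  transcriptEnabled: boolean;   // review of decisions and feedback
}
```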

Conclusion

The interactive case study is an effective instructional design pattern that has deep roots in traditional text and face-to-face classes.  The interactive case study may seem challenging to create but simple case studies offer instructors a good starting point.  Finally, the template approach simplifies the construction of case studies so that instructors need not rely on textbook publishers but can generate their own.


Resources

  1. Girard, T., & Pinar, M. (2011). A usability study of interactive web-based modules. Turkish Online Journal of Educational Technology – TOJET, 10(3), 27-32.

Geolocation Storytelling

Introduction

A new form of storytelling and interactive engagement is unfolding. Location-aware storytelling enables educators to untether students from the computer and let them roam about the world freely – to hear stories and learn in new ways.

Today’s smartphone can connect to the internet and get its location from GPS satellites. Educational apps (both native and browser-based) can read that location and display interactive content matched to it.
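In a browser-based app, for instance, reading the location is a single call to the standard Geolocation API. The sketch below is illustrative only; the callback name is made up, and LodeStar wraps this kind of logic inside its templates.

```typescript
// Illustrative sketch: reading the device's location from a
// browser-based app with the standard Geolocation API.
function watchLearnerLocation(
  onLocation: (latitude: number, longitude: number) => void
): number {
  return navigator.geolocation.watchPosition(
    (position) => {
      // The latitude/longitude can now be matched against points of interest.
      onLocation(position.coords.latitude, position.coords.longitude);
    },
    (error) => console.warn("Location unavailable:", error.message),
    { enableHighAccuracy: true }
  );
}
```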

The obvious applications are history and the natural sciences – but with a little ingenuity, geolocation storytelling can serve students from a broad range of disciplines.

Inspiration for a new kind of storytelling comes from a group of history enthusiasts, led by Robert Molenda. The group has taken on the name of Lens Flare Stillwater with the tagline ‘The future of Stillwater viewed through the lens of the past.’ Stillwater is a river town located on the Saint Croix River, which borders the states of Minnesota and Wisconsin. To view this town through the lens of the past, the group has combined the arts of storytelling and photography with the new technology of mobile phones and geo-location-aware applications.

Robert Molenda is a retired chemist and business executive from 3M. He and a motivated group that includes John Paul Moore, John Buettner, Dick Marlow and many others set out to tell Stillwater’s story through photography and narrative. They use the LodeStar eLearning authoring tool, which includes a geolocation-aware template called ARMaker — an abbreviation of Augmented Reality Maker.

To tell Stillwater’s story, they select historical sites of interest and related photographs from the John Runk collection of historical photographs and combine them with their own photography and narrative. They use Google Maps to identify the latitude and longitude of a location, and then input that location into LodeStar. They match the location with both audio and text narrative, select the photographs and work out the details – details such as how many sites should be included in a tour, where the invisible geo-fence that triggers the display of text and graphics should be located, and how much information is sufficient.
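The geo-fence itself boils down to a distance check: is the phone within some radius of a site's coordinates? LodeStar handles this internally; the sketch below, using the haversine formula, only illustrates the general idea.

```typescript
// Illustrative sketch: is the learner inside a circular geo-fence?
// Computes the great-circle (haversine) distance in meters between the
// phone's position and a site's coordinates and compares it to a radius.
function distanceMeters(lat1: number, lon1: number, lat2: number, lon2: number): number {
  const toRad = (deg: number) => (deg * Math.PI) / 180;
  const earthRadius = 6371000; // meters
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * earthRadius * Math.asin(Math.sqrt(a));
}

function insideGeofence(
  learnerLat: number, learnerLon: number,
  siteLat: number, siteLon: number,
  radiusMeters: number
): boolean {
  return distanceMeters(learnerLat, learnerLon, siteLat, siteLon) <= radiusMeters;
}
```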

This is their story as told by Robert Molenda, which we hope will inspire both formal and informal educators around the world and across the disciplines:

The story of Lens Flare Stillwater


Screenshot of Lens Flare Stillwater, a site dedicated to revealing the history of Stillwater through location-aware applications.

The idea of this project started in May of 2015 when I sent a number of ideas for Stillwater to the Mayor of Stillwater. Among the ideas was the idea for Lens Flare Stillwater.

Imagine that you are a visitor for the first time to Stillwater, standing in front of Terra Springs Apartments. The Terra Springs location is active with a geolocation marker and your smart phone knows when it is inside the “geo-fence” of that location range. When this happens, a photo of this same location at an earlier historical time, appears on your smart phone along with pertinent historical information, an audio narrative and other digital photos that are part of that location story. In this manner, you as a visitor can experience “Augmented Reality” in an active location tour of Stillwater. You can touch, feel, read, listen to information pertinent to the actual location that you are near. As you move along in Stillwater and enter other active “geo-fences” your smart phone will trigger other information pertinent to these different locations.

The theme was to use the Historical Photos of the John Runk Photo Collection with today’s digital technology to put the history of Stillwater in everyone’s pocket or purse.

That was the basis for the idea. Since that time, we applied for a grant from the Stillwater Foundation, made contact with software developers, started a web site that provides a “Virtual Reality” tour of Stillwater and were fortunate to make contact with Lodestar Learning Systems, another software developer involved with educators.

The really difficult work of software development has already been accomplished by people like Sami Jitan of Pivot the World and Robert Bilyk of LodeStar Learning Systems. The job of our team of volunteers is concentrated on providing content consistent with software design legal requirements and visitor needs.

In summary, we are taking advantage of some truly great, high quality historical images, narratives/audio and combining them with geolocation information and software to provide an “Augmented Reality” tour of Stillwater, Minnesota.

Robert Molenda

An Example

Here is an example that can be experienced from the comfort of your office or home, but is best experienced on foot and in Stillwater.

https://lodestarlearning.site44.com/Stillwater/index.htm

Note several features:

  • Responsiveness
  • Location-aware
  • Media Support

If you can’t visit LodeStar Learning’s hometown of Stillwater with your smart phone, do the next best thing: shrink the browser window down to the size of a smart phone. Notice the responsiveness. Students who access your learning management system from their smart phones will appreciate LodeStar’s ability to adapt to any screen size. Click on ‘Show Map’. If you are in Stillwater looking at this map, an info window pops up when you cross a geo-fence. Play the audio on a page. View some of the John Runk Collection from one of the image sliders.

All of this functionality combines with LodeStar’s other features: branching, quizzing, interactions, SCORM conformance, and accessibility.

For related articles from past web journal articles, visit:

Augmented Reality for Educators
https://lodestarlearn.wordpress.com/2016/10/23/augmented-reality-for-educators/

Mobile Learning
https://lodestarlearn.wordpress.com/2017/01/03/mobile-learning/

Strategies and Tools to Promote ‘Reading to Learn’ in Higher Ed

Introduction

In higher education, assigned readings challenge students in ways that we may not fully anticipate: culturally, linguistically and cognitively. That is, assigned readings challenge students only if, on any given day, students complete the assigned reading at all!

The statistics on reading compliance are disheartening but not surprising, given students’ time constraints, divided attention and the inherent challenges of reading to learn.

Readings may require a cultural literacy to understand the references or analogies. They may require a highly developed vocabulary or a specialized vocabulary. They may also demand prior knowledge, or knowledge of specific principles, rules, and concepts. Instructors depend on students to complete the readings and understand them in order to participate in class or in online discussion groups and perform well on assigned papers and projects.

In their report “Increasing Reading Compliance of Undergraduates: An evaluation of compliance methods,” authors Sarah Hatteberg and Kody Steffy report that “studies have shown that no more than 30 percent of students complete a reading assignment on any given day.” In their study, they evaluate the effectiveness of strategies to get students to complete the assigned reading. Most effective were 1) announced reading quizzes and 2) mandatory reading guides and questions. Least effective were pop quizzes and optional reading guides.

Getting students to read is a first step. Getting students to understand the reading and read deeply and critically is challenging.

In higher education, one can easily take the position that we simply assign readings to students and expect them to complete the readings and understand them sufficiently to participate in activities. A more enlightened approach might be to prepare students with motivators, advance organizers, inquiry-style questions, practice on critical concepts, self-checks and more. In other words, we can build activities that help students derive the most benefit from assigned readings.

Motivate Students

The most critical piece to getting students to read is motivation. Instructors need to address motivation head on by answering the following questions: After completing the reading, what will students know that they didn’t before? What will they be able to do that they could not do before? What relevance is the reading to the world beyond academia? If instructors can address these questions directly, students will prioritize the reading accordingly.

I recently heard an instructor say that students regard assigned activities (including readings) as a transaction. ‘I do this; you give me points.’ Students are given loads of stuff to read and to do. Selective reading – including skimming – is a survival skill.  Reading without a perceived direct reward gets lower priority.

So we can certainly quiz students ahead of or at the start of class. But that probably doesn’t encourage deep reading. We can be selective and give some of the readings the full ‘treatment’. By that, I mean, we can underscore the importance of the reading with a personal recording pleading the case. If a problem is central to the readings, we can look for a TED Talk or a short YouTube video that introduces the problem to students.

I’ll use a recent example that I experienced. In Minnesota, we generally enjoy a high standard of living and benefit from a good educational system – but that standard of living and access to good education is not equally open to all. Currently in Minnesota, families of color have median incomes half of those of their white neighbors. In a sociology class, students might be assigned an anthology of perspectives on what it is like to live in Minnesota for a person of color. Ahead of that reading, an instructor can use headlines, video clips, testimonials and other things to ratchet up interest in the issue of economic disparity in our state.

In my experience, inattention to motivation is prevalent in online education. Instructors put up course documents on grading policy and schedule of assignments – but neglect to get their students jazzed on the significance of the course to them. Michael Allen, in his Guide to e-Learning, laments that “Although outstanding teachers do their best to motivate learners on the first day of class and continually thereafter, many e-Learning designers don’t even consider the issue of learner motivation.” He is primarily writing about corporate eLearning designers, but I would venture that the same holds true in higher education. Examine the most popular rubrics for evaluating online education. Motivation is hidden in the rubrics and its importance is overshadowed by the rubrics’ attention to the issues of alignment, organization and communication. Michael Allen’s book goes on to reveal seven magic keys to enhancing learning motivation. His first magic key relates to helping learners see how their involvement in the course will produce outcomes that they care about.

Prepare and Engage Students

Prepare students for difficult readings with pre-training. Pre-training is one of the principles of multimedia learning featured in Richard Mayer’s research. Ruth Colvin Clark describes it as such: “The pre-training principle is relevant in situations when trying to process the essential material in the lesson would overwhelm the learner’s cognitive system. In these situations involving complex material, it is helpful if some of the processing can be done in advance”. Assigned readings can present essential material that may induce a cognitive overload. Pre-training may involve an advance organizer, a graphical chart, an infographic, a glossary or another aid that reduces the cognitive challenge of a reading.

One method of engaging students in assigned readings is to focus students on the critical parts of the reading. Inquiry-based learning provides us with strategies that help focus students’ attention on the essential parts of the reading. Inquiry-based learning has many antecedents in educational practice, but the common theme is helping students think in advance of the reading: posing a burning question that needs to be answered; asking students to consider what they know about the topic and what they do not know; asking what they anticipate the reading will reveal to them (and then how the actual reading differs). Inquiry-based learning can take on multiple forms. Instructors can generate questions for the students to answer. This is the most structured level of inquiry-based learning. Students can generate their own questions based on their interest. This is the most open and purest form of inquiry. There are several shades in between. Instructors can adopt the best approach and level of inquiry based on the students’ sophistication and need. The overall goal is the same: deliberately select strategies to prepare and engage students in the readings.

Provide Direct Instruction on Concepts

We can choose to assume that students will complete the readings and understand concepts. That may, however, be a dangerous assumption. Sarah K. Clark, in her post “Making the Review of Assigned Reading Meaningful,” assumes differently. She asks her students to create a ‘top ten’ list of important concepts. This illuminates what students judge to be important and helps to uncover misconceptions about concepts. If we accept that student understanding of key concepts is essential, we can plan activities that directly address concept learning.

A learning object can be tremendously useful in promoting concept learning. A learning object, in this sense, is simply a learning activity that is authored with the help of any one of dozens of eLearning authoring tools and uploaded to the learning management system. The activity could help students categorize the examples and non-examples of a concept. For example, the concept of a ‘chemical reaction’. A chemical reaction occurs when the chemical composition of matter changes from one thing to another. An example is found when an acid is mixed with a base, resulting in the formation of something new: water and a type of salt. Many things, however, appear to change physically, but don’t change in chemical composition. These are non-examples.

A learning object can not only help students sort out examples from non-examples but also identify the attributes of a concept and engage in the elaboration of a concept. The elaboration model (in instructional design parlance) starts with simple examples that can be easily categorized and progresses to more challenging examples that are more difficult to categorize. We can help students to generalize (apply the attributes of a concept to unknown cases) and not to over-generalize. The key here is direct instruction. We are not assuming that students have understood the concepts presented in a chapter in either simple or complex form, but we are engaging them with the concept and helping them to think about it.

To further promote concept learning, we can ask students to create concept maps, Frayer models (which include concept definition, association, examples, and non-examples) and create analogies in their own words.


The LodeStar eLearning authoring tool was used to create learning activities that challenged workshop participants’ understanding of declarative knowledge and concept learning based on the reading of Patricia Smith and Tillman Ragan’s book titled Instructional Design.

Use the Reading

The literature consistently refers to the strategy of ‘using the reading’. Concepts learned in a chapter can be immediately put to use in an activity that involves analysis. Students in a political science course who read about federalism versus republicanism can apply their understanding to the analysis of a case study. They can be asked to judge whether or not the case is an example of the ideology of a Jefferson-style republican or a Hamilton-style federalist. A timeline could show the change in meaning of the concept of republicanism over the decades.

Readings are important towards understanding the content, performing well on assessments and writing papers. In some courses, the assessments, papers and projects may be summative in that they are the culminating activity and not the building activity. As an alternative, we can design shorter activities that require students to use the reading. We can ask students to cite the readings in their discussion forum. We can ask students to create timelines or concept maps from the reading. We can ask students to produce charts related to what they already knew, what they now understand and what they don’t understand. We can ask students to produce an outline of the reading …. and the list goes on. Once we have students produce something, we can provide feedback. In that way, we have engaged students in a ‘building’ activity. We are helping students to build their skills.

Conclusion

The key to all of this is the attitude that we are going to do something deliberate and strategic. In higher education, we can no longer put the onus on students to complete the assigned readings, understand them and apply the concepts and principles appropriately. Student noncompliance with reading assignments is one reason; the college dropout rate is another. A variety of strategies and tools helps us in this cause. Strategies and tools range from inquiry-based learning to motivating videos to learning objects that promote understanding of concepts. Online instructors can use strategies and tools to flesh out their courses and transform them from an assigned reading/high-stakes assessment paradigm to one that directly addresses student learning.

Mobile Learning

Mobile Learning means much more than easy access to responsive educational applications from a smartphone or tablet. It is an amazing confluence of technologies that represents a new era in technology-assisted instruction. Researchers have a name for the new capabilities that technologies bring us. They call them affordances. I once hated the word. But now I embrace it. Recent advances in technology afford designers new opportunities to engage students.

New technologies bring new capabilities and help us redefine what is possible. When we had our shoulder to the wheel, working with computer-based training, floppy disks and stick figures, we looked up and saw the approach of interactive videodisc players, and imagined the possibilities. We worked with videodiscs for a time and then saw the virtue of CDROMs. We gave up full-screen, full-motion video for the ease of use of the CDROM and bought our first single-speed burners for $5,000. The CDROM gave way to the internet and the web application. Flash-based applications on the web gave way to HTML5. And now, the desktop is making room for the mobile app and the mobile browser experience.

We always lose something – but gain something more important in return. New technology affords us new capabilities, new opportunities.

Organization

To make best use of these capabilities, mobile learning demands that we think about old ideas in new ways. To use a simple example to start, our current projects may have forward and back buttons that chunk the content into nice bite-sized pieces. We recognize that chunking can be useful to learners. But mobile users are in the habit of swiping up and down and sideways. Content is laid out for them in one long flow or in slides. Chunks on the screen are the result of how aggressively users swipe their fingers. This challenges us to think about organizing content in a new way.

Responsiveness

Mobile apps, whether run in a browser or natively on the mobile device’s operating system, must conform to all sorts of device shapes and sizes. They call that form factor. The iPhone alone comes in multiple sizes ranging from 4 to 5 ½ inches. There are smartphones, phablets, mini-tablets and large tablets. There are wearables and optical displays. An application may be run on anything from a multiscreen desktop configuration to the smallest smartphone. An application may be viewed in portrait mode (vertical) or landscape (horizontal). The ability of a single application to conform to all of these display configurations is called responsiveness. Responsively designed applications automatically size and scale the views, pick readable font sizes, lay out components appropriately and provide for easy navigation.


Responsive Application Created with LodeStar Learning FlowPageMaker
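Most of this responsive behavior lives in CSS media queries, but an application can make the same check from script. The sketch below is illustrative only; the "single-column" class name is invented for the example.

```typescript
// Illustrative sketch: responding to form factor from script. This shows
// the kind of check an application might make before choosing a layout.
const narrowScreen = window.matchMedia("(max-width: 600px)");

function applyLayout(isNarrow: boolean): void {
  // "single-column" is a hypothetical CSS class, not a LodeStar feature.
  document.body.classList.toggle("single-column", isNarrow);
}

applyLayout(narrowScreen.matches);
narrowScreen.addEventListener("change", (event) => applyLayout(event.matches));
```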

 

Designing Mobile Learning Experiences

But the challenge of mobile is not just in screen sizes and navigation.  It is in the appropriate design of applications pedagogically.  When we moved from computer-based training to videodisc we considered the power of full motion video and the ability of the learner to make decisions and indicate those decisions by touching the screen and causing the program to branch.  When we moved to CDROM we made use of 640 megabytes of data – which seemed massive but afforded us embedded encyclopedias and glossaries and other information and media at our fingertips.  When we moved to the web, suddenly WebQuests harnessed the full power of the internet and sent learners on inquiry-based expeditions for answers.

But what now?  What are the opportunities that mobile devices give us – in exchange for extremely small screen sizes, slower processors and slower connectivity?

Part of the answer lies in student access to resources when they are on a bus or on lunch break – spaces in their busy lives.   The more interesting answer is access to resources and guidance from environments where learning can happen: city streets, nature trails, museums, historical and geographical points of interest – in short, from outside of the classroom and the home office.

This is what mobile learning – M-Learning – is all about. M-Learning requires much more from applications than being responsive. They should support students being disconnected from the internet. They should support a link back to the mother ship – the institutional learning management system – once students are reconnected. They should report on all forms of student activity: not just quiz scores, but what students have read or accomplished, or what a trained observer has noted in the student’s performance.

Responsiveness is an important start – but this added ability to report remotely to a learning management system is facilitated by one of several technologies that are somewhat closely related.  You may have heard these terms or acronyms:  Tin Can, xAPI, IMS Caliper and CMI5.

To really appreciate the contributions of these standards to the full meaning of mobility, we need to take a deeper dive into them. Bear with me. If you haven’t heard of these terms, don’t be disconcerted – you are in good company. They represent a tremendous new capability that goes hand in hand with mobile devices and that is best explained by the tin-can telephone metaphor. We’re only on the leading edge of the M-Learning tsunami.

Tin Can

Tin Can was the working title for a new set of specifications that will eventually change the kinds of information that instructors can collect on student performance. To explain, let’s start with the basic learning management system. In the system, a student takes a quiz. The score gets reported to the grade book. The quiz may have been generated inside the learning management system. The student most likely logged into the system to complete the quiz. But quizzes are just one form of assessment, and no learning management system has the tools to generate the full range of assessments and activities that are possible. Not Blackboard. Not Moodle. Not D2L. Hence, these systems support the import or integration of activities generated by third-party authoring tools like Captivate, Raptivity, Storyline, LodeStar and dozens and dozens of others. With third-party tools, instructors can broaden the range of student engagement. Learning management systems support tool integration through standards like Learning Tools Interoperability (LTI), IMS content packages and a set of specifications called SCORM. SCORM has been the reigning standard since the dawn of the new millennium. SCORM represents a standardized way of packaging learning content, reporting performance, and sequencing instruction. SCORM is therefore a grouping of specifications. Imagine packages of content that instructors can share (Shareable Content Object) and that follow standards that make them playable in all of the major learning management systems (Reference Model).

But SCORM has its limitations. The Tin Can API is a newer specification that remedies them. A SCORM-based application finds its connection (an API object) in a parent window of the application. That’s limiting: it means that the application has to be launched from within the learning management system. Tin Can-enabled applications can be launched from any environment and can communicate remotely with a learner record store. Imagine two tin cans linked by a string. One tin can may be housed in a mobile application, and the other in a learner record store or integrated with a learning management system. The string is the internet.
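For the curious, that "parent window" hunt looks roughly like the sketch below, a simplified version of the API-discovery walk that SCORM 1.2 content performs (the real ADL algorithm also checks opener windows).

```typescript
// Simplified sketch of SCORM 1.2 API discovery: content climbs its chain
// of parent frames looking for the API object exposed by the LMS. This
// dependency on a parent window is why SCORM content must launch from
// inside the LMS.
function findScormApi(win: Window): unknown | null {
  let current: Window = win;
  let attempts = 0;
  while (!(current as any).API && current.parent && current.parent !== current && attempts < 10) {
    current = current.parent;
    attempts++;
  }
  return (current as any).API ?? null;
}
```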

SCORM has a defined and limited data set.  An application can report on user performance per assessment item or overall performance.  It can report on number of tries, time spent, responses to questions and dozens of other things but it is ultimately limited to a finite list of data fields.  (Only one data field allowed arbitrary data, but it was really limited in size.)

Tin Can isn’t limited in the same way. Tin Can communicates a statement composed of a noun, a verb and an object. The noun is the learner. The verb is an action. And the object provides more information about the action. ‘Jill Smith read Ulysses’ is a simple example. Imagine the learner using an eBook reader that communicates a student’s reading activity back to the school’s learner record store (housed in an LMS). Tin Can is M-Learning’s bedfellow. The mobile device gives students freedom of movement. Tin Can frees students from the learning management system. Any environment can become a learning environment. Learning, and a record of that learning, can happen anywhere.
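In the xAPI specification, that noun-verb-object statement is expressed as JSON. The sketch below is illustrative; the verb and activity IRIs are placeholders, not registered identifiers.

```typescript
// Illustrative xAPI ("Tin Can") statement for "Jill Smith read Ulysses."
// The actor/verb/object shape comes from the specification; the IRIs
// below are placeholders, not registered identifiers.
const statement = {
  actor: { name: "Jill Smith", mbox: "mailto:jill.smith@example.edu" },
  verb: {
    id: "http://example.org/verbs/read",          // placeholder verb IRI
    display: { "en-US": "read" }
  },
  object: {
    id: "http://example.org/activities/ulysses",  // placeholder activity IRI
    definition: { name: { "en-US": "Ulysses" } }
  }
};
// An authoring tool or app would POST this JSON to the learner record
// store's statements endpoint over HTTPS.
```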


LodeStar Learning (LodeStar 7.2): the ability to configure a Learner Record Store (LRS) service and export a Tin Can API-enabled learning object

The next acronym, xAPI, is just the formal name for Tin Can. Tin Can was a working title. When I was at Allen Interactions working on ZebraZapps, our team provided early comments on this evolving specification – which became xAPI. The eXperience API is a cool term for a cool concept, but Tin Can has stuck as a helpful metaphor.

The openness of Tin Can, however, presents its own challenge.   If one application reports on student reading performance in one way, and another application reports on a similar activity but in a different way, it is hard to aggregate the data and analyze it effectively.  It’s hard to compare apples and oranges.

IMS Caliper attempts to solve this problem.  IMS Global is the collaborative body that brought us standards for a variety of things, including learning content packages and quiz items.  IMS Caliper is a set of standards that support the analysis of data.  They define a common language for labeling learning data and measuring performance.

Which leads us to the last standard: CMI5.   CMI5 bridges Tin Can with SCORM.  Applications still benefit from the grade book and reporting infrastructure built around SCORM – but are free to connect remotely outside of the confines of the LMS — once again supporting M-Learning.

Had I written this entry a year ago, I would have found it difficult to try out various learner record stores.  Today, they abound.  The following link lists tools and providers:  http://tincanapi.com/adopters/

The following two LRS providers offer an inexpensive way to test this technology for yourself.

Rustici SCORM Cloud

https://cloud.scorm.com

Saltbox Wax LRS

http://www.saltbox.com/

So what?

Now that we’re free to roam around the world, what do we do with that freedom? Mobile applications, even browser-based ones, use GPS, cell towers and Wi-Fi to locate the phone geographically. We can construct location-aware learning. We can guide students on independent field trips. They can collect information and complete assessments of their learning. All of that can be shipped back to the institution through the learner record store. Mobile devices have accelerometers and gyroscopes that help the phone detect orientation (e.g. horizontal or vertical) and the rate of rotation around the x, y and z axes. With that, we can create applications that assess a learner’s coordination in completing a task that requires manual dexterity. Devices have cameras and microphones, both of which can be used to support rich field experiences.
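Browsers expose those sensors through standard events. The sketch below is illustrative only; some browsers require explicit user permission before delivering orientation data.

```typescript
// Illustrative sketch: listening to the orientation sensors from a
// browser app. Alpha, beta and gamma describe rotation around the z, x
// and y axes; an activity could use them to judge how steadily a learner
// holds or rotates the device.
window.addEventListener("deviceorientation", (event: DeviceOrientationEvent) => {
  const { alpha, beta, gamma } = event;
  console.log(`rotation z: ${alpha}, x: ${beta}, y: ${gamma}`);
});
```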

The smart pedagogy for M-Learning is one that recognizes these affordances and uses them – rather than shrinking a desktop experience into a smaller form factor.

An Example

Aside from our work at LodeStar Learning and at the university, my most recent encounter with this technology came from a serendipitous meeting with a local community leader who introduced me to Pivot The World.

Pivot The World (http://www.pivottheworld.com) represents a good starting point. It is a start-up company interested in working with universities, museums, cities, towns and anyone else interested in revealing the full richness of a location in terms of its history and cultural significance. It combines the freedom of movement of a mobile device with the device’s ability to detect location, overlay imagery and geographical information, and match what its camera sees to a visual database in order to retrieve related information. The combination of camera, maps, imagery, audio, location, and other services engages learners in a new kind of experience.

The Pivot The World founders and developers started in Palestine, have since applied their technology to a tour of Harvard University and are currently working with a volunteer group of history buffs to create a Pivot Stillwater experience in our own hometown.  At the north end of town, where there are condominiums, a simple swipe of the finger can reveal the old Stillwater Territorial Prison with elements of the prison preserved in the design of the new site.

If a university or museum wished to keep a record of student or visitor experiences with the application, then an integration with Tin Can (xAPI) would add that dimension. As users engaged with the content, statements of their experience could be sent to a learner record store.

Conclusion

LodeStar Learning’s mission is to make these technologies and capabilities accessible to instructors. We have done that with the addition and improvement of our templates. We have incorporated the ability to export any learning object with Tin Can capability. Now instructors can choose between SCORM 1.2, SCORM 1.3, SCORM Cloud, SimpleZip (for Schoology and other sites) and, most recently, Tin Can 1.0.

We have improved Activity Mobile Maker and added ARMaker (for geographically located content) and FlowPageMaker for a new style of mobile design.

We’ve already gone global.  Now we’re going mobile.  We’re embracing M-Learning and all of its amazing affordances.

 

Augmented Reality For Educators

Introduction

The New Media Consortium predicts the sharply rising use of Augmented Reality (AR) in higher education over the next five years. As with any new technology, I am always interested in how AR can be made viable for busy instructors – so that a reasonable effort yields a commensurate return. I’ll introduce a prototype project that can be replicated by instructors. But first, let’s take a broad look at AR.

Augmented Reality covers a wide spectrum of applications, which is reflected in the consortium’s description of AR as “the incorporation of digital information including images, video, and audio into real-world spaces. AR aims to blend reality with the virtual environment, allowing users to interact with both physical and digital objects.” (NMC, Horizon Report, 2016 Higher Education Edition)

In this article I walk through the making of a simple AR application with the LodeStar authoring tool, which now includes the ARMaker template. Any intrepid instructor can create something similar for his or her own course.

Our use of AR fits closely with a common use described in a research article that appeared in Computers & Education in March 2013, titled “Current status, opportunities and challenges of augmented reality in education”:

First, AR technologies help learners engage in authentic exploration in the real world, and virtual objects such as texts, videos, and pictures are supplementary elements for learners to conduct investigations of the real-world surroundings (Dede, 2009). One of the most prevalent uses of AR is to annotate existing spaces with an overlay of location-based information (Johnson et al., 2010a).

AR supporters make claims of deeper engagement of students, connection of academic content to ‘real world’ and deeper levels of cognition. TechTarget’s definition of Augmented Reality is that it is the “integration of digital information with the user’s environment in real-time. Unlike virtual reality, which creates a totally artificial environment, augmented reality uses the existing environment and overlays new information on top of it. “

You have already seen AR applications outside of education:

In watching football, you’ll notice the yellow first-down line painted across the television screen. That has stuck as a useful and accepted addition to the game. Other ideas were not so well received. Fox Sports’ glowing, streaking hockey puck was the culmination of a $2 million R&D project that got hockey fans…well, glowing mad.

More relevantly, in education, teachers use technology to create their own “auras” around, for example, works of art that suddenly come to life when scanned with the mobile phone camera. An aura can cause music to play, or a video to show, or an animation to display. Math students can point their smart phone at an equation and watch it jump to life on the screen (Aurasma).

The QR tag is a simple form of Augmented Reality. Special QR reader apps enable museum visitors, for example, to scan a QR tag and launch a web site devoted to the art exhibit and its interpretation. JISC, formerly the Joint Information Systems Committee and now a non-profit company, describes a project in England where students scan rare manuscripts with their smart phones and have digital facsimiles appear so that they can turn the pages and get supporting videos, text and images to help them interpret the old texts.

Finally, the University of Oklahoma library created a smart phone app that guides visitors by sensing their physical location and revealing information about nearby content resources. They placed Bluetooth beacons in strategic places. The beacons are set to transmit data at regular intervals. The smart phone receives a beacon’s unique id and as a result knows precisely where it is and what content should be displayed. Out of doors, the application uses GPS and the smart phone’s location services.

Imagining the Possibilities at a Simpler Level

I recently chatted with an environmental science professor at our university. Near our main campus we have a wonderful natural treasure called Swede Hollow. Swede Hollow is a wooded ravine at the foot of Dayton’s Bluff in East Saint Paul. Poor immigrant families settled in the hollow starting in the late 1800s. Phalen Creek once ran through it in full force. At the top of the bluff stood the Hamm’s Mansion until it burned down in the 1950s. At one end of the hollow stood the Hamm Brewery.

Swede Hollow is rich with historical, geological and natural interest. Of course, the environmental science prof had the knowledge to uncover the layers of significance of this area. We discussed a mobile application that would do just that. Students could visit the area with their cell phones and be presented with location-specific information that may not be readily apparent to the casual observer. For example, Phalen Creek is now “entombed’ in an underground tunnel that has attracted a following of urban adventurers.

The instructor has led student tours through Swede Hollow. On her tour, she mentions the changing appearance of trees during the seasons or the tunnel underneath and promises to show the imagery of urban adventurers when students return to the classroom. It is difficult to replace her personal touch with a digital application, but an augmented reality application could capture the instructor’s expertise – the information and the digital assets – and present it to students at specific locations. Students would be able to take the tour at their leisure – in a sense, asynchronously – spending more or less time at each location according to their interest. The dependency on the instructor’s availability would be removed.

About twenty miles from Swede Hollow is my home town – Stillwater, Minnesota. That’s where the story of our first prototype begins.

A working prototype

Stillwater is also rich in history, geography, plant and animal life, and politics. The same is true of many areas, and yet we pass through them at fifty miles an hour oblivious to the layers of interest that surround us or… remotely contemplate them from our computer terminal – perhaps in the context of an online learning class.

In Stillwater, we have the history of the saw mills, the bursting of a dam that sent tons of mud and debris down a ravine to reshape the downtown, the sandstone and limestone bluffs, the restoration of prairie grasses and oak savannas along the river, the wildlife, the reign of the lumber barons and the Victorian architecture. As in any area, all of this can be lost on the casual observer.

A walking tour can get us out of the car or away from the computer and into the world – aided by a smart phone and the captured knowledge of an educator like our environmental scientist.

Educators know the points of interest. Depending on their discipline, they know the civil rights history of an inner-city neighborhood; they know the trees, plants and shrubs featured in a tucked-away ravine; they know the source and destination of streams. With the help of technology, they can now tell their story to all who are interested in an unprecedented manner.

Of course, education aside, Pokemon, portals and anomalies have gotten people out of their chairs and into the world. The company Niantic created Ingress and Pokemon Go to get people away from their game consoles and wandering about their neighborhoods and cities in search of game features tied to locations through latitude and longitude coordinates. In the case of Pokemon Go, gamers search for uncaptured Pokemon found at specific locations; they must physically go to those locations. In the case of Ingress, gamers find portals that they try to either destroy or restore. In both games, people move about with their smart phones, going to locations, causing the app to display something of interest.

In contrast, the type of interaction that we propose is simpler but rooted in the richness of a particular discipline. We propose something that instructors can create with the help of a template and a little creativity. Students are led on a guided tour of an area where they are introduced to the history or geography of that area, or whatever matches the discipline. They are guided from point to point. Their instruction comes from observing the physical thing and hearing or reading about its significance, or from being challenged to take notes and draw conclusions from their observations, or any variation thereof.

In the project that we are building as a proof of concept, we explore the history of Stillwater. The City of Stillwater has already produced a walking tour. It is well done with vetted historical content and professionally produced media. Currently, visitors can access the Historic Downtown Walking Tour website and view each location from the convenience of their computers.

We propose that students travel to the location and experience all of the sights, sounds and smells of the location in addition to learning about its significance.

The current tour is concentrated in downtown Stillwater, both east and west of Main Street.

In our prototype, students are guided to a location and then given information on how to find the next location. In the following screenshots from the prototype, students start at the pergola by the river. Once there, they can access an audio presentation on the preservation efforts at the turn of the last century and the resulting Lowell Park. They are then guided to a mill, an old freight house, caves that stored beer kegs, and more.

We created the prototype by launching LodeStar and selecting the ARMaker template.  For each page we put in the precise location with the help of Google Maps and a Google Earth overlay.  For each page, we inserted images, typed text and imported audio that was matched to the location.  In the future, you will see the results of this project.  We are awaiting  permission from the city council for this ‘proof of concept’. In the meantime, we can tell you some of the benefits and challenges of designing this prototype.
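To give a sense of what matching content with coordinates involves, here is a hypothetical sketch of the kind of data each tour page might carry. The field names and the longitude value are illustrative; this is not LodeStar’s actual ARMaker format.

```typescript
// Hypothetical sketch of the data a location-aware tour page might carry.
// Field names and the longitude value are illustrative, not LodeStar's format.

interface TourStop {
  title: string;
  latitude: number;    // decimal degrees, set from Google Maps
  longitude: number;   // decimal degrees
  radiusFeet: number;  // proximity within which the content displays or plays
  text: string;
  imageUrls: string[];
  audioUrl?: string;
}

const pergolaStop: TourStop = {
  title: "The pergola at Lowell Park",
  latitude: 45.094156,   // example level of precision, as discussed later in this article
  longitude: -92.805,    // hypothetical value for illustration
  radiusFeet: 40,
  text: "Preservation efforts at the turn of the last century led to Lowell Park.",
  imageUrls: ["images/lowell-park.jpg"],
  audioUrl: "audio/lowell-park.mp3",
};
```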


Matching content with Latitude and Longitude Coordinates with LodeStar

Lessons Learned

The theme of the Stillwater walking tour is the ingenuity of humans to eke out their livelihoods from the natural resources of the area: lumber, wheat, and beer, to name a few. The walking tour covers the triumphs and the trials of the various local businesses and enterprises. It’s a sneak peek into the past.

To date, we learned several things from creating this walking tour. We’ll list some of the more important lessons:

  • Stay out-of-doors. Accurate locations come from GPS satellites. The results indoors will vary greatly depending on the location. When GPS is unavailable, locations are achieved through other, less reliable means. Whereas the GPS signals can give us coordinates that are two or three meters off target – in other words, fairly precise – alternative means may give us imprecise coordinates, which may be dozens of meters off target.
  • Add a fudge factor. Set each location with a proximity radius of 40 feet. That means that when students are within forty feet of the target, the content will display or play (see the proximity-check sketch after this list). Forty feet may seem like a wide radius, but once students are on a field trip and approaching landmarks, 40 feet is not a large distance at all.
  • Make it easy for students to know where the next location is. Have students follow a street or a path or a riverbank. Alternatively, give precise directions to the next stop.
  • Use text, images and audio. Video can pose a problem. Students will be connected through 3G or 4G. A typical 3G connection delivers about 2 megabits per second; 4G delivers 20 megabits per second or more – roughly ten times faster – so the video experience will be quite different for the two groups of users.
  • Use simple questions to check students’ understanding at a site, with feedback.
  • Be careful of making students walk great distances without frequent points of interest.
  • Consider visual and hearing impairments when designing the application.
  • Be mindful of students who can’t walk great distances. Distances are short on a map, but not in the field. Consider an alternative, shorter tour.
  • Instruct students to first load the project website into their browser when they have a good connection to the internet so that images and audio can get cached, resulting in a better playback experience for students.
  • When producing a self-guided tour, use Google Maps on the desktop to set locations with at least six decimal places of precision, for example, 45.094156. Google Maps allows you to zoom into a location and click to set a marker. Overlay Google Maps with Google Earth imagery to see exactly where you are and get very accurate locations. Copy the coordinates of the marker into your application. If you must walk the tour to set locations, download an app that gives you good coordinates. One example is LocMarker Lite, which allows you to add and record locations with six decimal places of precision. The compass app on the iPhone, by contrast, gives you coordinates in degrees, minutes and seconds, which is not enough resolution: a second of latitude is roughly 100 feet (about 30 meters).
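For readers curious about how the 40-foot fudge factor can be checked in software, here is a minimal sketch of a proximity test using the haversine formula. The function names and coordinates are ours and the approach is illustrative; it is not a description of how LodeStar implements its location check.

```typescript
// Minimal sketch of a proximity check using the haversine formula.
// Function names and coordinates are illustrative examples only.

const EARTH_RADIUS_FEET = 20_902_231; // mean Earth radius expressed in feet

function toRadians(degrees: number): number {
  return (degrees * Math.PI) / 180;
}

// Great-circle distance in feet between two latitude/longitude points.
function distanceFeet(
  lat1: number, lon1: number,
  lat2: number, lon2: number
): number {
  const dLat = toRadians(lat2 - lat1);
  const dLon = toRadians(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRadians(lat1)) * Math.cos(toRadians(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * EARTH_RADIUS_FEET * Math.asin(Math.sqrt(a));
}

// Display or play a stop's content once the student is inside its radius.
function isWithinStop(
  stopLat: number, stopLon: number, radiusFeet: number,
  studentLat: number, studentLon: number
): boolean {
  return distanceFeet(stopLat, stopLon, studentLat, studentLon) <= radiusFeet;
}

// Example: is a student at 45.094200, -92.805100 within 40 feet of a stop
// placed at 45.094156, -92.805000? (Hypothetical coordinates.)
console.log(isWithinStop(45.094156, -92.805, 40, 45.0942, -92.8051));
```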

Why it works

When we hear, see, read, discuss and reflect upon things, we encode information and experiences in semantically rich ways that help us retrieve the experience later and relate it to other knowledge. We experience the moment, the sights and smells. We note the texture of the object, its placement, its size, and we ponder the relationship of some newly presented content to this tree or building or riverway.

Augmented Reality can also challenge us to think critically about what we are seeing. I remember, when I was a boy, going on a technology-assisted field trip that I will never ever forget. The technology was the orienteering compass. We moved from location to location by being given a directional bearing and a number of paces. One of the locations was a tree that was obviously diseased. We were challenged to identify the disease and then introduced to Dutch elm disease. I had never known the devastating effects of disease on trees – and recalled the experience later in life when our own woods were ravaged by oak wilt.

Conclusion

This is a first attempt at AR. We have already published the ARMaker template with the latest release of the LodeStar eLearning authoring tool. You can download the trial version and immediately access the ARMaker template. Try it for your own class and give us feedback on how you designed your walking tour. Eventually, we will propose an AR assisted walking tour design pattern that reflects best practice.

Download LodeStar at http://www.lodestarlearning.com. Look for the Try link at the top for the trial version.  Select the ARMaker template.

Happy exploration.

The Explore – Validate Design Pattern

Introduction

As online instructors, we recognize that students benefit from interacting with content in a manner that truly makes them think.  And yet we find the task of creating interactive, meaningful content to be extremely challenging and time-consuming.

For some subject matter, interactive content that lets students manipulate the data and see different outcomes can be highly effective.  Marketing students can test the principles of the marketing mix by adjusting the amount invested in the quality of the product versus its advertising.  Civil engineering students might control the amount of ammonia in a wastewater treatment pond or the food to microorganism ratio.  Sociology students might explore the consequences of unequal distribution of wealth.  Health care students might explore the implementation variables of chronic care management.

To tease out the benefit of interactive content, let’s find a good example.  Suppose we pick the principles of composting.  That seems like an odd place to start, but we all understand composting at some level. How would an online instructor design an interactive lesson on composting that is effective and teaches the underlying principles?

Composting is bug farming.  Effective composting results from the right combination of carbon- and nitrogen-rich material, water, and heat.  Students can learn composting by doing, but that might take weeks, and without careful measurements and some guidance, they may not come to understand the underlying relationships and their effects.  They can learn from a handbook that teaches procedures, or from a science text that teaches principles.  In either case, their readings may or may not lead to real understanding.

In contrast, in an online environment, the principles of composting can be taught through interactive models.  Students could be presented with an interactive model and challenged to generate the most compost in the shortest period of time.  In response, students might add more carbon-rich material such as dry leaves to the compost.  Or change the moisture content.  Or change the ambient temperature.  Once students have tweaked and played with the parameters, their instructor could assess their understanding – do they truly understand the relationships, the principles, the cause and effect – and then invite students to apply their knowledge to building a compost pile of their own.

As mentioned, students could follow the procedures of composting without understanding the underlying principles.  Students could recite textbook statements without really thinking about them.  Online instructors must constantly ask the question: how much thinking are my students actually doing in my course?  Not reading.  Not quizzing.  Not reciting.  But thinking.

When we write about time-worn concepts such as interactivity and engagement, that is what we are driving at.  Interactive engagement affords us the opportunity to get students to think.  Discussions, projects, group projects and online examinations can certainly challenge students to think, but how can we, without computer programming knowledge, facilitate interactive engagement between students and the content in the manner alluded to above – in a manner that fosters curiosity, promotes genuine interest in the content, and puzzles students?

The Explore – Validate Design Pattern

The Explore – Validate Design Pattern gets students to think.  It is a form of interactive engagement that has, as one element, intense student-to-content interaction.

Interaction is a key word in online learning. Successful, effective online learning happens through students interacting with each other, their instructor and the course content.  Each type of interaction demands of the instructor special skills and intention.  With respect to student to student and student to instructor interaction, instructors can draw from their ability to foster interpersonal communications.  Good teachers know how to facilitate group discussions and engage students in Socratic dialog.  Although instructors must learn how to adapt their strategies to an online environment,  many of them have a good starting place. The third type of interaction, however, student-to-content, may arguably be the most challenging for instructors new to online learning.

Not all student-to-content interactions are equal. At the lowest level, passive eLearning involves very little interaction. Clicking buttons to page through content does not constitute interaction.  Clicking through a presentation on composting, for example, constitutes a very low level of interaction.  A higher level of student-to-content interaction might involve multimedia in the form of animations and video, drag and drop exercises and other basic forms of interaction.  A moderate level of interaction might involve scenarios, branched instruction,  personalized learning, case studies, decision making and the instructional design patterns that have been the basis of our past web journal articles.   The highest and most technical level of interaction might involve virtual reality, immersive games, simulations, augmented reality and more.

That said, the highest level of interactivity is not necessarily the best level for students. Interaction is essential insofar as it helps students achieve a cognitive goal, whether that relates to remembering, understanding, or applying. Interactions are useful only if they help students remember better, understand a concept or principle, or apply their learning. One can’t categorically say that fully immersive interactive games are better than animated videos or drag-and-drop interactions. If the objective is that students will remember essential medical terms, then a fully immersive environment may hinder that accomplishment. Richard Mayer refers to extraneous processing: the attention that the learner must give to features of the learning environment that do not contribute to the learning goal.  If extraneous processing is too high, it impedes the student’s ability to focus on relevant information.

How it works

Considering the type of learning that students must activate is critical in determining whether or not instructors should plan on higher levels of interaction. In my second example, students are introduced to Isle Royale. Students examine data related to the wolf and moose populations. They must draw inferences about how the rise and decline of one population affects the other. If this were a declarative knowledge lesson, students would simply need to recite the critical facts. How many moose were introduced to Isle Royale? How many wolves? What are the population numbers today? What were they at any given point? Students can recite those numbers without understanding the true nature of the interaction between the wolf and moose populations on the island. The real objective of the lesson is to understand feedback loops in ecological systems. Students arrive at this understanding not by reading facts and figures, but by asking what-if questions and manipulating the inputs of a simple simulation.

Asking what-if questions is an inductive approach.  Rather than being given a description of a law, for example, or a principle or concept, students infer the needed information from a simulation or a set of examples.

The deductive approach is the opposite.  Perhaps an overly negative view is that instructors who use a deductive approach simply state a principle or concept.  All of the students’ cognitive work is in listening and, perhaps, taking good notes.

Faculty may be skeptical or wary of inductive learning. It takes considerable time to set up; it seems less efficient. In my experience, by contrast, faculty commonly engage students in deductive learning. The instructor presents and explains a concept. Students take notes. Lectures are often characterized by the deductive approach.

The inductive method makes use of student inferences. Instead of explaining concepts, the instructor presents students with a model or examples that embody the concept. The student manipulates inputs and ‘infers’ what the underlying rules are.

Instructors who are critical of inductive approaches fear that students will make incorrect inferences. In my experience, inductive learning is more challenging to facilitate.  It is easier to state facts than to set up examples from which students infer them – especially given the hazard that students could infer the wrong facts.

In recognition of this, the instructional design pattern called Explore and Validate features a check-for-understanding activity. Explore and Validate is one form of interactive engagement.

An example

Explore and Validate offers an environment in which students manipulate models or examine examples, draw inferences and check their understanding in some manner in order to validate their conclusions.

For example, students may read cases in which victims express feelings toward their oppressors.  In a deductive approach, the instructor can simply define Stockholm syndrome.  The instructor may explain that hostages afflicted with this syndrome express feelings of empathy toward their captors.  An assessment might ask students to define Stockholm syndrome.  An inductive approach might involve students reading brief summaries of cases in which they “notice” that the victims become empathetic or sympathetic toward their oppressors.  Students can describe the syndrome, offer explanations and even label the syndrome.  The instructor would then contrast the students’ descriptions with a more formalized, clinical description.  The first part of the activity is the explore phase.  The second part is the validate phase.

In our example below, students are told about Isle Royale.  In the early 1900s, moose swam to Isle Royale from Minnesota.  Fifty years later, a pair of wolves crossed an ice bridge to the island from Canada.  In a lesson designed with the Explore-Validate instructional design pattern, an optional strategy is to ask students to think about and predict the outcome of a given scenario – in this example, what happens when a pair of wolves is introduced to an island with a finite number of moose.  Students might conclude that the moose population would eventually be annihilated – but that is not what happened historically.  As students contrast their original predictions with the simulation results, they may be struck by the difference.  As I’ve written many times before, this is cognitive dissonance – and, when applied correctly, it may stimulate learning.  When applied correctly, students will say “I didn’t know that” and want to probe more.  When applied incorrectly, students will simply be overwhelmed and shut down.

The key exploration in the moose-wolf example is with a model.  The model was generated by Scott Fortmann-Roe with a tool called InsightMaker.  InsightMaker is a free simulation and modeling tool.  It is easy to use and yet powerful.  It is cloud-based and works with the LodeStar authoring tool as either embedded content or linked content.  Models created with InsightMaker can be used to promote critical thinking in students.  The model can expose input parameters as sliders.  Students can change the value of an input and see the change in the output after they click the ‘Simulate’ button.

InsightMaker is made up of stocks, variables, flows, converters and more.  Stocks are simply containers for values such as population.  Variables hold values such as birth rate, death rate and interest rate.  Flows are rules that can perform arithmetic operations on variables and affect the value in stocks.  Students can click on the flow affecting the value of a stock and see its rules.  They can explore all of the relationships.  In the case of a feedback loop, where the output is combined with the input to affect a new output, students can study the relationships and gain insight into dynamic systems.

Instructors can also simulate the spread of disease through a population.  They can control the probability of infection and the degree to which the population can migrate away from the infected.  They can control the length of infection and the transition to a recovered state.  The instructor can model one person and then generate a population of such persons.
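For readers who want to see the stock-and-flow idea expressed in code rather than in InsightMaker’s visual editor, here is a minimal sketch of the moose-wolf feedback loop described above. The rates are invented for illustration; they are not drawn from the InsightMaker model or from real Isle Royale data.

```typescript
// Minimal stock-and-flow sketch in the spirit of the Isle Royale model.
// All rates below are invented for illustration only.

interface Populations {
  moose: number;  // stock
  wolves: number; // stock
}

// One simulated year: flows adjust the stocks, and the stocks feed back
// into the flows – a feedback loop between predator and prey.
function step(p: Populations): Populations {
  const mooseBirthRate = 0.4;      // variable (hypothetical)
  const predationRate = 0.002;     // variable (hypothetical)
  const wolfDeathRate = 0.3;       // variable (hypothetical)
  const wolfGainPerKill = 0.0005;  // variable (hypothetical)

  const mooseBirths = mooseBirthRate * p.moose;            // flow into the moose stock
  const mooseKilled = predationRate * p.moose * p.wolves;  // flow out of the moose stock
  const wolfBirths = wolfGainPerKill * p.moose * p.wolves; // flow into the wolf stock
  const wolfDeaths = wolfDeathRate * p.wolves;             // flow out of the wolf stock

  return {
    moose: Math.max(0, p.moose + mooseBirths - mooseKilled),
    wolves: Math.max(0, p.wolves + wolfBirths - wolfDeaths),
  };
}

// A what-if question: start with a pair of wolves and a finite moose herd.
let populations: Populations = { moose: 500, wolves: 2 };
for (let year = 0; year < 50; year++) {
  populations = step(populations);
}
console.log(populations);
```

Even this toy version exhibits the feedback loop: more moose support more wolves, and more wolves suppress the moose, which in turn limits the wolves – the same dynamic students can explore with sliders in the InsightMaker model.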

Models are an excellent way to engage students – to get them to explore, to ask what-if questions and to notice patterns.  In public health, students can change the parameters of a specific disease like the Zika virus.  In economics, students can increase supply or demand.  In engineering, students can work on wind resistance models.

With the LodeStar authoring tool, instructors can link to or embed an InsightMaker model.  They can then insert a series of questions to check students’ understanding and provide feedback.  The link below shows a simple example of the Isle Royale model and the Explore-Validate pattern.

 


Screenshot of an activity built with the LodeStar eLearning authoring tool and the ActivityMaker (Mobile) template

www.lodestarlearning.com/samples/Isle_Royale_Mobile/index.htm

 

Conclusion

We have been listening to students. The way they describe their online learning experience often seems pretty humdrum.  Instructors don’t need to rely on publishers to create stimulating interactive lessons.  They can take matters into their own hands with tools like InsightMaker and LodeStar.  InsightMaker fulfills the Explore part of the activity; LodeStar fulfills the Validate phase.