Learning Experiences in the 3rd Dimension


Introduction

Great learning experiences can be crafted from 3D technology. The simplest form of 3D technology is the photosphere. It is accessible to teachers and trainers and can be used quite effectively. In this article, I’ll show off a demonstration project and describe the use of 3D models, a photosphere, text and graphics, video, and audio.

Two years ago, I wrote about using photospheres in online courses. Today, ‘interactive’ photospheres are a critical strategy that designers of every stripe should master. Currently, the use of photospheres is supported by the proliferation of 3D models, photosphere projects, new services, improved technology, and new features in our own authoring software.

So, let me parse this mixed-media approach. To start, a photosphere is a 360-degree panoramic image that can be displayed in a viewer. Learners can ‘navigate’ the image by dragging the view in any direction and zooming in and out.  Google Street View is the best-known example, but photospheres abound in art museums, tourist bureaus, real estate sites, and social media.

The photosphere is deceptively simple and hides a more profound change in the web.  As we all know, browsers support the trinity:  HTML, JavaScript, and CSS.  All three technologies have been evolving.  Recently, browsers began supporting a variety of new JavaScript APIs, including WebGL (Web Graphics Library), which makes 3D and 2D rendering possible in a browser without the need for plug-ins.  Because of WebGL, browsers can benefit from hardware graphics acceleration to display (render) complicated graphics.  The key is hardware acceleration: processing graphics in a dedicated graphics processing unit (GPU) is many times faster than in the main CPU.
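You can check WebGL's availability for yourself.  The following is a minimal, generic browser snippet (not LodeStar code) that tests whether WebGL is supported:

// Ask the browser for a WebGL rendering context on a throwaway canvas.
const canvas = document.createElement("canvas");
const gl = canvas.getContext("webgl");
console.log(gl ? "WebGL is available: 3D rendering can use the GPU."
                : "WebGL is not supported in this browser.");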

The photosphere viewer relies on WebGL and, through it, hardware acceleration.   To display a photosphere, a distorted image is mapped onto the inside of a 3D sphere.  Our perspective is from the center of the sphere with a narrow field of view.  By dragging the image, we pan the sphere and bring hidden parts of the image into view.
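Under the hood, viewers typically hand this work to a 3D library.  Here is a minimal sketch of the technique using the popular three.js library; the image file name is hypothetical, and LodeStar handles all of these details internally:

import * as THREE from "three";

// Create a scene and a camera positioned at the center of the sphere.
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, innerWidth / innerHeight, 0.1, 1100);
const renderer = new THREE.WebGLRenderer();
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);

// Map the distorted (equirectangular) image onto the inside of a sphere.
const geometry = new THREE.SphereGeometry(500, 60, 40);
geometry.scale(-1, 1, 1); // invert the sphere so its inner surface faces the camera
const texture = new THREE.TextureLoader().load("gallery-equirectangular.jpg"); // hypothetical image
const sphere = new THREE.Mesh(geometry, new THREE.MeshBasicMaterial({ map: texture }));
scene.add(sphere);

// Dragging would rotate the camera to pan the sphere; here we render a single frame.
renderer.render(scene, camera);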

With the help of LodeStar, an eLearning authoring tool, we can add interactivity.

To best illustrate interactive photospheres, I created this demonstration project based on one of my loves, the Group of Seven.

A little background:  I went to school in Canada. Until the thirteenth grade, the study of history was the study of British, American, and Russian history.  The study of literature was primarily European and British literature. The study of art was primarily of British and French art.  In grade 13, that all changed.  We studied Canadian history, literature, and art.  For me, that was transformative.  Central to Canadian art was the Group of Seven.  Their subject was primarily the Canadian landscape. Until recently, I could find Group of Seven paintings only in the McMichael Canadian Art Collection in Kleinburg, Ontario. So, I decided to create a gallery of my own.  Just a small one for demo purposes, featuring two of the artists associated with the Group of Seven.

Visit the link below and, if your curiosity is piqued, read on for the details of how I made this learning experience. Launch the demo and, on the second page, drag your mouse across the scene.

Art Gallery (lodestarlearning.github.io)

Virtual Group of Seven Gallery Demonstration Project

The details

First, I needed a model of an art gallery. I went to TurboSquid and bought one for $19.  I could have found a photosphere on Flickr or elsewhere, but I wanted control of the objects in my gallery.  I could have built a 3D model from the ground up – but I wanted a shortcut.

The model came in the form of a DAE, which is a 3D interchange format.  The DAE format is based on the COLLADA (COLLAborative Design Activity) XML schema.  (This is a standard format that can describe 3D objects, effects, physics, animation, and other properties. All the major 3D modeling tools can import it.)   I then brought the model into Blender.             

Blender is a free 3D modeling tool and it is quite incredible.           

3D Model in Blender

In Blender, I edited the model and added my own camera.  To render a photosphere, I set the camera type to panoramic with an equirectangular projection. Equirectangular is a projection type used for mapping spheres onto a two-dimensional plane. This results in a very distorted image when viewed normally.  Viewed in a photosphere viewer, the image looks spectacular.

Next, I imported the image into LodeStar. With the help of the LodeStar’s interactive image editor, I drew hotspots over the doors and imported images of paintings that I positioned in the art gallery. Technically, the images become image overlays. As the viewer moves the image up and down and across, the imported images adjust accordingly by scaling, skewing, and repositioning.

Interactive Image Editor in LodeStar

In the scene above, the imported images appear above the benches.  A hotspot sits over the doorway.  When a learner clicks on the doorway, LodeStar executes a branching option.  In this case, that means a jump to the next gallery.

In the example, two gallery rooms are featured. The first gallery exhibits two paintings by Lawren Harris.  The video icon displays a YouTube presentation on Harris’ work. The second gallery exhibits two paintings by Emily Carr, along with a wonderful YouTube presentation on her work.

Conclusion


Photospheres are but one part of 3D technology.  Browser support for WebGL makes it possible for us to use 3D models interactively. Students can view 3D models from any perspective and manipulate them. The possibilities are endless. LodeStar and other tool makers must make it easier to load these models and make them useful for educational and training purposes.  Just as we support functions that can change an image or element’s rotation, position, opacity, and color, we must provide functions that can manipulate 3D objects.

We are currently working on some prototypes and would love to hear from you about what would most benefit students. Please send us your comments.

Meeting the CCAF Challenge

By Robert “Bob” Bilyk

Introduction

I recently watched Ethan Edwards present ‘Cracking the e-Learning Authoring Challenge’.  This post is my attempt at cracking the e-Learning authoring challenge.

But first a little background.

As many of you have the privilege of knowing, Ethan Edwards is the Chief Instructional Strategist for Allen Interactions.  Cracking the challenge is all about building interactivity in an authoring tool – specifically, CCAF interactivity.  CCAF is an acronym for Context-Challenge-Action-Feedback.  The four components of CCAF are part of Michael Allen’s CCAF Design Model for effective learning experiences.  Michael Allen is the founder of Allen Interactions, the author of numerous books on eLearning, and the chief architect of Authorware and ZebraZapps.  Both authoring systems were designed for people with little technical expertise to be able to build – you guessed it – CCAF learning experiences.

In Ethan’s presentation, he demonstrates building a CCAF activity with Articulate Storyline.  In a nutshell, the CCAF learning experience is the experience of “doing”.  Rather than reading or viewing content, the learner experiences first-hand the application of principles, concepts, strategies, and problem-solving in completing a task and succeeding at a challenge.

In Ethan’s demo, his task is to detect a refrigerant leak.  The learner is shown refrigeration equipment and given a leak detector.  The learner doesn’t at first read a PDF or watch a video but performs an action.  In CCAF activities, text and videos might come in the form of feedback to a learner’s action.

Some of the CCAF learning experiences that I designed include running a multiple hearth wastewater incinerator, troubleshooting a cable network, supporting the adoption of a special needs child, designing an online class, assessing risk of recidivism, and, most recently, searching for documents in a document management system.  In all cases, most of the learning came from being immersed in a ‘real world’ setting, presented with a challenge, and given feedback as a result of learner actions.

Ethan’s presentation piqued my curiosity and a bit of self-reflection.  He lists things that are essential in an authoring tool to enable the design of a CCAF learning experience.  As a toolmaker, I explored each of the items on his list and I applied them to a small project built with our own LodeStar eLearning authoring tool. 

As we explore each item on Ethan’s list, I’ll illustrate with LodeStar.  If you follow along, you’ll see the development of a simple CCAF application.  You’ll learn about the components of CCAF.  And you’ll also learn a little about LodeStar and its capabilities.

But first an important caveat. CCAF comes in all forms, shapes and sizes. Ethan’s example and my example happen to be very simple simulations. The principles of CCAF are not limited to simulations. They can be applied to anything that requires action on the part of the learner — which includes making a decision, crafting a plan, analyzing and solving a problem — a host of things.

This is but one example of CCAF to illustrate its principles and test whether or not our authoring tool is up to the challenge.

Introduction to the Demo Application

The objective of the application is for learners to test an electrical outlet and determine which wires are hot or ‘energized’.  In completing this task, the learner must turn on an electrical multimeter and connect its probes to the various wires in an electrical outlet.  A multimeter is a measuring instrument that typically measures voltage, resistance, and current.  Once someone has learned the difference between these things, the practical skill is in choosing the right setting for the task and safely using the meter to complete the task. 

So that’s the challenge:  find the hot wire with a multimeter.  The context is a simple residential electrical outlet. 

Typical eLearning applications would use text, graphics and video to illustrate the use of the multimeter and explain underlying concepts.  CCAF applications challenge learners to complete the task in a manner that is an educational approximation of the ‘real thing’.  Text, graphics and video can still offer explanations, not in lieu of the real-world task but as a supplement to it, often in the form of feedback.

A LodeStar Application: Testing an Electrical Circuit

Basic Capabilities

But let’s start with an overview of the basic requirements.  To paraphrase Ethan, an authoring tool must have these capabilities:

  • Complete visual freedom
  • Variables
  • Alternative branching
  • Conditional logic
  • Action/response structures

I’ll elaborate on each of these requirements in my demonstration. 

Complete Visual Freedom

LodeStar combines HTML flow layout and SVG layout.  Images imported into the HTML editor are placed in the HTML flow and are laid out according to the rules of HTML.  Images can also be taken out of the flow and assigned a CSS rule so that text flows around the image.

In addition, LodeStar authors can use the Scalable Vector Graphics (SVG) canvas to lay out graphics freely in any position on the x and y axis.

LodeStar’s SVG Canvas

In other words, the graphical elements on the SVG canvas are laid out freely.  The SVG canvas itself is just another HTML element.  Depicted below is a flow of HTML elements like text, images, divs, tables, etc.  The SVG canvas is in the ‘flow’ right along with them. Inside the canvas, graphical elements can be positioned anywhere, but the canvas itself follows the HTML document flow, shrinking and expanding as needed.

The visual freedom comes from LodeStar combining the benefits of a responsive HTML flow with the precise positioning of an SVG canvas.

HTML elements are laid out on the page in a flow. If the page width narrows, the element isn’t by default clipped. It’s just bumped to the next line. The SVG canvas flows right along with the other elements. Its contents, however, are positioned with local XY coordinates.
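A generic sketch of the idea (ordinary HTML, not LodeStar’s own markup): the viewBox lets the canvas scale with the page while its contents keep their local XY coordinates.

<p>Text above the canvas flows normally.</p>
<svg viewBox="0 0 800 450" style="width: 100%; height: auto;">
  <!-- Elements inside the canvas use local XY coordinates. -->
  <circle cx="120" cy="90" r="40" fill="steelblue" />
  <text x="200" y="100">Positioned at exactly x=200, y=100</text>
</svg>
<p>Text below the canvas resumes the flow.</p>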

I started with a multimeter image that I took from Pexels.com, a repository of free stock photos.  I used Photoshop to cut out the dial and imported it in the SVG canvas as a separate image.  I did this because I wanted the learner to be able to rotate the switch to place the multimeter in the right mode.  I also imported the image of an electrical box so that I could draw wires overtop.

Variables

 As I wrote in the Humble Variable (The Humble Variable | LodeStar Web Journal (wordpress.com)), variables are critical to some eLearning designs.  In this example, I need to store the position of the multimeter switch.  That’s what variables do.  They are storage places in the computer memory.  As the learner clicks on the switch, the dial rotates.  As an author, I must store the value of that rotation.  If the value of the rotation is 40 degrees, the code judges the switch to be in the right position.

To enter the code that uses the variable, I right-click on the switch and choose ‘Select Branch Options’.  Branch Options are basically things that happen as a result of displaying a page, clicking on a button, choosing a multiple-choice option, or doing one of many other things.

Branch Options can be as simple as turning a page or as complex as executing a whole list of instructions. The following is a basic example of the latter:

The Multimeter code

var rotation = getValue("dialRotation");
rotation += 10;
setValue("dialRotation", rotation);
changeRotation("dial", rotation, 13, 27);

if (rotation % 360 == 40) {
    changeOpacity("display", 1);
    appendValue("actions", "Turned on multimeter. <br>");
}
else {
    changeOpacity("display", 0);
}

This code looks complicated to a non-programmer.  But it is not.  It just takes practice to write.  It’s on the same difficulty level as an Excel formula.

Here is the same code but with an explanation (in italics) underneath:

var rotation = getValue("dialRotation");

get the value of dialRotation from long-term memory and assign it to a local or temporary variable named ‘rotation’

rotation += 10;

add 10 degrees to the value of rotation.  In other words, rotation = the old value of rotation plus 10.

setValue("dialRotation", rotation);

store the new value in long-term memory in a location called ‘dialRotation’

changeRotation("dial", rotation, 13, 27);

change the property of a graphic with the ID of ‘dial’.  All LodeStar graphics can be assigned an ‘ID’.

More specifically, set the rotation property to the new value of rotation.  Pivot the rotation at the precise point that equals 13% of the width of the SVG canvas and 27% of the height of the canvas.  That point is the center of the dial in its current position on the canvas.  If the dial were in the dead center of the canvas, we would use 50, 50.

if (rotation % 360 == 40) {

This line can be simplified to if(rotation == 40).  I used the modulo operator (that is, ‘%’) in case the learner kept rotating the dial around and around.  If rotation = 400, then 400 % 360 would equal 40.  360 divides into 400 once, with a remainder of 40.  So, if rotation is equal to 40, then do the following:

changeOpacity("display", 1);

change the opacity of a graphic with the ID of ‘display’.  This is the text box used to show the voltage.

appendValue("actions", "Turned on multimeter. <br>");

store the learner’s actions in long-term memory in a place called ‘actions’

}

else {

changeOpacity("display", 0);

if the rotation of the dial does not equal 40, then shut off the display by changing its opacity to 0.

}

The Probe Code

I won’t explain the probe code in as much detail.  Basically, when you drag the red or black probe, the following code is executed.  It essentially checks whether or not the probes are in the right spot.  If they are, the multimeter display shows 110 volts.

var condition1 = isOverlap("RedProbeTip", "BlackWireBTarget");
var condition2 = isOverlap("BlackProbeTip", "box");

if (condition1 == true && condition2 == true) {
    changeText("display", "110.0");
    appendValue("actions", "Moved red probe to correct position. Black probe in correct position.<br>");
}
else if (condition1 == true) {
    changeText("display", "0");
    appendValue("actions", "Moved red probe to correct position.<br>");
}
else {
    changeText("display", "0");
    appendValue("actions", "Moved red probe to incorrect position.<br>");
}

These are the drag branch options that are tied to an object with a specific ID. 

Red probe in place; black probe is not. Therefore the meter shows ‘0’.
Red probe in place. Black probe in place. Meter shows 110 volts.

Alternative branching

Once the learner has tested the wires with the probes, with one probe connected to the wire and the other grounded, then the learner must select A, B, C, or D.  Here’s where alternative branching comes in.  Learners who select the right answer might go on to a more difficult scenario.  The above scenario is as easy as it gets.  Perhaps they must do a continuity test to detect where there is a break in the circuit.  Learners who select the wrong answer can be branched to a simple circuit or given an explanation that one black wire is coming directly from the power source, and the second black wire is passing on that power to the next outlet or switch.

CCAF applications accommodate the differences in learners.  The application can alter the sequence of experiences based on learner performance.  This is a profoundly different thing than typical eLearning applications where every learner reads the same text, watches the same videos, and completes the same quiz.

Conditional Logic

Ethan also lists conditional logic as a basic requirement of CCAF applications.  Conditional logic comes in the form of if-else statements as evidenced by the code.  Conditional logic also comes in the form of alternative branching.  Select the wrong answer and then get help.  In LodeStar, conditional logic is supported by not only its language and branch options but also by logic gates. 

In the display below, we see what happens when the learner reaches a gate.  (Incidentally, learners don’t actually see a gate.  When they page forward, the application checks the gate’s logic and then branches them according to some condition.)  In this example, the author might configure the gate with a pass threshold.  Let’s say 80%.  If the learner meets or exceeds a score of 80%, they are branched to the ‘Results’ page.  If not, they may be routed to Circuit Basics.  Follow the dotted lines.

Branches at the ‘page’ level are visualized in the Branch View.

Action/response structures

In our example, the learner moves the probes around.  If the multimeter is turned on, the learner sees a voltage display.  The action is moving the probe. The response is a voltage display. 

First, this is a ‘real world’ action and ‘real world’ response.  I write ‘real-world’ in contrast to what happens in a typical multiple-choice question.  In a multiple-choice question, the learner clicks on a radio button and possibly sees a checkmark.  That’s only ‘real-world’ to an educational institution.  The world doesn’t present itself as a set of multiple-choice questions.

Second, when the learner sees a voltage display, that is feedback in the CCAF sense of the word.  The learner does something and then gets feedback.   Now, in our example, we did choose to combine ‘real-world’ feedback with a multiple-choice question.  Ultimately, the learner is asked to choose the letter next to the ‘hot’ wire.  In our example, we logged the learner’s actions and can unravel how they arrived at their final decision.  Did they connect the red probe to the right wire and did they ground the black probe?  If they selected the right answer but didn’t perform the correct actions that would lead to the right answer, we know they haven’t learned anything at all.

Conclusion

Authoring tools that enable one to create CCAF must have these capabilities: complete visual freedom, variable support, alternative branching, conditional logic, and action/response structures.

The hot wire example is a very simple simulation.  But, as I wrote, the concept of CCAF isn’t restricted to this type of simulation.  CCAF can be found in decision-making scenarios, for example. The learner might be placed in a situation and challenged to make the right decision or say the right thing.  That too is CCAF.  CCAF lies at the heart of effective learning experiences.

eLearning Strategies to Support Memory Recall

Introduction

At the university where I worked for eight years, occasionally I observed non-traditional students in class well into the evening, struggling to stay alert, struggling to soak it in, trying to make something better for themselves. Several years earlier, I watched a new employee at a software company resign in utter defeat. Nothing he had studied before in terms of software language, database, and mathematics prepared him for a new domain of knowledge.  It was all foreign, and it was disheartening, and it was delivered in a manner that was all too much.

Late evening classes or eight-hour training days push more and more information at the learners, until they literally break down, quit, or somehow miraculously hang on to fight another day.

The tremendous tax on learners is not unusual in either the corporate or the academic environment.  Both schools and companies place a heavy demand on the learner’s ability to remember things. 

The constraints of human memory!  Our lack of understanding of memory would be almost humorous if it weren’t for the wasted effort of students and employees alike.  In this vacuum of understanding, myths, falsehoods, and deceptive practices have filled the void.   Fortunately, we have people like Will Thalheimer (The Debunker Club : Debunking Resources – The Debunker Club) and the authors of Urban Myths about Learning and Education to help set us straight.

The Forgetting Curve

What we do know, and what research supports, is that we are wired to forget.  Many of us cite Hermann Ebbinghaus’ ‘Forgetting Curve’.  The forgetting curve is real and, in some cases, very steep depending on a number of factors, but as Dr. Thalheimer points out, you just can’t put a number on it.  You can’t say with any certainty, for example, that learners will forget 70% of what they have learned within a day.

Let’s consider the forgetting curve just for a moment, and then we’ll turn to eLearning.

The forgetting curve was the outcome of research done in the late 1800s by Hermann Ebbinghaus.  He scientifically observed his own recall of nonsense syllables.  He made up lists of three-letter nonsense words and committed them to memory.  Once he successfully memorized 100% of a list, he attempted to recall the list.  The forgetting curve shows that he forgot 42% of the words within 20 minutes.  After a day, he retained only 33% of the list.

Hermann Ebbinghaus’ Forgetting Curve

We know that people forget, perhaps at disheartening rates, but the rate of forgetfulness is based on dozens of factors.  Are these new employees being introduced to unfamiliar material, or are they seasoned employees?  Do they have any prior knowledge that will help them organize new information?  Are they paying attention or are they distracted?  Are they motivated to learn – intrinsically or with an external reward?  Is there a threat if they don’t learn?  Is there too much of a threat, which inhibits their learning?  Are they just trying to earn CPE credit?  Are they taught how to recall the information in the right place at the right time for the right reason?  Is the material difficult?  Are they asked to recall the information?  How many times?

Try placing those variables in a formula.  It’s impossible. 

We know that the forgetting curve is real.  It has been replicated recently (Replication and Analysis of Ebbinghaus’ Forgetting Curve (nih.gov)) and it will accurately mirror our students’ or employees’ rate of forgetfulness if we do not:

  • Help learners recall prior knowledge
  • Help learners organize new knowledge
  • Provide storage and retrieval cues that will help them use the information in the right context
  • Practice retrieval of the new knowledge
  • Space the retrieval over time
  • Integrate the new knowledge with other knowledge
  • Apply the new knowledge before forgetting

This is where eLearning plays a role. Oftentimes, trainers are busy workers or busy teachers who can’t address deficits in prior knowledge, for example, or even assess prior knowledge, or fit spaced practice or simulated application into their training.

That is where I think eLearning can shine. 

I know, I know.  I’m an eLearning developer and an eLearning authoring toolmaker.  But there are reasons why I chose this field.  This is one of them.

The design of eLearning experiences can help improve the training experience, even if the latter is traditional face-to-face teaching.  As I’ve observed, many people dread eLearning because of the page-turner drudgery they’ve been subjected to.  Medical workers, lawyers, accountants, and anyone with continuing education demands have had too many bad self-study experiences.   In my current company, group-live (face-to-face) instruction is preferred over eLearning. That doesn’t, however, eliminate the option of eLearning. As a pre-training preparation or a post-training reinforcement and application, eLearning can still play a role.

Against this backdrop, here are some strategies or designs that can help:

Plan the training or academic curriculum to include pre-training activities and post-training reinforcements.  Make room for recalling prior knowledge in the training or lesson plans of future courses.

Flip the training.  That means, use eLearning (or self-studies) to present the content and use face-to-face training time to observe student performance and provide feedback. Data from 317 studies shows that flipped classroom interventions produced positive gains across all three learning domains (To Flip or Not to Flip? A Meta-Analysis of the Efficacy of Flipped Learning in Higher Education – Carrie A. Bredow, Patricia V. Roehling, Alexandra J. Knorp, Andrea M. Sweet, 2021 (sagepub.com))

Pre-training

Let the post-training assessments for the last course or training session be the pre-training assignments for the new material — not as assessments, but as highly scaffolded activities with prompts, hints, feedback, textbook references, video helps, and more.  The point is to help recall and to prepare learners for what lies ahead.

Design activities that help learners recall vocabulary, basic concepts, laws, principles and procedures.  Activities can help prompt that recall and reduce the cognitive load of the new stuff.  If an accounting teacher makes references to cash or accrual accounting, do you want students struggling to recall the terms or do you want them paying attention to the new information?  It’s hard for them to do both.

Use flashcards, crosswords, matching, categorization, and other activities.  They’re not as sophisticated as things I’ve discussed in past posts, but they can play a useful role in helping recall.

Embed a video or a short Powtoon presentation.

Use quizzes with circular queues (missed questions get repeated) or variable interval queues (missed questions get repeated at spaced intervals).
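To illustrate the circular-queue idea, here is a minimal sketch in generic JavaScript (not LodeStar’s built-in implementation); askLearner is an assumed helper that presents a question and reports whether the answer was correct:

// Missed questions return to the back of the queue until each is answered correctly.
async function runQuiz(questions) {
  const queue = [...questions]; // copy so the original array isn't consumed
  while (queue.length > 0) {
    const question = queue.shift();
    const correct = await askLearner(question); // hypothetical prompt/response helper
    if (!correct) {
      queue.push(question); // re-queue the missed question for another pass
    }
  }
}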

Make it fun.  Gamify it.

Post-training

All of the pre-training suggestions apply to post-training as well.  But you can do even more.


Interactive Storyboards

This strategy walks the learner through the presented content in a storyboard fashion.  In the interactive storyboard, however, the learner must fill in the missing pieces. Recently, a specialist from our HR department presented on employee feedback and the different roles that in-charges, supervisors, and talent advisors play in giving feedback to accountants and auditors.  She talked about a process that included feedback in review notes, one-on-one meetings with supervisors, and regular meetings with talent advisors.  The post-training activity can follow along in the life of an accountant but leave blanks for the learner to fill or questions for the learner to answer. It causes the learner to retrieve important elements of the presentation and become an active participant in reconstructing the information. When the learner gets it wrong, that’s an opportunity for feedback!

An interactive Storyboard, created with the LodeStar eLearning authoring tool

An added benefit to the activity is that we can see how learners experienced the post-training activity through the xAPI statements that the (CMI5-conformant) activity generated.  In the following screenshot from the Learning Record Store, we can see that this employee missed the point that there is a connection between one-on-one meetings and talent development meetings.  We also see that this employee did hit the results page with a decent score the first time around.  The employee satisfied the requirements of the assignable unit (AU) and completed the course. That tells us a lot.  If we were to analyze all of the items that employees missed, we could either improve the presentation or improve the questions.

xAPI statements, generated by an activity authored in LodeStar

Embedded Discussions

Higher education instructors often invite students to discuss topics online after a presentation.  There is a reason for this. At the most elemental level, it forces recall of the presentation. At a higher level, it generates new knowledge as students hear differing perspectives.

In my time in higher ed, I’ve seen this done well and I’ve seen it done poorly.  My poster child for doing it right was a marketing instructor who simulated product advertising pitches in a discussion forum.  My hunch is that online discussion in corporate training environments is rarer.  To my point, our corporate Learning Management System (LMS) doesn’t even offer a discussion board. 

The following screenshot depicts an activity prototype with an embedded discussion board.  For this prototype, we used Tribe from Tribe | A Customizable Community Platform.   Tribe allows you to create and embed your discussion board.   (I’m not necessarily endorsing Tribe.) The strategy is to refresh employees on the fundamental principles of giving and receiving feedback and then ask them to discuss what works for them.

The key idea is to immerse learners in the content with enough information to prompt their recall of the training.  Then we invite them to share their insights or strategies with others.  They don’t need to leave the activity and log in to another service.  They can share their thoughts right there and then. 

This is an important idea in a general strategy that we’ve been working out called 3Di.  That means delivery of interactive content, discussion, and then decision.  Students apply what they have both learned and discussed to make a decision. 

A discussion forum embedded in an eLearning activity

Staged Journals

We first developed this strategy for a literature teacher.  She taught students how to be analytical of fairy tales.  She instructed them in Propp analysis, based on the work of the folklorist Vladimir Propp.  In the staged journal technique, students would be presented with one step or stage of the analysis.  They would complete the step and go on to the next.  In the end, they had a journal compiled from all the steps.

The screenshot below depicts an employee who types in his greatest difficulty when asked to give a subordinate corrective feedback.  The learner brainstorms difficulties, and then brainstorms remedies. 

Here is an excerpt from a journal that compiles it all together in a feedback summary.

A compiled journal

Conclusion

Face-to-face instruction may have its supporters, but even this delivery type should include pre-training and post-training eLearning activities.  We know from research and from our own surveys that students and employees forget too much of what we teach.  The amount and rate of forgetfulness may not precisely follow Ebbinghaus’ curve but unless we address forgetfulness, students won’t achieve the desired outcomes of the training. 

More in-depth activities might include decision-making scenarios and simulations.  I’ve written about those in past articles but, in this post, I have featured activities that can be quickly and easily generated.  All three activities represent strategies that can help in the reflection and recall of training.   

Learner Experience Design

Introduction

Learner Experience Design has captured the attention and the imagination of just about everybody.  Some have cast learner experience design (LXD) as a discipline in direct opposition to instructional design; others consider LXD as a rebranded instructional design.

My own perspective comes directly from my community of practice.  For one, I worked as an instructional designer for creative studios that practiced learner experience design well before it became a thing.  We worked in teams that blended the disciplines of user experience design, cognitive psychology, learning technology, and design thinking, which included ideation and prototyping.  LXD as a discipline captures the very best of the principles that are espoused in the CCAF (Context-Challenge-Action-Feedback) Model, the processes of design that include situational and user analysis, successive approximations, sketches, quick prototypes, a focus on the user, and a focus on doing.  The process of creating Allen Interactions’ ZebraZapps, an eLearning authoring tool, included the best of design thinking and user experience design.

So what is Learner Experience Design?

So for me, LXD is what we’ve been doing for years, and that is:

  • Centering on the learner versus the content (Dee Fink)
  • Focusing on the experience of the learner — on the doing (CCAF, problem-based learning)
  • Applying how people learn (cognitive science)
  • Empathizing, defining, idea-generating, prototyping, and testing (Design Thinking)
  • Following the principles of User Experience Design (Human Factors)
  • Collecting and analyzing data (Data analytics with the help of SCORM, and now xAPI, CMI5)
  • Using learning technology as enablers or affordances
  • Recognizing that formal training is but one part of improving human performance

In my view,  LXD is the power of all of these things combined under one label.  To illustrate the interplay of the learner, experience, cognition, behavior, UX, Design Thinking, data, technology, and human performance, I’ll draw upon a current project. 

An Example

The project goal is to help supervisors act more like coaches than formal evaluators.  The context is public accounting.  CPAs require deep technical skills and, as they progress in their careers, a host of success skills that include business development, leadership, supervision, and more.  In Minnesota, for example, CPAs complete 120 credits every three years to maintain their license.  They must also routinely attend trainings and updates related to changes in the law, technology, and business practices. 

In addition to this continuous training, the company seeks to improve employee retention, maintain good morale, and continue to grow rapidly.  To achieve its goals, the company adopted an employee engagement system that, among other things, helps supervisors collect feedback on employees from their tax reviewers or audit in-charges. More importantly, the company is switching from an annual review to monthly meetings that help supervisors and their reports improve their work.

There’s already a lot going on.  Learner Experience Design recognizes that all of these factors come into play:

  • Employees train a lot
  • New technology is in place
  • Industry is experiencing high turnover of staff
  • Company wants supervisors to be good coaches
  • Company is shifting from annual review to monthly meetings

At the heart of all of this lies a set of experiences shared between supervisors and their reports:

  • Requesting, providing, and organizing feedback with the employee engagement platform
  • Delivering effective feedback
  • Receiving feedback effectively

Let’s focus on one experience to illustrate the power of LXD.  Let’s focus on ‘giving feedback’.

There are underlying psychological principles as well as best and poor practices related to giving feedback.  Giving feedback might elicit a perception of threat in the receiver and can easily be dismissed.  The feedback provider must use concrete examples, remain non-judgmental, draw from different perspectives, work toward a positive outcome, and on and on.

As designers, we can treat the topic of giving feedback in many different ways.  We can explain the function of the amygdala in the human brain and underscore its importance in decision making and emotional responses.  Feedback triggers those emotional responses and evokes a fight or flight response.  We could show video clips of good and bad practice or cartoon strips or excerpts from medical journals or any media that conveys information.  Our design might include this type of information sharing and then some form of assessment – a quiz or essay.

In contrast, LXD tends to favor placing the experience at the heart of the lesson.  In this case, the experience is the giving of feedback.  One design treatment might place the learner in a first-person scenario or simulation.  The context is the office with a new employee who is not performing well.  The learner acts as supervisor and selects the best thing to say in a conversation with the employee.  If the learner’s choices disagree with the principles and best practices of providing feedback, then the instruction may come in the form of an employee thought bubble, a pop-up outlining best practices, references to a text or a video, and other visual indicators of success or failure. 

In the prototype below some of these ideas come together.  The learner has selected one of three options.  The choice causes a change in the employee’s outward expression (full figure on the left), inward expression and thoughts, and in the information that is collected on the interaction.  In this prototype, the learner can access a transcript or review it at the end.  At this point in the scenario, the employee came in with the expectation of being coached only to be confronted by the reality that she is being evaluated (because of what the learner chose).  She outwardly smiles while inwardly expressing her concern about being evaluated.  A meter shows generally how things are going.

At the bottom of the screenshot, the learner has access to feedback given about the employee from two sources.  Just as in real life, the learner can consult that feedback to get different perspectives on the employee’s performance.

Giving Feedback prototype authored with the LodeStar eLearning authoring tool

The Design Thinking that led to this prototype included, to start, an analysis.  We must know something about the audience, their situation, and the processes that were in place in the past.  In fact, while thinking about the actual problem we are trying to solve, we placed feedback ‘training’ on the back burner.  Other things needed to be in place first: clear processes and role definitions among supervisors, audit in-charges, tax reviewers, and other personnel.  We also needed to work out how the workplace engagement platform will be used optimally to solicit and collect feedback in preparation for the one-on-one meetings between supervisors and their employees.

As we continue to think about people and processes, we’ll come up with new ideas, build new prototypes and test them out. 

Well…admittedly, to a point.  For a mid-sized company the return on time and effort is calculated quite differently than for a creative agency that plans training for thousands.  Design thinking still plays a role, but perhaps at a smaller scale.

The cognitive aspects of this training relate to how we can help the learners acquire and retain new knowledge without overload, how they can assimilate that new knowledge, and how they can apply the knowledge to their daily lives.  Human Performance Improvement considers any job aids or prompts that support the learner’s application of the principles and procedures.  User Experience Design challenges us to think about a lot of things on the screen (fonts, colors, layout, flow, navigation,  interactive elements, accessibility, desire paths) and off (cognitive overload, attention, memory, and more).

All of these things interplay and intersect.  Cognitive load might cause us to scaffold or plan out the curriculum differently (instructional design), or create a job aid (human performance), or map out the experience (UX) so that it doesn’t overwhelm the learner.  As we build prototypes or test the product, we collect data and analyze it.  Learning technology (xAPI, CMI5, SCORM) helps us collect the data from the learning experience.  xAPI and CMI5 are standards that are centered on experience.  (As I’ve written in the past, the x in xAPI is ‘experience’.)  Statistical methods help us make sense of the data.  For example, are learners benefiting from one design over another?

Conclusion

Since the term Learner Experience Design was first introduced, it has become part of our vocabulary and a rallying cry against content-centric designs, training-centric human performance improvement, and ineffective user interfaces.  LXD may not be anything new and yet it feels new and it feels exciting.

CMI5: A Call to Action

Introduction

Since 2000 a lot has changed. Think airport security, smart phones, digital television, and social media. In 2000, the Advanced Distributed Learning (ADL) Initiative gathered a set of eLearning specifications and organized them under the name of SCORM. In 2021, in a time of tremendous technological change, SCORM still remains the standard for how we describe, package, and report on eLearning.

However, finally, we are on the eve of adopting something new and something better: CMI5.

We no longer have landlines, but we still have SCORM

CMI5 Examples

To many, CMI5 is another meaningless acronym. To understand the power and benefit of CMI5, consider these very simple examples:


A Learning and Development specialist creates a learning activity that offers managers several samples of readings and videos from leadership experts. The activity allows the managers the freedom to pick and choose what they read or view; however, the specialist wants to know what they choose to read or watch as well as how they fare on a culminating assessment.

CMI5 enables the activity to capture both the learner experience (for example, the learner read an excerpt from Brené Brown’s Dare to Lead) and the test score. CMI5 can generate a statement on virtually any kind of learner experience as well as the traditional data elements such as score, time on task, quiz questions and student answers. In this sense, CMI5 supports both openness and structure.

Let’s consider another example:

An instructor authors a learning activity that virtually guides students to places in Canada to observe the effects of climate change. She wants students to answer questions, post reflections and observe the effects of climate change on glaciers, Arctic ice, sea levels and permafrost. She sets a passing threshold for each activity. Once students have completed all of the units, then the learning management system registers that the course was mastered.

Let’s go further:

The instructor wants the learning activity to reside in a learning object repository or website outside of the learning management system – but still report to the learning management system. In fact, she wishes that no content reside on the learning management system. Regardless of where the content resides, she wants to know what sites students visited, how they scored on short quizzes, and how students reacted to the severe impact of climate change on Canada.

For students with disabilities, the instructor makes an accommodation and requests that the LMS administrator adjust the mastery score without editing the activity.

As the course becomes more and more popular, she anticipates placing the website and its activity on Cloudflare or some other content delivery network so that students all around the world can gain faster access to the learning activities.

The instructor works as adjunct for multiple universities and wants each of their learning management systems to get the content from a single location. In some cases, she wants the content locked for anyone who circumvents the Learning Management System and in other cases she openly lists the unlocked content with OER libraries like Merlot and OER Commons.


Before CMI5 much of this was difficult to achieve, if not impossible. So, let’s review what CMI5 offers us.


CMI5 captures scores in the traditional sense. But it also records data on learning experiences such as students virtually observing the change in the permafrost. CMI5 allows instructors and trainers to set the move-on criteria for each unit in a course (i.e., a passing score required before the student moves on to the next unit).

CMI5 activities can reside anywhere – on one’s own website, for example, and still report to the learning management system. CMI5 enables an LMS administrator to change the mastery score from the LMS for the benefit of students who need accommodations and essentially trump what is set in the unit.

LodeStar’s CMI5 Implementation allows
authors to indicate where the content resides


CMI5 is a game changer. And yet for many – learning and development leaders, instructional designers, technologists and students – it doesn’t seem that way in 2021. CMI5 seems like a non-event. It feels like something we all talked about – a welcome change of weather on the horizon –and then nothing. Not a drop of rain.


We have been talking about and anticipating CMI5 for a long time – and yet, major learning management systems both in the corporate and academic worlds still don’t support it. CMI5 was envisioned in 2010, released to developers in 2015, and then released to the public in its first edition in 2016. We are now in the waning days of 2021—with limited adoption.


But that is likely to change.


For one, Rustici Software and ADL delivered on their promise of Catapult. Catapult is likely to accelerate adoption of CMI5. It provides many benefits to developers, including the ability to test if a CMI5 package conforms to the standard.

In my view, the learning technology architects have done their part. They brought us a meaningful set of specifications. They brought us the tools to test learning packages and to test the learning management system’s implementation of CMI5. Now it’s up to learning and development specialists and the instructional design community to cheer CMI5 on. It is my belief that once the community understands CMI5, spreads the word, and imposes its collective will on the LMS providers, CMI5 will become an important part of our tool bag. I urge you to share this article and others like it.


In the meantime, let’s take a deeper dive into CMI5’s potential.


Benefit One: Freedom to capture and report on any learner experience.


With CMI5, you can report on scores, completion status, and just about anything else. You can report on standard assessment results, and the not-so-standard learning experiences.


To understand this, we need to re-look at SCORM.


One should consider CMI5 as a replacement for SCORM – an improved specification. Conforming to SCORM was useful because a learning object or learning activity could be imported into just about any modern learning management system. As an instructor, if you created a game, quiz, presentation, simulation, whatever and exported it as a SCORM package, your activity could be imported into Moodle, BrightSpace, Canvas, Cornerstone, Blackboard, and any learning management system that supported SCORM. So, the benefit of SCORM was that it was a set of standards that most LMS systems understood. The standards that fell under the SCORM umbrella included metadata, a reporting data model, and standard methods for initializing an activity, reporting scores, reporting on interactions, and reporting passing or failing and completion status.

The data model included dozens of elements. One example of a data element is cmi.core.score.min. Related to score, SCORM-conformant activities reported on the minimum score, the maximum score, the raw score (an absolute number) and the scaled score (a percentage between 0 and 1).
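For a sense of what this looked like in practice, here is a sketch of how a SCORM 1.2 activity reported a score through the API object the LMS exposed (the values are illustrative):

// 'API' is the SCORM adapter object provided by the LMS.
API.LMSInitialize("");
API.LMSSetValue("cmi.core.score.min", "0");
API.LMSSetValue("cmi.core.score.max", "100");
API.LMSSetValue("cmi.core.score.raw", "85");
API.LMSSetValue("cmi.core.lesson_status", "passed");
API.LMSCommit("");
API.LMSFinish("");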


SCORM supported a lot of different data elements. A SCORM-conformant activity could report on a variety of things. The limitation of SCORM, however, was that, despite the large number of elements, it was still a finite list. Take a geolocation storytelling activity or an eBook reading as examples. If I wanted to capture and report that the student virtually or physically visited location A, then B, and then C, I would have to work around the limitations of SCORM. I could not generate a statement such as, for example, ‘Student visited the Amphitheater in Arles’. If I wanted to capture a student’s progress through an eBook, SCORM would be problematic.


At this point, you might be protesting: but xAPI does that! xAPI? Another acronym! Yes. xAPI, or the Experience API, is a new specification that makes it possible to report on a limitless range of things that a learner has experienced: such as, completed a chapter of an eBook; watched a video; toured a museum, and on and on. So, if we have this thing called xAPI, why CMI5?


The benefit of xAPI is that it supports the reporting of anything. The downside to xAPI is that, by itself, it doesn’t have a vocabulary that the LMS understands such as launched, initialized, scored, passed, completed. That is what CMI5 offers. CMI5 is, in fact, an xAPI profile that includes a vocabulary that the LMS understands. In addition, CMI5 can report on any type of learner experience. Here is the definition of CMI5 from the Advanced Distributed Learning Initiative:


cmi5 is a profile for using the xAPI specification with traditional learning management (LMS) systems

(Advanced Distributed Learning).


With CMI5, you can have your cake and eat it too. You can report on learner activity in a way that the LMS understands, and you can report on just about anything else that the Learning Management System stores in a Learning Record Store. The Learning Record Store, or LRS, is a database populated by statements about what the learner experienced.
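For the curious, here is what a simple xAPI statement looks like; the actor and activity values are illustrative:

{
  "actor": { "name": "Sample Learner", "mbox": "mailto:learner@example.com" },
  "verb": {
    "id": "http://adlnet.gov/expapi/verbs/experienced",
    "display": { "en-US": "experienced" }
  },
  "object": {
    "id": "http://example.com/activities/reading-the-instructions",
    "definition": { "name": { "en-US": "Read the instructions" } }
  }
}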

xAPI statements can capture any learner experience, including reading the instructions


Benefit Two: Freedom to put the learning activity anywhere


With CMI5, you can place a learning activity in a repository, in GitHub, on a web server, in a Site44 drop box site, in SharePoint, in a distributed network, wherever… without restricting its ability to connect with a learning management system. CMI5 content does not need to be imported. A CMI5 package can contain as little as one XML file, which, among other things, tells the LMS where to find the content.


To appreciate this, we need to look back at SCORM once more (as if it were ancient history).


I’ll start with a pseudo-technical explanation and then follow with why it matters.

The way SCORM works is that the learning activity sits in a window. The learning activity uses a simple looping algorithm to find the Learning Management System’s SCORM adapter. It checks its parent window for a special object. If the window’s parent doesn’t contain the object, the activity looks to the parent’s parent, and so on. In other words, somewhere in that chain of parents, there must be that special object. Typically, the SCORM activity can only communicate with the learning management system if it is a child window of that system or if some server-side technology is used.
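The classic lookup looks roughly like this (close to ADL’s reference algorithm):

// Walk up the chain of parent windows until the SCORM adapter ('API') is found.
function findAPI(win) {
  let attempts = 0;
  while (win.API == null && win.parent != null && win.parent !== win) {
    if (++attempts > 500) return null; // give up rather than loop forever
    win = win.parent;
  }
  return win.API; // null if no adapter exists in the chain
}

// Look in our own window chain first, then in the window that opened us.
const API = findAPI(window) || (window.opener ? findAPI(window.opener) : null);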

CMI5 works quite differently. CMI5 gives us freedom to leave our parents’ home. Whereas SCORM uses a JavaScript Application Programming Interface to communicate, CMI5 uses xAPI to reach across the internet and call a web service’s methods. Loosely, it’s like the difference between a landline and a cellular phone service. To use the landline you must be in the house; to use a cell phone, you must be in the network.

Benefit Three: A simplified sequencing model.

SCORM supported simple sequencing, which many say is not so simple. CMI5’s ‘move on’ property, in contrast, is very easy. A CMI course can contain one or more Assignable Units (AUs). The instructor spells out what the learner must achieve in an assignable unit before being able to move on. The move on property has one of the following values:


• Passed
• Completed
• Completed Or Passed
• Completed And Passed
• Not Applicable


Once the student has ‘moved on’ through all of the assignable units, the LMS notes that the course has been satisfied by that student.


Benefit Four: An assignable unit passing score can be overridden


In SCORM, the mastery score is hard-coded in the activity. In a SCORM activity, the instructor can base completion status on a passing score. But what if that hard-coded score were inappropriate for a group of students, for whatever reason? The CMI5 specification enables an LMS to pass the mastery score to the Assignable Unit upon launch. So the LMS launches the AU and sends it the student name and mastery score (among other things). By specification, the AU cannot ignore the mastery score; it must use it to trump what is hard-coded in the unit or refuse to run.
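Per the specification, those launch settings travel in a state document named ‘LMS.LaunchData’, which the AU retrieves from the LRS. A simplified sketch (the endpoint, actor, registration, activityId, and token variables come from the launch handshake shown in Appendix A):

// Retrieve the LMS.LaunchData state document from the LRS.
const stateUrl = endpoint + "/activities/state"
    + "?activityId=" + encodeURIComponent(activityId)
    + "&agent=" + encodeURIComponent(actor)
    + "&registration=" + registration
    + "&stateId=LMS.LaunchData";

const launchData = await fetch(stateUrl, {
    headers: {
        "Authorization": "Basic " + token,
        "X-Experience-API-Version": "1.0.3"
    }
}).then(r => r.json());

console.log(launchData.masteryScore); // e.g., 0.8 -- this trumps anything hard-coded in the AU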


Benefit Five: Theoretically, CMI5 isn’t hamstrung by pop-up blockers.

When an LMS launches a SCORM activity, it either embeds the activity in an iframe or launches a window. Both scenarios are problematic. The content may not be well suited to an iframe, and a pop-up blocker can obstruct the launched window.


Theoretically, a CMI5 AU can replace the LMS page with its own content. It’s not in an embedded iframe and it’s not a pop-up window. When the LMS launches the AU, along with the student name and mastery score, the LMS sends the AU a return URL. When ended, the AU returns the student to that return URL, which is the address of the LMS.


I write “theoretically” because an LMS should not ignore this requirement, but it may.

Benefit Six: CMI5 activities securely communicate with the Learning Record Store


As I wrote, the activity can send information about learner experiences clear across the internet to the Learning Record Store. But how does the AU have the authorization to do this from, let’s say, a web site? And how does it happen securely?


This is the marvel of 2021 technology versus 2000 technology. Before 2000, we had difficult-to-use protocols for passing information securely across the internet. Oftentimes, special rules needed to be added to internet routers. Then along came a simpler protocol that the first version of CMI5 used (SOAP). Then came an even better way (OAuth and REST). After launch, the LMS hands the AU a security token (kind of like a key that dissolves in time). The AU uses that key to gain access and to post information to the Learning Record Store.

Conclusion

CMI5 returns power to the instructor and to the L&D specialist. CMI5 allows one to choose where the content resides and to choose what the content reports. CMI5 captures learner experiences more completely and yet it communicates with Learning Management Systems with a vocabulary that LMSs understand. CMI5 supports accommodations for a special group of students without needing to change the code of the Assignable Unit. Finally, CMI5 uses current technology to send data over the internet.

The implications of this emerging specification are tremendous. It is better suited to mobile learning and it is better suited to the learner experience platforms that are emerging (e.g. LinkedIn Learning’s Learning Hub). Soon instructors may be able to organize content from a variety of providers (like LinkedIn Learning, Khan Academy, or OER Commons) but retain the learning management system as an organizer of content, data collector, and credentialing agent. Now instructors, average instructors, may be able to participate in that content market from their own GitHub repositories and web sites.

But many LMSs have yet to adopt CMI5. The architects have done their part. Now it’s on us to understand this technology and advocate for it. Start by sharing this article. Thank you.

Appendix A — How it Works (A simplified flow)

For those interested in a deeper dive, let’s walk through the CMI5 process flow step-by-step. (See diagram)

To begin, the author (instructor, L&D specialist) exports content as a CMI5 package. The package can be a simple file that instructs the LMS where to find the content or it can include the content itself.

(1) When a student needs the content, the Learning Management System (LMS) launches the content and sends the Assignable Unit (a course can contain one or more Assignable Units) (2) information that includes the student name, a fetch URL, and the activity ID.
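Concretely, CMI5 appends this launch information to the AU’s web address as query-string parameters. A minimal sketch of a browser-based AU reading them:

    // Minimal sketch: read the CMI5 launch parameters from the query string.
    const params = new URLSearchParams(window.location.search);
    const endpoint = params.get("endpoint");         // LRS endpoint URL
    const fetchUrl = params.get("fetch");            // one-time token URL
    const actor = JSON.parse(params.get("actor"));   // the student, as an xAPI agent
    const registration = params.get("registration"); // enrollment identifier
    const activityId = params.get("activityId");     // this AU's identifier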

(3) The Assignable Unit (AU) uses the fetch URL to retrieve a security token. The security token enables the AU to communicate securely with the Learning Record Store (LRS).
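As a sketch, step 3 amounts to a single POST (the fetch URL may be used only once):

    // Sketch: retrieve the short-lived security token from the fetch URL.
    const tokenResponse = await fetch(fetchUrl, { method: "POST" });
    const { "auth-token": authToken } = await tokenResponse.json();
    // Subsequent xAPI requests carry the header:
    //   Authorization: Basic <authToken>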

(4) As the student interacts with the content, the AU can optionally send Experience API (xAPI) statements to the LRS. (5) At some point, the AU reports that the student passed and/or completed the unit.
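A hedged sketch of step 5, reporting a passing score (a real AU also includes CMI5-required context, such as the session id delivered in LMS.LaunchData):

    // Sketch: post a "passed" statement to the LRS.
    await fetch(endpoint + "/statements", {
      method: "POST",
      headers: {
        "Authorization": "Basic " + authToken,
        "Content-Type": "application/json",
        "X-Experience-API-Version": "1.0.3"
      },
      body: JSON.stringify({
        actor: actor,
        verb: { id: "http://adlnet.gov/expapi/verbs/passed",
                display: { "en-US": "passed" } },
        object: { id: activityId, objectType: "Activity" },
        result: { success: true, score: { scaled: 0.9 } },
        context: { registration: registration }
      })
    });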

(6) The LMS uses the ‘move-on’ information to determine whether or not the student can move on to the next assignable unit. The move-on options are passed, completed, passed and completed, passed or completed, or not applicable.

Finally, when all of the assignable units within a course are completed, the course is marked as satisfied for the specific learner.

A simplified process flow that starts with the
launch of the CMI5 Assignable Unit by the LMS

Geolocation Storytelling Revisited

We’ve observed an uptick in interest in geolocation storytelling. We’ll revisit the subject for those who know little about this medium, as well as for those who want to design a project on paper (i.e., in Word) or go all the way and use the LodeStar authoring tool to complete a working project.

To reach all audiences at some level, this article starts from the general and ends with the specific. Hop on and off at any point.

Introduction

Every place hides its own unique, rich story. Have you visited an unfamiliar town or area and wondered about its history,  geography, and points of interest? Have you ever wanted to connect to a place on a level deeper than a quick drive-by?

A new form of storytelling—geolocation storytelling—combines technology and traditional storytelling to connect visitors at a deeper level.  With the help of an app, the place where you’ve entered or visited on a map suddenly comes alive with narrative and imagery.  You may hear about the past or be guided to an unusual rock formation or the vantage point of a famous painter.   Geolocation stories can work on-site, guiding you from point to point or they can help you discover a place from the comfort of your home.  Geolocation stories can be both informative and entertaining.  They can involve the visitor in discovering why a place got put on the map, or solving a challenge, or even solving a murder mystery.  In short, geolocation stories can be about anything that piques the visitor’s interest about a place.

The Inspiration

Places inspire people to learn more about them.

A group of history buffs, known as Lensflare Stillwater, were inspired by the many untold stories of Stillwater, a Minnesota river town.  Stillwater was a lumber town with connections to Minnesota and Wisconsin pine lands by river and connections to Saint Paul by stage road and later by rail. 

Stillwater inspired a number of geolocation stories. The first stories were guided  tours of Stillwater’s historical downtown.   A subsequent story helped cyclists learn about the rich history from the vantage point of a bicycle trail.  Even later, another story recovered the lost memory of Stillwater’s streetcars.   

Thousands of miles from Stillwater, a geolocation project told the story of Vincent Van Gogh’s year in Arles, France, and what went horribly wrong for him.   Its authors first visited Arles to learn more about Van Gogh but were disappointed in the local tour booklets, which didn’t sufficiently tell the story. 

If your town or place has points of interest, a rich history, or geographical features, you will want to consider creating a geolocation story to help others see the place from a new point of view.  Visitors can walk to the specific places of interest and hear audio, see imagery, read text, scroll through time lines and learn more about this special place.

How it works

Typically, the visitor launches a geolocation story (a web-based application) from a web address on a smartphone. The first page of the story provides instructions and a starting point. When the visitor reaches that point, she crosses an invisible geofence. A geofence is just a metaphor: the visitor’s location is actually calculated from the signals of three or more satellites. Most modern smartphones are equipped with the hardware to detect these signals. Global positioning satellites constantly emit signals, and the GPS receiver in the visitor’s phone listens for them. Once the receiver calculates its location, it provides that information to the application. The application’s logic constantly checks whether the location matches a place of interest. If it does, content in the form of audio, text, and imagery is called up and presented.
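Here is a minimal sketch of that logic in browser JavaScript. The point of interest, its radius, and the showContent function are hypothetical; the distanceInFeet function is sketched in the section on coordinates below:

    // Hypothetical points of interest, each with a trigger radius (the geofence).
    const points = [
      { title: "Myrtle and Water", lat: 45.056745, lon: -92.805510, radiusFeet: 50 }
    ];

    // watchPosition fires every time the GPS receiver reports a new location.
    navigator.geolocation.watchPosition(
      (position) => {
        const { latitude, longitude } = position.coords;
        for (const point of points) {
          if (distanceInFeet(latitude, longitude, point.lat, point.lon) <= point.radiusFeet) {
            showContent(point); // hypothetical: call up the audio, text, and imagery
          }
        }
      },
      (error) => console.warn("Location unavailable:", error.message),
      { enableHighAccuracy: true }
    );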

Getting more specific: Best practices

If you already understand the power of the geolocation story and wish to get started, you’ll want to consider a few things. These are not hard-and-fast guidelines. As we gain more and more experience, we’ll learn what works and what doesn’t.

  1. First, geolocation storytelling works best when the audience is on foot and out of doors.  Smartphones can’t receive satellite GPS signals inside buildings.  The technology works best outside, with a clear line of sight to the sky.
  2. Geolocation projects must be housed on a website that supports HTTPS.  Smartphones don’t reveal their locations to applications served from addresses that begin with http://; the web address must begin with https://. The ‘s’ means secure: information transported over HTTPS is encrypted to protect the data transfer.
  3. There is a limit to the distance people will walk on a tour and to its length in time.  Limit yourself to two miles, completed within one hour.  Of course, this is a very loose rule of thumb.  Consider your audience when setting the limits.  Young adults will have no difficulty with 3- to 5-mile hikes; time and attention span, however, will remain factors.  Senior citizens with mobility issues will find two miles too long.  The steepness of the terrain will also be a factor. Use your discretion, but keep the tour as short as possible.
  4. Some people’s interest may wane quickly.  A two-mile tour should have at least a dozen points of interest.  Limit the distance and the time between geolocation points.
  5. Present narrations in audio and text formats.  People like to hear a recorded narration but, without headphones, the narration could easily be drowned out by traffic or a rushing river. On the flipside, audio narration often works in situations (e.g. bright sun) where the screen is difficult to see. You’ll need to use your judgement.
  6. Consider the format of the tour.  Will you guide your audience from point to point or will you cluster points so that the audience will simply wander about and come upon points of interest? 
  7. Audio should be cleanly recorded.  The audience should not hear background noise or a muffled narration.
  8. Text must be correctly spelled, grammatically correct, and short.
  9. Favor more points of interest and shorter narration/text rather than fewer points of interest and narration that drones on.
  10. Have fun creating this story. You’ll learn a lot!

Get your Geolocations

Even if you’re starting with Word to capture your text, find the locations now. You can use Google Maps, which is a very accurate way of finding locations.  For example, if I wanted the location of the intersection of Myrtle and Water Streets in Stillwater, I would do the following:

  1. Go to https://www.google.com/maps
  2. Search for Myrtle Street, Stillwater.
  3. Move the map to the location of interest.
  4. Click on the intersection.
  5. Either write down the location coordinates or click on them.  The coordinates will now appear in the address field at the top and can be copied and pasted into your Word document or directly onto a LodeStar page (see below).
Google Maps reveals latitude and longitude

About the Location Coordinates

In the example above, the coordinates were 45.056745,-92.805510.  The first coordinate (45.056745) is the latitude.  The second coordinate (-92.805510) is the longitude.  Always use coordinates with six digits of precision (six digits to the right of the decimal point).  Six digits can pinpoint a location to within a few inches, but never rely on that.  In other words, allow the technology a slop factor: use precise coordinates, but allow for imprecision in the device’s ability to calculate its location. Never create a geolocation story that relies on an accuracy of a few inches.  You control this by typing numbers into the latitude and longitude proximity fields; the numbers spell out how close one needs to be to the precise location to trigger an event. In our geolocation stories, we trigger something (e.g., show content) when the user is within 25 to 50 feet of a location.  We call that crossing the geofence.   The minus sign is important.  In latitude, a minus sign denotes the southern hemisphere (south of the equator).  In longitude, a minus sign denotes west of the prime meridian (Greenwich) and east of the antimeridian (roughly where the international date line resides).
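For the curious, one common way to compute the visitor’s distance from a point of interest is the haversine formula; crossing the geofence then becomes a simple comparison. A sketch (visitorLat and visitorLon are assumed to come from the GPS receiver):

    // Haversine distance between two latitude/longitude pairs, in feet.
    function distanceInFeet(lat1, lon1, lat2, lon2) {
      const R = 20902231; // Earth's mean radius in feet (about 6,371 km)
      const toRadians = (degrees) => degrees * Math.PI / 180;
      const dLat = toRadians(lat2 - lat1);
      const dLon = toRadians(lon2 - lon1);
      const a = Math.sin(dLat / 2) ** 2 +
                Math.cos(toRadians(lat1)) * Math.cos(toRadians(lat2)) *
                Math.sin(dLon / 2) ** 2;
      return 2 * R * Math.asin(Math.sqrt(a));
    }

    // Within 50 feet of the intersection of Myrtle and Water Streets?
    const crossedGeofence =
      distanceInFeet(visitorLat, visitorLon, 45.056745, -92.805510) <= 50;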

If you want to grab your location while physically on the spot, use your smartphone’s Google Maps app. 

Current Location Arrow in Google Maps
  1. In Google Maps, click on the arrow to show your current location.
  2. Scroll down until you find the marker and the location.  See screenshot below.
  3. Copy and paste the coordinate into your notes so that you can transfer the coordinate to LodeStar.

Getting a location from Google Maps while on site

Preparing a Geolocation Story in Word

Your role might be to prepare the content. When you’ve completed the preparation, you can hand off the content as a Word file. In Word, place each location on a separate page. At the top of each page, key in the title and the latitude and longitude coordinates of the location. Add your text, graphics, images, and narration. If your version of Word doesn’t support audio narration, use a free tool like Audacity to generate an MP3 audio file.

Even More Specific: Authoring a Geolocation Story with LodeStar

To create a geolocation tour in LodeStar, do the following:

Launch LodeStar and select the ARMaker template.  (AR stands for augmented reality.)

LodeStar’s ARMaker template
  1. Title your project.  The project will now reside on your hard drive in a folder with the same title.  It will be found in the LodeStar/Projects/[your title]  directory.
  2. Add your title to the first page.
  3. Add a page by clicking on the + button at the bottom of the app.
  • Ensure that the new page is a Text Page Type.  Examine the screenshot below.  The page should have a place to enter a latitude and longitude.
  • Add your content.  You can insert a widget (e.g. Image Layout Widget), text, audio, and more.
  • Add a page to add more content.
  • Then Preview in Browser (find button at the top).
  • When you are ready to publish,  Export as a SCORM 1.3 package and import to a Learning Management System or simply copy the LodeStar/Projects/[your title]  directory to a web server.
LodeStar authoring tool with ARMaker template. Click on image to view.

Below is what this page looks like in Preview.  Notice the audio control and the Show Map button at the top left, and the navigation buttons at the top right (depending on layout).  Notice how the image slider appears, created by the PWG Image Slider Widget.

Previewing a Geolocation story

If your audience clicks on the ‘Show Map’ button, a Google Map appears with all of the locations marked with red markers.  Again, each location represents a separate page in LodeStar. 

Each location (marked by red marker) matches a LodeStar page

Controlling the User Experience

If you allow users both to show the map and to navigate to content by clicking on a marker, then you need not adjust project settings.  If you want to restrict users’ access to the map and/or their ability to access pages of content from the map, select Tools > Project Settings and change the settings according to your needs.  (The important settings are marked with arrows; see the screenshot below.)

Project settings in LodeStar allow control of application

Publishing your project

As a SCORM object

If you use a Learning Management System (LMS) and want to control access to your geolocation story, then, with your project opened in LodeStar, click on Export and export to SCORM 1.3.    Go to your LMS and import the story as a SCORM object.

As a website

If you have access to a web server, copy the project folder to the web server and use the index.htm file in your URL.  Once again, location services will only work on web servers that support https://

If you don’t have access to a web server, then read the following article that explains how you can use GitHub as a web server.

https://lodestarlearn.wordpress.com/2020/05/14/seven-steps-that-will-change-how-you-share-elearning/

Alternatively, you can use Site44 to convert your Dropbox folder to a published website:

See https://www.site44.com/

(We are not endorsing Site44 but LodeStar Learning has successfully used it on a number of projects.)

As an Open Education Resource (OER)

Publish the geolocation story as a web site, then register the URL (address) of that site with OER Commons, Merlot, or whatever OER repository you prefer.

 

Additional Details

If you are new to geolocation storytelling and want to learn more, visit:

Geolocation Storytelling: Van Gogh In Arles | LodeStar Web Journal (wordpress.com)

To see an example of a finished product as OER, visit:

https://www.oercommons.org/courses/vincent-van-gogh-s-arles/view

Or view the app at:

‎Van Gogh In Arles on the App Store (apple.com)

Conclusion

Geolocation stories are a great way to help visitors uncover the hidden wonders of a place. Google Maps and the LodeStar authoring tool are indispensable aids for authoring stories and publishing them either to Learning Management Systems or to the web.

If you complete a project, share your project. Drop a comment or drop a line to supportteam@lodestarlearning.com.

Short Sims

Introduction

Some of us aren’t content with simply presenting information in a linear fashion in an online course.  We have dozens of words to express what we wish to achieve: interactive, game-like, thought-provoking, challenging, problem-based….   We are also hard-pressed to find the time or the budget or the design that will fulfill our highest aspirations for eLearning. 

It’s easy to get discouraged – but occasionally we’re offered a strategy that works within our budget and time constraints.  One such strategy is the basis of  Clark Aldrich’s recent book, “Short Sims” (Aldrich, C. (2020). Short sims: A game changer. Boca Raton: CRC Press.)  

In his book, Clark Aldrich discusses the methodology of the short simulation.  He begins by lauding the virtues of interactivity.  Interactivity allows learners to experiment, customize their experience, role-play, make decisions and apply skills. He writes that game-like interactivity is expensive to build.  We all recognize that.  Short Sims, on the other hand, can be built in the “same time frame as linear content”.  Short Sims engage students in making decisions, doing things, meeting challenges, solving problems, learning from mistakes and so forth.  Essentially Short Sims offer us a strategy – a methodology – to do things differently and more effectively.

The hook comes from this excerpt: 

“From a pedagogical perspective, the more interactivity the better.  Connecting user action with feedback has long been proven to be critical for most neuron connections”. 

Aldrich, 2020

Aldrich credits the Journal of Comparative and Physiological Psychology for that insight.  But again, in Aldrich’s words, “game-like interactivity is expensive to build.  It is time-consuming.”  Aldrich offers the Short Sim methodology as an antidote to linear-style presentation, the death-by-PowerPoint approach.

Short Sims:

  • Show, not tell
  • Engage learners quickly and are re-playable
  • Are quick to build and easy to update

Short Sims square with the Context-Challenge-Activity-Feedback model that we’ve heard so much about from Dr. Michael Allen, Ethan Edwards, and the designers at Allen Interactions.  They are a solution to M. David Merrill’s lament that so much learning material is shovelware.  Short Sims are not shovelware.  They are a cost-effective means of engaging students.

Quite frankly, the LodeStar eLearning authoring tool was made for the Short Sim.  Instructors have used LodeStar for years to produce Short Sims but never used that term.  We called them Simple Sims, which sometimes included decision-making scenarios, interactive case studies, problem-based learning and levelled challenges.  We solved the same problem.  We made it easy for instructors to create Short Sims quickly. 

Our design methodology has a lot in common with Aldrich’s methodology as described in his book.   The following ten points outline our approach to creating a simple decision-making scenario, which, in our view, is one form of Simple Sim.  To avoid mischaracterizing Aldrich’s methodology, I’ll use our own terms in this outline.

  1. Select Challenge
  2. Pick Context
  3. Determine the Happy Path
  4. Determine Distractors
  5. Pick a setting – background graphic
  6. Choose a character set
  7. Produce the Happy Path
  8. Add the Distractors
  9. Add Branches
  10. Add Randomness

Select Challenge

Selecting the right problem and the right scope is, in itself, a challenge for the instructor or trainer.  Straightforward processes that present clear consequences for each decision are easy to simulate.   Processes like strategic planning, which are influenced by dozens of variables, are much more difficult.   The Short Sim methodology itself would be a good candidate for a Short Sim.  Another example would be the backwards design method of instructional design.  In my early days at Metro State, a decade ago, we discussed the backwards design approach with instructors.   We then used a Short Sim to rehearse instructors on the key questions to ask during each phase of the backwards design process.  We based a lot of our thinking on Dee Fink’s “Creating Significant Learning Experiences” and Grant Wiggins’ “Understanding By Design”.  Our objective was to help instructors design with the end in mind.  In Backwards Design, outcomes and assessments come before the development of activities.   The Short Sim did the trick.  Planning instruction is complicated business.  A simple and short simulation is not, in itself, transformative.  But we just wanted assurance that instructors understood the basic principles of backwards design by the decisions they made.

Pick Context

In the Backwards Design example, a dean asks an instructor to design an online class to help K12 teachers use educational technology in their classrooms.  So, in this context, the learner is playing the role of online course designer.  The learner is challenged to make the right decisions at the right time.  If the learner holds off on designing activities until completing an analysis, defining outcomes and creating assessments, then the learner succeeds in the challenge.

Determine the Happy Path

The happy path is all the right decisions in the right order: Situational Analysis -> Learner Outcomes -> Assessments -> Activities -> Transfer.  It is all of the right answers with no distractors.  It’s like creating a multiple-choice test with only one option: the correct answer.

Determine Distractors

Now come the distractors.  What are the common pitfalls of Backwards Design?  What might tempt the learner to go astray?  If we were designing a Short Sim on the Short Sim methodology, the pits and snares might be what Aldrich calls the Time Sucks:  choosing the wrong authoring tool, too many decision-makers on the project, custom art, and so on.  The learner might be tempted with “the medium is the message.  Invest in the medium.  Commission a graphic artist to create a compelling interface.”  The point of Short Sims is not to invest heavily in artwork or graphic design.  The focus is more on describing the context, presenting choices to the learner, and showing the consequence of learner choices.

Pick a Setting

A background photo helps to set the context.  Images that display settings without people can be found on sites like Pexels and Wikimedia Commons, in the public domain sections of stock image services, and, of course, on paid stock image sites. Because one image often suffices in a Short Sim, authors can also snap their own photos without wasting too much time.

Alternatively, vector artwork can serve as an effective background.  Vector art can be found and downloaded from such sites as https://publicdomainvectors.org/.  (LodeStar Learning doesn’t endorse any of these sites, but we have used them all.)

In either case, if the scene is relevant to the learning context and not just a vain attempt to gamify, it might actually contribute to content retention and recall. 

Choose a character set

A popular approach to Short Sims is the use of cutout characters with different poses and expressions.  Cutout characters can be photo-realistic images with transparent backgrounds or illustrations.  To see examples, google ‘elearning interactive case studies’, select ‘Images’, and you’ll see thousands of examples.  Despite their popularity, finding cheap cutout characters can be frustrating.  Several authoring tools offer a built-in catalog of characters, but these tools tend to be expensive.  Many stock photo sites offer character packs, but usually one must subscribe to these sites for a monthly charge.  Some sites offer pay-as-you-go services, meaning that you pay for a character pack once, without signing on to a monthly subscription.  A character pack can be as cheap as $4.  One such site is eLearning Templates for Course Developers – eLearningChips.  A complete character pack purchased from eLearningChips, with more than 137 poses, costs as little as $54. No subscription.  No additional fee.  (Again, we’re not endorsing eLearningChips, but we have used their service.)

Produce the Happy Path

With the LodeStar authoring tool, we had several options for producing the Happy Path.  We used the ActivityMaker template and, after the title page, added a sequence of Interview Pages.  The ActivityMaker template offers a range of page types. The Interview Page is one of them.  In an Interview Page, we dropped in a character and filled in the best choice.  We didn’t concern ourselves with the distractors (the wrong options) quite yet.  Again, we were focused on the Happy Path.

Here is the author view:

Authoring a short sim happy path

Here is what the student sees:

A short sim happy path

Add the distractors

Once we sorted out the happy path (a sequence of perfect, well-informed choices), we thought about the pits and snares: the problems and challenges.

In our course design example, a common problem is that we think too early about the content, that is, what topics the course should cover.  We anticipated those problems when designing our Short Sim.  If a learner unwittingly falls into our trap, we have the opportunity to provide feedback. It’s a teachable moment.

A short sim

An alternative to the Interview Page type is the Text Page.  On a Text Page, we can add images and widgets, which give us a bit more flexibility than the Interview Page type.  We can add an image (left- or right-aligned), then a Text Layout Widget.  Here you can see the page with the image and the Text Layout Widget.  The image was composed in our SVG editor.

Authoring View

Here is what the student sees.

Student View of a LodeStar Activity

Add Branches

In one sense, a branch is a place where we get sent based on our decisions.  If this were a customer service sim and we made poor choices, the customer would appear more and more irritated, and ultimately we would lose his or her business.  Programmatically, the place where we get sent is a page that shows an irate customer and choices that represent a difficult situation.  The branches could lead us down a path of destruction, but we may also have the opportunity to win back the customer’s trust with a string of good decisions.

Branching adds variety to the sim.  It gives us a customized experience and allows us to ‘test’ bad choices safely.

Branching can also be viewed as the consequence of a decision or choice.  In LodeStar, branch options include going to the next page, going to the last page, or jumping to a page.  They also include bringing up a web resource, adding an instructive overlay, setting a variable value, and so on.  A branch could also mean the execution of a script, a series of commands that makes many things happen simultaneously: setting a variable (that tracks our failings), sending us down a path, changing the image of a happy customer to an unhappy one, showing feedback, marking the choice with red, and more.

It’s probably most effective to show learners the natural consequence of their decisions, an unhappy customer, for example.  As designers, we might also need to be explicit and display feedback, or introduce a coach who provides it.  As Clark Aldrich writes, the sign of a good Short Sim is one that is played over and over again.  Branching helps us make the sim a different experience each time.

LodeStar Branching options

Add Randomness (optional)

Randomness might be difficult to achieve and should, therefore, be considered optional.

Randomness is more than randomizing distractors.  (Randomizing distractors happens automatically on an Interview Page.  It’s done through a simple checkbox in a Text Layout widget.)  More sophisticated randomness might include a randomly generated sum of money, or a randomly selected path or scene, or randomly generated assets that are assigned to the learner.  It might be a randomly generated length of fuse that represents the customer’s patience.   In our course design example, it might be randomly generated student characteristics that include age, gender, and subject interest.  That level of randomness is best achieved with the help of LodeStar’s scripting language and is best left to its own article.
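To give a flavor of it, here is a hypothetical sketch in plain JavaScript (LodeStar expresses the same ideas in its own scripting language):

    // Hypothetical: a random "fuse" of 30 to 90 seconds for the customer's patience.
    const fuseSeconds = 30 + Math.floor(Math.random() * 61);

    // Hypothetical: randomly assigned student characteristics for the course design example.
    const ages = [19, 27, 35, 52];
    const age = ages[Math.floor(Math.random() * ages.length)];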

Conclusion

Short Sims represent a level of interactivity that goes beyond the linear presentation of information.  They have the potential of promoting learner retention and application.  With the right tool (and there are plenty), everyone can build short simulations.  One tool, LodeStar, was designed from the very start with the short simulation and the intrepid instructor in mind.  Short Sims may vary in sophistication and design but, in any form, they cause learners to think and to see the consequences of their actions.  The Short Sim is a strategy that is doable and repeatable within our budgets and time constraints.  Make it happen in your world!

Serious eLearning: Use Interactivity to Prompt Deep Engagement

Elements of Interactivity

The Serious eLearning Manifesto challenges us to move beyond typical eLearning to the values and principles of Serious eLearning.   One of those principles is, to quote the manifesto, ‘Use Interactivity to Prompt Deep Engagement’.  The sky is the limit in terms of what that actually means.  We know that it means something beyond page turners and rollovers.  Authoring tools offer us templates that have interactivity logic baked in, and the tools’ form-based interfaces allow us to provide information that feeds the template.  To do something original, outside the constraints of a page-turner presentation or even an interaction template, requires a bit of code.  Few authoring tools allow you to realize your design fully without the knowledge and application of some basic coding.

ZebraZapps is one of the notable exceptions.  ZebraZapps enables you to build complex interactions by wiring objects together.  A click, hover, drag, or collision on one object can change the properties of another.  Dragging the earth and moon along their orbital path can cause the rise and fall of a tide graphic.  Authors connect the drag of an object constrained to a path to the height property of another object.  Expressing this relationship comes from wiring the drag event of one object to the height property of another.  This expressiveness through the action of wiring is rare.  Most systems enable this expressiveness through language.  In other words, code.

If you google “should instructional designers learn to code”, you’ll get more than 37 million results and many opinions.  My own view relates to the situation that many instructional designers find themselves in.  Whether they support a university department or a mid-sized firm, they lack access to a programmer.  They are limited to what they know and how well they can work an authoring tool like Storyline or Captivate.  For them, a little knowledge of code can go a long way.  With a little knowledge, they can realize some pretty sophisticated designs.  They can do more than ‘click and present’.

In the late 80s, I was driving down a dark country road listening to MPR.  The story was on interactive video.  Laserdiscs.  I was enthralled by the possibilities.  I asked my dean, who was completing an advanced degree in computer-based learning at the time, what I needed to learn to control an interactive video laserdisc.  He answered “C”.  C was a programming language, and his answer, which was actually incorrect, sealed my fate.  I began studying my first programming language, oblivious to tools like TenCore and Course of Action (progenitor of Authorware) that afforded a much simpler way to control the laserdisc.

To finish this anecdote: I also began to study instructional design at the University of Minnesota.  At my first Wisconsin Distance Teaching and Learning Conference, I attended a pre-conference cracker-barrel session.  Sitting around drinking wine were a bunch of researchers from Alberta’s Athabasca University.  I posed the question to them: “Should instructional designers learn to code?”  The answer from at least one was unequivocal.  Become an instructional designer or a programmer.  You can’t do both.  There is too much to learn in either discipline.

So, I don’t necessarily take issue with that.  There is so much to learn in either discipline.  But modern authoring systems give us a way forward where we don’t have to totally geek out.  With just a few coding skills, we can go a long, long way toward realizing the serious eLearning principle:  “Use Interactivity to Prompt Deep Engagement.”

So let’s explore the basic prerequisites to interactivity.   There are three parts to this post.  First, this post discusses the relationship between computer code and this thing called interactivity.   Second, this video (LodeStar 9 — Elements Of Interactivity – YouTube) demonstrates a simple interaction made possible with the LodeStar eLearning authoring tool and its script (code) editor. Last, this DIY tutorial (Making your projects interactive and interesting with a little bit of code | LodeStar Help (wordpress.com)) walks through the video example step by step.

But first we need to look at ‘interactivity’ and understand where we benefit from some knowledge of coding.

The Serious eLearning Manifesto states that “We will use elearning’s unique interactive capabilities to support reflection, application, rehearsal, elaboration, contextualization, debate, evaluation, synthesization, et cetera”.   When we examine this list of strategies/activities and consider the unique interactive capabilities that will support them, we start with the following:

  • Ability to store information about the learners and their behavior.
  • Ability to offer something different and individualized based on this information.
  • Ability to create a visual, manipulatable, and functional learning environment that suggests an authentic (if not totally realistic) context.

That’s not an exhaustive list.  It’s a start.  It promises more than page turners and roll-overs.  Now, we need to match these capabilities with the authoring tool and the required code.

 

Ability to store information about the learners and their behavior.

Variables are used in code to store information.  The information can range from a number to a sentence to a list to a full essay.  Variables provide a human-friendly way to store and retrieve information.  They represent addresses in the computer’s memory.  As instructional designers, we don’t need to know anything about those gobbledygook addresses or how the information is stored physically in the computer.  We usually need to know only whether the variable is intended to store a number or a string of characters. (See Appendix A.)

So what can we store in a variable?  The answer is many things. 

  • Points scored
  • Type of question answered incorrectly
  • Number of tries
  • Learner’s journal entry
  • Bookmarked page where the learner left off
  • Much much more

In a recent eLearning program, our objective was to help the learners use LinkedIn effectively to promote their professional brand.  Their eLearning task was to help a fictitious character build up his Social Selling Index (SSI).  The index is made up of four components: brand, people, insights, and relationships.  Successful completion of the activities increased the character’s brand index, people index, insights index, and relationships index.  We created four variables and, you guessed it, they were:  brand, people, insights, and relationships.  Each activity was categorized and affected one of these indices.  In other words, the activity increased the numerical value in the corresponding variable.

Variables included in a LodeStar authored eLearning module

This contributed to what the Serious eLearning Manifesto calls authentic context.  The performance objective was to help employees increase their SSI.  The activities in the eLearning module increased the character’s SSI.  We could have designed a presentation and a quiz.  We didn’t.  But to achieve that authentic context, we needed to store values in variables. 
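For readers who think in code, here is a hypothetical sketch of that bookkeeping in plain JavaScript (LodeStar authors would set up the same variables through the panel shown above):

    // Four variables, one per component of the Social Selling Index.
    let brand = 0, people = 0, insights = 0, relationships = 0;

    // Hypothetical helper: each activity is categorized; completing it
    // increases the numerical value of the corresponding variable.
    function recordActivity(category, points) {
      if (category === "brand") brand += points;
      else if (category === "people") people += points;
      else if (category === "insights") insights += points;
      else if (category === "relationships") relationships += points;
    }

    recordActivity("brand", 5); // the character's brand index rises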

To learn more about variables, complete the hands-on exercise shown in the video (mentioned above) and the accompanying tutorial.  You can download LodeStar 9 and use it at no charge to complete the exercise.  LodeStar Learning Corporation

Ability to offer something different and individualized based on this information.

In another recent project, we created a simple simulation of a workplace engagement platform.  The simulation helped guide employees through the steps of requesting feedback from their supervisor, co-workers, or reports. A future simulation will focus less on the procedural and more on the best practices of soliciting and giving feedback.  The first simulation was a post-training exercise; our HR Director conducted the training.  The exercise helped refresh participants’ memory of the basic steps.   The strategy was to add points for correct choices and subtract points for incorrect choices.  In response to wrong choices, feedback steered participants in the right direction.    A counter in the bottom left corner showed the result of correct and incorrect choices.  It was a bit of gamification, but always with the intent of guiding participants to the right choice.  In other words, guided practice.

So what role does code play?

This simple simulation wasn’t built from a template with some sort of pre-defined logic.  It was custom built for our purposes.  But it was a very simple construction: we began with a blank screen, uploaded screenshots, and defined click/touch areas.

As a result of a click, we wanted to a) add or subtract points and b) branch to a new screen or display an overlay.  We never subtracted points multiple times in response to multiple clicks on the same thing, but we always showed feedback.

Code can help us to:

  • Check whether the item has been clicked before.  If it hasn’t and the choice is correct, add points and then branch.  If it hasn’t and the choice is incorrect, subtract points and provide corrective feedback.  If it has been clicked and the choice is incorrect, increment a counter to provide another level of feedback with more urgency.
  • Store a value that enables us to check whether an item has been clicked.

These rules are simple.  They can be complex.  In this simple example, we use variables and conditional logic (i.e., if statements).  We also use branching, which, in this case, means displaying an overlay or a new screen with hotspots and more code that gets executed when an invisible hotspot is clicked.
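A minimal sketch of those rules (the function names are hypothetical; LodeStar expresses the same logic through branch options and its script editor):

    let score = 0;
    const clickCounts = {}; // remembers how often each hotspot has been clicked

    function onHotspotClick(id, isCorrectChoice) {
      const previousClicks = clickCounts[id] || 0;
      clickCounts[id] = previousClicks + 1;
      if (previousClicks === 0 && isCorrectChoice) {
        score += 10;
        branchToNextScreen();        // hypothetical
      } else if (previousClicks === 0) {
        score -= 5;
        showFeedback(id, "gentle");  // hypothetical corrective feedback
      } else if (!isCorrectChoice) {
        showFeedback(id, "urgent");  // repeated mistake: escalate the feedback
      }
    }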

A Simple eLearning Simulation

To be true to this section heading (i.e., offer something individualized), we could have gone further.   If the participant breezed through a scenario, we could have used conditional logic to increase the difficulty of the scenario.   If the participant stumbled through, we could have kept the level of difficulty the same (i.e., a plateau).  The same tools apply: variables and if-then statements.  I’m tempted to say that this approach is simpler than trying to shoehorn a pre-programmed template to your needs.

Ability to create a visual, manipulatable, and functional learning environment that suggests an authentic (if not totally realistic) context.

The screenshot below shows the beginnings of a tutorial on automatic direction finding (ADF), an older navigational method for airplane pilots.  There is just enough detail to make this panel somewhat realistic, but the panel is a simple composition of ellipses, paths, rectangles, and text.  The Scalable Vector Graphic (SVG) is composed of these elements.  Each element can generate a click event that results in the execution of some code.  In the screenshot, we are highlighting a switch that has the id of g2423.  When this switch is clicked, with a bit of code, we can cause something to happen.   The graphical element is tied to a LodeStar branch option.  The branch option executes commands that relate to an NDB (Non-Directional Beacon) that the pilot can tune in; in this case, the commands play back the Morse code that identifies the beacon.   As I’ve heard Ethan Edwards from Allen Interactions say many times, you just need enough realism to accomplish your learning objective.  Any more and you’re wasting your time or your client’s money or both.
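A sketch of the idea in plain JavaScript (the audio file name is hypothetical; in LodeStar, the click is wired through a branch option rather than hand-written code):

    // Attach a click handler to the SVG switch highlighted in the screenshot.
    const adfSwitch = document.getElementById("g2423");
    adfSwitch.addEventListener("click", () => {
      // Play the beacon's Morse code identifier (hypothetical file name).
      const morse = new Audio("ndb-identifier.mp3");
      morse.play();
    });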

Automatic Direction Finding — eLearning Module

To show another example: in the video and tutorial linked in the conclusion, I walk through a simple demonstration of how to make Scalable Vector Graphics interactive, using a traffic light switch.   I chose this example because it is a little easier to understand than the ADF on an airplane.

A LodeStar Learning tutorial on variables, conditional statements, functions, and SVG graphics

 

Conclusion

In the pursuit of serious eLearning and meaningful interactivity, I’ve noted LodeStar’s ability to support variables, conditional statements, branch options, and the ability to change the properties of objects.  Other authoring systems also support these concepts and require the author to understand the basics behind variables, conditional statements, and logic in general.

Allen Learning Technologies’ ZebraZapps requires no coding, but it does require the instructional designer to think logically.  Wiring replaces code, but logical reasoning is still required.

Articulate Storyline has the concept of triggers and supports events such as clicks, hovers, and drags.  Those events can be tied to property changes of Storyline’s native vector format.  Storyline also supports variables and has an easy-to-use interface for building sophisticated conditional statements.

Adobe Captivate supports the association of actions with graphics.  For example, the learner can click on a rectangle associated with an action such as show/hide or increment/decrement.   Captivate also supports an interface that can apply conditional logic to an action.  For example, a variable might keep track of slide states, where each state houses different text.  As the learner clicks a rectangle, an ‘if’ condition displays the matching text based on the current value of the variable.

In short, Storyline and Captivate support the idea of variables, events, conditional statements, and the ability to dynamically change the properties of graphics.  ZebraZapps has the same ability but without requiring a line of code.

Whatever the authoring tools’ approach, the ability to store information about the learners, to offer something different and tailored for the learner, and the ability to create a visual, manipulatable, and functional learning environment relies on the instructional designer’s logical thinking and the authoring tools’ ability to store values, change course based on conditions, and modify the visual environment in some way.

These resources can help you get started.  The first two, I’ve already mentioned.  The third is a terrific resource to learn the basics of coding.

LodeStar 9 — Elements Of Interactivity – YouTube

Making your projects interactive and interesting with a little bit of code | LodeStar Help (wordpress.com)

Learn to Code – for Free | Codecademy

Appendix A

To illustrate the concept of data type in variables, examine the following table:

Name                    Rank

Joe                         11

Anna                      2

Kim                        1

In the preceding table, Kim came in first place, Anna in second, and Joe in eleventh.  A variable stores each person’s rank.  If we interpreted the information in the variable as a number, this would be the sorted order:

Kim     1

Anna   2

Joe       11

If we treated the variable as a string of characters, this would be the sorted order:

Kim     1

Joe       11

Anna  2

In the second case, the value stored in the variable is treated as a string of characters.  In the computer’s character table, ‘1’ is assigned the numerical value 49 and ‘2’ the numerical value 50.  The computer compares the first character of 11 to the first character of 2; it looks up the character values and processes the comparison as 49 versus 50.  49 is lower; therefore, the computer places 11 before 2.    But that’s practically all there is to the complexity.  Variables store information, and it matters whether we interpret that information as numbers or as characters. This is known as the data type of the variable.
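JavaScript demonstrates the distinction neatly: its default array sort compares values as strings.

    ["11", "2", "1"].sort();          // ["1", "11", "2"] -- character order
    [11, 2, 1].sort((a, b) => a - b); // [1, 2, 11]       -- numeric order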

DIY Serious eLearning

Introduction

In the past decade, leaders in the field of learning experience design have given us much to think about, much to strive for.  These leaders represent a synthesis of instructional design, learning sciences, and user experience design.  They also possess, in one form or another, the resources to execute their ideas.  But, if you are an educator or, perhaps, a learning and development specialist in a mid-sized company, you know that you haven’t got a large team or a large budget.  You have highly specialized objectives.  You want your learning designs to be effective.  And you know that you can’t just pull something off the shelf.

In a series of posts, I’ll explore what the leaders are saying and then get down to DIY specifics.  I will parse out the skills that instructors and specialists need in order to implement some of these ideas, especially in the area of eLearning interactivity. But, in this post, let’s first contemplate some of the themes that are consistent with evidence-based learning design. Conveniently, many of them are listed in the Serious eLearning Manifesto.

The Serious eLearning Manifesto?

If the manifesto hasn’t lit your corner of the world, here is a little background.  In 2014, some highly respected thought leaders in eLearning convened to, in their own words, instigate the Serious eLearning Manifesto.  The instigators were Michael Allen, Julie Dirksen, Will Thalheimer, and Clark Quinn.  If these names are new to you, you’ll be delighted to learn that each name represents a treasure trove of ideas, insights, research, and reflections on how people learn and how to design effective learning experiences.  Joining in the pledge to promote ‘Serious Learning’ is a list that reads like the Who’s Who of learning design.  Among them: M. David Merrill, Allison Rossett, Roger Schank, and Sivasailam Thiagarajan, better known to the world as Thiagi.

If you haven’t read the Serious eLearning Manifesto, it is available at https://elearningmanifesto.org/.  Parts of the manifesto might seem self-evident.  One of the listed attributes of serious eLearning is that it must be meaningful to learners.  We might think it’s obvious that we want our learning activities to be meaningful to learners.  But the site discloses the status quo: too much eLearning is content-focused, efficient for authors, attendance-driven, focused on knowledge delivery, and so on.  I encourage you to visit the site for the full story.

Implementing the Supporting Principles

The Serious eLearning Manifesto is based on a number of supporting principles.  Each supporting principle is a study in itself. Some aspects of the manifesto and other evidence-based practices are not easily achieved with the traditional skillset and/or toolset of the college or corporation, including the Learning Management System.   I’ll sample a few of these.  I will place the language of the manifesto in bold.  The rest is my running commentary.

The manifesto states:

  • Do not assume that learning is the solution

This is a principle that was driven home to me by the Minnesota chapter of the International Society for Performance Improvement, MnISPI.  They espouse the Performance Improvement Model, where training is but one outcome of a performance needs analysis.  At our firm, Redpath and Company, we are working on a Knowledge Management platform that will eventually be integrated with our learning management system. In both the academic and corporate worlds, students and employees might benefit from a knowledge management center that gives them the cheat sheets, job aids, micro-learning, and whatever else they need to solve a problem or perform a task just when they need it.

  • Tie Learning to Performance Goals. A new breed of tool can help support this principle. At our firm, we recently implemented an employee engagement system that will soon integrate goals, feedback, and one-on-one reviews with training and performance solutions. The system is currently integrated with our Human Resources Information System (HRIS), but interoperability standards offer the opportunity to integrate some of the key pieces of learning development: knowledge management, learning management, curriculum mapping, resource library, and employee engagement.  The full suite of tools includes Bamboo HRIS; Microsoft Teams, SharePoint, and Power Automate; Prolaera Learning Management System; Microsoft Stream; and Quantum Workplace.  All of these systems can communicate with one another through application programming interfaces (APIs), which act as connectors between vendors.
  • Provide Realistic Practice. In eLearning, providing realistic practice might mean a case study, decision-making scenario, or simulation that simplifies the world into digestible learning chunks.  At our firm, we have generated a few of these and uploaded them to the SCORM Cloud, which is integrated with our learning management system.  (The SCORM Cloud supports traditional SCORM and a newer standard known as the Experience API, or xAPI.)
  • Adapt to Learner Needs. In eLearning, that might mean an adaptive learning system that uses some form of artificial intelligence or smart decision-making to meet individual students’ needs.  These systems predict and/or evaluate student performance and prescribe a learning plan with resources matched to topic, reading level, level of knowledge, and place in a learning hierarchy.

I have a personal interest in all of the supporting principles.  As a toolmaker/instructional designer, I’ve been slowly developing and promoting the knowledge management center.  I’ve been helping our HR department with the employee engagement system.  I’ve researched a host of adaptive learning systems but have yet to adopt one.  And I have a deep-rooted interest in promoting the benefit attached to the following supporting principle:  Use Interactivity to Prompt Deep Engagement.

Use Interactivity to Prompt Deep Engagement. 

Interactivity can mean a number of things.  eLearning texts often cite the Community of Inquiry framework, wherein the complete educational experience is described as student-to-student, student-to-instructor, and student-to-content engagement or interactivity.  I’ve observed instructors use the first two to good effect.  Many experienced online instructors deftly use discussion boards, chats and video conferencing.  The tools are there.  The instructional support is often there.  One of my favorite memories of effective student-to-student interactivity is from a marketing course.  The instructor set up the discussion thread so that students pitched ideas to the sub-grouped discussion board as if they were pitching to clients.  Students recalled the text and drew from their own knowledge to discuss the merits of the pitch.  The discussion wasn’t formulaic as too many are.  It was not ‘Read a chapter, post by Wednesday, respond to two posts by Sunday.’  In contrast, the marketing pitch simulated an authentic context (serious eLearning), and provided real-world consequences to the student.  Their pitch got a positive or negative response.

Student-to-content interaction is a bit more challenging for both instructors and learning and development folks to implement.  The manifesto talks about using interactivity to support reflection, application, rehearsal, elaboration, contextualization, debate, evaluation, synthesis, and more.  Some of this can be accomplished with the traditional tools of the LMS as described above.  Some of it requires third-party authoring tools like ZebraZapps, Storyline, Captivate, and LodeStar.  They are vital tools in the eLearning instructor’s toolkit.  But making eLearning meaningful with the use of authoring tools requires a new set of skills.  Without those skills, we settle for what the Serious eLearning Manifesto decries:  page turning, rollovers, and information search.

Some skills are technical; others relate to psychology and cognition. One of the manifesto’s instigators, Michael Allen, wrote more than a half-dozen books and built two incredible tools that enable instructors and instructional designers to build rich learning experiences: Authorware and ZebraZapps.  Both tools gave non-programmers the ability to design something interesting:  realistic scenarios, storytelling, challenges, environments that invoked action and showed the consequences.  The other instigators of the manifesto gave us additional insights into cognition. Julie Dirksen, in her highly acclaimed book Design for How People Learn, gave us insight into why people persist in their negative behaviors, how they remember things, what motivates them, and what strategies are effective. Will Thalheimer bridged research and practice in topics related to memory, evaluation, and presentation, and he led the charge to debunk many of the learning myths that we hold near and dear to our hearts.  Clark Quinn has written numerous books that cover learning science and design.

Underlying all of this is research-based evidence.  Michael Allen and Julie Dirksen, especially, soft-pedal the research.  That’s their style. Their writings are lighter and not riddled with citations.  Some of it is even iconoclastic, like the title of Michael Allen’s Designing Successful e-Learning: Forget What You Know About Instructional Design and Do Something Interesting.  In this field, creative, insightful practices often take a back seat to formulaic approaches.  Stating the objective on page one, presenting information on page two, and quizzing on page three would be an example of a formulaic approach.

Julie Dirksen’s Design for How People Learn is illustrated with quirky line drawings that simplify serious ideas and make them more digestible.  But these books, style aside, are grounded in research.   A recent book, which incidentally recognizes the contributions of Julie Dirksen and Will Thalheimer, focuses precisely on evidence-based practices and exposes the myths.

Evidence-Informed Learning Design was authored by Mirjam Neelen and Paul Kirschner, both highly respected for their contributions to the learning sciences. In their book, they list the top five ingredients in order of effectiveness and efficiency.  The practices include spaced practice, practice tests, overlapping the practice of one topic with the practice of another, questioning, and encouraging learners to explain a process or procedure to themselves.

If you look up these authors, read their books and blogs, and listen to their podcast interviews (see resources below), you will be further convinced that the Serious eLearning Manifesto has merit.

In academia, many have read How Learning Works and contemplated the seven research-based principles for smart teaching offered by Susan Ambrose, Michele DiPietro, and others.  In How Learning Works, you will find the same themes:  students and trainees are not blank slates; how they are prompted to organize knowledge influences how they learn; addressing motivation is paramount; component skills need to be identified, addressed with targeted strategies, mixed, and remixed.  Meaningful eLearning should offer practice, practice, and more practice with guidance, feedback, scaffolding, elaboration, and so on.  A page-turner PowerPoint with little engagement doesn’t cut it.

Conclusion

So, in the next post, I will tackle one aspect of serious eLearning.  I will parse out what it takes to design a meaningful interaction between student and content.  I will use our own tool, LodeStar, to illustrate the ideas but not confine the discussion to our own self-interest.  I’ll expand the discussion to include other authoring tools and, hopefully, contribute in some small way to the cause of Serious eLearning. In the meantime, please check out the resources listed below.

Resources

Michael Allen’s Books

Julie Dirksen’s Book: Design for How People Learn

Wil Thalheimer’s Site: Work-Learning Research Site

Clark Quinn’s Blog: Learnlets

Mirjam Neelen and Paul Kirschner’s Blog: 3 Star Learning Experiences

The Learning Hack Podcast

International Society for Performance Improvement

Minnesota Chapter of the International Society for Performance Improvement

Using Photospheres in Online Courses

Introduction

If you read my last post, you’ll know that I love technology but am wary of it.  As an instructional designer and toolmaker, I’m selective about the educational technologies I choose to learn and integrate into our authoring tool, LodeStar.  My basic rule is that a little investment must pay large dividends.  My second rule is that instructors and trainers should be able to envision easily how the technology will apply to student learning.

One technology in particular has tempted me down the rabbit hole in the past:  virtual reality.  Until recently, I kept away from integrating VR into LodeStar.  Now, I concede that there are solid stepping stones to instructors using VR in eLearning applications.  The investment can be small; the dividends, with the right design, could be huge. One example of a stepping stone is the ‘photosphere.’

Photospheres

The photosphere is more commonly known as the 360-degree panoramic image, VR photo, or interactive panorama. A photosphere is essentially a 360-degree scene viewed through a special viewer that transforms a two-dimensional, distorted image into something magical.

Once upon a time, photospheres were difficult, time-consuming or expensive to produce.  Instructors needed special equipment and/or software to ‘stitch’ together many photographs into one viewable image.

Today, smartphone apps step instructors through the process of taking multiple images that are automatically mapped onto a sphere.  The sphere, when projected onto a two-dimensional plane, looks distorted.  When shown through a viewer, it offers an undistorted 360-degree view of a scene.
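To make that mapping concrete, here is a minimal Python sketch of the math a photosphere viewer performs. It is illustrative only, not LodeStar's code; the function name and coordinate conventions are my own assumptions.

```python
import math

def equirect_pixel_to_direction(u, v, width, height):
    """Map a pixel (u, v) in an equirectangular image to the unit
    direction vector a viewer at the center of the sphere would look.

    u in [0, width) spans longitude -180..180 degrees;
    v in [0, height) spans latitude 90..-90 degrees (top to bottom).
    """
    lon = (u / width - 0.5) * 2.0 * math.pi   # -pi .. pi
    lat = (0.5 - v / height) * math.pi        # pi/2 .. -pi/2
    # Spherical-to-Cartesian conversion with the y-axis pointing up
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    return (x, y, z)

# The center pixel looks straight ahead along +z ...
print(equirect_pixel_to_direction(2048, 1024, 4096, 2048))  # (0.0, 0.0, 1.0)
# ... and any pixel in the top row looks straight up
print(equirect_pixel_to_direction(0, 0, 4096, 2048))        # (~0.0, 1.0, ~0.0)
```

Every pixel in the top row maps to nearly the same upward direction, which is exactly why the poles appear stretched across the full width of the flattened image.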

The Hermitage Museum is a wonderful example of the use of photospheres (panoramas) to give visitors a virtual tour of the museum.

https://www.hermitagemuseum.org/wps/portal/hermitage/panorama/virtual_visit/panoramas-m-1/?lng=

Now a photosphere can be created by any eLearning instructor with a dozen or so clicks.

I’ll suggest three simple ways that online instructors can get started using photospheres in their courses and conclude with a fourth, more sophisticated, example.  Each of these is illustrated in a LodeStar Learning activity found at:

https://lodestarlearning.github.io/VR-Demo/index.html

Suggestion One: Link to VR sites

An instructor can simply link to VR (360-degree panorama) sites.  Here are some examples:

Louvre
https://www.youvisit.com/tour/louvremuseum

Iceland
https://www.iceland360vr.com/map/

Rome
https://www.youvisit.com/tour/rome

Suggestion Two: Find and download images

Finding and downloading images for education is currently a bit of a challenge.  You'll find photospheres on Facebook, Instagram, Flickr, virtual tour companies' sites, museums, and tourist bureaus.  But you will be hard-pressed at the moment to find photospheres in Open Educational Resources (OER) repositories.  We might be a little ahead of the curve.  I suspect that, for a variety of reasons, we'll see an uptick in educationally useful photospheres in the most popular repositories, like MERLOT, OER Commons, and Curriki.

In the meantime, view and download examples from the following sites.

https://www.flickr.com/vr

https://commons.wikimedia.org/wiki/Category:Photo_Sphere

https://pixexid.com/search/360%20panoramic

Suggestion Three: Use a tablet or smartphone to generate an image

Photospheres are now easy to create.  As I mentioned, they were once difficult to produce.  Today, free smartphone software guides users by displaying dots on the screen.  The user moves the camera until a dot falls within a circular target, follows the dots until a 360-degree ring of photos is created, and then continues upward and downward, in igloo-building fashion, until all of the surrounding space is covered with images.  The software stitches the images together and produces what appears to be a distorted image when viewed without a photosphere viewer.

Using Google Street View to Produce a Photosphere
Under the Golden Gate Bridge — a photosphere

Suggestion Four: Use Blender or other 3D software to generate a scene and render it as a photosphere

One of the more sophisticated uses of photospheres is creating them with 3D software.  In the early '90s, when I worked with 3D software, the price tag was in the thousands of dollars.  Some complicated scenes required a room of twenty computers, all working on some aspect of the image delegated to them by a rendering manager, a kind of orchestral conductor.

Today, students can download powerful software, like Blender (https://www.blender.org/), for free.  Typically, instructors wouldn't have the time to learn the software and build 3D models.  Some students, on the other hand, might be eager to support their teachers by learning the software and generating useful models.  Building 3D models is a lot of fun and tremendously educational.

In this example, I used a model produced by Marcin Lubecki.  Here is what the Blender environment looks like:

Blender 3D Software

Next, I positioned a camera in the center of the kitchen.

Positioning a camera in a 3D Model created by Marcin Lubecki in Blender

Then I set Blender's rendering engine to 'Cycles.'  I set the camera type to 'Panoramic' and the panorama type to 'Equirectangular.'  I then set the latitude range from -90 to 90 degrees and the longitude range from -180 to 180 degrees.  I made a few more adjustments and then rendered the image.  The same settings can also be applied with a few lines of Blender's Python scripting, as sketched below.
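For anyone who prefers to script the setup, here is a minimal sketch using Blender's Python API (bpy). It assumes the Cycles property names of the Blender 2.8x/2.9x era, which have moved in more recent releases; the camera position, resolution, and output path are my own choices, not taken from the project.

```python
import math
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'                 # render with the Cycles engine

# Create a panoramic camera with an equirectangular projection
cam_data = bpy.data.cameras.new("PanoCam")
cam_data.type = 'PANO'
cam_data.cycles.panorama_type = 'EQUIRECTANGULAR'

# Cover the full sphere: latitude -90..90, longitude -180..180 (in radians)
cam_data.cycles.latitude_min = math.radians(-90)
cam_data.cycles.latitude_max = math.radians(90)
cam_data.cycles.longitude_min = math.radians(-180)
cam_data.cycles.longitude_max = math.radians(180)

# Place the camera at roughly eye height in the center of the room
cam_obj = bpy.data.objects.new("PanoCam", cam_data)
scene.collection.objects.link(cam_obj)
cam_obj.location = (0.0, 0.0, 1.6)
scene.camera = cam_obj

# Equirectangular images use a 2:1 aspect ratio
scene.render.resolution_x = 4096
scene.render.resolution_y = 2048
scene.render.filepath = "//kitchen_pano.png"   # hypothetical output path
bpy.ops.render.render(write_still=True)
```

With these settings, Cycles renders the full sphere around the camera into a 2:1 image, the same equirectangular projection described earlier.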

The process renders one tile at a time to produce this equirectangular projection, essentially stitching the whole thing together before your eyes.

Rendering a model created by Marcin Lubecki

The result is in the linked LodeStar example found above.

Conclusion

This post focused mostly on finding, creating, and viewing photospheres. The first releases of LodeStar 9 will support the viewing of photospheres.  A near-future version of LodeStar will enable instructors to add markers to a photosphere and connect the image to all of LodeStar's branching options.

Photospheres are easy to create.  Hopefully, in the future, they will also be easy for instructors to find to suit an instructional purpose.  One can easily imagine the applications: virtual tours of places and items of interest in every discipline.  In a future post, I'll tease out some of the possibilities and opportunities for the adventuresome online instructor.