Learner Experience Design

Introduction

Learner Experience Design has captured the attention and the imagination of just about everybody.  Some have cast learner experience design (LXD) as a discipline in direct opposition to instructional design; others consider LXD simply a rebranding of instructional design.

My own perspective comes directly from my community of practice.  For one, I worked as an instructional designer for creative studios that practiced learner experience design well before it became a thing.  We worked in teams that blended the disciplines of user experience design, cognitive psychology, learning technology, and design thinking, which included ideation and prototyping.  LXD as a discipline captures the very best of the principles espoused in the CCAF (Context-Challenge-Activity-Feedback) Model and of design processes that include situational and user analysis, successive approximations, sketches, quick prototypes, a focus on the user, and a focus on doing.  The process of creating Allen Interactions’ ZebraZapps, an eLearning authoring tool, included the best of design thinking and user experience design.

So what is Learner Experience Design?

So for me, LXD is what we’ve been doing for years, and that is:

  • Centering on the learner versus the content (Dee Fink)
  • Focusing on the experience of the learner — on the doing (CCAF, problem-based learning)
  • Applying how people learn (cognitive science)
  • Empathizing, defining, idea-generating, prototyping, and testing (Design Thinking)
  • Following the principles of User Experience Design (Human Factors)
  • Collecting and analyzing data (Data analytics with the help of SCORM, and now xAPI, CMI5)
  • Using learning technology as enablers or affordances
  • Recognizing that formal training is but one part of improving human performance

In my view, LXD is the power of all of these things combined under one label.  To illustrate the interplay of the learner, experience, cognition, behavior, UX, Design Thinking, data, technology, and human performance, I’ll draw upon a current project.

An Example

The project goal is to help supervisors act more like coaches than formal evaluators.  The context is public accounting.  CPAs require deep technical skills and, as they progress in their careers, a host of success skills that include business development, leadership, supervision, and more.  In Minnesota, for example, CPAs complete 120 credits every three years to maintain their license.  They must also routinely attend trainings and updates related to changes in the law, technology, and business practices. 

In addition to this continuous training, the company seeks to improve employee retention, maintain good morale, and continue to grow rapidly.  To achieve its goals, the company adopted an employee engagement system that, among other things, helps supervisors collect feedback on employees from their tax reviewers or audit in-charges. More importantly, the company is switching from an annual review to monthly meetings that help supervisors and their reports improve their work.

There’s already a lot going on.  Learner Experience Design recognizes that all of these factors come into play:

  • Employees train a lot
  • New technology is in place
  • Industry is experiencing high turnover of staff
  • Company wants supervisors to be good coaches
  • Company is shifting from annual review to monthly meetings

At the heart of all of this lies a set of experiences shared between supervisors and their reports:

  • Requesting, providing, and organizing feedback with the employee engagement platform
  • Delivering effective feedback
  • Receiving feedback effectively

Let’s focus on one experience to illustrate the power of LXD.  Let’s focus on ‘giving feedback’.

There are underlying psychological principles as well as best and poor practices related to giving feedback.  Giving feedback might elicit a perception of threat in the receiver and can easily be dismissed.  The feedback provider must use concrete examples, remain non-judgmental, draw from different perspectives, work toward a positive outcome, and on and on.

As designers, we can treat the topic of giving feedback in many different ways.  We can explain the function of the amygdala in the human brain and underscore its importance in decision making and emotional responses.  Feedback triggers those emotional responses and evokes a fight or flight response.  We could show video clips of good and bad practice or cartoon strips or excerpts from medical journals or any media that conveys information.  Our design might include this type of information sharing and then some form of assessment – a quiz or essay.

In contrast, LXD tends to favor placing the experience at the heart of the lesson.  In this case, the experience is the giving of feedback.  One design treatment might place the learner in a first-person scenario or simulation.  The context is the office with a new employee who is not performing well.  The learner acts as supervisor and selects the best thing to say in a conversation with the employee.  If the learner’s choices disagree with the principles and best practices of providing feedback, then the instruction may come in the form of an employee thought bubble, a pop-up outlining best practices, references to a text or a video, and other visual indicators of success or failure. 

In the prototype below, some of these ideas come together.  The learner has selected one of three options.  The choice causes a change in the employee’s outward expression (full figure on the left), in her inward expression and thoughts, and in the information that is collected on the interaction.  In this prototype, the learner can access a transcript or review it at the end.  At this point in the scenario, the employee came in expecting to be coached, only to be confronted by the reality that she is being evaluated (because of what the learner chose).  She outwardly smiles while inwardly expressing her concern about being evaluated.  A meter shows generally how things are going.

At the bottom of the screenshot, the learner has access to feedback given about the employee from two sources.  Just as in real life, the learner can consult that feedback to get different perspectives on the employee’s performance.

Giving Feedback prototype authored with the LodeStar eLearning authoring tool

The Design Thinking that led to this prototype included, to start, an analysis.  We had to know something about the audience, their situation, and the processes that were in place in the past.  In fact, while thinking about the actual problem we were trying to solve, we placed feedback ‘training’ on the back burner.  Other things needed to be in place first: clear processes and role definitions among supervisors, audit in-charges, tax reviewers, and other personnel.  We also needed to work out how the workplace engagement platform would be used optimally to solicit and collect feedback in preparation for the one-on-one meetings between supervisors and their employees.

As we continue to think about people and processes, we’ll come up with new ideas, build new prototypes and test them out. 

Well…admittedly, to a point.  For a mid-sized company the return on time and effort is calculated quite differently than for a creative agency that plans training for thousands.  Design thinking still plays a role, but perhaps at a smaller scale.

The cognitive aspects of this training relate to how we can help the learners acquire and retain new knowledge without overload, how they can assimilate that new knowledge, and how they can apply the knowledge to their daily lives.  Human Performance Improvement considers any job aids or prompts that support the learner’s application of the principles and procedures.  User Experience Design challenges us to think about a lot of things on the screen (fonts, colors, layout, flow, navigation, interactive elements, accessibility, desire paths) and off (cognitive overload, attention, memory, and more).

All of these things interplay and intersect.  Cognitive load might cause us to scaffold or plan out the curriculum differently (instructional design), or create a job aid (human performance), or map out the experience (UX) so that it doesn’t overwhelm the learner.  As we build prototypes or test the product, we collect data and analyze it.  Learning technology (xAPI, CMI5, SCORM) helps us collect the data from the learning experience.  xAPI and CMI5 are standards that are centered on experience.  (As I’ve written in the past, the x in xAPI is ‘experience’.)  Statistical methods help us make sense of the data.  For example, are learners benefiting from one design over another?

Conclusion

Since the term Learner Experience Design was first introduced, it has become part of our vocabulary and a rallying cry against content-centric designs, training-centric human performance improvement, and ineffective user interfaces.  LXD may not be anything new and yet it feels new and it feels exciting.

CMI5: A Call to Action

Introduction

Since 2000, a lot has changed. Think airport security, smart phones, digital television, and social media. In 2000, the Advanced Distributed Learning (ADL) Initiative gathered a set of eLearning specifications and organized them under the name of SCORM. In 2021, in a time of tremendous technological change, SCORM remains the standard for how we describe, package, and report on eLearning.

However, finally, we are on the eve of adopting something new and something better: CMI5.

We no longer have landlines, but we still have SCORM

CMI5 Examples

To many, CMI5 is another meaningless acronym. To understand the power and benefit of CMI5, consider these very simple examples:


A Learning and Development specialist creates a learning activity that offers managers several samples of readings and videos from leadership experts. The activity allows the managers the freedom to pick and choose what they read or view; however, the specialist wants to know what they choose to read or watch as well as how they fare on a culminating assessment.

CMI5 enables the activity to capture both the learner experience (for example, the learner read an excerpt from Brené Brown’s Dare to Lead) and the test score. CMI5 can generate a statement on virtually any kind of learner experience as well as the traditional data elements such as score, time on task, quiz questions, and student answers. In this sense, CMI5 supports both openness and structure.
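To make this concrete, here is a sketch of such a statement, written as a JavaScript object literal. The actor, activity ID, and verb choice are hypothetical, invented for illustration; the ‘experienced’ verb is one of the verbs ADL registers:

    // A hypothetical xAPI statement: the learner read an excerpt
    // from Dare to Lead. All names and IDs below are invented.
    const statement = {
      actor: { name: "Pat Manager", mbox: "mailto:pat.manager@example.com" },
      verb: {
        id: "http://adlnet.gov/expapi/verbs/experienced",
        display: { "en-US": "experienced" }
      },
      object: {
        id: "https://example.com/library/dare-to-lead/excerpt-1",
        definition: { name: { "en-US": "Excerpt from Dare to Lead" } }
      }
    };

A second statement, using the ‘passed’ verb and a result object with a score, would carry the culminating assessment.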

Let’s consider another example:

An instructor authors a learning activity that virtually guides students to places in Canada to observe the effects of climate change. She wants students to answer questions, post reflections and observe the effects of climate change on glaciers, Arctic ice, sea levels and permafrost. She sets a passing threshold for each activity. Once students have completed all of the units, then the learning management system registers that the course was mastered.

Let’s go further:

The instructor wants the learning activity to reside in a learning object repository or website outside of the learning management system – but still report to the learning management system. In fact, she wishes that no content reside on the learning management system. Regardless of where the content resides, she wants to know what sites students visited, how they scored on short quizzes, and how students reacted to the severe impact of climate change on Canada.

For students with disabilities, the instructor makes an accommodation and requests that the LMS administrator adjust the mastery score without editing the activity.

As the course becomes more and more popular, she anticipates placing the website and its activity onto Cloudflare or some other content delivery network so that students all around the world can gain faster access to the learning activities.

The instructor works as an adjunct for multiple universities and wants each of their learning management systems to get the content from a single location. In some cases, she wants the content locked for anyone who circumvents the Learning Management System, and in other cases she openly lists the unlocked content with OER libraries like MERLOT and OER Commons.


Before CMI5, much of this was difficult to achieve, if not impossible. So, let’s review what CMI5 offers us.


CMI5 captures scores in the traditional sense. But it also records data on learning experiences, such as students virtually observing the change in the permafrost. CMI5 allows instructors and trainers to set the move-on criteria for each unit in a course (e.g., a passing score required before the student moves on to the next unit).

CMI5 activities can reside anywhere – on one’s own website, for example, and still report to the learning management system. CMI5 enables an LMS administrator to change the mastery score from the LMS for the benefit of students who need accommodations and essentially trump what is set in the unit.

LodeStar’s CMI5 Implementation allows
authors to indicate where the content resides


CMI5 is a game changer. And yet for many – learning and development leaders, instructional designers, technologists and students – it doesn’t seem that way in 2021. CMI5 seems like a non-event. It feels like something we all talked about – a welcome change of weather on the horizon – and then nothing. Not a drop of rain.


We have been talking about and anticipating CMI5 for a long time – and yet, major learning management systems both in the corporate and academic worlds still don’t support it. CMI5 was envisioned in 2010, released to developers in 2015, and then released to the public in its first edition in 2016. We are now in the waning days of 2021—with limited adoption.


But that is likely to change.


For one, Rustici Software and ADL delivered on their promise of Catapult. Catapult is likely to accelerate adoption of CMI5. It provides many benefits to developers, including the ability to test if a CMI5 package conforms to the standard.

In my view, the learning technology architects have done their part. They brought us a meaningful set of specifications. They brought us the tools to test learning packages and to test the learning management system’s implementation of CMI5. Now it’s up to learning and development specialists and the instructional design community to cheer CMI5 on. It is my belief that once the community understands CMI5, spreads the word, and imposes its collective will on the LMS providers, CMI5 will become an important part of our tool bag. I urge you to share this article and others like it.


In the meantime, let’s take a deeper dive into CMI5’s potential.


Benefit One: Freedom to capture and report on any learner experience.


With CMI5 you can report on scores, completion status, and just about anything else. You can report on standard assessment results and the not-so-standard learning experiences.


To understand this, we need to re-look at SCORM.


One should consider CMI5 as a replacement for SCORM – an improved specification. Conforming to SCORM was useful because a learning object or learning activity could be imported into just about any modern learning management system. As an instructor, if you created a game, quiz, presentation, simulation, whatever, and exported it as a SCORM package, your activity could be imported into Moodle, Brightspace, Canvas, Cornerstone, Blackboard, and any learning management system that supported SCORM. So, the benefit of SCORM was that it was a set of standards that most LMS systems understood. The standards that fell under the SCORM umbrella included metadata, a reporting data model, and standard methods for initializing an activity, reporting scores, reporting on interactions, and reporting passing or failing and completion status.

The data model included dozens of elements. One example of a data element is cmi.core.score.min. Related to score, SCORM conformant activities reported on the minimum score, the maximum score, the raw score (an absolute number), and the scaled score (a decimal between 0 and 1).
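For readers who have never peeked under the hood, here is a minimal sketch of a SCORM 1.2 activity reporting a score at runtime. It assumes the LMS-provided API object has already been found (the discovery routine appears later in this article):

    // Minimal SCORM 1.2 reporting sketch. 'API' is the adapter
    // object that the LMS exposes to the content window.
    API.LMSInitialize("");
    API.LMSSetValue("cmi.core.score.min", "0");
    API.LMSSetValue("cmi.core.score.max", "100");
    API.LMSSetValue("cmi.core.score.raw", "85");   // the absolute score
    API.LMSSetValue("cmi.core.lesson_status", "passed");
    API.LMSCommit("");
    API.LMSFinish("");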


SCORM supported a lot of different data elements. A SCORM conformant activity could report on a variety of things. The limitation of SCORM, however, was that, despite the large number of elements, it was still a finite list. Take, for example, a geolocation storytelling activity or an eBook reading assignment. If I wanted to capture and report that the student virtually or physically visited location A, then B, and then C, I would have to work around the limitations of SCORM. I could not generate a statement such as, for example, ‘Student visited the Amphitheater in Arles’. If I wanted to capture a student’s progress through an eBook, SCORM would be problematic.


At this point, you might be protesting: but xAPI does that! xAPI? Another acronym! Yes. xAPI, or the Experience API, is a newer specification that makes it possible to report on a limitless range of things that a learner has experienced: completing a chapter of an eBook, watching a video, touring a museum, and on and on. So, if we have this thing called xAPI, why CMI5?


The benefit of xAPI is that it supports the reporting of anything. The downside to xAPI is that, by itself, it doesn’t have a vocabulary that the LMS understands, such as launched, initialized, scored, passed, and completed. That is what CMI5 offers. CMI5 is, in fact, an xAPI profile that includes a vocabulary that the LMS understands. In addition, CMI5 can report on any type of learner experience. Here is the definition of CMI5 from the Advanced Distributed Learning Initiative:


cmi5 is a profile for using the xAPI specification with traditional learning management (LMS) systems

(Advanced Distributed Learning).


With CMI5, you can have your cake and eat it too. You can report on learner activity in a way that the LMS understands, and you can report on just about anything else, which the Learning Management System stores in a Learner Record Store. The Learner Record Store, or LRS, is a database populated by statements about what the learner experienced.

xAPI statements can capture any
learner experience, including reading the instructions


Benefit Two: Freedom to put the learning activity anywhere


With CMI5, you can place a learning activity in a repository, in GitHub, on a web server, in a Site44 Dropbox site, in SharePoint, in a distributed network, wherever… without restricting its ability to connect with a learning management system. CMI5 content does not need to be imported. A CMI5 package can contain as little as one XML file, which, among other things, tells the LMS where to find the content.
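As a sketch, a minimal course structure file (cmi5.xml) might look like the following. The course and AU IDs and the URL are placeholders; the url element is what lets the content live anywhere:

    <?xml version="1.0" encoding="UTF-8"?>
    <courseStructure xmlns="https://w3id.org/xapi/profiles/cmi5/v1/CourseStructure.xsd">
      <course id="https://example.com/courses/climate-change">
        <title><langstring lang="en-US">Climate Change in Canada</langstring></title>
        <description><langstring lang="en-US">A virtual field course.</langstring></description>
        <au id="https://example.com/courses/climate-change/glaciers"
            moveOn="CompletedOrPassed">
          <title><langstring lang="en-US">Glaciers</langstring></title>
          <description><langstring lang="en-US">Observe glacial retreat.</langstring></description>
          <!-- The content can sit on any web server, CDN, or repository -->
          <url>https://example.com/activities/glaciers/index.html</url>
        </au>
      </course>
    </courseStructure>

Note the moveOn attribute; we will return to it under Benefit Three.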


To appreciate this, we need to look back at SCORM once more (as if it were ancient history).


I’ll start with a pseudo-technical explanation and then follow with why it matters.
The way SCORM works is that the learning activity sits in a window. The learning activity uses a simple looping algorithm to find the Learning Management System’s SCORM adapter. It checks its parent window for a special object. If the window’s parent doesn’t contain the object, the activity looks to the parent’s parent, and so on. In other words, somewhere in that chain of parents, there must be that special object. Typically, the SCORM activity can only communicate with the learning management system if it is a child window of that system or if some server-side technology is used.
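In code, that looping algorithm looks roughly like this (a simplified sketch of the well-known SCORM API discovery routine):

    // Simplified SCORM 1.2 API discovery: climb the chain of parent
    // windows until the LMS-provided API object is found.
    function findAPI(win) {
      let attempts = 0;
      while (!win.API && win.parent && win.parent !== win && attempts < 500) {
        attempts++;
        win = win.parent;      // look to the parent's parent, and so on
      }
      return win.API || null;  // null: no SCORM adapter in the chain
    }
    const API = findAPI(window) ||
      (window.opener ? findAPI(window.opener) : null);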

CMI5 works quite differently. CMI5 gives us freedom to leave our parents’ home. Whereas SCORM uses a JavaScript Application Programming Interface to communicate, CMI5 uses xAPI to reach across the internet and call a web service’s methods. Loosely, it’s like the difference between a landline and a cellular phone service. To use the landline you must be in the house; to use a cell phone, you must be in the network.

Benefit Three: A simplified sequencing model.

SCORM supported Simple Sequencing, which many say is not so simple. CMI5’s ‘move on’ property, in contrast, is very easy. A CMI5 course can contain one or more Assignable Units (AUs). The instructor spells out what the learner must achieve in an assignable unit before being able to move on. The move-on property has one of the following values:


• Passed
• Completed
• Completed Or Passed
• Completed And Passed
• Not Applicable


Once the student has ‘moved on’ through all of the assignable units, the LMS notes that the course has been satisfied by that student.


Benefit Four: An assignable unit passing score can be overridden


In SCORM, the mastery score is hard-coded in the activity. In a SCORM activity, the instructor can base completion status on a passing score. But what if that hard-coded score were inappropriate for a group of students, for whatever reason? The CMI5 specification enables an LMS to pass the mastery score to the Assignable Unit upon launch. So the LMS launches the AU and sends it the student name and mastery score (among other things). By specification, the AU cannot ignore the mastery score; it must either use it to trump what is hard-coded in the unit or refuse to run.
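In cmi5, the mastery score reaches the AU through a state document named LMS.LaunchData, which the AU retrieves from the LRS right after launch. Here is a hedged sketch; endpoint, activityId, actor, registration, and authToken stand in for values the AU receives at launch:

    // Sketch: fetch the LMS.LaunchData state document, which carries
    // masteryScore, moveOn, launchMode, returnURL, and more.
    // (Assumes 'endpoint' ends with a trailing slash.)
    const stateUrl = endpoint + "activities/state" +
      "?stateId=LMS.LaunchData" +
      "&activityId=" + encodeURIComponent(activityId) +
      "&agent=" + encodeURIComponent(JSON.stringify(actor)) +
      "&registration=" + registration;
    const launchData = await fetch(stateUrl, {
      headers: {
        "Authorization": authToken,
        "X-Experience-API-Version": "1.0.3"
      }
    }).then(r => r.json());
    // The LMS, not the activity, decided this value:
    console.log(launchData.masteryScore); // e.g. 0.8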


Benefit Five: Theoretically, CMI5 isn’t hamstrung by pop-up blockers.

When an LMS launches a SCORM activity, it either embeds the activity in an iframe or launches a pop-up window. Both scenarios are problematic. The content may not be well suited for an iframe, and a pop-up blocker can obstruct the launched window.


Theoretically, a CMI5 AU can replace the LMS in the browser window with its own content. It’s not in an embedded iframe and it’s not a pop-up window. When the LMS launches the AU, along with the student name and mastery score, the LMS sends the AU a return URL. When ended, the AU returns the student to that return URL, which is the address of the LMS.


I write “theoretically” because an LMS may ignore this requirement, even though it should not.

Benefit Six: CMI5 activities securely communicate to the Learner Record Store


As I wrote, the activity can send information about learner experiences clear across the internet to the learner record store. But how does the AU have the authorization to do this from, let’s say, a web site? And how does it happen securely?


This is the marvel of 2021 technology versus 2000 technology. Before 2000, we had difficult-to-use protocols for passing information securely across the internet. Oftentimes, special rules needed to be added to internet routers. Then along came a simpler protocol that the first version of CMI5 used (SOAP). Then came an even better way (OAuth and REST). After launch, the LMS hands the AU a security token (kind of like a key that dissolves in time). The AU uses that key to gain access and to post information to the Learner Record Store.
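A sketch of that handshake, with placeholder names: the AU issues an HTTP POST to the fetch URL it received at launch and gets back a short-lived token, which it then presents on every xAPI call:

    // Sketch: trade the one-time fetch URL for a session auth token.
    // fetchUrl is a placeholder for the 'fetch' launch parameter.
    const response = await fetch(fetchUrl, { method: "POST" });
    const body = await response.json();
    const authToken = body["auth-token"];
    // Per the cmi5 specification, this token goes into the
    // Authorization header of every request to the LRS endpoint.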

Conclusion

CMI5 returns power to the instructor and to the L&D specialist. CMI5 allows one to choose where the content resides and to choose what the content reports. CMI5 captures learner experiences more completely and yet it communicates with Learning Management Systems with a vocabulary that LMSs understand. CMI5 supports accommodations for a special group of students without needing to change the code of the Assignable Unit. Finally, CMI5 uses current technology to send data over the internet.

The implications of this emerging specification are tremendous. It is better suited to mobile learning, and it is better suited to the learner experience platforms that are emerging (e.g., LinkedIn Learning’s Learning Hub). Soon instructors may be able to organize content from a variety of providers (like LinkedIn Learning, Khan Academy, or OER Commons) but retain the learning management system as an organizer of content, data collector, and credentialing agent. Now instructors, average instructors, may be able to participate in that content market from their own GitHub repositories and websites.

But many LMSs have yet to adopt CMI5. The architects have done their part. Now it’s on us to understand this technology and advocate for it. Start by sharing this article. Thank you.

Appendix A — How it Works (A simplified flow)

For those interested in a deeper dive, let’s walk through the CMI5 process flow step-by-step. (See diagram)

To begin, the author (instructor, L&D specialist) exports content as a CMI5 package. The package can be a simple file that instructs the LMS where to find the content or it can include the content itself.

(1) When a student needs the content, the Learning Management System (LMS) launches the content and (2) sends the Assignable Unit (a course can contain one or more Assignable Units) information that includes the student name, a fetch URL, and the activity ID.

(3) The Assignable Unit (AU) uses the fetch URL to retrieve a security token. The security token enables the AU to communicate securely to the Learner Record Store (LRS).

(4) As the student interacts with the content, the AU can optionally send Experience API (xAPI) statements to the LRS. (5) At some point, the AU reports that the student passed and/or completed the unit.

(6) The LMS uses the ‘move-on’ information to determine whether or not the student can move on to the next assignable unit. The move-on options are passed, completed, passed and completed, passed or completed, or not applicable.

Finally, when all of the assignable units within a course are completed, the course is marked as satisfied for the specific learner.
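To ground the flow, here is a sketch of steps (1) and (5) on the wire. URLs, names, and IDs are placeholders; the query parameters on the launch line (endpoint, fetch, actor, registration, activityId) are the ones the cmi5 specification defines:

    // (1) The LMS launches the AU with query parameters like these
    //     (wrapped here for readability):
    //   https://example.com/activities/glaciers/index.html
    //     ?endpoint=https://lms.example.edu/lrs/
    //     &fetch=https://lms.example.edu/cmi5/fetch
    //     &actor={"name":"A. Student","mbox":"mailto:student@example.edu"}
    //     &registration=<UUID assigned by the LMS>
    //     &activityId=https://example.com/courses/climate-change/glaciers

    // (5) A culminating statement the AU might send to the LRS:
    const passedStatement = {
      actor: { name: "A. Student", mbox: "mailto:student@example.edu" },
      verb: { id: "http://adlnet.gov/expapi/verbs/passed",
              display: { "en-US": "passed" } },
      object: { id: "https://example.com/courses/climate-change/glaciers" },
      result: { score: { scaled: 0.9 }, success: true }
    };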

A simplified process flow that starts with the
launch of the CMI5 Assignable Unit by the LMS

Short Sims

Introduction

Some of us aren’t content with simply presenting information in a linear fashion in an online course.  We have dozens of words to express what we wish to achieve: interactive, game-like, thought-provoking, challenging, problem-based….  We are also hard-pressed to find the time or the budget or the design that will fulfill our highest aspirations for eLearning.

It’s easy to get discouraged – but occasionally we’re offered a strategy that works within our budget and time constraints.  One such strategy is the basis of Clark Aldrich’s recent book, “Short Sims” (Aldrich, C. (2020). Short sims: A game changer. Boca Raton: CRC Press.)

In his book, Clark Aldrich discusses the methodology of the short simulation.  He begins by lauding the virtues of interactivity.  Interactivity allows learners to experiment, customize their experience, role-play, make decisions and apply skills. He writes that game-like interactivity is expensive to build.  We all recognize that.  Short Sims, on the other hand, can be built in the “same time frame as linear content”.  Short Sims engage students in making decisions, doing things, meeting challenges, solving problems, learning from mistakes and so forth.  Essentially Short Sims offer us a strategy – a methodology – to do things differently and more effectively.

The hook comes from this excerpt: 

“From a pedagogical perspective, the more interactivity the better.  Connecting user action with feedback has long been proven to be critical for most neuron connections”. 

Aldrich, 2020

Aldrich credits the Journal of Comparative and Physiological Psychology for that insight.  But again, in Aldrich’s words, “game-like interactivity is expensive to build.  It is time-consuming.”  Aldrich offers the new Short Sim methodology as an antidote to linear-style presentation, the death-by-PowerPoint approach.

Short Sims:

  • Show, not tell
  • Engage learners quickly and are re-playable
  • Are quick to build and easy to update

Short Sims square with the Context-Challenge-Activity-Feedback model that we’ve heard so much about from Dr. Michael Allen, Ethan Edwards, and the designers at Allen Interactions.  They are a solution to M. David Merrill’s lament that so much learning material is shovelware.  Short Sims are not shovelware.  They are a cost-effective means of engaging students.

Quite frankly, the LodeStar eLearning authoring tool was made for the Short Sim.  Instructors have used LodeStar for years to produce Short Sims but never used that term.  We called them Simple Sims, which sometimes included decision-making scenarios, interactive case studies, problem-based learning, and leveled challenges.  We solved the same problem.  We made it easy for instructors to create Short Sims quickly.

Our design methodology has a lot in common with Aldrich’s methodology as described in his book.   The following ten points outline our approach to creating a simple decision-making scenario, which, in our view, is one form of Simple Sim.  To avoid mischaracterizing Aldrich’s methodology, I’ll use our own terms in this outline.

  1. Select Challenge
  2. Pick Context
  3. Determine the Happy Path
  4. Determine Distractors
  5. Pick a setting – background graphic
  6. Choose a character set
  7. Produce the Happy Path
  8. Add the Distractors
  9. Add Branches
  10. Add Randomness

Select Challenge

Selecting the right problem and the right scope is, in itself, a challenge for the instructor or trainer.  Straightforward processes that present clear consequences for each decision are easy to simulate.  Processes like strategic planning that are influenced by dozens of variables are much more difficult.  The Short Sim methodology itself would be a good candidate for a Short Sim.  Another example would be the backwards design method of instructional design.  In my early days at Metro State, a decade ago, we discussed the backwards design approach with instructors.  We then used a Short Sim to rehearse instructors on the key questions to ask during each phase of the backwards design process.  We based a lot of our thinking on Dee Fink’s “Creating Significant Learning Experiences” and Grant Wiggins’ “Understanding by Design”.  Our objective was to help instructors design with the end in mind.  In Backwards Design, outcomes and assessments come before the development of activities.  The Short Sim did the trick.  Planning instruction is complicated business.  A simple and short simulation is not, in itself, transformative.  But we just wanted assurance that instructors understood the basic principles of backwards design by the decisions they made.

Pick Context

In the Backwards Design example, a dean asks an instructor to design an online class to help K12 teachers use educational technology in their classrooms.  So, in this context, the learner is playing the role of online course designer.  The learner is challenged to make the right decisions at the right time.  If the learner holds off on designing activities until completing an analysis, defining outcomes and creating assessments, then the learner succeeds in the challenge.

Determine the Happy Path

The happy path is all the right decisions in the right order: Situational Analysis -> Learner Outcomes -> Assessments -> Activities -> Transfer.  It is all of the right answers with no distractors.  It’s like creating a multiple-choice test with only one option: the correct answer.

Determine Distractors

Now come the distractors.  What are the common pitfalls of Backwards Design?  What might tempt the learner to go astray?  If we were designing a Short Sim on the Short Sim methodology, the pits and snares might be what Aldrich calls the Time Sucks: choosing the wrong authoring tool, too many decision-makers on the project, custom art, and so on.  The learner might be tempted with “the medium is the message.  Invest in the medium.  Commission a graphic artist to create a compelling interface.”  The point of Short Sims is to not invest heavily in artwork or graphic design.  The focus is more on describing the context, presenting choices to the learner, and showing the consequence of learner choices.

Pick a Setting

A background photo helps to set the context.  Images that display settings without people can be found on sites like Pexels and Wikimedia Commons, in the public domain sections of stock image services, and, of course, on stock image sites.  Because one image often suffices in a short sim, authors can snap their own photos and not waste too much time.

Alternatively, vector artwork can serve as an effective background.  Vector art can be found and downloaded from such sites as https://publicdomainvectors.org/.  (LodeStar Learning doesn’t endorse any of these sites – but we have used them all.)

In either case, if the scene is relevant to the learning context and not just a vain attempt to gamify, it might actually contribute to content retention and recall. 

Choose a character set

A popular approach to Short Sims is the use of cutout characters with different poses and expressions.  Cutout characters can be photo-realistic images with transparent backgrounds or illustrations.  To see examples, please google ‘elearning interactive case studies’, select ‘images’, and you’ll see thousands of examples.  Despite their popularity, finding cutout characters cheaply can be frustrating.  Several authoring tools offer a built-in catalog of characters.  These tools tend to be expensive.  Many stock photo sites offer character packs, but usually one must subscribe to these sites for a monthly charge.  Some sites offer pay-as-you-go services, meaning that you pay for the character pack once, without signing on to a monthly subscription.  The character pack can be as cheap as $4.  One such site is eLearning Templates for Course Developers – eLearningChips.  A complete character pack purchased from eLearningChips with more than 137 poses costs as little as $54.  No subscription.  No additional fee.  (Again, we’re not endorsing eLearningChips, but we have used their service.)

Produce the Happy Path

With the LodeStar authoring tool, we had several options for producing the Happy Path.  We used the ActivityMaker template and, after the title page, added a sequence of Interview Pages.  The ActivityMaker template offers a range of page types. The Interview Page is one of them.  In an Interview Page, we dropped in a character and filled in the best choice.  We didn’t concern ourselves with the distractors (the wrong options) quite yet.  Again, we were focused on the Happy Path.

Here is the author view:

Authoring a short sim happy path

Here is what the student sees:

A short sim happy path

Add the Distractors

Once we sorted out the happy path – a sequence of perfect, well-informed choices – we thought about the pits and snares: the problems and challenges.

In our course design example, a common problem is that we think too early about the content – that is, what topics the course should cover.  We anticipated those problems when designing our Short Sim.  If a learner unwittingly falls into our trap, we have the opportunity of providing feedback.  It’s a teachable moment.

A short sim

An alternative to the Interview Page type is the Text Page.  On a Text Page, we can add images and widgets.  These give us a bit more flexibility than the Interview Page type.  On a Text Page, we can add an image (left or right aligned), then a Text Layout widget.  Here you can see the page with the image and the Text Layout widget.  The image was composed in our SVG editor.

Authoring View

Here is what the student sees.

Student View of a LodeStar Activity

Add Branches

In one sense, a branch is a place where we get sent based on our decisions.  If this were a customer service sim and we made poor choices, the customer would appear more and more irritated, and ultimately we would lose his or her business.  Programmatically, the place where we get sent is a page that shows an irate customer and choices that represent a difficult situation.  The branches could lead us down a path of destruction, but we may also have the opportunity of winning back the customer’s trust with a string of good decisions.

Branching adds variety to the sim.  It gives us a customized experience or allows us to safely ‘test’ bad choices.

Branching can also be viewed as the consequence of a decision or choice.  In LodeStar, branch options include going to the next page, last page or jumping to a page.  They also include bringing up a web resource, adding an instructive overlay, setting a variable value, etc.  It could also mean the execution of a script or series of commands to make a lot of things happen simultaneously, such as setting a variable (that tracks our failings), sending us down a path, changing the image of a happy customer to an unhappy one, showing feedback, marking the choice with red, and more.

It’s probably most effective to show learners the natural consequence of their decisions – an unhappy customer, for example.  As designers, we might also need to be explicit and display feedback, or introduce a coach who provides feedback.  As Clark Aldrich writes, the sign of a good Short Sim is one that is played over and over again.  Branching helps us make the sim a different experience each time.  A generic sketch of the underlying idea appears after the screenshot below.

LodeStar Branching options
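LodeStar expresses these options visually, but to make the underlying idea concrete, here is a generic sketch (not LodeStar’s internal format) of how a branching scenario can be modeled as data: pages offer choices, and each choice names the next page and adjusts a running trust score.  The page names and dialogue are invented for illustration:

    // Generic branching-scenario sketch for the customer service example.
    const pages = {
      greet: {
        prompt: "The customer looks irritated. What do you say?",
        choices: [
          { text: "Let me pull up your account right away.", next: "resolve", trust: 1 },
          { text: "Have you tried restarting the device?", next: "escalate", trust: -1 }
        ]
      },
      escalate: {
        prompt: "The customer raises his voice.",
        choices: [
          { text: "I hear you. Let's fix this right now.", next: "resolve", trust: 2 },
          { text: "Sir, please calm down.", next: "lost", trust: -2 }
        ]
      },
      resolve: { prompt: "The customer relaxes. Trust regained.", choices: [] },
      lost: { prompt: "The customer takes his business elsewhere.", choices: [] }
    };

    let trust = 0;
    function choose(pageId, choiceIndex) {
      const choice = pages[pageId].choices[choiceIndex];
      trust += choice.trust;      // track our standing with the customer
      return pages[choice.next];  // the branch: consequence of the decision
    }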

Add Randomness (optional)

Randomness might be difficult to achieve and should, therefore, be considered optional.

Randomness is more than randomizing distractors.  (Randomizing distractors happens automatically on an Interview Page.  It’s done through a simple checkbox in a Text Layout widget.)  More sophisticated randomness might include a randomly generated sum of money, or a randomly selected path or scene, or randomly generated assets that are assigned to the learner.  It might be a randomly generated length of fuse that represents the customer’s patience.  In our course design example, it might be randomly generated student characteristics that include age, gender, and subject interest.  That level of randomness is best achieved with the help of LodeStar’s scripting language and is best left to its own article; a generic illustration appears below.
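As a generic illustration of the idea (again, not LodeStar’s scripting syntax), a randomized ‘fuse’ takes one line and changes every play-through:

    // Sketch: a random fuse for customer patience, re-rolled on each
    // play so the sim tolerates 3 to 6 missteps before it's 'lost'.
    const fuse = 3 + Math.floor(Math.random() * 4);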

Conclusion

Short Sims represent a level of interactivity that goes beyond the linear presentation of information.  They have the potential of promoting learner retention and application.  With the right tool (and there are plenty), everyone can build short simulations.  One tool, LodeStar, was designed from the very start with the short simulation and the intrepid instructor in mind.  Short Sims may vary in sophistication and design but, in any form, they cause learners to think and to see the consequences of their actions.  The short sim is a strategy that is doable and repeatable within our budgets and time constraints.  Make it happen in your world!

DIY Serious eLearning

Introduction

In the past decade, leaders in the field of learning experience design have given us much to think about, much to strive for.  Their ideas represent a synthesis of instructional design, learning sciences, and user experience design.  They also possess, in one form or another, the resources to execute their ideas.  But, if you are an educator or, perhaps, a learning and development specialist in a mid-sized company, you know that you haven’t got a large team or a large budget.  You have highly specialized objectives.  You want your learning designs to be effective.  And you know that you can’t just pull something off the shelf.

In a series of posts, I’ll explore what the leaders are saying and then get down to DIY specifics.  I will parse out the skills that instructors and specialists need in order to implement some of these ideas – especially in the area of eLearning interactivity. But, in this post, let’s first contemplate some of the themes that are consistent with evidence-based learning design. Conveniently, many of them are listed in the Serious eLearning Manifesto.

The Serious eLearning Manifesto?

If the manifesto hasn’t lit your corner of the world, here is a little background.  In 2014, some highly respected thought leaders in eLearning convened to, in their own words, instigate the Serious eLearning Manifesto.  The instigators were Michael Allen, Julie Dirksen, Will Thalheimer, and Clark Quinn.  If these names are new to you, you’ll be delighted to learn that each name represents a treasure trove of ideas, insights, research, and reflections on how people learn and how to design effective learning experiences.  Joining in the pledge to promote ‘Serious Learning’ is a list that reads like the Who’s Who of learning design.  Among them: M. David Merrill, Allison Rossett, Roger Schank, and Sivasailam Thiagarajan, better known to the world as Thiagi.

If you haven’t read the Serious eLearning Manifesto, it is available at https://elearningmanifesto.org/.  Parts of the manifesto might seem self-evident.  One of the listed attributes of serious eLearning is that it must be meaningful to learners.  We might think it’s obvious that we want our learning activities to be meaningful to learners.  But the site discloses the status quo: too much eLearning is content focused, efficient for authors, attendance-driven, focused on knowledge delivery, and so on.  I encourage you to visit the site for the full story.

Implementing the Supporting Principles

The Serious eLearning Manifesto is based on a number of supporting principles.  Each supporting principle is a study in itself.  Some aspects of the manifesto and other evidence-based practices are not easily achieved with the traditional skillset and/or toolset of the college or corporation, including the Learning Management System.  I’ll sample a few of these.  I will place the language of the manifesto in bold.  The rest is my running commentary.

The manifesto states:

  • Do not assume that learning is the solution

This is a principle that was driven home to me by the Minnesota chapter of the International Society for Performance Improvement, MnISPI.  They espouse the Performance Improvement Model, where training is but one outcome of a performance needs analysis.  At our firm, Redpath and Company, we are working on a Knowledge Management platform that will eventually be integrated with our learning management system.  In both the academic and corporate worlds, students and employees might benefit from a knowledge management center that gives them the cheat sheets, job aids, micro-learning, and whatever else they need to solve a problem or perform a task just when they need them.

  • Tie Learning to Performance Goals.  A new breed of tool can help support this principle.  At our firm, we recently implemented an employee engagement system that will soon integrate goals, feedback, and one-on-one reviews with training and performance solutions.  The system is currently integrated with our Human Resource Information System (HRIS), but interoperability standards offer the opportunity to integrate some of the key pieces in learning development: knowledge management, learning management, curriculum mapping, resource library, and employee engagement.  The full suite of tools includes Bamboo HRIS; Microsoft Teams, SharePoint, and Automate; Prolaera Learning Management System; Microsoft Stream; and Quantum Workplace.  All of these systems can communicate with one another through application programming interfaces (APIs), which act as connectors between vendors.
  • Provide Realistic Practice.  In eLearning, providing realistic practice might mean a case study, decision-making scenario, or simulation that simplifies the world into digestible learning chunks.  At our firm, we have generated a few of these and uploaded them to SCORM Cloud, which is integrated with our learning management system.  (SCORM Cloud supports traditional SCORM and a newer standard known as the Experience API, or xAPI.)
  • Adapt to Learner Needs.  In eLearning, that might mean an adaptive learning system that uses some form of artificial intelligence or smart decision-making to meet individual students’ needs.  These are systems that predict and/or evaluate student performance and prescribe a learning plan with resources that are matched to topic, reading level, level of knowledge, and their place in a learning hierarchy.

I have a personal interest in all of the supporting principles.  As a toolmaker/instructional designer, I’ve been slowly developing and promoting the knowledge management center.  I’ve been helping our HR department with the employee engagement system.  I’ve researched a host of adaptive learning systems — but have yet to adopt one.  I have a deep-rooted interest in promoting the benefit attached to the following supporting principle: Use Interactivity to Prompt Deep Engagement.

Use Interactivity to Prompt Deep Engagement. 

Interactivity can mean a number of things.  eLearning texts often cite the Community of Inquiry framework, wherein the complete educational experience is described as student-to-student, student-to-instructor, and student-to-content engagement or interactivity.  I’ve observed instructors use the first two to good effect.  Many experienced online instructors deftly use discussion boards, chats and video conferencing.  The tools are there.  The instructional support is often there.  One of my favorite memories of effective student-to-student interactivity is from a marketing course.  The instructor set up the discussion thread so that students pitched ideas to the sub-grouped discussion board as if they were pitching to clients.  Students recalled the text and drew from their own knowledge to discuss the merits of the pitch.  The discussion wasn’t formulaic as too many are.  It was not ‘Read a chapter, post by Wednesday, respond to two posts by Sunday.’  In contrast, the marketing pitch simulated an authentic context (serious eLearning), and provided real-world consequences to the student.  Their pitch got a positive or negative response.

Student-to-content interaction is a bit more challenging for both instructors and learning and development folks to implement.  The manifesto talks about using interactivity to support reflection, application, rehearsal, elaboration, contextualization, debate, evaluation, synthesis, and more.  Some of this can be accomplished with the traditional tools of the LMS as described above.  Some of it requires third-party authoring tools like ZebraZapps, Storyline, Captivate, and LodeStar.  They are vital tools in the eLearning instructor’s toolkit.  But making eLearning meaningful with the use of authoring tools requires a new set of skills.  Without those skills, we settle for what the Serious eLearning Manifesto decries: page turning, roll-overs, and information search.

Some skills are technical; others relate to psychology and cognition.  One of the manifesto’s instigators, Michael Allen, wrote more than a half-dozen books and built two incredible tools to enable instructors and instructional designers to build rich learning experiences: Authorware and ZebraZapps.  Both tools gave non-computer-programmers the ability to design something interesting: realistic scenarios, storytelling, challenges, environments that invoked action and showed the consequences.  The other instigators of the manifesto gave us additional insights into cognition.  Julie Dirksen, in her highly acclaimed book Design for How People Learn, gave us insight into why people persist in their negative behaviors, how they remember things, what motivates them, and what strategies are effective.  Will Thalheimer bridged research and practice in topics related to memory, evaluation, and presentation, and he led the charge to debunk many of the learning myths that we hold near and dear to our hearts.  Clark Quinn has written numerous books that cover learning science and design.

Underlying all of this is research-based evidence.  Michael Allen and Julie Dirksen, especially, soft-pedal the research.  That’s their style.  Their writings are lighter and not riddled with citations.  Some of it is even iconoclastic – like the title of Michael Allen’s Designing Successful e-Learning: Forget What You Know About Instructional Design and Do Something Interesting.  In this field, creative, insightful practices often take a back seat to formulaic approaches.  Stating the objective on page one, presenting information on page two, and quizzing on page three would be an example of a formulaic approach.

Julie Dirksen’s Design for How People Learn is illustrated with quirky line drawings that simplify serious ideas and make them more digestible.  But these books, style aside, are grounded in research.  A recent book, which incidentally recognizes the contributions of Julie Dirksen and Will Thalheimer, focuses precisely on evidence-based practices and exposes the myths.

Evidence-Informed Learning Design was authored by Mirjam Neelen and Paul Kirschner, both highly respected for their contributions to the learning sciences.  In their book, they list the top five ingredients in order of effectiveness and efficiency.  The practices include spaced practice, practice tests, overlapping the practice of one topic with the practice of another, and questioning and encouraging learners to explain a process or procedure to themselves.

If you look up these authors, read their books, read their blogs, and listen to their podcast interviews (see resources below), you will be further convinced that the Serious eLearning Manifesto has merit.

In academia, many have read How Learning Works and contemplated the seven research-based principles for smart teaching offered by Susan Ambrose, Michele DiPietro, and others.  In How Learning Works, you will find the same themes: students and trainees are not blank slates.  How they are prompted to organize knowledge influences how they learn.  Addressing motivation is paramount.  Component skills need to be identified, addressed with targeted strategies, mixed and remixed.  Meaningful eLearning should offer practice, practice, and more practice with guidance, feedback, scaffolding, elaboration, and so on.  A page-turner PowerPoint with little engagement doesn’t cut it.

Conclusion

So, in the next post, I will tackle one aspect of serious eLearning.  I will parse out what it takes to design a meaningful interaction between student and content.  I will use our own tool, LodeStar, to illustrate the ideas but not confine the discussion to our own self-interest.  I’ll expand the discussion to include other authoring tools and, hopefully, contribute in some small way to the cause of Serious eLearning. In the meantime, please check out the resources listed below.

Resources

Michael Allen’s Books

Julie Dirksen’s Book: Design for How People Learn

Will Thalheimer’s Site: Work-Learning Research

Clark Quinn’s Blog: Learnlets

Mirjam Neelen and Paul Kirschner’s Blog: 3 Star Learning Experiences

The Learning Hack Podcast

International Society for Performance Improvement

Minnesota Chapter of the International Society for Performance Improvement