# The Role of Artificial Intelligence in Learning Experience Design

## Introduction

I recently asked ChatGPT to list the five most influential instructional designers.

ChatGPT, of course, is an artificial intelligence chatbot that has grabbed the attention of the world. Locally, it was put to the test by the University of Minnesota Law School. It answered questions from law school exams in several legal subject areas and, when judged blindly, it earned a C+. On every front, businesses and institutions are assessing opportunity – and risk. (New York City has blocked ChatGPT on school devices and networks*.) Every news outlet is talking about it. Artificial intelligence has stepped off the silver screen and into our lives in a new way.

## ChatGPT Lists 5 Influential Instructional Designers

I’ve asked ChatGPT several questions over the past two months.  But last evening, I turned its attention on instructional design and learning experience design.

So I asked it: who are the five most influential instructional designers?

Influential Instructional Designers:

1. Richard Mayer – Known for his work on multimedia learning and cognitive theory of multimedia learning, Mayer has made significant contributions to the field of instructional design.
2. Clark Aldrich – A pioneer in the field of simulation-based learning, Aldrich has designed numerous training programs and simulations that are used widely in corporate and military settings.
3. Michael Allen – The founder of Allen Interactions, Allen has designed over 1,000 e-learning courses and is known for his work in instructional design and performance support.
4. Julie Dirksen – Dirksen is a leader in the field of user experience (UX) design for learning, and has written several influential books on instructional design and UX.
5. Will Thalheimer – A researcher and consultant, Thalheimer is known for his work on evidence-based instructional design and his writing on instructional design best practices.

The five designers listed are certainly influential. In our Learner Experience Design graduate courses at Metropolitan State University, we’ve included the writings and/or texts of Clark Aldrich, Michael Allen and Julie Dirksen.  In working with faculty, I’ve used research from both Richard Mayer and Will Thalheimer.

And so, I liked its answer and my curiosity was piqued.  To explore further the role of AI in Learning Experience Design, I wondered if ChatGPT could provide content for a quick lesson on that very topic — Learning Experience Design. (It also gave me the opportunity to take the latest nightly build of LodeStar 10 on a little road test.)

The query produced interesting results.

## Disclaimers

Before I show that content, a few admissions and disclaimers.  First, I’m not advocating for the use of AI in the instructional design process.  In the near future, I will certainly evaluate its utility.  Secondly, in my example I offered ChatGPT no directives.  I didn’t specify grade level, target audience, prerequisite knowledge, cultural or situational context.  I simply asked it to spit out information.

A further admission is that I’m very uneasy about the topic of artificial intelligence. AI obviously presents both opportunity and challenge. I have a dystopian dread of AI and yet I recognize the many opportunities that it presents. I’ve long held out hope for truly adaptive learning systems that could individualize the scope and sequence of instruction for the benefit of learners. Many systems have appeared on the market with ‘semantic’ engines that don’t quite measure up to a system that can learn from a student’s successes, missteps, preferences, and confidence level, and make appropriate decisions. Adaptive learning is one area that benefits tremendously from AI.

The other thing I recognize is that the information that ChatGPT prints out is, at best, raw material for an instructional designer.  The design must include learning experiences that help students engage with the material, think critically, make decisions, make meaning, and make memory.

David Wiley blogs about this more eloquently in AI, Instructional Design, and OER – improving learning (opencontent.org).  Here is an excerpt from his blog:

“What distinguishes an educational resource from an informational resource is that the latter shows some intentional application of knowledge about how people learn. I have previously argued that the minimum amount of effort you could invest to convert an informational resource into an educational resource was to add practice with feedback. That simple sounding task quickly explodes in complexity as you consider the research on what form that practice should take, how long it should last, when it should be scheduled, what kind of feedback should be provided, whether the feedback should come immediately or after some delay, etc.”

David Wiley

That leads to my biggest disclaimer.  Just for fun, I wanted to see what ChatGPT would produce and how I could use it in a project with minimum effort.

David Wiley writes:

“LLMs will dramatically increase the speed of creating the informational resources that comprise the content infrastructure. Of course the drafts of these informational resources will need to be reviewed and improvements will need to be made – just as is the case with all first drafts – to insure accuracy and timeliness. But it appears that LLMs can get us 80% or so of the way to reasonable first drafts orders of magnitude faster, eliminating the majority of the expense involved in this part of the process.”

I took the ChatGPT output and sprinkled in some activities. The disclaimer is that these are not activities that any of the above authors would endorse. The design is just a simple present-and-check. But I would hazard that it’s not unlike the content that gets pumped out by course mills. It’s not unlike the content that one consumes to meet a compliance check or earn continuing professional education credit. It’s not unlike most of the stuff out there that our five influential instructional designers decry.

In doing this short exercise, I was reminded of David Thornburg’s often-quoted line: “Any teacher that can be replaced by a computer, deserves to be.” Perhaps any instructional designer who can be replaced by an artificial intelligence chatbot deserves to be. Or perhaps the real line of inquiry should be: can a chatbot be trained to make decisions that imitate an instructional designer? Can a chatbot be trained to pass the LXD equivalent of a Turing test? (See Appendix A for what happened when I gave ChatGPT more information about the audience.)

But disclaimers aside, it is intriguing to me to view what ChatGPT produced.

## Rules of Engagement

Here were my rules when applying the content:

• Spend very little time (under two hours)
• Retain all of ChatGPT’s text – even where there were errors (for example, Bloom’s Taxonomy was produced in 1956, not the sixties)
• Format the content in different ways, including placing it in a timeline
• Add videos or audio from external sources to supplement ChatGPT and provide some human connection.
• Return ChatGPT’s content to the engine and ask ChatGPT to create a final quiz from that content

You can see this for yourself by clicking on the following link and/or joining me on a screen-by-screen tour (not all screens shown).

Link to content: Learner Experience Design ChatGPT (lodestarlearning.github.io)

## Screen by Screen Commentary

First, in my title screen, I give full credit to ChatGPT.

I asked ChatGPT to explain instructional design versus learning experience design.  I simply presented the information as a choice between two explanations: one that matched Learning Experience Design and the other that matched Instructional Design. ChatGPT provided the content; I added this little warm-up learner engagement.

I added ChatGPT’s answer to the question “What is Learning Experience Design?”

I then added a video from YouTube so that an LXD practitioner could describe the characteristics of LXD. This reinforces the information and humanizes it.

I asked ChatGPT to list the important events in the history of Instructional Design. It put out a list of events. Interestingly, the earliest event was the publishing of Didactica Magna in the 1600s. I took that information and presented it in an interactive timeline.

ChatGPT output the importance of Learner’s Needs and Goals.  I added a video where the host really challenges his guests on how LXD designers actually translate goals and user research into meaningful experiences.

ChatGPT output an explanation of instructional methodology.  I did a strike-through on key words and asked LodeStar to convert them into fill-in-blanks.  I then provided a randomized word list.   So rather than just reading, the learner has to think about the context and type in the appropriate word.

To supplement ChatGPT’s explanation of Technology and Tools, I added a podcast of instructors talking about the use of technology and tools in response to COVID.

In the Assessment and Feedback section, I added a word problem related to calculating the volume of a cylinder (as an example). The dimensions of the cylinder (radius and height) are randomly generated from a defined range. Students can work on these problems all day and always get a new set when they refresh the lesson.
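A randomized problem of this sort can be sketched in a few lines. The following is an illustrative JavaScript version, not LodeStar’s actual implementation; the value ranges are assumptions:

```javascript
// Illustrative sketch: generate a cylinder word problem with random
// dimensions and check the learner's answer against the exact volume.

// Pick a random integer in an inclusive range.
function randomInt(min, max) {
  return Math.floor(Math.random() * (max - min + 1)) + min;
}

// Volume of a cylinder: V = PI * r^2 * h
function cylinderVolume(radius, height) {
  return Math.PI * radius * radius * height;
}

// Build a fresh problem each time the lesson loads.
function makeProblem() {
  const radius = randomInt(2, 10);  // assumed range
  const height = randomInt(5, 20);  // assumed range
  return { radius, height, answer: cylinderVolume(radius, height) };
}

// Accept answers within a small tolerance for rounding.
function checkAnswer(problem, learnerAnswer, tolerance = 0.01) {
  return Math.abs(problem.answer - learnerAnswer) <= tolerance;
}
```

Because the dimensions are drawn fresh on every load, no two attempts need look the same.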

For User Experience and Interaction Design, ChatGPT spat out content and I added a slideshow viewer of three interesting designs.

Finally, I gave ChatGPT back its own content and asked it to produce an assessment. I then took the question stems and answer options and fed them into a Multiple Choice/Multiple Select Question Widget.

Most of the activities were assigned a point value.  I then added a report page to send this off to either a Learning Management System or a Learner Record Store (CMI5/xAPI).

Finally, I submitted the content to OpenAI’s text classifier so that it could determine whether or not the content was possibly AI-generated.

Here is the result:

*OpenAI announced the AI Text Classifier to help determine whether or not content was AI-generated

## Conclusion

OpenAI already offers specific application programming interfaces (APIs) to its language processing engine. For example, it offers auto-completion in its Text Completion API. OpenAI is now inviting developers to its waiting list to use ChatGPT in new and existing applications. One can easily imagine the day when toolmakers will invoke the API commands in order to, in David Wiley’s words, “dramatically increase the speed of creating the informational resources that comprise the content infrastructure”.

We’ve just completed LodeStar 10 and continue to test it. We have no current plans to interface with AI but have been added to the waiting list by OpenAI to explore ChatGPT API capabilities. Who knows where that will lead? In the meantime, our focus is to help designers start with simple projects and then offer them more capability as their designs grow from simple present-and-checks to short simulations, story-telling, decision-making scenarios and CCAF (Context Challenge Activity Feedback) projects that the five influential instructional designers would endorse. As far as AI is concerned, we have a lot to learn about the role of AI in learning experience design. I would love to read your comments.

Doubtless, the course mills will go into overdrive on this technology. Caveat emptor.

## Appendix A

Here is my prompt:

Write about Learning Experience Design for an audience of high school students who know very little about instructional design or cognitive science. Write at an eighth grade level.

Here is ChatGPT’s response:

Learning Experience Design is all about creating educational experiences that are engaging, effective, and enjoyable. It’s a field that combines ideas from instructional design and cognitive science to help people learn new things.

Think about it like this: have you ever taken a boring class where you couldn’t stay focused or retain information? That’s where Learning Experience Design comes in! The goal is to create lessons that grab your attention and make you remember what you’ve learned.

The designers use research from cognitive science to understand how people learn best. For example, they know that people remember information better when they’re actively engaged in the learning process, instead of just passively listening. They also understand that people remember information better when it’s presented in a way that makes sense to them and connects to their own experiences.

So, Learning Experience Design combines these ideas to create learning experiences that are tailored to the way people learn. This might include interactive activities, simulations, games, videos, and more. The goal is to make learning a fun and memorable experience that sticks with you, so you can use what you’ve learned in the future.

Overall, Learning Experience Design is a field that’s all about making learning better. If you’re interested in making education more exciting and effective, this might be a great field for you to explore!

This content is appropriate for high school students or early college students.

When I applied the Flesch-Kincaid Reading Grade Level test, it scored 10.4. Just saying.

# Introduction

Modern web pages offer designers a rich palette of media types and standards to create engaging learning experiences. The web page has become an amazing success story.  It started as a battle ground of competing standards and self-interests and has arguably matured to become a meeting ground.

If you aren’t on the bleeding edge, you’ll benefit from the convergence of standards. On CanIUse.com, browser support for many enabling technologies such as SVG and WebGL (explained later) shows up as green tiles across the table of browsers. Green means these technologies are commonly supported, which is good news because learning experience designers can put these technologies to work.

Many of today’s eLearning projects are essentially webpage applications with additional standards that support communication to learning management systems or learner record stores. Many of the technologies that make the web interactive, responsive, accessible, and expressive are the same technologies used in eLearning applications.  Most of the major eLearning authoring systems are web page design systems for web pages that are hosted in learning management or content management systems.  There are many exceptions, of course, which include augmented reality systems, gaming engines and environments, and other virtual spaces that are not built on HTML5.  But let’s stay focused, for a moment, on the web.

For maturing standards, the web has become a place of agreement. In the not-too-distant past, basic HTML markup and styling had to address the many differences between browsers and how they interpreted the World Wide Web Consortium (W3C) standards. Even a technology that most of us take for granted, the audio file, once lacked a single format that every browser could play. Designers had to choose both an audio format and a fallback format. Thankfully, that has changed. All browsers can now legally play the .mp3 format, and the open Ogg Vorbis format is also widely supported.

Soon the .m4a (AAC) audio format will be supported by all browsers, offering even higher quality audio at a lower data cost.

But audio is only the beginning. All modern browsers (IE 11 excluded) support GIF, animated GIF, JPEG, PNG, and animated PNG images, as well as motion video in the MPEG-4/H.264 format.

All browsers support the language features of ECMAScript 2015 (ES6), the last major revision to JavaScript. JavaScript is the code that makes the web interactive. It is the code that makes eLearning projects interactive. Standardization allows all of us to benefit from the interactions that eLearning authoring tools produce with less worry about browser and device differences. (I emphasize less worry because there is always something to worry about.)

Interactive 3D has become a new frontier for eLearning. All major browsers support WebGL, which is a method of generating 3D graphics using JavaScript and hardware acceleration. In the early ’90s, when I first created 3D worlds, I needed an entire lab of computers dedicated to rendering three-dimensional meshes into an animation of three-dimensional images that we would transfer onto a laser disc. Today, WebGL enables us to render a mesh into a rotatable, scalable image in real time, all in a browser. If you’re not familiar with WebGL, please read on.

In short, Learning Experience designers, instructors and trainers can now use audio, video, imagery, text, three-dimensional graphics, scalable vector graphics, math mark-up, interactivity, and logic to realize their grandest designs and create engaging experiences for their learners.

On the eve of LodeStar 10’s release, I am taking stock of these standards and other influences that had a strong bearing on where our product is headed.  Like all toolmakers, I am keeping an eye on effective strategies as well as emerging and maturing technologies and am imagining the opportunities for designers as we work to make these technologies practical and accessible.

Here is a list of standards and strategies that are central to LodeStar’s current development.

## Scalable Vector Graphics

A lot of our development has focused on Scalable Vector Graphics.  SVG offers the designer many benefits.  Simple graphics such as the famous SVG tiger pictured here keep their sharpness regardless of the display size and the resolution. They are scalable.  They also offer more opportunity for accessibility.  Scaling can help learners with low vision.  The SVG title is readable by most screen readers. Also importantly, the SVG graphic is made up of individual elements whose properties can be changed by program code or user interaction.

In the screenshot below, the tiger graphic is opened in an SVG editor in LodeStar.  The author has right-clicked on an eyeball and can now choose branch options based on selection, deselection, drag, hover over and hover out.  All of LodeStar’s branching options and script can be executed based on any of the above events.  For example, based on the click of an eye, things can happen: the eye color changes, an audio description plays, an overlay appears with a complete description of a tiger’s vision and so on.
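In plain JavaScript, responding to an event by changing an SVG element’s properties looks something like the sketch below. The element id and attribute values are hypothetical; in LodeStar, this behavior is configured through branch options rather than hand-written code:

```javascript
// Conceptual sketch of event-driven SVG interactivity.
// Apply a set of attribute changes to an element-like object.
function applyChanges(element, changes) {
  for (const [name, value] of Object.entries(changes)) {
    element.setAttribute(name, value);
  }
}

// In a browser, the handler would be wired to a real SVG node
// (the id "eye" is a hypothetical example):
//   const eye = document.getElementById("eye");
//   eye.addEventListener("click", () => {
//     applyChanges(eye, { fill: "gold", opacity: "0.8" });
//   });
```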

### Importing PowerPoint as SVG

We’ve never been huge fans of starting an eLearning project as a PowerPoint. That hasn’t changed, but LodeStar 10 does support importing a single PowerPoint slide or an entire PowerPoint presentation as a series of SVG pages.

PowerPoint supports exporting a slide or series of slides as SVG.

LodeStar 10 adds support for importing a single SVG image or an entire folder of SVG images. LodeStar interrogates each slide and looks for things like Base64-encoded images. PowerPoint converts imported images to a long string of characters called Base64. This is a great format for transporting images inside a single file, but browsers tend to load and render Base64-encoded images very slowly. LodeStar detects the Base64 encoding and then translates the characters back into an image file that is loaded into the project.

The result is that vector graphics are editable as SVG elements, and embedded images load and display quickly.  The designer can display the slide as is, edit elements and add branch options to elements.
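The decoding step can be sketched in JavaScript along these lines. This is an illustrative version, not LodeStar’s actual code; Node’s `Buffer` is used for the decode, whereas a browser would use `atob()` and `Blob`:

```javascript
// Sketch of extracting binary image data from a Base64 data URI,
// similar in spirit to what happens when importing SVG slides.
function dataUriToBytes(dataUri) {
  // e.g. "data:image/png;base64,iVBORw0KGgo..."
  const match = /^data:(.+?);base64,(.*)$/.exec(dataUri);
  if (!match) throw new Error("Not a Base64 data URI");
  const [, mimeType, b64] = match;
  return { mimeType, bytes: Buffer.from(b64, "base64") };
}
```

Once the bytes are recovered, they can be written out as an ordinary image file that the browser loads and renders at full speed.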

## MathML

For a short while, all browsers supported the MathML markup language, enabling mathematical markup without the need for add-ons.

But there have been setbacks. We’re looking forward to the day when MathML is once again available in all browsers. Given that likelihood, LodeStar continues to support MathML.

MathML (Mathematical Markup Language) is supported by the W3C as the preferred way of displaying mathematics on a web page or in an eLearning application. MathML describes the structure and content of mathematical notation and provides a higher level of accessibility than simply displaying an image. Designers can quickly edit and manipulate the size of a MathML expression. This is an improvement over taking a picture of an equation, for example, and pasting the image into a presentation. In the past, LodeStar automatically converted expressions into images or used the MathJax library to convert expressions written in LaTeX to MathML. But now we’re banking on full support for MathML in the near future.
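As a quick illustration of what the markup looks like, here is the quadratic formula expressed in MathML (a hand-written example, not LodeStar output):

```xml
<math xmlns="http://www.w3.org/1998/Math/MathML" display="block">
  <mi>x</mi><mo>=</mo>
  <mfrac>
    <mrow>
      <mo>-</mo><mi>b</mi><mo>&#xB1;</mo>
      <msqrt>
        <msup><mi>b</mi><mn>2</mn></msup>
        <mo>-</mo><mn>4</mn><mi>a</mi><mi>c</mi>
      </msqrt>
    </mrow>
    <mrow><mn>2</mn><mi>a</mi></mrow>
  </mfrac>
</math>
```

Every symbol is an element a screen reader can interpret and a stylesheet can scale, which is precisely the accessibility advantage over a picture of the same equation.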

## SimpleSim

For years, LodeStar offered the Interviewer Page Type to support what we called decision-making scenarios and simple simulations.  We continue to offer that page type but have expanded the number of layout options for interactive decision-making.

For starters, we added a new page type called the SimpleSim.  This page type supports graphics, interactive widgets, text and whatever else is needed to set the scene.  At center stage is the situational prompt and three decision options (as pictured below).   All of LodeStar’s branching options can be invoked based on the learner’s choice.   For example, the ‘Jump to Page’ branch option can bring up a scene that matches the choice and advances the narrative.   Branching options also allow us to add feedback, keep track of points, collect user responses and so forth.

To style the scene shown below, the author used a palette for the color scheme, added a header graphic through Tools>Themes, selected a layout style that set the window width and navigation at the top, and added a background graphic.   The use of palettes, themes, layouts and page types enables the author to control every aspect of this simple simulation, including the interactivity.

## CCAF

It’s no secret that we are huge fans of Dr. Michael Allen’s Context-Challenge-Activity-Feedback model.  In a project that was intended to improve employees’ social sales index (SSI) in LinkedIn, we set the context as a simulated LinkedIn.  For the challenge, the learner must improve the main character’s SSI score by providing the right advice and interacting with a simulated profile, notifications, messaging, etc. – just like LinkedIn!

CCAF projects are not page turners or Present-and-Checks.  They can be quite advanced.  To support a more sophisticated interaction than the display of content and multiple-choice questions, LodeStar offers LodeStarScript, which can be written in the Execute Command branch option.

LodeStarScript enables designers to change the properties of graphics on the fly, including SVG graphics. Properties can include color, position, image source, rotation, opacity, etc. LodeStarScript offers the designer the power of conditional logic, loops, local and global scoped variables, and a long list of functions.

In the simulation below, the learner can select a camera aperture and control exposure.  The effects of exposure are simulated with the simple change of the color and opacity properties of an SVG element.
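The underlying idea can be sketched as a simple mapping from f-number to overlay opacity. This is an illustrative approximation, not LodeStar’s actual formula; the reference aperture and opacity range are assumptions:

```javascript
// Illustrative sketch: simulate exposure by darkening an overlay as
// the aperture narrows. Relative light gathered is proportional to
// 1 / N^2 for f-number N.
function relativeLight(fNumber, referenceFNumber = 2.8) {
  return (referenceFNumber / fNumber) ** 2;
}

// Map relative light to an overlay opacity: 0 (bright, wide open)
// up to 0.9 (dark, stopped down). The 0.9 ceiling is an assumption.
function overlayOpacity(fNumber) {
  const light = Math.min(relativeLight(fNumber), 1);
  return 0.9 * (1 - light);
}
```

Stopping down from f/2.8 to f/5.6 cuts the light to a quarter, so the overlay darkens accordingly; a single opacity property on an SVG element carries the whole effect.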

## xAPI/CMI5

Megan Torrance, a veteran of learning design, authored a research paper sponsored by the Learning Guild.  I won’t steal her thunder and encourage you to read the paper for yourself, but I’ll cite two statistics from her research that tell the story of xAPI.

In a survey of 368 respondents, the majority of whom belong to organizations that create or purchase learning solutions, 44.9% of the respondents indicated that “We are interested in xAPI but have not used it at all.”

Version 1.0 of xAPI was released way back in 2013, and yet 10 years later adoption is not widespread.

So what is xAPI, how does it relate to CMI5, and why are we so interested in it?  In short, xAPI and CMI5 are game changers.  They are not the same thing but they are close cousins.  An eLearning activity that uses CMI5 can generate an xAPI statement, which gets recorded in a Learner Record Store.  CMI5 can also tell the LMS whether the learner passed or failed.

So, let me be a little more specific.

With these technologies, I can store my eLearning projects in my own repository — GitHub, for example. I can then import a very lean and simple file to the Learning Management System, which tells the LMS where to launch the activity from. The LMS then passes learner information and a token for secure communication to the activity.

CMI5 uses xAPI technology but it also understands the vocabulary that LMSs require.  Pass/Fail.  Incomplete/Complete.  xAPI reports to a learner record store any statement that the designer has added to the eLearning activity.  ‘Learner has reached Level Two.  Learner completed a video.  Learner attempted Level Three four times.’  CMI5 can generate any kind of xAPI statement in the form of learner actions.  In addition, CMI5 can tell the LMS whether the learner passed and/or completed the module.
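An xAPI statement follows the actor/verb/object structure defined in the xAPI specification. In the sketch below, the learner and activity identifiers are hypothetical examples, while the verb IRI is a standard ADL vocabulary entry:

```javascript
// Build a minimal xAPI statement: who (actor) did what (verb)
// to which activity (object).
function makeStatement(learnerEmail, verbId, verbDisplay, activityId) {
  return {
    actor: {
      objectType: "Agent",
      mbox: `mailto:${learnerEmail}`,
    },
    verb: {
      id: verbId,
      display: { "en-US": verbDisplay },
    },
    object: {
      objectType: "Activity",
      id: activityId,
    },
  };
}

// Hypothetical example: the learner passed the lesson.
const statement = makeStatement(
  "learner@example.com",
  "http://adlnet.gov/expapi/verbs/passed", // standard ADL verb IRI
  "passed",
  "https://example.com/activities/lxd-lesson"
);
```

Statements like this one are what the Learner Record Store accumulates; CMI5 adds the defined vocabulary (passed, completed, and so on) that the LMS understands.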

But among the reasons why people don’t yet use it are a lack of knowledge, the lack of a Learner Record Store, and a lack of LMS support.

I am extremely fortunate in that our Learning Management System is Prolaera.  It is designed for the CPA industry.  Prolaera can import a CMI5 activity.  As a result, I can do the following:

1. Send a statement about the learner reaching Level 5 to the learner record store.

2. Read a list of learner experiences from the Learning Management System’s Learner Record Store (the learner’s name has been erased from the screenshot).

From the screenshot above, you can see that we can report on any learner experience.  For example, the learner first experienced the results page with a score of 200 points.  We can also see that the learner passed, satisfied the requirements, completed the module and terminated the activity.  These are all terms that the Learning Management System understands.

It may take time but CMI5/xAPI will eventually be widely adopted.  These standards are incredibly important to the advancement of eLearning.  It begins with awareness. The more designers learn about it, the more they can encourage their learning management system vendors to support it.  In the meantime, we are ready for it!

## 3D

Glen Fox’s Littlest Tokyo is a great example of what is possible with three-dimensional objects viewed in a browser. The model is beautifully detailed, with a running streetcar animation as an integral part of the 3D object.

Designers will be able to use free tools like Blender, TinkerCAD, Sketchup or even their smartphones to produce 3D meshes.

Smartphones like the iPhone 12 Pro come equipped with LIDAR. LIDAR emits a laser pulse that reflects off solid surfaces and returns to a sensor on the smartphone. The round-trip duration is noted. From that, the software can accurately position the solid surface in three-dimensional space. LIDAR has been available in specialty instruments for a long time, but for designers to be able to use this technology practically, the software needed to improve.
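The distance calculation itself is simple arithmetic: the pulse travels out and back at the speed of light, so the distance to the surface is half the round trip. A quick sketch:

```javascript
// Distance from a LIDAR round-trip time:
// distance = (speed of light * time) / 2.
const SPEED_OF_LIGHT = 299_792_458; // meters per second

function distanceMeters(roundTripSeconds) {
  return (SPEED_OF_LIGHT * roundTripSeconds) / 2;
}

// A surface about 3 meters away returns the pulse in roughly
// 20 nanoseconds, which shows why the sensor's timing must be so precise.
```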

In whatever way the 3D model gets created (3D graphics software, downloaded from a warehouse, generated by LIDAR) it can then be loaded into a viewer and manipulated (scaled, rotated, navigated) by the learner.  Imagine vital organs or historical places or complicated machines as manipulable objects.

Currently, we’re working on a loader and viewer for 3D Models.  The first LodeStar 10 release won’t include a 3D model viewer, but we’ll introduce it later in a minor release.

In the meantime, we do support photospheres.  Photospheres use the same underlying technology: WebGL. WebGL enables hardware-accelerated physics and image processing and rendering onto the HTML5 canvas.  The hardware is a dedicated processor called the Graphical Processing Unit or GPU.

The photosphere that appears in the screenshot shows a distorted view of an art gallery.  The first art gallery image (shown below) was produced in Blender.  The second art gallery image was taken with an iPhone at the Minnesota Marine Art Museum in Winona.

The image appears distorted – in fact, spherical.

Once in LodeStar, the designer can add images, markers, and hotspots to the photosphere.  All of these things get correctly positioned on the sphere.

In the LodeStar editor below, I am adding Lawren Harris’ paintings to the gallery as well as hotspots.  A hotspot click takes the learner to another room in the gallery.  A click on the painting brings up an image overlay.  A click on the video graphic starts a video.

The end result:

## Conclusion

2023 marks the twentieth anniversary of LodeStar Learning. We filed with the Minnesota Secretary of State on March 11, 2003. I’m pleased that LodeStar has adapted to all of the technology changes over the years. LodeStar began as code embedded in Lotus’ LearningSpace. It then enabled instructors to create rich learning activities in ActionScript and Flash. In 2013, LodeStar Learning pivoted to a whole new generation of software that used HTML5. LodeStar 10 continues that progression and harnesses the power of HTML5, SVG, 3D and so much more to help designers create great learning experiences.

# Meeting the CCAF Challenge

By Robert “Bob” Bilyk

## Introduction

I recently watched Ethan Edwards present ‘Cracking the e-Learning Authoring Challenge’.  This post is my attempt at cracking the e-Learning authoring challenge.

But first a little background.

As many of you have the privilege of knowing, Ethan Edwards is the Chief Instructional Strategist for Allen Interactions. Cracking the challenge is all about building interactivity in an authoring tool – specifically, CCAF interactivity. CCAF is an acronym for Context-Challenge-Activity-Feedback. The four components of CCAF are part of Michael Allen’s CCAF Design Model for effective learning experiences. Michael Allen is the founder of Allen Interactions, the author of numerous books on eLearning, and the chief architect of Authorware and ZebraZapps. Both authoring systems were designed so that people with little technical expertise could build – you guessed it – CCAF learning experiences.

In Ethan’s presentation, he demonstrates building a CCAF activity with Articulate Storyline.  In a nutshell, the CCAF learning experience is the experience of “doing”.  Rather than reading or viewing content, the learner experiences first-hand the application of principles, concepts, strategies, and problem-solving in completing a task and succeeding at a challenge.

In Ethan’s demo, the task is to detect a refrigerant leak. The learner is shown refrigeration equipment and given a leak detector. The learner doesn’t at first read a PDF or watch a video but performs an action. In CCAF activities, text and videos might come in the form of feedback to a learner’s action.

Some of the CCAF learning experiences that I designed include running a multiple hearth wastewater incinerator, troubleshooting a cable network, supporting the adoption of a special needs child, designing an online class, assessing risk of recidivism, and, most recently, searching for documents in a document management system. In all cases, most of the learning came from being immersed in a ‘real world’ setting, presented with a challenge, and getting feedback as a result of learner actions.

Ethan’s presentation piqued my curiosity and a bit of self-reflection.  He lists things that are essential in an authoring tool to enable the design of a CCAF learning experience.  As a toolmaker, I explored each of the items on his list and I applied them to a small project built with our own LodeStar eLearning authoring tool.

As we explore each item on Ethan’s list, I’ll illustrate with LodeStar.  If you follow along, you’ll see the development of a simple CCAF application.  You’ll learn about the components of CCAF.  And you’ll also learn a little about LodeStar and its capabilities.

But first an important caveat. CCAF comes in all forms, shapes and sizes. Ethan’s example and my example happen to be very simple simulations. The principles of CCAF are not limited to simulations. They can be applied to anything that requires action on the part of the learner — which includes making a decision, crafting a plan, analyzing and solving a problem — a host of things.

This is but one example of CCAF to illustrate its principles and test whether or not our authoring tool is up to the challenge.

## Introduction to the Demo Application

The objective of the application is for learners to test an electrical outlet and determine which wires are hot or ‘energized’.  In completing this task, the learner must turn on an electrical multimeter and connect its probes to the various wires in an electrical outlet.  A multimeter is a measuring instrument that typically measures voltage, resistance, and current.  Once someone has learned the difference between these things, the practical skill is in choosing the right setting for the task and safely using the meter to complete the task.

So that’s the challenge:  find the hot wire with a multimeter.  The context is a simple residential electrical outlet.

Typical eLearning applications would use text, graphics and video to illustrate the use of the multimeter and explain underlying concepts.  CCAF applications challenge learners to complete the task in a manner that is an educational approximation of the ‘real thing’.  Text, graphics and video can still offer explanations, but they support the real-world task rather than replace it, often serving as a form of feedback.

## Basic Capabilities

But let’s start with an overview of the basic requirements.  To paraphrase Ethan, an authoring tool must have these capabilities:

• Complete visual freedom
• Variables
• Alternative branching
• Conditional logic
• Action/response structures

I’ll elaborate on each of these requirements in my demonstration.

## Complete Visual Freedom

LodeStar combines HTML flow layout and SVG layout.  Images imported into the HTML editor are placed in the HTML flow and are laid out according to the rules of HTML.  Images can also be taken out of the flow and applied with a CSS rule so that text flows around the image.

In addition, LodeStar authors can use the Scalable Vector Graphics (SVG) canvas to lay out graphics freely in any position on the x and y axes.

In other words, the graphical elements on the SVG canvas are laid out freely.  The SVG canvas itself is just another HTML element.  Depicted below is a flow of HTML elements like text, images, divs, tables, etc.  The SVG canvas is in the ‘flow’ right along with them.  Inside the canvas, graphical elements can be positioned anywhere, but the canvas itself follows the HTML document flow, shrinking and expanding as needed.

The visual freedom comes from LodeStar combining the benefits of a responsive HTML flow with the precise positioning of an SVG canvas.

I started with a multimeter image that I took from Pexels.com, a repository of free stock photos.  I used Photoshop to cut out the dial and imported it into the SVG canvas as a separate image.  I did this because I wanted the learner to be able to rotate the switch to place the multimeter in the right mode.  I also imported the image of an electrical box so that I could draw wires on top of it.

## Variables

As I wrote in The Humble Variable (LodeStar Web Journal), variables are critical to some eLearning designs.  In this example, I need to store the position of the multimeter switch.  That’s what variables do.  They are storage places in computer memory.  As the learner clicks on the switch, the dial rotates.  As an author, I must store the value of that rotation.  If the value of the rotation is 40 degrees, the code judges the switch to be in the right position.

To enter the code that uses the variable, I right-click on the switch and select ‘Select Branch Options’.  Branch Options are basically things that happen as a result of displaying a page, clicking on a button, choosing a multiple-choice option, or doing one of many other things.

Branch Options can be as simple as turning a page or as complex as executing a whole list of instructions. The following is a basic example of the latter:

### The Multimeter Code

```javascript
var rotation = getValue("dialRotation");
rotation += 10;
setValue("dialRotation", rotation);
changeRotation("dial", rotation, 13, 27);

if (rotation % 360 == 40) {
    changeOpacity("display", 1);
    appendValue("actions", "Turned on multimeter. <br>");
} else {
    changeOpacity("display", 0);
}
```

This code looks complicated to a non-programmer.  But it is not.  It just takes practice to write.  It’s on the same difficulty level as an Excel formula.

Here is the same code but with an explanation (in italics) underneath:

`var rotation = getValue("dialRotation");`

*Get the value of dialRotation from long-term memory and assign it to a local, or temporary, variable named ‘rotation’.*

`rotation += 10;`

*Add 10 to the value of rotation.  In other words, rotation = the old value of rotation plus 10.*

`setValue("dialRotation", rotation);`

*Store the new value in long-term memory in a location called ‘dialRotation’.*

`changeRotation("dial", rotation, 13, 27);`

*Change the property of a graphic with the ID of ‘dial’.  All LodeStar graphics can be assigned an ID.*

*More specifically, set the rotation property to the current value of rotation.  Pivot the rotation at the precise point that equals 13% of the width of the SVG canvas and 27% of the height of the canvas.  That point is the center of the dial in its current position on the canvas.  If the dial were in the dead center of the canvas, we would use 50, 50.*

`if (rotation % 360 == 40) {`

*This line can be simplified to if (rotation == 40).  I used the modulo operator (that is, ‘%’) in case the learner kept rotating the dial around and around.  If rotation is 400, then 400 % 360 equals 40: 360 divides into 400 once, with a remainder of 40.  So, if the result equals 40, do the following:*

`changeOpacity("display", 1);`

*Change the opacity of a graphic with the ID of ‘display’.  This is the text box used to show the voltage.*

`appendValue("actions", "Turned on multimeter. <br>");`

*Store the learner’s actions in long-term memory in a place called ‘actions’.*

`}`

`else {`

`changeOpacity("display", 0);`

*If the rotation of the dial does not equal 40, shut off the display by changing its opacity to 0.*

`}`

### The Probe Code

I won’t explain the probe code in as much detail.  Basically, when the learner drags the red or black probe, the following code executes.  It checks whether the probes are in the right spots.  If they are, the multimeter display shows 110 volts.

```javascript
var condition1 = isOverlap("RedProbeTip", "BlackWireBTarget");
var condition2 = isOverlap("BlackProbeTip", "box");

if (condition1 == true && condition2 == true) {
    changeText("display", "110.0");
    appendValue("actions", "Moved red probe to correct position. Black probe in correct position.<br>");
} else if (condition1 == true) {
    changeText("display", "0");
    appendValue("actions", "Moved red probe to correct position.<br>");
} else {
    changeText("display", "0");
    appendValue("actions", "Moved red probe to incorrect position.<br>");
}
```

These are the drag branch options that are tied to an object with a specific ID.

## Alternative branching

Once the learner has tested the wires with the probes, with one probe connected to the wire and the other grounded, then the learner must select A, B, C, or D.  Here’s where alternative branching comes in.  Learners who select the right answer might go on to a more difficult scenario.  The above scenario is as easy as it gets.  Perhaps they must do a continuity test to detect where there is a break in the circuit.  Learners who select the wrong answer can be branched to a simple circuit or given an explanation that one black wire is coming directly from the power source, and the second black wire is passing on that power to the next outlet or switch.
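In LodeStar, this routing is configured through branch options rather than hand-written code, but the decision behind it reduces to a simple conditional.  Here is a minimal sketch; the function and page names are mine, invented for illustration:

```javascript
// Hypothetical sketch of alternative branching. The page names below are
// invented for illustration and are not part of any real LodeStar project.
function nextPage(selectedAnswer, correctAnswer) {
  if (selectedAnswer === correctAnswer) {
    // Correct: advance to a harder scenario, such as a continuity test.
    return "ContinuityTestScenario";
  }
  // Incorrect: branch to remediation on the roles of the two black wires.
  return "SimpleCircuitExplanation";
}
```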

CCAF applications accommodate the differences in learners.  The application can alter the sequence of experiences based on learner performance.  This is a profoundly different thing than typical eLearning applications where every learner reads the same text, watches the same videos, and completes the same quiz.

## Conditional Logic

Ethan also lists conditional logic as a basic requirement of CCAF applications.  Conditional logic comes in the form of the if-else statements evidenced by the code above.  It also comes in the form of alternative branching: select the wrong answer and then get help.  In LodeStar, conditional logic is supported not only by its scripting language and branch options but also by logic gates.

In the display below, we see what happens when the learner reaches a gate.  (Incidentally, learners don’t actually see a gate.  When they page forward, the application checks the gate’s logic and then branches them according to some condition.)  In this example, the author might configure the gate with a pass threshold.  Let’s say 80%.  If the learner meets or exceeds a score of 80%, they are branched to the ‘Results’ page.  If not, they may be routed to Circuit Basics.  Follow the dotted lines.
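A gate’s check amounts to one comparison.  The sketch below is my own illustration, with an invented function name and page names, not LodeStar’s internal implementation:

```javascript
// Hypothetical sketch of a gate check. The author configures a pass
// threshold (say 80); the gate routes the learner without ever being seen.
function evaluateGate(scorePercent, passThreshold) {
  // Meet or exceed the threshold: branch to the Results page.
  // Otherwise: route to remediation, such as Circuit Basics.
  return scorePercent >= passThreshold ? "Results" : "Circuit Basics";
}
```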

## Action/response structures

In our example, the learner moves the probes around.  If the multimeter is turned on, the learner sees a voltage display.  The action is moving the probe. The response is a voltage display.

First, this is a ‘real-world’ action and a ‘real-world’ response.  I write ‘real-world’ in contrast to what happens in a typical multiple-choice question.  In a multiple-choice question, the learner clicks on a radio button and possibly sees a checkmark.  That’s only ‘real-world’ to an educational institution.  The world doesn’t present itself as a set of multiple-choice questions.

Second, when the learner sees a voltage display, that is feedback in the CCAF sense of the word.  The learner does something and then gets feedback.  Now, in our example, we did choose to combine ‘real-world’ feedback with a multiple-choice question.  Ultimately, the learner is asked to choose the letter next to the ‘hot’ wire.  Because we logged the learner’s actions, we can unravel how they arrived at their final decision.  Did they connect the red probe to the right wire, and did they ground the black probe?  If they selected the right answer but didn’t perform the correct actions that lead to it, we know the answer was a guess rather than evidence of learning.
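Since every action was appended to the ‘actions’ log, a post-hoc check can test whether the logged steps justify the final answer.  A sketch, assuming the log contains the exact phrases appended by the branch-option code earlier (the function name is mine):

```javascript
// Hypothetical check of the logged "actions" string: did the learner
// actually perform the steps that lead to the right answer? The phrases
// mirror those appended by the branch-option code in this example.
function performedCorrectProcess(actionsLog) {
  var turnedOn = actionsLog.indexOf("Turned on multimeter.") !== -1;
  var probedCorrectly = actionsLog.indexOf(
    "Moved red probe to correct position. Black probe in correct position."
  ) !== -1;
  return turnedOn && probedCorrectly;
}
```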

## Conclusion

Authoring tools that enable one to create CCAF must have these capabilities: complete visual freedom, variable support, alternative branching, conditional logic, and action/response structures.

The hot wire activity is a very simple simulation.  But, as I wrote, the concept of CCAF isn’t restricted to this type of simulation.  CCAF can be found in decision-making scenarios, for example.  The learner might be placed in a situation and challenged to make the right decision or say the right thing.  That too is CCAF.  CCAF lies at the heart of effective learning experiences.

# eLearning Strategies to Support Memory Recall

## Introduction

At the university where I worked for eight years, occasionally I observed non-traditional students in class well into the evening, struggling to stay alert, struggling to soak it in, trying to make something better for themselves. Several years earlier, I watched a new employee at a software company resign in utter defeat. Nothing he had studied before in terms of software language, database, and mathematics prepared him for a new domain of knowledge.  It was all foreign, and it was disheartening, and it was delivered in a manner that was all too much.

Late evening classes or eight-hour training days push more and more information at the learners, until they literally break down, quit, or somehow miraculously hang on to fight another day.

The tremendous tax on learners is not unusual in either the corporate or the academic environment.  Both schools and companies place a heavy demand on the learner’s ability to remember things.

The constraints of human memory!  Our lack of understanding of memory would be almost humorous if it weren’t for the wasted effort of students and employees alike.  In this vacuum of understanding, myths, falsehoods, and deceptive practices have filled in.  Fortunately, we have people like Will Thalheimer (The Debunker Club) and the authors of Urban Myths about Learning and Education to help set us straight.

## The Forgetting Curve

What we do know, and what research supports, is that we are wired to forget.  Many of us cite Hermann Ebbinghaus’ ‘Forgetting Curve’.  The forgetting curve is real and, in some cases, very steep depending on a number of factors, but as Dr. Thalheimer points out, you just can’t put a number on it.  You can’t say with any certainty, for example, that learners will forget 70% of what they have learned within a day.

Let’s consider the forgetting curve just for a moment, and then we’ll turn to eLearning.

The forgetting curve was the outcome of research done in the late 1800s by Hermann Ebbinghaus.  He scientifically observed his own recall of nonsense syllables.  He made up lists of three-letter nonsense words and committed them to memory.  Once he successfully memorized 100% of a list, he attempted to recall the list.  The forgetting curve shows that he forgot 42% of the words within 20 minutes.  After a day he retained only 33% of this list of nonsense.

We know that people forget, perhaps at disheartening rates, but the rate of forgetting is based on dozens of factors.  Are the learners new employees being introduced to something entirely unfamiliar, or are they seasoned employees?  Do they have any prior knowledge that will help them organize new information?  Are they paying attention or are they distracted?  Are they motivated to learn – intrinsically or with an external reward?  Is there a threat if they don’t learn?  Is there too much of a threat, inhibiting their learning?  Are they just trying to earn CPE credit?  Are they taught how to recall the information in the right place at the right time for the right reason?  Is the material difficult?  Are they asked to recall the information?  How many times?

Try placing those variables in a formula.  It’s impossible.

We know that the forgetting curve is real.  It has been replicated recently (‘Replication and Analysis of Ebbinghaus’ Forgetting Curve’, nih.gov) and it will accurately mirror our students’ or employees’ rate of forgetting if we do not:

• Help learners recall prior knowledge
• Help learners organize new knowledge
• Provide storage and retrieval cues that help them use the information in the right context
• Have learners practice retrieving the new knowledge
• Space that retrieval practice over time
• Integrate the new knowledge with other knowledge
• Apply the new knowledge before it is forgotten

This is where eLearning plays a role.  Oftentimes, trainers and teachers are too busy to address deficits in prior knowledge, or even assess prior knowledge, or to fit spaced practice and simulated application into their training.

That is where I think eLearning can shine.

I know, I know.  I’m an eLearning developer and an eLearning authoring toolmaker.  But there are reasons why I chose this field.  This is one of them.

The design of eLearning experiences can help improve the training experience, even if the latter is traditional face-to-face teaching.  As I’ve observed, many people dread eLearning because of the page-turner drudgery they’ve been subjected to.  Medical workers, lawyers, accountants, and anyone else with continuing education demands have had too many bad self-study experiences.  In my current company, group-live (face-to-face) instruction is preferred over eLearning.  That doesn’t, however, eliminate the option of eLearning.  As a pre-training preparation or a post-training reinforcement and application, eLearning can still play a role.

Against this backdrop, here are some strategies or designs that can help:

Plan the training or academic curriculum to include pre-training activities and post-training reinforcements.  Make room for recalling prior knowledge in the training or lesson plans of future courses.

Flip the training.  That means using eLearning (or self-studies) to present the content and using face-to-face training time to observe student performance and provide feedback.  Data from 317 studies shows that flipped classroom interventions produced positive gains across all three learning domains (Bredow, Roehling, Knorp, and Sweet, ‘To Flip or Not to Flip? A Meta-Analysis of the Efficacy of Flipped Learning in Higher Education’, 2021).

## Pre-training

Let the post-training assessments for the last course or training session become the pre-training assignments for the next one — not as assessments, but as highly scaffolded activities with prompts, hints, feedback, textbook references, and video support.  The point is to help recall and to prepare learners for what lies ahead.

Design activities that help learners recall vocabulary, basic concepts, laws, principles and procedures.  Activities can help prompt that recall and reduce the cognitive load of the new stuff.  If an accounting teacher makes references to cash or accrual accounting, do you want students struggling to recall the terms or do you want them paying attention to the new information?  It’s hard for them to do both.

Use flashcards, crosswords, matching, categorization, and other activities.  They’re not as sophisticated as things I’ve discussed in past posts, but they can play a useful role in helping recall.

Embed a video or a short Powtoon presentation.

Use quizzes with circular queues (missed questions get repeated) or variable interval queues (missed questions get repeated at spaced intervals).
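The circular-queue idea is easy to sketch: missed questions return to the back of the queue until answered correctly.  This is my own minimal illustration, not any particular tool’s implementation:

```javascript
// Sketch of a circular queue of quiz questions: a missed question is
// pushed back onto the end of the queue and repeated until answered
// correctly. answerFn stands in for presenting the question to a learner.
function runQuiz(questions, answerFn) {
  const queue = [...questions];
  const attempts = [];
  while (queue.length > 0) {
    const q = queue.shift();
    const correct = answerFn(q);
    attempts.push({ question: q, correct });
    if (!correct) queue.push(q); // missed: repeat it later
  }
  return attempts;
}
```

A variable-interval variant would reinsert the missed question several positions back, or schedule it for a later session, rather than appending it immediately.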

Make it fun.  Gamify it.

## Post-training

All of the pre-training suggestions apply to post-training as well.  But you can do even more.

### Interactive Storyboards

This strategy walks the learner through the presented content in a storyboard fashion.  In the interactive storyboard, however, the learner must fill in the missing pieces.  Recently, a presenter from our HR department spoke about employee feedback and the different roles that in-charges, supervisors, and talent advisors play in giving feedback to accountants and auditors.  She talked about a process that included feedback in review notes, one-on-one meetings with supervisors, and regular meetings with talent advisors.  The post-training activity can follow along in the life of an accountant but leave blanks for the learner to fill in or questions for the learner to answer.  It causes the learner to retrieve important elements of the presentation and become an active participant in reconstructing the information.  When the learner gets it wrong, that’s an opportunity for feedback!

An added benefit to the activity is that we can see how learners experienced the post-training activity through the xAPI statements that the (CMI5-conformant) activity generated.  In the following screenshot from the Learning Record Store, we can see that this employee missed the point that there is a connection between one-on-one meetings and talent development meetings.  We also see that this employee did hit the results page with a decent score the first time around.  The employee satisfied the requirements of the assignable unit (AU) and completed the course.  That tells us a lot.  If we were to analyze all of the items that employees missed, we could either improve the presentation or improve the questions.

### Embedded Discussions

Higher education instructors often invite students to discuss topics online after a presentation.  There is a reason for this. At the most elemental level, it forces recall of the presentation. At a higher level, it generates new knowledge as students hear differing perspectives.

In my time in higher ed, I’ve seen this done well and I’ve seen it done poorly.  My poster child for doing it right was a marketing instructor who simulated product advertising pitches in a discussion forum.  My hunch is that online discussion in corporate training environments is rarer.  To my point, our corporate Learning Management System (LMS) doesn’t even offer a discussion board.

The following screenshot depicts an activity prototype with an embedded discussion board.  For this prototype, we used Tribe, a customizable community platform.  Tribe allows you to create and embed your own discussion board.  (I’m not necessarily endorsing Tribe.)  The strategy is to refresh employees on the fundamental principles of giving and receiving feedback and then ask them to discuss what works for them.

The key idea is to immerse learners in the content with enough information to prompt their recall of the training.  Then we invite them to share their insights or strategies with others.  They don’t need to leave the activity and log in to another service.  They can share their thoughts right there and then.

This is an important idea in a general strategy that we’ve been working out called 3Di.  That means delivery of interactive content, discussion, and then decision.  Students apply what they have both learned and discussed to make a decision.

### Staged Journals

We first developed this strategy for a literature teacher.  She taught students how to be analytical about fairy tales.  She instructed them in the morphological analysis developed by Vladimir Propp.  In the staged journal technique, students would be presented with one step or stage of the analysis.  They would complete the step and go on to the next.  In the end, they had a journal compiled from all the steps.

The screenshot below depicts an employee who types in his greatest difficulty when asked to give a subordinate corrective feedback.  The learner brainstorms difficulties, and then brainstorms remedies.

Here is an excerpt from a journal that compiles it all together in a feedback summary.

## Conclusion

Face-to-face instruction may have its supporters, but even this delivery type should include pre-training and post-training eLearning activities.  We know from research and from our own surveys that students and employees forget too much of what we teach.  The amount and rate of forgetfulness may not precisely follow Ebbinghaus’ curve but unless we address forgetfulness, students won’t achieve the desired outcomes of the training.

More in-depth activities might include decision-making scenarios and simulations.  I’ve written about those in past articles but, in this post, I have featured activities that can be quickly and easily generated.  All three activities represent strategies that can help in the reflection and recall of training.

# CMI5: A Call to Action

## Introduction

Since 2000 a lot has changed. Think airport security, smart phones, digital television, and social media. In 2000, the Advanced Distributed Learning (ADL) Initiative gathered a set of eLearning specifications and organized them under the name of SCORM. In 2021, in a time of tremendous technological change, SCORM still remains the standard for how we describe, package, and report on eLearning.

However, finally, we are on the eve of adopting something new and something better: CMI5.

## CMI5 Examples

To many, CMI5 is another meaningless acronym. To understand the power and benefit of CMI5, consider these very simple examples:

A Learning and Development specialist creates a learning activity that offers managers several samples of readings and videos from leadership experts. The activity allows the managers the freedom to pick and choose what they read or view; however, the specialist wants to know what they choose to read or watch as well as how they fare on a culminating assessment.

CMI5 enables the activity to capture both the learner experience (for example, the learner read an excerpt from Brené Brown’s Dare to Lead) and the test score. CMI5 can generate a statement on virtually any kind of learner experience as well as the traditional data elements such as score, time on task, quiz questions and student answers. In this sense, CMI5 supports both openness and structure.

Let’s consider another example:

An instructor authors a learning activity that virtually guides students to places in Canada to observe the effects of climate change. She wants students to answer questions, post reflections and observe the effects of climate change on glaciers, Arctic ice, sea levels and permafrost. She sets a passing threshold for each activity. Once students have completed all of the units, then the learning management system registers that the course was mastered.

Let’s go further:

The instructor wants the learning activity to reside in a learning object repository or website outside of the learning management system – but still report to the learning management system. In fact, she wishes that no content reside on the learning management system. Regardless of where the content resides, she wants to know what sites students visited, how they scored on short quizzes, and how students reacted to the severe impact of climate change on Canada.

For students with disabilities, the instructor makes an accommodation and requests that the LMS administrator adjust the mastery score without editing the activity.

As the course becomes more and more popular, she anticipates placing the website and its activity on Cloudflare or some other content distribution network so that students all around the world can gain faster access to the learning activities.

The instructor works as adjunct for multiple universities and wants each of their learning management systems to get the content from a single location. In some cases, she wants the content locked for anyone who circumvents the Learning Management System and in other cases she openly lists the unlocked content with OER libraries like Merlot and OER Commons.

Before CMI5 much of this was difficult to achieve, if not impossible. So, let’s review what CMI5 offers us.

CMI5 captures scores in the traditional sense. But it also records data on learning experiences, such as students virtually observing the change in the permafrost. CMI5 allows instructors and trainers to set the move-on criteria for each unit in a course (e.g. requiring a passing score before the student moves on to the next unit).

CMI5 activities can reside anywhere – on one’s own website, for example, and still report to the learning management system. CMI5 enables an LMS administrator to change the mastery score from the LMS for the benefit of students who need accommodations and essentially trump what is set in the unit.

CMI5 is a game changer. And yet for many – learning and development leaders, instructional designers, technologists and students – it doesn’t seem that way in 2021. CMI5 seems like a non-event. It feels like something we all talked about – a welcome change of weather on the horizon – and then nothing. Not a drop of rain.

We have been talking about and anticipating CMI5 for a long time – and yet, major learning management systems both in the corporate and academic worlds still don’t support it. CMI5 was envisioned in 2010, released to developers in 2015, and then released to the public in its first edition in 2016. We are now in the waning days of 2021—with limited adoption.

But that is likely to change.

For one, Rustici Software and ADL delivered on their promise of Catapult. Catapult is likely to accelerate adoption of CMI5. It provides many benefits to developers, including the ability to test if a CMI5 package conforms to the standard.

In my view, the learning technology architects have done their part. They brought us a meaningful set of specifications. They brought us the tools to test learning packages and to test the learning management system’s implementation of CMI5. Now it’s up to learning and development specialists and the instructional design community to cheer CMI5 on. It is my belief that once the community understands CMI5, spreads the word, and imposes its collective will on the LMS providers, CMI5 will become an important part of our tool bag. I urge you to share this article and others like it.

In the meantime, let’s take a deeper dive into CMI5’s potential.

## Benefit One: Freedom to capture and report on any learner experience.

With CMI5 you can report on scores, completion status, and just about anything else. You can report on standard assessment results and the not-so-standard learning experiences.

To understand this, we need to re-look at SCORM.

One should consider CMI5 as a replacement for SCORM – an improved specification. Conforming to SCORM was useful because a learning object or learning activity could be imported into just about any modern learning management system. As an instructor, if you created a game, quiz, presentation, simulation, whatever and exported it as a SCORM package, your activity could be imported into Moodle, BrightSpace, Canvas, Cornerstone, Blackboard, and any learning management system that supported SCORM. So, the benefit of SCORM was that it was a set of standards that most LMS systems understood. The standards that fell under the SCORM umbrella included metadata, a reporting data model, and standard methods for initializing an activity, reporting scores, reporting on interactions, and reporting passing or failing and completion status.

The data model included dozens of elements. One example of a data element is cmi.core.score.min. Related to score, SCORM conformant activities reported on the minimum score, the maximum score, the raw score (an absolute number), and the scaled score (a value between 0 and 1).
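The relationship among those score elements is straightforward arithmetic; a small sketch (the function name is mine, not a SCORM API call):

```javascript
// Deriving a scaled score (a value between 0 and 1) from the raw,
// minimum, and maximum score elements described above.
function scaledScore(raw, min, max) {
  return (raw - min) / (max - min);
}
```

For example, a raw score of 40 on a quiz scored from 0 to 50 yields a scaled score of 0.8.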

SCORM supported a lot of different data elements, so a SCORM conformant activity could report on a variety of things. The limitation of SCORM, however, was that, despite the large number of elements, it was still a finite list. Take a geolocation storytelling activity or an eBook reading as an example. If I wanted to capture and report that the student virtually or physically visited location A, then B, and then C, I would have to work around the limitations of SCORM. I could not generate a statement such as, for example, ‘Student visited the Amphitheater in Arles’. If I wanted to capture a student’s progress through an eBook, SCORM would be problematic.

At this point, you might be protesting: but xAPI does that! xAPI? Another acronym! Yes. xAPI, or the Experience API, is a newer specification that makes it possible to report on a limitless range of things that a learner has experienced: completed a chapter of an eBook, watched a video, toured a museum, and on and on. So, if we have this thing called xAPI, why CMI5?

The benefit of xAPI is that it supports the reporting of anything. The downside to xAPI is that, by itself, it doesn’t have a vocabulary that the LMS understands such as launched, initialized, scored, passed, completed. That is what CMI5 offers. CMI5 is, in fact, an xAPI profile that includes a vocabulary that the LMS understands. In addition, CMI5 can report on any type of learner experience. Here is the definition of CMI5 from the Advanced Distributed Learning Initiative:

cmi5 is a profile for using the xAPI specification with traditional learning management (LMS) systems

With CMI5, you can have your cake and eat it too. You can report on learner activity in a way that the LMS understands, and you can report on just about anything else that the learning management system stores in a Learning Record Store. The Learning Record Store, or LRS, is a database populated by statements about what the learner experienced.
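To make this concrete, here is a sketch of the kind of statement an LRS stores. The actor/verb/object shape comes from the xAPI specification, and the ‘experienced’ verb IRI is a standard ADL verb, but the learner name, email, and activity IRI below are invented for illustration:

```javascript
// A minimal xAPI-style statement object. The structure follows the xAPI
// specification; the specific learner and activity details are invented.
const statement = {
  actor: { mbox: "mailto:student@example.com", name: "Example Student" },
  verb: {
    id: "http://adlnet.gov/expapi/verbs/experienced",
    display: { "en-US": "experienced" }
  },
  object: {
    id: "http://example.com/activities/permafrost-site-visit",
    definition: { name: { "en-US": "Observed permafrost change" } }
  }
};
```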

## Benefit Two: Freedom to put the learning activity anywhere

With CMI5, you can place a learning activity in a repository, in GitHub, on a web server, in a Site44 drop box site, in SharePoint, in a distributed network, wherever… without restricting its ability to connect with a learning management system. CMI5 content does not need to be imported. A CMI5 package can contain as little as one XML file, which, among other things, tells the LMS where to find the content.

To appreciate this, we need to look back at SCORM once more (as if it were ancient history).

The way SCORM works is that the learning activity sits in a window. The learning activity uses a simple looping algorithm to find the Learning Management System’s SCORM Adapter. It checks its parent window for a special object. If the window’s parent doesn’t contain the object, the activity looks to the parent’s parent, and so on. In other words, somewhere in that chain of parents, there must be that special object. Typically, the SCORM activity can only communicate to the learning management system if it is a child window of that system or if some server-side technology is used.
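The looping algorithm is simple enough to sketch. This is not the complete SCORM 1.2 discovery code, just a minimal illustration of the parent-walking idea (the adapter object is literally named `API` in SCORM 1.2):

```javascript
// Minimal sketch of SCORM 1.2 API discovery: climb the chain of parent
// windows until the LMS-provided adapter object ("API") is found.
function findAPI(win) {
  let hops = 0;
  // At the top of the chain, a window's parent is the window itself.
  while (win.API == null && win.parent != null && win.parent !== win) {
    if (++hops > 500) return null; // guard against a malformed chain
    win = win.parent;
  }
  return win.API || null;
}
```

If no window in the chain holds the adapter, the activity simply cannot talk to the LMS, which is exactly the restriction CMI5 removes.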

CMI5 works quite differently. CMI5 gives us the freedom to leave our parents’ home. Whereas SCORM uses a JavaScript Application Programming Interface to communicate, CMI5 uses xAPI to reach across the internet and call a web service’s methods. Loosely, it’s like the difference between a landline and a cellular phone service. To use the landline, you must be in the house; to use a cell phone, you must be in the network.

## Benefit Three: A simplified sequencing model

SCORM supported Simple Sequencing, which many say is not so simple. CMI5’s ‘move on’ property, in contrast, is very easy. A CMI5 course can contain one or more Assignable Units (AUs). The instructor spells out what the learner must achieve in an assignable unit before being able to move on. The move on property has one of the following values:

• Passed
• Completed
• Completed Or Passed
• Completed And Passed
• Not Applicable

Once the student has ‘moved on’ through all of the assignable units, the LMS notes that the course has been satisfied by that student.
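The check an LMS performs for each assignable unit can be sketched in a few lines. The function name is illustrative; the value strings follow the list above.

```javascript
// A sketch of how an LMS might apply the move-on criterion for one
// Assignable Unit, given what the AU has reported so far.
function isSatisfied(moveOn, { passed = false, completed = false } = {}) {
  switch (moveOn) {
    case "Passed":             return passed;
    case "Completed":          return completed;
    case "CompletedOrPassed":  return completed || passed;
    case "CompletedAndPassed": return completed && passed;
    case "NotApplicable":      return true; // nothing required of the learner
    default: throw new Error("Unknown move-on value: " + moveOn);
  }
}
```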

## Benefit Four: An assignable unit passing score can be overridden

In SCORM, the mastery score is hard-coded in the activity. In a SCORM activity, the instructor can base completion status on a passing score. But what if that hard-coded score were inappropriate for a group of students, for whatever reason? The CMI5 specification enables an LMS to pass the mastery score to the Assignable Unit upon launch. So the LMS launches the AU and sends it the student name and mastery score (among other things). By specification, the AU cannot ignore the mastery score; it must either use it to trump what is hard-coded in the unit or refuse to run.
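The precedence rule amounts to one line of code. In this hedged sketch, `launchScore` stands for the mastery score the LMS passes at launch and `hardCodedScore` for the value baked into the unit; both names are made up for illustration.

```javascript
// Sketch of the precedence rule: a launch-provided mastery score,
// when present, trumps the score hard-coded in the Assignable Unit.
function effectiveMasteryScore(hardCodedScore, launchScore) {
  return launchScore != null ? launchScore : hardCodedScore;
}
```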

## Benefit Five: Theoretically, CMI5 isn’t hamstrung by pop-up blockers

When an LMS launches a SCORM activity, it either embeds the activity in an iframe or launches a pop-up window. Both scenarios are problematic. The content may not be well suited for an iframe, and a pop-up blocker can obstruct the launched window.

Theoretically, a CMI5 AU can replace the LMS page with its own content. It’s not in an embedded iframe and it’s not a pop-up window. When the LMS launches the AU, along with the student name and mastery score, the LMS sends the AU a return URL. When ended, the AU returns the student to that return URL, which is the address of the LMS.

I write “theoretically” because the LMS should not, but may, ignore this requirement.

## Benefit Six: CMI5 activities securely communicate to the Learner Record Store

As I wrote, the activity can send information about learner experiences clear across the internet to the learner record store. But how does the AU have the authorization to do this from, let’s say, a web site? And how does it happen securely?

This is the marvel of 2021 technology versus 2000 technology. Before 2000, we had difficult-to-use protocols for passing information securely across the internet. Oftentimes, special rules needed to be added to internet routers. Then along came a simpler protocol that the first version of CMI5 used (SOAP). Then came an even better way (OAuth and REST). After launch, the LMS hands the AU a security token (kind of like a key that dissolves in time). The AU uses that key to gain access and to post information to the Learner Record Store.
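The exchange is simple: the AU performs a one-time POST to a fetch URL the LMS provided at launch, and the response body is a small JSON document containing the token. Here is a sketch of the step after that POST; the `auth-token` field name follows the CMI5 specification, while the function name, example value, and Basic framing are illustrative assumptions.

```javascript
// After `POST <fetchURL>`, the AU receives a JSON body containing an
// "auth-token". That token is then attached to the Authorization header
// of every xAPI request the AU sends to the Learner Record Store.
function authHeaderFromFetchResponse(body) {
  if (!body["auth-token"]) throw new Error("fetch URL returned no token");
  // How the token is framed in the header is defined by the spec;
  // we assume HTTP Basic framing here for illustration.
  return "Basic " + body["auth-token"];
}

// A hypothetical response body and the resulting header:
const exampleBody = { "auth-token": "dXNlcjpwYXNz" };
const authorization = authHeaderFromFetchResponse(exampleBody);
```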

## Conclusion

CMI5 returns power to the instructor and to the L&D specialist. CMI5 allows one to choose where the content resides and to choose what the content reports. CMI5 captures learner experiences more completely and yet it communicates with Learning Management Systems with a vocabulary that LMSs understand. CMI5 supports accommodations for a special group of students without needing to change the code of the Assignable Unit. Finally, CMI5 uses current technology to send data over the internet.

The implications of this emerging specification are tremendous. It is better suited to mobile learning and it is better suited to the learner experience platforms that are emerging (e.g. LinkedIn Learning’s Learning Hub). Soon instructors may be able to organize content from a variety of providers (like LinkedIn Learning, Khan Academy, or OER Commons) but retain the learning management system as an organizer of content, data collector, and credentialing agent. Now instructors, average instructors, may be able to participate in that content market from their own GitHub repositories and web sites.

But many LMSs have yet to adopt CMI5. The architects have done their part. Now it’s on us to understand this technology and advocate for it. Start by sharing this article. Thank you.

## Appendix A — How it Works (A simplified flow)

For those interested in a deeper dive, let’s walk through the CMI5 process flow step-by-step. (See diagram)

To begin, the author (instructor, L&D specialist) exports content as a CMI5 package. The package can be a simple file that instructs the LMS where to find the content or it can include the content itself.

(1) When a student needs the content, the Learning Management System (LMS) launches the content. (2) The LMS sends the Assignable Unit (a course can contain one or more Assignable Units) information that includes the student name, a fetch URL, and the activity ID.
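In practice, that launch information travels as query parameters on the AU’s URL. A hedged sketch follows: the parameter names (`endpoint`, `fetch`, `actor`, `registration`, `activityId`) follow the CMI5 specification, while the hosts and values are made up.

```javascript
// The kind of launch URL an LMS might construct for an AU. The AU
// reads the parameters back with standard URL parsing.
const launchURL =
  "https://content.example.edu/au/index.html" +
  "?endpoint=" + encodeURIComponent("https://lms.example.edu/lrs/") +
  "&fetch=" + encodeURIComponent("https://lms.example.edu/fetch/abc123") +
  "&actor=" + encodeURIComponent(
    '{"account":{"homePage":"https://lms.example.edu","name":"student-123"}}') +
  "&registration=9d1b1a60-1234-4abc-8def-000000000000" +
  "&activityId=" + encodeURIComponent("https://example.edu/activities/unit-1");

const params = new URL(launchURL).searchParams;
const endpoint = params.get("endpoint");        // where xAPI statements go
const actor = JSON.parse(params.get("actor"));  // who the student is
```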

(3) The Assignable Unit (AU) uses the fetch URL to retrieve a security token. The security token enables the AU to communicate securely to the Learner Record Store (LRS).

(4) As the student interacts with the content, the AU can optionally send Experience API (xAPI) statements to the LRS. (5) At some point, the AU reports that the student passed and/or completed the unit.

(6) The LMS uses the ‘move-on’ information to determine whether or not the student can move on to the next assignable unit. The move-on options are passed, completed, passed and completed, passed or completed, or not applicable.

Finally, when all of the assignable units within a course have been satisfied, the course is marked as satisfied for the specific learner.

# Short Sims

## Introduction

Some of us aren’t content with simply presenting information in a linear fashion in an online course.  We have dozens of words to express what we wish to achieve: interactive, game-like, thought-provoking, challenging, problem-based….   We are also hard-pressed to find the time or the budget or the design that will fulfill our highest aspirations for eLearning.

It’s easy to get discouraged – but occasionally we’re offered a strategy that works within our budget and time constraints.  One such strategy is the basis of Clark Aldrich’s recent book, “Short Sims” (Aldrich, C. (2020). Short Sims: A Game Changer. Boca Raton: CRC Press).

In his book, Clark Aldrich discusses the methodology of the short simulation.  He begins by lauding the virtues of interactivity.  Interactivity allows learners to experiment, customize their experience, role-play, make decisions and apply skills. He writes that game-like interactivity is expensive to build.  We all recognize that.  Short Sims, on the other hand, can be built in the “same time frame as linear content”.  Short Sims engage students in making decisions, doing things, meeting challenges, solving problems, learning from mistakes and so forth.  Essentially Short Sims offer us a strategy – a methodology – to do things differently and more effectively.

The hook comes from this excerpt:

Aldrich credits the Journal of Comparative and Physiological Psychology for that insight.  But again, in Aldrich’s words, “game-like interactivity is expensive to build.  It is time-consuming.”  Aldrich offers the new Short Sim methodology as an antidote to linear-style presentation, the death-by-PowerPoint approach.

Short Sims:

• Show, not tell
• Engage learners quickly and are re-playable
• Are quick to build and easy to update

Short Sims square with the Context-Challenge-Activity-Feedback model that we’ve heard so much about from Dr. Michael Allen, Ethan Edwards and the designers at Allen Interactions.  They are a solution to M. David Merrill’s lament that so much learning material is shovelware.  Short Sims are not shovelware.  They are a cost-effective means of engaging students.

Quite frankly, the LodeStar eLearning authoring tool was made for the Short Sim.  Instructors have used LodeStar for years to produce Short Sims but never used that term.  We called them Simple Sims, which sometimes included decision-making scenarios, interactive case studies, problem-based learning and levelled challenges.  We solved the same problem.  We made it easy for instructors to create Short Sims quickly.

Our design methodology has a lot in common with Aldrich’s methodology as described in his book.   The following points outline our approach to creating a simple decision-making scenario, which, in our view, is one form of Simple Sim.  To avoid mischaracterizing Aldrich’s methodology, I’ll use our own terms in this outline.

1. Select Challenge
2. Pick Context
3. Determine the Happy Path
4. Determine Distractors
5. Pick a setting – background graphic
6. Choose a character set
7. Produce the Happy Path

## Select Challenge

Selecting the right problem and the right scope is, in itself, a challenge for the instructor or trainer.  Straightforward processes that present clear consequences for each decision are easy to simulate.   Processes like strategic planning that are influenced by dozens of variables are much more difficult.   The Short Sim methodology itself would be a good candidate for a Short Sim.  Another example would be the backwards design method of instructional design.  In my early days at Metro State, a decade ago, we discussed the backwards design approach with instructors.   We then used a Short Sim to rehearse instructors on the key questions to ask during each phase of the backwards design process.  We based a lot of our thinking on Dee Fink’s “Creating Significant Learning Experiences” and  Grant Wiggins’ “Understanding By Design”.  Our objective was to help instructors design with the end in mind.  In Backwards Design, outcomes and assessments come before the development of activities.   The Short Sim did the trick.  Planning instruction is complicated business.  A simple and short simulation is not, in itself, transformative.  But we just wanted assurance that instructors understood the basic principles of backward design by the decisions they made.

## Pick Context

In the Backwards Design example, a dean asks an instructor to design an online class to help K12 teachers use educational technology in their classrooms.  So, in this context, the learner is playing the role of online course designer.  The learner is challenged to make the right decisions at the right time.  If the learner holds off on designing activities until completing an analysis, defining outcomes and creating assessments, then the learner succeeds in the challenge.

## Determine the Happy Path

The happy path is all the right decisions in the right order.  Situational Analysis -> Learner Outcomes -> Assessments -> Activities -> Transfer.  It is all of the right answers with no distractors.  It’s like creating a multiple choice test with only one option: the correct answer.

## Determine Distractors

Now come the distractors.  What are the common pitfalls of Backward Design?  What might tempt the learner to go astray?  If we were designing a Short Sim on the Short Sim methodology, the pits and snares might be what Aldrich calls the Time Sucks:  choosing the wrong authoring tool, too many decision-makers on the project, custom art, and so on.  The learner might be tempted with “the medium is the message.  Invest in the medium.  Commission a graphic artist to create a compelling interface.”  The point of Short Sims is to not invest heavily in artwork or graphic design.  The focus is more on describing the context, presenting choices to the learner, and showing the consequence of learner choices.

## Pick a Setting

A background photo helps to set the context.  Images that display settings without people can be found on sites like Pexels, Wikimedia Commons, in the public domain section of stock image services and, of course, on stock image sites. Because one image often suffices in a short sim, authors can snap their own photos and not waste too much time.

Alternatively, vector artwork can serve as an effective background.  Vector art can be found and  downloaded from such sites as https://publicdomainvectors.org/.    (LodeStar Learning doesn’t endorse any of these sites – but we have used them all.)

In either case, if the scene is relevant to the learning context and not just a vain attempt to gamify, it might actually contribute to content retention and recall.

## Choose a character set

A popular approach to Short Sims is the use of cutout characters with different poses and expressions.  Cutout characters can be photo-realistic images with transparent backgrounds or illustrations.  To see examples, please google ‘elearning interactive case studies’, select ‘images’ and you’ll see thousands of examples.  Despite their popularity, finding cutout characters cheaply can be frustrating.  Several authoring tools offer a built-in catalog of characters.  These tools tend to be expensive.  Many stock photo sites offer character packs but usually one must subscribe to these sites for a monthly charge.  Some sites offer pay-as-you-go services, meaning that you pay for the character pack once, without signing on to a monthly subscription.  A character pack can be as cheap as $4. One such site is eLearning Templates for Course Developers – eLearningchips. A complete character pack purchased from eLearningChips with more than 137 poses costs as little as $54. No subscription.  No additional fee.  (Again, we’re not endorsing eLearningChips, but we have used their service.)

## Produce the Happy Path

With the LodeStar authoring tool, we had several options for producing the Happy Path.  We used the ActivityMaker template and, after the title page, added a sequence of Interview Pages.  The ActivityMaker template offers a range of page types. The Interview Page is one of them.  In an Interview Page, we dropped in a character and filled in the best choice.  We didn’t concern ourselves with the distractors (the wrong options) quite yet.  Again, we were focused on the Happy Path.

Here is the author view:

Here is what the student sees:

Once we sorted out the happy path, a sequence of perfect, well-informed choices, we thought about the pits and snares: the problems and challenges.

In our course design example, a common problem is that we think too early about the content, that is, what topics the course should cover.  We anticipated those problems when designing our Short Sim.  If a learner unwittingly falls into our trap, we have the opportunity to provide feedback. It’s a teachable moment.

An alternative to the Interview Page type is the Text Page.  In a text page, we can add images and widgets.  These give us a bit more flexibility than the Interview Page Type.  On a Text page, we can add an image (left or right aligned), then a Text Layout Widget.  Here you can see the page with image and the Text Layout widget.  The image was composed in our SVG editor.

Here is what the student sees.

In one sense, a branch is a place where we get sent based on our decisions.  If this were a customer service sim and we made poor choices, the customer would appear more and more irritated and ultimately we lose his or her business.  Programmatically, the place where we get sent is a page that shows an irate customer and choices that represent a difficult situation.  The branches could lead us down a path of destruction but we may also have the opportunity of winning back the customer’s trust with a string of good decisions.

Branching adds variety to the sim.  It gives us a customized experience or allows us safely to ‘test’ bad choices.

Branching can also be viewed as the consequence of a decision or choice.  In LodeStar, branch options include going to the next page, last page or jumping to a page.  They also include bringing up a web resource, adding an instructive overlay, setting a variable value, etc.  It could also mean the execution of a script or series of commands to make a lot of things happen simultaneously, such as setting a variable (that tracks our failings), sending us down a path, changing the image of a happy customer to an unhappy one, showing feedback, marking the choice with red, and more.

It’s probably most effective to show the learners the natural consequence of their decisions–an unhappy customer for example.  As designers, we might also need to be explicit and display feedback, or introduce a coach who provides feedback.  As Clark Aldrich writes, the sign of a good Short Sim is one that is played over and over again.  Branching helps us make the sim a different experience each time.

Randomness might be difficult to achieve and should, therefore, be considered optional.

Randomness is more than randomizing distractors.  (Randomizing distractors happens automatically on an Interview Page.  It’s done through a simple checkbox in a Text Layout widget.)  More sophisticated randomness might include a randomly generated sum of money, or a randomly selected path or scene, or randomly generated assets that are assigned to the learner.  It might be a randomly generated length of fuse that represents the customer’s patience.   In our course design example, it might be randomly generated student characteristics that include age, gender, and subject interest.  That level of randomness is best achieved with the help of LodeStar’s scripting language and is best left to its own article.

## Conclusion

Short Sims represent a level of interactivity that goes beyond the linear presentation of information.  They have the potential of promoting learner retention and application.  With the right tool (and there are plenty),  everyone can build short simulations.  One tool, LodeStar, was designed from the very start with the short simulation and the intrepid instructor in mind.  Short Sims may vary in sophistication and design but, in any form, they cause learners to think and to see the consequence of their actions.  The short sim is a strategy that is doable and repeatable within our budgets and time constraints.  Make it happen in your world!

# The Humble Variable

## Introduction

Instructional Designers are skilled at using text, media and graphics to help meet learner objectives.  But design often extends beyond the visible into the functional.  Designs might require tracking user performance, branching to an appropriate level of instruction, saving state, and creating highly individualized, interactive, learning experiences.

At the root of this functionality is the humble variable.  Understanding the variable and all of its implications in a learning design may seem a little out of reach of instructional designers.  That seems like programming…and programming is the domain of specialists like programmers or instructional technologists who know and, perhaps, even enjoy things like mathematics and logic.

But most instructors and many designers don’t have such specialists as a resource.  With a little knowledge, designers can expand their designs on their own and create better experiences for learners.

## The Variable

As a start, there are some basic things about the variable that all instructional designers should know – some basic things that will help designers think about their designs more clearly.

First, a bit of unlearning.

We learned about the variable in elementary school.  We were asked to solve for x, given this type of equation.

6 + x = 10

‘x’ was a challenge.  You had to manipulate things in your head, like x = 10 − 6.  You needed to learn about the dark art of algebra.

And so, something as arcane as a parametric equation produced the graph below if you repeatedly plugged in a number for t, solved for x, and scaled up the result:

Probably not the traditional domain of instructional designers.

But, in instructional design, the variable isn’t a problem to solve.  It’s a tool.  It’s a tool like text and graphics and media.  And you can start simple.

The use of variables gives us the ability to save state (remember things like user performance on a question item) and to branch (go down one learning pathway versus another) and to evaluate performance (were the right things chosen and in the right order, perhaps).

So powerful is the variable that all major eLearning authoring systems not only use variables internally but give the author access to them.

Down below is a screenshot from Storyline, a popular authoring tool.    The author of a game is tracking how many correct answers the learner achieved (correctcounter), whether or not the learner has achieved a fail condition (fail), and other things not pictured here like whether or not the learner has attempted the question once, twice or three times, the overall score and the number of seconds on the timer (timer).

The variable is a storage place.   Some people like to use the analogy of a bucket – a place to dump data into.  I like the analogy of a mailbox.  The mailbox has both an address and a place to store stuff. Like a mailbox, variables have an address in computer memory; they have an easy name that we can use to refer to that place in memory; and they store a value. That storage place can typically hold numerical values, characters (as in a name) or true/false states.  There are fancy names for all these things like integers, floats, strings and booleans – but we are only concerned about basic things such as the value being stored as a number, set of characters or true/false.

The distinction between numbers, characters, and true/false values matters for a few reasons: each type takes up a different amount of computer memory; the type enforces what kind of data is stored in the variable so that coding mistakes aren’t inadvertently made; and each is stored differently in the bowels of the computer.

The following screenshots also hint at another division between variables.  The first screenshot that follows shows user variables.  User variables, in this case, store information about the student name and id.

### User Variables in Captivate

In the next screenshot, system variables store program settings related to movie control.

### System Variables in Captivate

There is also another category often referred to as the user-defined or custom variable, not shown here.  In most programs, if you wanted to track something special, you would create your own variable.   For example, if I gave the learner a choice of tools to select in order to complete a task and wanted to track which tool was selected, I could create a variable called ‘toolSelected’ and assign the variable a value.

For example, toolSelected = ‘caliper’

Or, optionally, I could assign a number to the variable, as in  toolSelected = 1

Alternatively, I could create a variable called ‘caliperSelected’ and set it to true or false. Or I could create a variable called ‘toolsSelected’ and in this case, set it to:

toolsSelected = “caliper; nippers”

In short, I have options.

So with that we are straying dangerously close to the wheelhouse of the computer programmer.  But for the instructional designer, what is important is an affordance — a capability.  We could give our learner a task and have the learner collect the appropriate tools.  Just knowing that variables can hold a bunch of values gives us a strategy to think about.  What if we placed learners in a situation where they could gather things to use in a problem-solving situation?  Thinking about variables and their capacity to store can inform our thinking – and give us a strategy or a way to accomplish our objective.
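As a sketch of that “gather the tools” affordance (all names here are hypothetical, not from any particular authoring tool): one variable can hold the whole collection, and a helper can check it against a required list.

```javascript
// One variable stores everything the learner has gathered.
const toolsSelected = [];

function selectTool(tool) {
  // Ignore duplicates so re-clicking a tool doesn't add it twice.
  if (!toolsSelected.includes(tool)) toolsSelected.push(tool);
}

function hasRequiredTools(required) {
  return required.every(tool => toolsSelected.includes(tool));
}

selectTool("caliper");
selectTool("nippers");
selectTool("nippers"); // duplicate, ignored
```

Once the collection exists as a variable, the design can branch on it: proceed to the task only when `hasRequiredTools` says the learner is equipped.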

Let’s take this a bit further.

## Conditional Statements

In my next example, I will use a custom variable and apply it to some branching logic.   In order to understand the example, we’ve already looked at the variable.  Now let’s look at some logic.  Branching logic can be achieved either by a conditional statement like one finds in Microsoft Excel or, in the example that follows, by a ‘Gate’.

In the spreadsheet below, we have scores in column B.  The logic is that if the score in column B is greater than 49, then the text in column C will show ‘Pass’.  Otherwise, column C will show ‘Fail’.

The gobbly-gook language part of this looks like:

=IF(B3 > 49, “Pass”, “Fail”)

B3 is the cell that lies at the intersection of column B and row 3.   Think of the first part inside the parentheses as a condition, the second part as the value if the condition is true, and the third part as the value if the condition is false. The gobbly-gook then reads like this:

If the condition is true, show the first value; else show the second.

The condition:  is the value in B3 larger than 49?  If yes, show ‘Pass’; if no, show ‘Fail’.

eLearning authoring systems present different ways of using the same type of logic.  You can imagine a branching scenario.  If the learner score is greater than 80, proceed down the ‘enrichment’ path.  If not, proceed down the ‘remedial’ path.   Branching is just a series of else if statements, like the one shown on the spreadsheet.
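Outside of a spreadsheet, the same logic is just a conditional. The spreadsheet formula and the branching scenario above might be sketched like this (path names are illustrative):

```javascript
// The spreadsheet formula =IF(B3 > 49, "Pass", "Fail") as code:
function passFail(score) {
  return score > 49 ? "Pass" : "Fail";
}

// The branching scenario: above 80 goes to enrichment, otherwise remedial.
function choosePath(score) {
  return score > 80 ? "enrichment" : "remedial";
}
```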

So now, let’s show an example that combines the use of the variable and some branching logic.

## An Example

In the following example, we’ll introduce LodeStar 8 (which will be released soon).    In the activity, I will show six animals.  Three of the animals are critically endangered.

The object of the lesson is for students to understand what critically endangered means and, given some data,  to be able to identify some animals that are examples of critically endangered species.

Identifying the critically endangered is actually highly technical, involving numbers, habitat area, habitat fragmentation, number of generations and so forth.  Let’s say, for the sake of our example, that we presented students with all of that information and then asked them to select the animals that are critically endangered.

If students correctly select a critically endangered species, they will earn 2 points.  Selecting an endangered species subtracts 1 point.  Selecting a vulnerable species subtracts 2 points.

Out of the six animals, three are critically endangered.  The best score is therefore 6.

Here is a screenshot of LodeStar 8 and the ActivityMaker template, which we used in our example.

A screenshot of LodeStar 8, due to be released March 2020

ActivityMaker supports different page types.  I’ll select the “Text” page type.  This page type supports text, imagery, SVG graphics, and widgets.  (We’ll talk about widgets soon.)

On the first page, I’ll add six images and a page heading.

Produced with LodeStar 8 ActivityMaker Template

## Adding Branch Options to Images

First, to assign a Branch Option to an image, I click on the image and select the branch icon.  The branch icon is used throughout LodeStar.  (Please note:  You can only add branching logic to an image once it is loaded and appears on the page.)

The Branch Option dictates what happens when a question is answered correctly or incorrectly, when a page is displayed, when a gate is reached and so forth.  In this case, the branch icon controls what happens when an image is selected.  There is a selected branch option and a deselected branch option.  This is new to LodeStar 8.

To start, I load the image, select a scalable size (in percentage) and then click on OK.  I then click on the image and re-open the dialog box.

I click on the ‘Selected’ Branch for the Sumatran Rhino and launch the branch dialog.

I then set the Branch Option to ‘Append Value’ and fill in the variable name, which is ‘score’ and a value that will be appended to the variable, which is the value of 2.

Appended, in this case, means that 2 will be added to whatever the value that the variable ‘score’ is currently storing.    Essentially this:

score = score + 2

Meaning

The new value of score is assigned the old value of score + 2.

For deselected, the opposite is true.

score = score +  (-2);

Or

-2 will be appended to score, which is the same as

score = score - 2;

I then want to present the option for students to evaluate their selections.  I type in text ‘Check Answer’, highlight it, and then select the ‘Insert Link’ tool in the HTML editor.

LodeStar’s HTML editor is unlike any other editor.   The ‘Insert Link’ dialog presents multiple options including the ability to link to one of the LodeStar pages.  The Pages (UID) dropdown displays all of the available pages.  If the author forgets to give a page a human-friendly name, then only the computer-friendly UID number is shown.  In the screenshot below, you can see both.

When the student clicks on ‘Check Answer’ they will jump to the ‘Evaluate’ page and see an Embedded Variable widget displayed on the page.

The purpose of the Embedded Variable widget is to display the values of variables.    The widget dialog is launched by clicking on the sprocket icon as pictured.  (Remember, the LodeStar HTML editor is not your everyday brand of HTML editor.)

Insert a widget on a page

The widget dialog presents a menu of different widgets.

Widgets enable authors to embed timelines, word problems, questions, drag and drop, and other items on a Text Page

The author inserts the ‘Embedded Variable’ widget wherever s/he wishes to display variables and then types in the following:

Your ability to identify critically endangered species ranks {score} out of 6.

‘score’ is a variable name.  It holds a value (the student performance).  When the student sees this sentence, they will see the value and not the variable name.  If the variable has not been initialized (given a starting value), they will see ‘undefined’.
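Behind the scenes, a widget like this performs a simple substitution. LodeStar’s actual implementation isn’t shown here, so the following is only a sketch of the idea: replace each `{name}` placeholder with the variable’s current value, or the text ‘undefined’ if it was never initialized.

```javascript
// Sketch of embedded-variable substitution: {score} becomes the value
// of the variable named "score"; uninitialized names show "undefined".
function embedVariables(template, variables) {
  return template.replace(/\{(\w+)\}/g, (placeholder, name) =>
    name in variables ? String(variables[name]) : "undefined");
}

const sentence =
  "Your ability to identify critically endangered species ranks {score} out of 6.";
const rendered = embedVariables(sentence, { score: 4 });
```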

Students can go back and attempt to improve their scores or they can move on.    The ‘Start Your Journey’ links to the ‘Gate 1’ page.  The ‘Go back’ links to the page with the animals.   The following diagram, found under ‘Branches’ on the left side, shows the branching connections from the Evaluate page to the preceding page and from the Evaluate page to the Gate. (I’ll explain gates in a second.)

The Branches view

The following screenshot shows the ‘Embedded Variable’ widget editor.   Variables that have been used elsewhere in the program need only curly braces {} to be used.  Variables that don’t exist can be declared here.  (They can hold the result of expressions written in JavaScript, which is a more advanced concept.)   ‘score’ was used on an earlier page and, so, it can simply be referenced with the curly braces.

Again, the two links on the page cause the learner either to move forward to the gate or backward to the animals.

Finally, we have the ‘Gate’, which is a LodeStar page type.  We use the gate in this case to branch the student.  If the student scored 5 or above, then we follow the ‘Pass Branch Options’.  If the student scored lower than 5 then we follow the ‘Fail Branch Options’.  ‘Pass’ and ‘Fail’ might not be appropriate terms, but students never see these terms.  They just imply one branch if the condition evaluates to true and another branch if the condition evaluates to false.

The condition is:

Pass only if Score Is >=  5

The variable that holds the score is the variable named ‘score’.  The variable name can be anything.  The author simply checks ‘Use Custom Score’ and identifies which variable will be used in the condition, as pictured below.

The following two screens show the two branch options.  The ‘Pass’ option is set to ‘Jump to Page’ to a page that is titled ‘Enrichment’.  The ‘Fail’ option is set to ‘Jump to Page’ to a page that is titled ‘Remedial’.

The following screenshot shows a page labeled ‘Enrichment’.  Notice the ‘Page ID’?  That Page ID was used in the gate.  This page begins a whole series of pages that make up the enrichment sequence.  Similarly, there is a remedial page, which begins the series of pages that make up the remedial sequence.
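In code terms, the gate boils down to a single conditional.  The following JavaScript sketch is only an illustration of the logic; the page titles come from this example, and the function itself is not part of LodeStar:

```javascript
// Illustrative sketch of the gate's branching logic.
// The condition: pass only if score is >= 5.
function evaluateGate(score) {
  if (score >= 5) {
    return 'Enrichment'; // Pass Branch Options: jump to the Enrichment page
  }
  return 'Remedial';     // Fail Branch Options: jump to the Remedial page
}

evaluateGate(6); // follows the pass branch
evaluateGate(3); // follows the fail branch
```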

Here is what the ‘fail’ branch dialog looks like.

When I click on the ‘Gate’ in the Branches view (as opposed to the Pages view on the left side) and filter out the other pages, I can see the following.  Gate 1 branches to either ‘Enrichment’ or ‘Remedial’.  If I uncheck the filter, I will see all of the branches for all of the pages, which gets to be a bit overwhelming.

### More Complex Scenario-based Learning

So far, we are making the learner do something.  We then store their performance in a variable called ‘score’.  We use the value of the variable to branch in one direction if the score is low and in another direction if the score meets or exceeds a number.

That is a very basic building block.  It’s like Legos.  A Lego® brick is a simple thing, but Lego® bricks can be combined to form ever more complex shapes.  So too in eLearning.

As a culminating example, let me describe a project we recently completed.  The basic strategy of storing values in variables was used in a highly interactive learning module that we created to teach the use of LinkedIn for business development.

With the use of variables, we were able to track learner performance through four Social Selling Index (SSI) measures: brand, people, insights, and relationships.  If learners acquire the skills to improve their SSI through the learning module, then they can apply those skills directly to LinkedIn and see tangible results.

In the learning module, behind the scenes, there are four variables, each matched to an SSI measure.  As learners expand their LinkedIn network, respond appropriately to notifications, build their profile, and so on, they increase their SSI.  Each activity is tied to one of the variables.
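Behind the scenes, the bookkeeping could look something like the following sketch.  The variable names mirror the module's four SSI measures, but the activities and point values here are invented for illustration:

```javascript
// Four variables, each matched to one of the SSI measures.
const ssi = { brand: 0, people: 0, insights: 0, relationships: 0 };

// Each learning activity is tied to one of the variables.
function recordActivity(measure, points) {
  ssi[measure] += points;
}

recordActivity('people', 5);   // e.g., expanding the LinkedIn network
recordActivity('brand', 3);    // e.g., building out the profile
recordActivity('insights', 4); // e.g., responding appropriately to a notification
```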

## The Function

We started with the humble variable, and then saw it used in branching logic.  Variables are also frequently used with functions.

A function is a named group of instructions that a programming language uses to return a single result, return a set of results, or simply do something.

LodeStar automatically tallies student-earned points and reports performance to the learning management system.  In our example, we use functions to override that behavior, setting the user score and total score to our SSI metrics (or to anything we want).

Let’s look at functions in general, and then at how our example uses them.

As mentioned, a function either does something or gives you a result based on some input.  In LodeStar, functions are something you use rather than define.  But if you looked at a function from a programmer’s point of view, it would look like the following function named addValues.  (Functions are often named in this way, with the first letter lowercased.)

function addValues(value1, value2) {

    let sum = value1 + value2;

    return sum;

}

‘value1’ and ‘value2’ are inputs (or arguments, in technical speak).

The body of the function falls inside the curly braces {}.  The body of the function adds the two inputs and spits out a result — a return value.  Notice how we assign the sum of ‘value1’ and ‘value2’ to a variable?

Our use of a function is simpler.  We don’t need to define functions.  That work has been done for us.  We just need to use them.  We need two functions to override the default behavior of LodeStar.  As mentioned, the default behavior is that LodeStar automatically tallies up the student performance points in all of the different question types and reports that to the learning management system.  But we don’t want that.  We want to report the SSI score.

A perfect SSI score is 100, so that becomes the total score.  The sum of brand, people, insights, and relationships becomes the user score.

We use the function named setCustomUserScore(value) to set the user score.  We use setCustomTotalScore(value) to set the total score.
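Putting it together, the override might look like the sketch below.  setCustomUserScore and setCustomTotalScore are the LodeStar functions named above; the stub definitions and the sample SSI values exist only to make the sketch self-contained and runnable:

```javascript
// Stubs standing in for LodeStar's built-in functions (illustration only).
const reported = {};
function setCustomUserScore(value) { reported.user = value; }
function setCustomTotalScore(value) { reported.total = value; }

// Sample values for the four SSI variables (invented for illustration).
const brand = 20, people = 25, insights = 15, relationships = 18;

setCustomTotalScore(100); // a perfect SSI score is 100
setCustomUserScore(brand + people + insights + relationships);
```

With these two calls, the module reports the SSI measure to the learning management system instead of LodeStar's default point tally.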

Once we do that, all of the module’s learning activities are tied to this real-world performance measure.  Finally, and most importantly, all of the activities simulate real-world LinkedIn actions.

## Conclusion

So, for our project, it all started with the humble variable.  We asked: how does LinkedIn measure proficiency?  The answer is the SSI.  We then asked: how would we capture the learner’s performance in an SSI metric?  The answer is four variables named brand, people, insights and relationships.  We then asked: how could we bring up different types of content in the form of notifications, messages and so forth?  The answer was in the use of variables and some conditional logic.  Finally, how would we report the SSI to the learning management system?  The answer was the function.

Instructional designers traditionally think about text, graphics, audio and other types of media.  These elements alone lead to very linear designs.  The addition of variables, logic, and functions frees our designs from the constraints of these linear models and allows us to add variability, surprise, realism and other qualities that enrich the learning experience.

So, start simple.

# Visual Design for eLearning

## Introduction

In eLearning, good visual design is yet another challenge.  As instructors, we want our interactive lessons to look good – but we aren’t trained in layout and graphic design.  In many of my own projects, I’ve relied on graphic designers – but often I’ve had to make do with my own limited skills.  I’ve learned a few things over the years and am happy to share what little I know – more as a starting point than an ending point.

Let’s begin with the premise that we want our pages to be visually appealing to students.  Of course, more importantly, we want our pages and layouts to support our instructional objectives.  We want things to look good and function well.  At the very least, we don’t want our design to distract the students or confuse them.

Fortunately, visual design is a combination of art and science.   We can draw from a body of knowledge that is evidence-based and not as subjective as we might imagine.

To describe visual design, I can start with the basic concepts of  flow, color, style, reading order, consistency, contrast and structure.

## When in doubt, simplify

Whenever I’m in any doubt about visual design, I think about the art gallery.  In most galleries, the walls don’t compete with the artwork.  Plain walls.  Open spaces.  Strategically lit rooms.  The labels and interpretive text are positioned so the information is easily associated with the artwork.  The label doesn’t compete and isn’t crammed.  The text is printed in high contrast to the background.  I can move easily from piece to piece all around the room and then on to the next.  The flow is well thought out.

Tufts University Art Gallery

Our interactive lessons can be designed similarly.  Text can be cleanly separated from imagery, with an adequate margin between text and image.  Margins can provide clean separation of the other page elements.  The page background can be selected so that it doesn’t compete with or distract from the lesson.  The developer can be intentional about guiding the eye from one thing to the next.

## Or not

Or sometimes, for effect, we can do the exact opposite.  Agitate, provoke, move students out of their comfort zones.  But, in either case, visual design requires intentionality.

## Visual flow

Screen elements have different visual weights, or powers of attraction, based on size, color, and even shape.  Unusual things attract the student’s attention.

Instructors should decide where students should look first.  If one element is larger than the others, students’ eyes might be drawn there.  If all elements are in black and white but there is a splash of color somewhere on the page, the student’s eye will go there.  We’ve known these things for some time, but recently, usability labs have provided us with eye tracking sensors, which produce heat maps. Heat maps graphically display how people look at a software screen, for example, and which elements they look at. Areas that attract the most attention appear in hot red.

From usability studies and from age-old observation, we know that visual designs have an entry point. We need to plan or consider where that entry point might be.

We also know that visual designs can have unintended exit points.  As an example, hyperlinks can be hugely counterproductive to visual flow control.  For good reason, we think of hyperlinked information as highly useful to students (another resource), but links introduce the risk of students losing the flow, being distracted, and perhaps never returning to the lesson.

If our visual design is a simple text page, our job is easier.  We can use headings, sub-headings, text wrapped around images as well as size, italics and color to signal very important information.  If a page is a free-form layout, we need to plan visual flow more carefully.  In that planning, we need to note that the eye is attracted to color, strong contrasts, and follows along thick lines or elements that are composed in a way that suggests directionality.

## Color

Color can be used to direct the eye and to attract the student’s attention to key information.  Richard Mayer, in his book Multimedia Learning (Cambridge University Press, 2001), describes the signaling principle.  The signaling principle states that people learn better when cues that highlight the organization of the essential material are added.  Instructors can use color to provide that cue, but color-blind students will not benefit.  Multiple cues are needed to highlight essential material: italics, for example.

Color used sparingly to draw the eye.  Layout created by Clint Clarkson

I’ve always been wary of the ‘circus’ effect of too many colors.  One color will clearly signal important information or draw the student’s attention, provided the student is not color blind.  Two and three colors can be used effectively.  Introducing more colors leans toward a circus effect, where color ceases to attract attention.  Graphic design sites describe a 60-30-10 rule, which states that:

The dominant color should be used 60% of the time, your secondary color 30% of the time, and an accent color 10% of the time. Typically, the most dominant color should also remain the least saturated color, while your bold or highly saturated accent color should be saved for your most important content.

http://www.eyequant.com/blog/2013/06/27/capturing-user-attention-with-color

## Style

Style may be the most fickle thing to embrace in your visual design approach.

In the early 20th century, graphic designers were influenced by modern art, the Bauhaus school, posters, the De Stijl movement (think Piet Mondrian), constructivism, architecture and more.  Today, graphic designers are as likely to be influenced by styles on the web.

Just a few years ago, instructional screens featured gradients, beveled buttons, drop shadows, textured backgrounds and an attempt to imitate the material world in the digital medium.  Microsoft and Apple, in the redesigns of their graphical user interfaces, reflected the sudden shift away from material-world imitation.  Buttons lost their three-dimensionality and became flat, single-color, textureless features.  The new look became, in a sense, minimalist and, perhaps, more functional.  The rise of mobile computing favored flat designs over texture, minute detail and other features that didn’t translate well to the small screen of the smartphone.

Apple Interface: Shift to a flat design

Flat design is a thing.

“Flat design is a minimalistic design approach that emphasizes usability. It features clean, open space, crisp edges, bright colours and two-dimensional illustrations.”  –Tom May, 2018

But styles change.  So, what is an instructor to do?  My hunch is that we should focus on evidence-based practices and embrace minimalism not for its trendy appeal but for its functionality.    We should probably pay attention to the world around us.  Pay attention to styles on the web.  Pick your favorite website and think about the underlying elements that make it visually appealing and functional.  Visit the website of a college of art and design.  Follow it over time.  But don’t get too hung up on style.  It is a black hole.  Once you pass the event horizon, you’ll never return to creating anything useful for your students.

Use bulleted lists and numbered lists where appropriate, and reduce the amount of writing.  The traditional wisdom was to ‘chunk’ writing by separating it into pages – but mobile devices may be affecting students’ habits.  They are accustomed to endless scrolls.  More research is needed on the cognitive-load effects of endlessly scrolling pages.

Again, when in doubt, simplicity is preferable.

## Consistency

Consistency is key. As students navigate the lesson, they shouldn’t burn brain cells on figuring out each page.   Pages that function the same should be styled the same.   For example, imagine that your page summarizes key concepts with a bulleted list.  Summarizing key concepts is an important strategy.  Our  pages may dive deeply into the details – but we want students to emerge with a clear map of the key ideas.  A bulleted list can be set off to the side of the page (left or right) or placed underneath, separated by space, color, and possibly a border.   The placement should be consistent so that students know where to find the summary in each part of the lesson.  They’ll look for it.

## Contrast

At all times we need a strong contrast between the text and the background.  Lack of contrast affects readability.   Strong contrast also directs the eye.   I break this rule too often when I style hyperlinks to be colored in something other than the standard, boring blue with no decorative underline.  And I always regret it.  I strive for elegance and create a problem instead.

Some of these key principles relate to work on perception done by the Gestalt psychologists of the early twentieth century.  One of their principles, ‘Figure-Ground’, relates to an object and its surroundings.  Photographers embrace this principle when they want the subject of a photograph to be clearly known – in other words, separated from the background.  Photographers will use a large aperture setting to blur the background (reduced depth of field) and thus create a clear distinction between figure and ground.  All elements in the lesson need to be distinct from the background – and that especially applies to text.

## Structure

Structure relates to the organization of elements on the screen.  It is concerned with proportion, symmetry, asymmetry, and balance.  These concepts are expressed in many ways.  In photography, artists may think in terms of the rule of thirds – whether they are following or breaking the rule.  Two-thirds land; one-third sky.  One-third rocky foreground; two-thirds blurred valley background.  Two-thirds of blank space on the left; one-third of birds on the right.  Halving the frame in symmetry has quite a different effect and can be a statement in and of itself – the parliament buildings of London reflected in perfect symmetry in the Thames, for example.

We can make similar decisions with the placement of images on the page.  They can be set with a width of 66%, which means that they will always scale to two-thirds of the page, regardless of page size.  Or the image can be set to 33%, with text wrapping the image and taking up the remaining space.  Or they can be wrapped in negative space (e.g. a white background) with the ratio of image to negative space a very deliberate choice.

Again, photographers might subdivide the plane in a three-by-three grid, which gives them 9 spaces in which to organize the structural elements of the photograph.  Traditional layout artists, similarly, had grids that subdivided the page.  Instructors can get a sense of their layout by abstracting the visual elements on the page as shapes.  The paragraph becomes a dark block.  The negative space becomes a white block.  What proportion of the overall space do the blocks occupy?  What is their relationship to one another?  Are they pleasing and pure?  Are they distracting and confusing?

Ratios or proportions reduced to formulas probably don’t explain why some layouts are pleasing to the eye and others are not – but it is still interesting to consider the use of math in the pursuit of beauty.  The divine proportion, or golden ratio, was probably used to plan some of the great pyramids, and it is evidently used today to construct websites.  We know that from, again, abstracting web elements into dark and light shapes.  The ratio is defined by a simple equation:

a/b = (a+b)/a ≈ 1.6180339887

So, if our text block was denoted by ‘a’ and our image block was denoted by ‘b’, the ratio of text to image would be the same as the ratio of text plus image to text alone.  So, the secret to all good learning is in the golden ratio?  Not quite.  The only point I am making is that the proportion of things will have an effect.  We should at least be aware of how things are laid out on the screen. Proportion matters.
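For the curious, the ratio is easy to verify numerically.  The sketch below is plain JavaScript; the 1000-pixel page width is a hypothetical value chosen for illustration:

```javascript
// The golden ratio is the positive solution of a/b = (a+b)/a,
// which rearranges to phi * phi = phi + 1.
const phi = (1 + Math.sqrt(5)) / 2; // about 1.6180339887

// Splitting a hypothetical 1000-pixel page into a text block and an
// image block in the golden ratio.
const pageWidth = 1000;
const textWidth = pageWidth / phi;        // about 618
const imageWidth = pageWidth - textWidth; // about 382

// textWidth / imageWidth equals phi again: the ratio is self-similar.
```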

Layout created by Lauren Franza

## Conclusion

The instructor who consciously and conscientiously includes visual design in the planning of his or her eLearning lesson will reap the reward.  Students will benefit from being guided through the lesson, and not being distracted by colors, crammed elements, inconsistency, poor readability, and an off-putting layout.  Visual design is a large study – but the application of a few principles will greatly improve one’s eLearning design.

# The Explore – Validate Design Pattern

## Introduction

As online instructors, we recognize that students benefit from interacting with content in a manner that truly makes them think.  And yet we find the task of creating interactive, meaningful content to be extremely challenging and time-consuming.

For some subject matter, interactive content that lets students manipulate the data and see different outcomes can be highly effective.  Marketing students can test the principles of the marketing mix by adjusting the amount invested in the quality of the product versus its advertising.  Civil engineering students might control the amount of ammonia in a wastewater treatment pond or the food to microorganism ratio.  Sociology students might explore the consequences of unequal distribution of wealth.  Health care students might explore the implementation variables of chronic care management.

To tease out the benefit of interactive content, let’s find a good example.  Suppose we pick the principles of composting.  That seems like an odd place to start, but we all understand composting at some level. How would an online instructor design an interactive lesson on composting that is effective and teaches the underlying principles?

Composting is bug farming.  Effective composting results from the right combination of carbon- and nitrogen-rich material, water, and heat.  Students can learn composting by doing, but that might take weeks, and without careful measurements and some guidance, they may not come to understand the underlying relationships and their effects.  They can learn from a handbook that teaches procedures, or from a science text that teaches principles.  In either case, their readings may or may not lead to real understanding.

In contrast, in an online environment, the principles of composting can be taught through interactive models.  Students could be presented with an interactive model and challenged to generate the most compost in the shortest period of time.  In response, students might add more carbon-rich materials, such as dry leaves, to the compost.  Or change the moisture content.  Or change the ambient temperature.  Once students have tweaked and played with the parameters, their instructor could assess their understanding – do they truly understand the relationships, the principles, the cause and effect? – and then invite students to apply their knowledge to building a compost pile of their own.

As mentioned, students could follow the procedures of composting without understanding the underlying principles.  Students could recite textbook statements without really thinking about them.  Online instructors must constantly ask the question: how much thinking are my students actually doing in my course?  Not reading.  Not quizzing.  Not reciting.  But thinking.

When we write about time-worn concepts such as interactivity and engagement, that is what we are driving at.  Interactive engagement affords us the opportunity to get students to think.  Discussions, projects, group projects, and online examinations can certainly challenge students to think, but how can we, without computer programming knowledge, facilitate interactive engagement between students and the content in the manner alluded to above – one that fosters curiosity, promotes genuine interest in the content, and puzzles students?

## The Explore – Validate Design Pattern

The Explore – Validate Design Pattern gets students to think.  It is a form of interactive engagement that has, as one element, intense student-to-content interaction.

Interaction is a key word in online learning. Successful, effective online learning happens through students interacting with each other, their instructor and the course content.  Each type of interaction demands of the instructor special skills and intention.  With respect to student to student and student to instructor interaction, instructors can draw from their ability to foster interpersonal communications.  Good teachers know how to facilitate group discussions and engage students in Socratic dialog.  Although instructors must learn how to adapt their strategies to an online environment,  many of them have a good starting place. The third type of interaction, however, student-to-content, may arguably be the most challenging for instructors new to online learning.

Not all student-to-content interactions are equal. At the lowest level, passive eLearning involves very little interaction. Clicking buttons to page through content does not constitute interaction.  Clicking through a presentation on composting, for example, constitutes a very low level of interaction.  A higher level of student-to-content interaction might involve multimedia in the form of animations and video, drag and drop exercises and other basic forms of interaction.  A moderate level of interaction might involve scenarios, branched instruction,  personalized learning, case studies, decision making and the instructional design patterns that have been the basis of our past web journal articles.   The highest and most technical level of interaction might involve virtual reality, immersive games, simulations, augmented reality and more.

That said, the highest level of interactivity is not necessarily the best level for students.  Interaction is essential insofar as it helps students achieve a cognitive goal, whether that relates to remembering, understanding, or applying.  Interactions are useful only if they help students remember better, understand a concept or principle, or apply their learning.  One can’t categorically say that fully immersive interactive games are better than animated videos or drag-and-drop interactions.  If the objective is that students will remember essential medical terms, then a fully immersive environment may hinder that accomplishment.  Richard Mayer refers to extraneous processing: the attention that the learner must give to features of the learning environment that do not contribute to achieving the learning goal.  If extraneous processing is too high, it impedes the student’s ability to focus on relevant information.

## How it works

Considering the type of learning that students must activate is critical in determining whether or not instructors should plan on higher levels of interaction. In my second example, students are introduced to Isle Royale. Students examine data related to the wolf and moose population. They must draw inferences on how the rise and decline of one population affects the other. If this were a declarative knowledge lesson, students would simply need to recite the critical facts. How many moose were introduced to Isle Royale? How many wolves? What are the population numbers today? What were they at any given point? Students can simply recite those numbers without understanding the true nature of the interaction between the wolf and moose population on the island. The real objective of the lesson is to understand feedback loops in ecological systems. Students arrive at this understanding not by reading facts and figures, but by asking what-if questions and manipulating the inputs on a simple simulation.

Asking what-if questions is an inductive approach.  Rather than being given a description of a law, for example, or a principle or concept, students infer the needed information from a simulation or a set of examples.

The deductive approach is the opposite.  Perhaps an overly negative view is that instructors who use a deductive approach simply state a principle or concept.  All of the students’ cognitive work is in listening and, perhaps, taking good notes.

Faculty may be skeptical or wary of inductive learning.  It takes considerable time to set up; it seems less efficient.  In contrast, in my experience, faculty commonly engage students in deductive learning.  The instructor presents and explains a concept.  Students take notes.  Lectures are often characterized by the deductive approach.

The inductive method makes use of student inferences. Instead of explaining concepts, the instructor presents students with a model or examples that embody the concept. The student manipulates inputs and ‘infers’ what the underlying rules are.

Instructors who are critical of inductive approaches fear that students will make incorrect inferences.  In my experience, inductive learning is more challenging to facilitate.  It is easier to state facts than to set up examples from which students infer facts – especially given the hazard that students could infer the wrong ones.

In recognition of this, the instructional design pattern called Explore and Validate features a check-for-understanding activity. Explore and Validate is one form of interactive engagement.

## An example

Explore and Validate offers an environment in which students manipulate models or examine examples, draw inferences and check their understanding in some manner in order to validate their conclusions.

For example, students may read cases in which victims express feelings toward their oppressors.  In a deductive approach, the instructor can simply define Stockholm syndrome.  The instructor may explain that hostages afflicted with this syndrome express feelings of empathy toward their captors.  An assessment might ask students to define Stockholm syndrome.  An inductive approach might involve students reading brief summaries of cases in which they “notice” that the victims become empathetic or sympathetic toward their oppressors.  Students can describe the syndrome, offer explanations and even label the syndrome.  The instructor would then contrast the students’ descriptions with a more formalized, clinical description.  The first part of the activity is the explore phase.  The second part is the validate phase.

In our example below, students are told about Isle Royale.  In the early 1900s, moose swam to Isle Royale from Minnesota.  Fifty years later, a pair of wolves crossed an ice bridge to the island from Canada.  In a lesson designed with the Explore-Validate instructional design pattern, an optional strategy is to ask students to think about and predict the outcome of a given scenario – in this example, what happens when a pair of wolves is introduced to an island with a finite number of moose?  Students might conclude that the moose population would eventually be annihilated – but that is not what happened historically.  As students contrast their original predictions with the simulation results, they may be struck by the difference.  As I’ve written many times before, this is cognitive dissonance – and, when applied correctly, it may stimulate learning.  When applied correctly, students will say ‘I didn’t know that’ and want to probe more.  When applied incorrectly, students will simply be overwhelmed and shut down.

The key exploration in the moose-wolf example is with a model.  The model was generated by Scott Fortmann-Roe with a tool called InsightMaker.  InsightMaker is a free simulation and modeling tool.  It is easy to use and yet powerful.  It is cloud-based and works with the LodeStar authoring tool as either embedded content or linked content.  Models created with InsightMaker can be used to promote critical thinking in students.  The model can expose input parameters as sliders.  Students can change the value of an input and see the change in the output after they click on the ‘Simulate’ button.

InsightMaker is made up of stocks, variables, flows, converters and more.  Stocks are simply containers for values such as population.  Variables can hold values such as birth rate, death rate and interest rate.  Flows are rules that can perform arithmetic operations on variables and affect the values in stocks.  Students can click on the flow affecting the value of a stock and see the rules.  They can explore all of the relationships.  In the case of a feedback loop, where the output is combined with the input to affect a new output, students can study the relationships and gain insight into dynamic systems.

Instructors can also simulate the spread of disease through a population.  They can control the probability of infection and the degree to which the population can migrate away from the infected.  They can control the length of infection and the transition to a recovered state.  The instructor can model one person and then generate a population of such persons.
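To give a feel for what such a model computes, here is a deliberately simple discrete-time predator-prey sketch in JavaScript.  The growth, predation and mortality rates are invented for illustration and are not calibrated to the actual Isle Royale data:

```javascript
// A toy feedback loop: moose feed wolves; wolves limit moose.
// All rates below are invented for illustration, not fitted to Isle Royale.
function simulate(steps) {
  let moose = 500;
  let wolves = 20;
  const history = [];
  for (let i = 0; i < steps; i++) {
    const mooseBirths = 0.10 * moose;            // moose reproduction
    const predation   = 0.002 * moose * wolves;  // encounters reduce moose
    const wolfGrowth  = 0.0005 * moose * wolves; // food sustains wolf growth
    const wolfDeaths  = 0.15 * wolves;           // natural wolf mortality
    moose  = Math.max(0, moose + mooseBirths - predation);
    wolves = Math.max(0, wolves + wolfGrowth - wolfDeaths);
    history.push({ moose, wolves });
  }
  return history;
}

// Changing one input (say, the predation rate) and re-running is exactly
// the kind of what-if question the InsightMaker model invites.
const run = simulate(30);
```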

Models are an excellent way to engage students – to get them to explore, to ask what-if questions and to notice patterns.  In public health, students can change the parameters of a specific disease like the Zika virus.  In economics, students can increase supply or demand.  In engineering, students can work with wind resistance models.

With the LodeStar authoring tool, instructors can link to or embed an InsightMaker model.  They can then insert a series of questions to check students’ understanding and provide feedback.  The link below shows a simple example of the Isle Royale model and the Explore-Validate pattern.

Screenshot of an activity built with the LodeStar eLearning authoring tool and the ActivityMaker (Mobile) template

www.lodestarlearning.com/samples/Isle_Royale_Mobile/index.htm

## Conclusion

We have been listening to students.  The way they describe their online learning experience seems pretty humdrum.  Instructors don’t need to rely on publishers to create stimulating interactive lessons.  They can take matters into their own hands with tools like InsightMaker.  InsightMaker fulfills the Explore part of the activity.  LodeStar fulfills the Validate phase.

# 10 Techniques to Engage Students

The instructor as designer recognizes that the online platform can do more than simply compel students to read, watch, and listen. With carefully designed learning activities, instructors can engage students in explaining, categorizing, inferring, applying, solving problems and more. In short, rather than simply reading content, students can be working and interacting with content in meaningful ways.

Our audience is the instructor as designer. We hold on to the hope that online learning won’t be commercialized to the point of reducing instructors to proctors. We value the instructor as designer because no one understands his or her students’ needs better than the instructor. And, although developing online learning may be time consuming, it’s a lot of fun. Who wants to delegate that entirely to publishers?

Screenshot of the LodeStar eLearning authoring tool

In recent posts, we’ve been thinking and writing about larger design concepts that help instructors to engage students. This article, in contrast, surveys a range of techniques supported by the LodeStar eLearning authoring tool — sometimes in minute detail — that sharpen the edge of a well-designed activity and make it more effective.

Unless you follow the development of the LodeStar authoring tool very closely, some of the items below may come as a surprise to you. Some of these items are simply techniques that will enhance your online learning projects; other items are larger in scope.

Let’s start with some simple techniques and then work our way up.

Introduction

The LodeStar eLearning authoring tool offers a range of templates that help instructors build online activities. The ‘Swiss Army Knife’ of templates is the ActivityMaker template. If you wish to get a good sense of what ActivityMaker can do, visit our post at https://lodestarlearn.wordpress.com/2015/01/03/12-ways-to-engage-students-with-elearning-authoring-tools/

This article (not the link) dips into some of the settings of the authoring system that can affect student engagement in a significant and meaningful way. Each of the techniques is numbered and listed below.

#1: Link to other pages in a project

Knowledge is an interconnected web of informational, procedural and conceptual relationships. Sometimes, we want our designs to pick a ‘happy path’ through a topic’s complexity. We want students to start with a limited number of simple propositions or declarations and build up to a more complex understanding of the subject matter. Sometimes we want students to infer generalizations from the information that is presented to them in a relational manner.

A LodeStar activity can be a linear progression through content and application or it can be an interconnected website or both. A LodeStar activity can have navigational buttons that step a student through the content or it can turn off navigational buttons. Students can navigate through links, table of contents, or branching (discussed later).

For example, instructors may want to create a menu to give students a choice of content. The menu page in the ActivityMaker template is restricted to four menu options and may, therefore, not be satisfactory. What if an instructor wanted five or … ten menu items?

What if an instructor simply wanted to link together pages in a LodeStar activity? Fortunately, the technique is simple in LodeStar 7.2 build 12 or later.

To make this work, be sure to give each of your pages a unique page ID. Once you have given your pages page IDs, select the text that you wish to convert to a link and click on the link button in the editor. The pull-down menu will reveal page IDs to you in both name and numerical format. Select the page that you want linked.

Of course, you are not restricted to pages within LodeStar. You can link to anywhere on the internet.
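The idea behind page IDs and links can be sketched in code. The page names, `build_menu`, and `resolve_link` below are invented for this sketch — they are not part of LodeStar — but they illustrate the two notions at work: every page has a unique ID, and a link target either matches an internal page or is treated as an external URL. A menu built this way is simply a list of links, with no inherent four-item limit.

```python
# Illustration only: resolving link targets against a project's pages.
# The page names and helper functions are invented; they do not mirror
# LodeStar's internal format.

pages = {
    "intro":   "Welcome",
    "topic_1": "Stocks and Flows",
    "topic_2": "Feedback Loops",
    "topic_3": "Disease Models",
    "summary": "Wrap-Up",
}

def build_menu(page_ids):
    """A menu is just a list of links to unique page IDs."""
    return [(pid, pages[pid]) for pid in page_ids]

def resolve_link(target):
    """An internal page ID jumps within the project;
    any other target is treated as an external URL."""
    return ("internal", target) if target in pages else ("external", target)

menu = build_menu(["topic_1", "topic_2", "topic_3"])
```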

#2: Show Linked Pages as Overlays

So now that we know how to create links to pages within LodeStar, let’s see what more we can do. Typically, links cause the program to jump to the linked page. If designers check ‘Show as Overlay’, the linked page displays as an overlay. In other words, students won’t jump to the page. The linked content is overlaid on the current page, so students don’t lose their place or the context of the learning.

Note that text pages, with or without graphics, make the best overlays. Other page types are restricted from acting as overlays for technical reasons.

#3: Make Use of the Page Options

Each page type in ActivityMaker comes with various options that will help instructors to enhance the students’ experience.

The speaker icon enables instructors to import an MP3 file. In the audio dialog they can choose to display a player control to pause and play audio. They can also cause the audio to play automatically when the page starts.

Pages with audio look like this:

A small audio icon appears at the top left when audio is available

I’ll review some of the other controls that instructors may see either on a text page or question page or both.

Controls found on the right side of LodeStar pages

The Correct Answer and Incorrect Answer branch icons allow instructors to branch or provide feedback based on overall (page level) correct or incorrect responses instead of answer level option branches.

The Resources check box turns the current page into a resource that can be accessed at any time. Checking the check box causes a button to display at the bottom of the screen (depending on the layout), which will bring up the page as a resource at any time.

Again, text pages, with or without graphics, make the best resources.

The ‘Do Not Display Correct Answer’ option suppresses the feedback that informs the student of the correct answer.

The ‘Use Multiple Choice Radio Buttons’ option converts a multiple-select question into a multiple-choice question. In playback mode, students will see radio buttons rather than checkboxes next to each answer option.

‘Point Value’, of course, assigns points to the current question page.

‘Remove from Flow’ prevents the page from being displayed unless it is branched to.

#4: Use Page Branching to Differentiate Instruction

The following is self-evident and almost foolish to write, if not for common practice: students don’t all learn in the same manner. They don’t share the same level of prior knowledge, aptitude, experience, motivation, etc. A benefit of online learning, which is largely unrealized today, is that we can differentiate instruction based on student choice and performance.

Here is a recital of the various ways that projects created from the ActivityMaker template can differentiate instruction.

• Links to different pages (content areas) offer students choice and a sense of control over their own learning. This is particularly important for adult learners.
• Displaying pages as resources allows a student to summon up page content at any time. The student may be working on a case study and may wish to have quick access to critical information.
• Branches based on performance either at the answer option level, page level or section level. The branch icon appears in LodeStar in various places. We see it next to answer options on the multiple choice question page. That means that a branch option and/or feedback will display if that answer is picked. There are many branch actions. ‘Jump to Page’ is one example.
• Page Level branches follow a branch and/or provide feedback based on overall correct or incorrect responses. For example, in multiple select questions it might be difficult to branch based on any one selection. A page level branch can be based on whether or not the student answered correctly overall.
• Section level branches are accomplished with gates. A ‘Gate’ is an ActivityMaker page type. Gates support all of the branch actions supported by answer level options and more. In other words, gates control program flow. For example, the program can jump to remedial activities or a higher level of challenge.
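The branching ideas above can be sketched in a few lines of code. The page structure, the `answer_branch` and `gate_branch` helpers, and the page names are all invented for illustration — they do not mirror LodeStar’s internal format — but they show the two levels of routing: a branch attached to an individual answer option, and a gate that routes flow based on overall performance.

```python
# Minimal sketch of branch-based differentiation. Everything here is
# invented for illustration; it is not LodeStar's internal structure.

activity = {
    "quiz": {
        "type": "question",
        # Answer-level branches: each option carries its own action.
        "options": {
            "a": {"correct": True,  "branch": "challenge"},
            "b": {"correct": False, "branch": "remedial"},
        },
    },
    "gate": {
        # A gate routes flow based on an accumulated score.
        "type": "gate",
        "threshold": 1,
        "pass": "challenge",
        "fail": "remedial",
    },
}

def answer_branch(page_id, choice):
    """Follow the branch attached to the chosen answer option."""
    return activity[page_id]["options"][choice]["branch"]

def gate_branch(page_id, score):
    """Section-level branch: jump based on overall performance."""
    gate = activity[page_id]
    return gate["pass"] if score >= gate["threshold"] else gate["fail"]
```

A strong student is routed to a higher level of challenge; a struggling student is routed to remedial activities, exactly the kind of flow control the gate page type provides.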

#5: Use Video to Bring a Project to Life

Even though the well-known educational psychologist Richard E. Mayer observes that we don’t fully understand the role of video in online learning, he acknowledges that it plays an important role.

In our view, short videos can bring an online learning project to life. They can bring experts to the course site; provide students with an audio-visual look at phenomena in, perhaps, a more efficient way than text and graphics; and they may be the preferred mode of learning for many students.

LodeStar supports three important forms of video.

One, the ActivityMaker template video page supports YouTube Video. Rather than fussing with embed codes, instructors can simply paste a YouTube URL into the main field. Although the LodeStar previewer doesn’t display YouTube videos, instructors can preview videos by launching the project in Firefox. The Firefox browser supports the preview of local resources. Other browsers do not.

Two, ActivityMaker enables an instructor to link an MP4 video file that is available by URL over the internet. LodeStar supports merging an MP4 video from an internet source with a WebVtt (.vtt) caption file imported into the project.

Three, ActivityMaker enables an instructor to import an MP4 file into the project.

#6: Use Flashcards to Help Students Remember

When students struggle to remember a term or definition, it increases their cognitive load and makes the assimilation of new information more taxing than it need be. Many strategies help students remember information. The use of flashcards is but one example.

ActivityMaker supports Flashcards. In other words, Flashcards are part of the Swiss Army knife that ActivityMaker represents. The positive side is that a Flashcard activity can be blended with other pages that engage students in such things as video, text and graphics and checks for understanding. The negative side is that instructors have found it challenging to set up the gates that are needed for incorrectly answered flashcards to be returned to the queue.

LodeStar now offers the Flashcards template. Instructors will find the gates preset correctly. Instructors need only add the instructions, fill in the first card and add more. This template is still based on the ActivityMaker template. That means that instructors can add other page types and benefit from the full functionality of ActivityMaker.
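The gate behavior the Flashcards template presets — returning incorrectly answered cards to the queue — can be sketched as a simple re-enqueue loop. The deck contents and the `grade` callback below are invented for illustration; the sketch only shows the underlying idea.

```python
# Sketch of the flashcard 'return to queue' behavior: missed cards
# are re-enqueued until every card has been answered correctly.
# The deck and the grading callback are invented for illustration.
from collections import deque

def drill(cards, grade):
    """Present each card; re-queue any card graded incorrect.
    grade(term, definition) returns True for a correct answer."""
    queue = deque(cards)
    attempts = 0
    while queue:
        term, definition = queue.popleft()
        attempts += 1
        if not grade(term, definition):
            queue.append((term, definition))  # try again later
    return attempts

deck = [("stock", "a container for a value"),
        ("flow", "a rule that changes a stock")]
```

The loop ends only when the queue is empty, so every missed card comes back around — the same effect the preset gates achieve in the template.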

#7: Use Instructional Design Patterns (compound strategies) like WebQuests

In previous articles, we introduced the concept of instructional design patterns. If you missed the articles, start with https://lodestarlearn.wordpress.com/2015/05/04/instructional-design-patterns/

WebQuests are one example of an instructional design pattern. WebQuests are an inquiry-based format, first introduced by Dr. Bernie Dodge at San Diego State.

LodeStar now offers the WebQuest template to make it easier for faculty to build WebQuests and export them to learning management systems. WebQuests are extremely popular in K-12, but they show great promise in higher education. In brief, a WebQuest sends students out into the internet with a purpose. A WebQuest defines a task for students to complete, often in groups, and then spells out a process for completing the task. A WebQuest offers a finite set of links as resources that have been vetted by the instructor or ‘an expert’.

For a closer look at WebQuests in the area of nursing education, view the following quantitative and qualitative study submitted to the International Journal of Nursing:

http://aripd.org/journals/ijn/Vol_1_No_1_June_2014/4.pdf

The LodeStar WebQuest template is also based on ActivityMaker. That means that WebQuest authors have the full range of ActivityMaker capabilities open to them. At the same time, instructors do not need to complete all of the setup required for a WebQuest. That is all done for them in the WebQuest template.

#8: Use eBooks (epub 3)

If you missed our article on Open Textbooks and ePub, you’ll want to visit the following link:

https://lodestarlearn.wordpress.com/2015/07/29/open-textbooks-and-epub/

In short, LodeStar enables instructors to author their own eBooks. Authoring an eBook may seem daunting, but the epub3Maker template supports not only full eBooks but also works much smaller in scale, such as white papers, pamphlets, lab manuals, course introductions … whatever.

#9: Jazz up your Activities with Themes and Layouts

LodeStar now supports both themes and layouts. Themes enable instructors to choose from a number of color coordinated presets. Theme is mostly about color. Theme controls the colors of the header, footer, content area and HTML background. Instructors can even click on the advanced button in the ThemesManager dialog and create their own header and footer gradients.

Themes have been around in LodeStar for a long time. Layouts are relatively new.

Layouts enable instructors to choose from a range of presets that affect the position of the header, footer, content body, table of contents and any gadgets that are used in the project.

With the combination of Themes and Layouts, instructors can create a unique look for their projects.

#10: Infographics

Infographics can play a number of roles in eLearning. They can provide data in a pleasing pictorial format through the use of headlines, graphs, symbols and images. They can outline a topic of interest to help students organize material and understand up front what some of the key points will be. They can be used to assess students when instructors invite students to generate their own infographics to communicate their understanding of a key issue or concept.

Here is a typical infographic:

http://elearninginfographics.com/elearning-statistics-2014-infographic/b

Here is one more to look at.

The following infographic was created in LodeStar and combines a Prezi style presentation with an infographic style of presentation. It introduces seven phenomena that we are paying close attention to:

www.lodestarlearning.com/samples/Ten_Trends_Infographic/index.htm

Conclusion

Instructors and students benefit from LodeStar’s rich array of options. Instructors can choose from a variety of templates. The ActivityMaker template offers an array of page types. Page types can offer an array of options. All of this helps the instructor create a rich and engaging experience for students.