Simulations on a Shoestring

Robert N. Bilyk
President, LodeStar Learning Corporation

Introduction

With artificial intelligence, new tools, and new standards, a sea change is underway that improves our ability to create simulations efficiently.  I’ll take a moment to reprise my summary of the benefits and types of simulations and then dive into what we can do now that we could not do before — efficiently.

Why simulations

Simulations help learners draw upon their knowledge to make decisions in a ‘real-world’ setting.  They stimulate recall of recent learning and past knowledge.  They demonstrate how knowledge can be applied.  They can simplify situations to support learning or they can tax learners with the complexity of challenging environments.

The term simulation can apply to a broad range of instructional activities.  It can mean role-playing, a virtual lab, a decision-making scenario, a game, a diagnostic exercise, an immersive reality environment, and more.  A simulation can range from a simple demonstration of how things work, like an electrical circuit, to a complex problem that invokes the learner’s critical thinking and analytical skills.

I’ll limit this discussion to decision-making scenarios, which can include simple simulations (or ShortSims) and highly complex interactive case studies. 

A mindset shift

AI, new tools, and new standards dramatically lower the cost of building simulations.  But it takes more than technology to turn the tide.  Christopher Lind (host of Future-Focused1) and Michael Vaughn (CEO of The Regis Company) captured the issue in their recent conversation about simulations.

Michael Vaughn has been working in this space for years.  In his conversation with Christopher Lind, he notes that:

“The first thing we realized when we put instructional designers into it [simulation] the reaction is always the same, they loved it because it was fresh, it was new, it was a new way of thinking about building things, and all that stuff was really exciting and then when they had to sit down and actually start building a simulation it was almost the deer in the headlights.”

He and Christopher agree that a lot of design has really been about information architecture and not the design of learning experiences.   A simulation is a learning experience. They acknowledge that for years building simulations seemed beyond the capability and budget of learning and development professionals.

But that is no longer true. 

A simulation project

For my courses at Metropolitan State University in Learner Experience Design3, I have been contemplating a simulation that helps students apply what they’ve learned from my presentations, the texts, and the examples.  So, I’ve been building a prototype.  I’ll describe the prototype, what skills I needed, and how new tools and standards have helped me.

Let’s begin with the purpose of the simulation.

In my simulation I challenge students to develop training that helps a company solve a problem.  As Paul Matthews2 writes, sometimes learning experience designers need to play the role of a performance consultant and dig for the root cause of a performance issue.  In my simulation, you are asked to develop training for a fictitious company that installs and services residential solar panels.  The first tier of technical support for customers is the customer service technician.  If the tech can’t solve the problem over the phone with basic problem-solving strategies and tools, then a truck and field technician are dispatched to the customer site.  The company’s goal is to reduce what it calls truck rolls and, in doing so, save money.

The simulation randomly selects a scenario from a pool of scenarios, each with a different root problem.  Learners must interview technicians, the L&D director, and the dispatcher.  They must look at field notes and the tool metrics to decide on the learning outcomes that will make a difference.  Finally, they pick from a list.  The outcomes on the list are a strong or weak match depending on the randomly selected scenario.

Let’s review some of the actions that are critical to an interesting simulation.

Defining a Challenge

So, the first mindset shift is to think in terms of a problem and a challenge to present to the learner.  That’s a little different than a presentation on an ADDIE Analysis or a presentation on Design Thinking Empathy.  We’re not developing a presentation that competes with a hundred YouTube videos.  We’re developing an experience.

Creating Context

It’s been more than a year since AI was democratized by ChatGPT, DALL-E, and Bard.  Now there are dozens of tools that allow us to generate text and images, convert text to speech, synchronize animations with sound, and more.

For my prototype, Bing Image Creator generated images related to solar panel installation.  I tried to choose images that were illustrations in a consistent style.  That proved to be very challenging, but the tools improve each week and, at this stage, I’m only developing a prototype.

I used ChatGPT to help generate common problems that homeowners experience with their solar panels.  I didn’t use the ChatGPT content directly, but it gave me ideas to dig into.  The problems can range from decreased energy production to complete system failures.  I gathered information from ChatGPT, solar panel installers, and my own experiences.

Graphics generated by DALL-E

I asked Bing Image Creator (powered by DALL-E) to generate images of a customer service timeline and specific pieces of equipment like the inverter – a device that converts the direct current produced by solar panels into the alternating current used in the home.

I assembled these images in my video editor and combined them with a script that was fed into a text-to-speech synthesizer.  I used Amazon Polly at no cost, but there are dozens of inexpensive synthesizers available.  They use AI to produce natural-sounding speech.  The inexpensive ones aren’t perfect, but for prototyping or for projects on a shoestring budget, they may be acceptable.

Creating randomly generated scenarios with datasets

I used the LodeStar Dataset Widget to generate data.   I first defined the properties or data fields (upper part), clicked the ‘Generate’ button to create the dataset and then filled in the values.   In the screenshot below, I picked problem 1 and then identified the media that represents what each of the employees will say.  Three of the fields also control the level of tool usage that is displayed in the scenario screen.

Dataset widget in LodeStar enables author to both define a dataset and then populate it

Scripting Randomization

In my case, LodeStar randomly pulls from the dataset so that these characters say something different each time, providing clues to the root problem.  The level of usage of the tools (KMC, Dashboard, and Ticketing system) is also drawn from the dataset.

This is accomplished with a single line:

var scenario = getDataSet("scenarios", "-1");

The -1 causes the engine to randomly select from a group of scenarios.  Each subsequent call will return a new dataset from the remaining unused datasets.

LodeStar makes it easy, but the same thing can be accomplished with script that essentially does these things (a sketch follows):

  1. Describes the scenario data in a machine readable format
  2. Adds each scenario to a collection
  3. Generates an unused random number and uses that to pull a scenario from the collection

Once a scenario is randomly selected, all of the elements give clues to the underlying problem: employees, tool metrics, and field notes
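
Here is a minimal JavaScript sketch of those three steps outside of LodeStar.  The scenario fields below are hypothetical stand-ins for the dataset properties described earlier.

// 1. Describe the scenario data in a machine-readable format
var scenarios = [
  { problem: "faulty inverter", kmcUsage: "low", dashboardUsage: "high" },
  { problem: "panel shading", kmcUsage: "high", dashboardUsage: "low" }
];

// 2. Add each scenario (by index) to a collection of unused entries
var unused = scenarios.map(function (s, i) { return i; });

// 3. Generate an unused random number and use it to pull a scenario
function nextScenario() {
  if (unused.length === 0) return null;           // every scenario has been used
  var i = Math.floor(Math.random() * unused.length);
  return scenarios[unused.splice(i, 1)[0]];       // remove the index and return its scenario
}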

Using Voice Synthesis and Character Puppets

I used voice synthesis and animations to represent the technicians, dispatcher, and Learning and Development director.

I had a little fun with the next step.  I added character puppets. 

Character puppets are composed of rigging.  Character rigging is a skeletal structure made of bones and joints.  The puppet creator skins the structure with artwork.  The more rigging a character has, the more complex and subtle its movements can be.

The puppet artists create the skeletal structure, the joints and the artwork for these puppets as well as the visemes. A viseme defines the formation of the face and mouth while a person is speaking. 

Adobe Character Animator imports a puppet created by GraphicMama.com

Here is where AI comes into play again.  AI supports the analysis of the audio file and chooses matching visemes.  In short, lip-synching.  In the past, there were several hokey tools that attempted lip-synching.  In the 90s, when I lip-synched characters to recorded speech, I needed to painstakingly morph visemes in a complex tool like 3D Studio and its predecessor.  Today, the technology has improved, and the effect appears less and less mechanical.  In the coming years, along with the use of avatars, this technique will become more and more refined.

Character tells his part of the story.

Displaying randomly generated elements

I displayed a list of learning outcomes in random order.  I will challenge learners to choose, out of a list of 20, the top five outcomes that address the underlying training problem.  Their answers are evaluated based on the current scenario and the outcomes that are listed in the dataset.  An outcome might be correct in one scenario and completely wrong in another.  Just like real life.  Also, as in real life, some of the information provided by the employees might be misleading.  One technician may be performing correctly while the others may lack the knowledge, opportunity, or motivation to perform correctly.

Learning outcomes are randomly ordered and judged according to the currently selected scenario

Coding for greater control

I added code (simple script) to evaluate the learner’s answers based on the scenario rather than answers that are hard-coded to be right or wrong.

LodeStar provides a simple scripting language that makes it fairly easy to evaluate the learner’s answers.  For designers who wish to avoid scripting altogether, LodeStar offers a Gate page type and branching options.  The branch can direct learners to a batch of questions and answer options that are aligned to the scenario.

The use of simple scripting enables the designer to create one engine that will evaluate learner choices based on the scenario.    The use of branching enables the designer to create sets of questions and match those sets to each scenario.

In the example of simple scripting, the code would do the following (a rough sketch follows the list):

  1. Ensure that the learner only selected five outcomes.
  2. Retrieve the dataset and parse out the right learning outcomes for that dataset.
  3. Loop through the outcomes and check if each outcome is contained in the set of the learner’s choices.  If yes, increment the score and change the displayed outcome to green.
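
Here is a plain-JavaScript approximation of that logic.  LodeStar Script itself offers higher-level functions; the highlight helper and the scenario’s outcomes field below are hypothetical stand-ins.

// Hypothetical stand-in for changing an outcome's display color
function highlight(outcome, color) {
  console.log(outcome + " -> " + color);
}

function evaluateChoices(scenario, choices) {
  // 1. Ensure that the learner selected exactly five outcomes
  if (choices.length !== 5) {
    return "Please select exactly five outcomes.";
  }
  // 2. Parse out the right learning outcomes for this dataset (hypothetical field)
  var correctOutcomes = scenario.outcomes;
  var score = 0;
  // 3. Loop through the outcomes and score each match
  for (var i = 0; i < correctOutcomes.length; i++) {
    if (choices.indexOf(correctOutcomes[i]) !== -1) {
      score += 1;
      highlight(correctOutcomes[i], "green");
    }
  }
  return score;
}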

Here is an example of LodeStar Script accomplishing the above.  LodeStar Script follows JavaScript in the way that it declares variables, evaluates conditional statements, and loops through an array.  But LodeStar Script is not JavaScript.  It features high level functions that make it very easy to perform tasks, change properties, animate, play sound and video, and fifty other things.

Trainers and instructors can start with simple branching and slowly build on their scripting ability to increase their power to generate dynamic simulations that change every time the learner returns to them.

LodeStar Script also supports the generation of xAPI statements as do the branching options.

LodeStar Script offers high level functions that make it easy to perform tasks. Dozens of lines of code are reduced to a single line.

Controlling animations programmatically

In my simulation, I wanted a visual indicator of how the learner was performing.  I created a meter with the SVG editor, an animation timeline with the animation editor, and then controlled the animation with some script.  I also programmatically controlled the usage graphs for each of the tools – again, based on the dataset.

Simulations are often supported with animation that can be programmatically controlled.    The following screens show how I did this in LodeStar.

First, I created the meter with five bars in the SVG editor.  I also added a triangle indicator.

The SVG editor enables the creation of SVG elements that are assigned a unique ID

Then I opened the animation editor and changed the timeline for each bar.  I made the first bar fully opaque at 1 second, the second bar at 2 seconds and so on.

If a learner performed at 60 percent, the first three bars would be fully opaque.  The triangle indicator would appear just right of center.

The SVG graphics are brought into the Animation Editor and animated.

Now, I needed to control the animation playhead.  Once I evaluated the learner’s score, I did the following (a sketch follows the list):

  1. Set the system score for the user.
  2. Set the total score for the user.
  3. Move the animation playhead to the 60 percent mark with the following function: changeAnimation("meter", 0, percentage).
  4. Change the text to show 60 percent.
  5. Disable the check button.
  6. Send an xAPI statement to the Learning Record Store that states what outcomes the learner chose for the given dataset problem.

LodeStar Script can be tested inside its Script Editor
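
Assuming the score has already been computed, the sequence might be sketched like this, to run inside LodeStar where changeAnimation is available.  changeAnimation is the documented function shown above; setText and disableButton are hypothetical stand-ins for the text and button steps.

// Drive the meter once the learner's score is known
function showResult(score, total) {
  var percentage = score / total;                 // e.g., 0.6 for 60 percent
  changeAnimation("meter", 0, percentage);        // move the playhead to the 60 percent mark
  setText("scoreLabel", Math.round(percentage * 100) + " percent"); // hypothetical helper
  disableButton("check");                         // hypothetical helper
}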

Conclusion

Simulations are an effective tool to engage your audience and promote learning.  With artificial intelligence, we can research and write content, generate images, synthesize speech, and synchronize speech to lip movements.  With advances in HTML standards baked into authoring tools like LodeStar, we can generate datasets and randomly draw from them.  We can also create SVG graphics, animate them, and control the animations programmatically.

The biggest challenge is in finding authors who are willing to learn these techniques and use them to engage learners.  Thinking back on Christopher Lind and Michael Vaughn’s comments, we can become more than information architects.  We can become learning experience designers. 

References

  1. Future-Focused with Christopher Lind
    https://christopherlind.substack.com/
  2. Paul Matthews
    https://paul-matthews.com/
  3. Metropolitan State University, Learner Experience Design Graduate Certificate Program
    https://www.metrostate.edu/academics/programs/learner-experience-design-graduate#about

A Natural Experiment: Uncovering online learning’s untapped potential

Robert N. Bilyk
President, LodeStar Learning Corporation

Introduction

Natural experiments provide a unique opportunity to gain insight from the effects of naturally occurring phenomena such as new policies, environmental hazards, public health interventions, and other changes that are outside the control of the investigator.  This post briefly introduces a natural experiment in online education, summarizes a key finding, and then provides a detailed set of strategies to address the finding.

Background

During my time as Director for the Center for Online Learning, I was privileged to contribute to a research article authored by Dr. Reynold Galope and Dr. Daniel Woldeab, titled Exploiting a natural experiment in assessing student learning outcomes in public and nonprofit administration: A demonstration1.

The research paper describes a natural experiment to investigate whether online graduate courses are, at least, on par with their face-to-face counterparts.

Natural experiments exploit naturally occurring opportunities for comparison.  For example, a natural experiment might compare different regions with varying levels of exposure to lead or air pollution to provide insights into the impact of pollution on health.   Obviously, it would be unethical to expose one population to increased lead or air pollution levels in a traditional experiment.  When the differences happen naturally, researchers can study the impacts.

When investigating the effectiveness of online learning compared to face-to-face, researchers are challenged to filter out the student characteristics that lead them to choose online courses. 

In our research the independent variable was the course mode – online versus face-to-face – and the dependent variables were student scores related to several outcomes. In research, the causal effect of an independent variable must not be confounded by other factors.  Online can’t be considered better than face-to-face if online courses are selected by smarter, more organized, and more confident students.  The research design must eliminate rival explanations for the outcomes.

Our Natural Experiment – Lessons Learned

Our natural experiment benefited from a unique situation in Metropolitan State University’s Master’s program for Public and Nonprofit Administration (MPNA).

The situation was that all newly admitted MPNA graduate students were required to register for MPNA 600 in their first semester.  MPNA 600 teaches research methods and statistics. In the year of the research study, the course was offered in an online format in the fall and as a traditional, face-to-face course in the spring of the same academic year. Both versions of the course were taught by the same instructor. 

The researchers didn’t have any reason to believe that students who were admitted in the fall were any different from those who were admitted in the spring.  As the authors stated, one group was unlikely to be more intelligent, more motivated, more experienced, or more technologically savvy than the other group.  In short, this unique situation (wherein the department required students take the course in their first semester and the two versions were taught by the same instructor) rules out several confounding variables.

You can read the research paper to learn more about student outcomes in each of the course delivery modes.   For the most part, the two formats were statistically equivalent.  One of the outcomes of the course was a research paper. In terms of its quality, online students performed better.  But in statistical analysis, the technical part of the applied research methods course, face-to-face students performed better.

Outcome variable          | Completely face-to-face | Completely online | Difference in outcomes | p-value
Statistical Data Analyses | 93.6                    | 89.0              | 4.6                    | 0.0350 (significant at the 5% level)

It is the latter that interests me the most: the lower score in statistical data analyses for online students. I believe this uncovers online learning’s untapped potential.

In the past, I’ve often been surprised by studies that also show no difference in student performance between online and face-to-face courses.  I’m surprised because I’ve observed that many of the opportunities for deep, significant learning in online learning are simply missed. 

In my view, asynchronous online learning has its advantages.   Students can choose when to study or when to read or watch presentations.  It’s convenient. They save time by not driving to class, looking for parking, walking across campus.  They can turn up the volume, drink coffee to stay awake, take care of the kids, participate in online discussions, post questions, reduce their exposure to embarrassment, avoid discrimination, look up prerequisite information, and so on.

Face-to-face, of course, has its own advantages.  It imposes an order that benefits learners who aren’t self-directed or well organized.  You must be in class and complete examinations at an appointed time.  It offers the opportunity to discuss topics with others at the same time, in the same location.  It offers good teachers the opportunity to practice a variety of instructional techniques in their comfort zone: warm-up, cool-down, humor, Socratic method, pair and share, and more.  I’m not suggesting that any of these things can’t be done online.  In my experience, too often, they simply are not done online.

More to the point, we have a great opportunity for online to promote deep learning of difficult concepts. 

Here is a sample listing of what can be done online to promote an understanding of statistics and research methods.  I’ll tease out two topics in statistics to illustrate my points: confidence intervals and standard error.

Addressing the gap: Online strategies to promote learning

It’s not uncommon for online instructors to create a resource module and include a list of resources for their course.  The following strategies raise that practice to a higher level.  These are resources that are carefully selected and organized with instructional intent to promote understanding, application, analysis, and synthesis.  I’ll show concretely how this can all work together for the benefit of the student.

Also, these strategies apply to any online course.  Because of the study, I’m using statistics to show concrete examples, but the overall strategies are important for all subjects.

I will arbitrarily pick out a couple of topics in statistics that are often germane to understanding statistical analysis.  The first example is confidence intervals. 

Let’s identify the various strategies to promote understanding of confidence intervals and drill deeper into some of them.

  • Online Tutorials
  • Interactive simulations
  • Data Visualization tools
  • Word problems
  • Online forums
  • Case Studies and applied examples
  • Open Educational Resource textbooks
  • Gamification

Online Tutorials

The online instructor can curate online tutorials that cover confidence intervals.  A Khan Academy video, for example, jumps right into the mathematical procedure for calculating confidence intervals.  Dr. Nic’s Math and Stats offers an overview of the purpose of confidence intervals. 

In a face-to-face course, if confidence intervals were prerequisite knowledge, I might be embarrassed to stop the professor and ask for the meaning of a confidence interval when it came up in the lecture.  Even if an explanation were offered, I would only understand it if I understood these prerequisite concepts:

  • Sampling error
  • Variation
  • Standard deviation

An online instructor can do an analysis of the topic and offer online tutorial resources that not only cover the topic but its prerequisite concepts.  One might object that this can go on indefinitely.  To understand standard deviation, you might need to understand variation, mean, and so forth.  Oftentimes, however, the tutorials do cover the prerequisite information.  Dr. Nic’s Math and Stats is a good example of that.

Properly selected and organized, these resources can be a powerful tool.  Dr. Nic provides the conceptual background.  Sal Khan provides the step-by-step calculation.

Interactive simulations

Desmos, an interactive learning platform, provides a fun way to play with statistics.  For confidence intervals, we can use a graph to see the confidence interval shrink as the standard deviation decreases.  Wide variation requires a wider range to maintain the same level of confidence.  Rather than just staring at a math formula, the student manipulates parameters to gain an understanding of how measurements like sample size, standard deviation, and confidence interval are interrelated.

Tutorials combined with simulations provide us with a basic understanding, a procedure to calculate a confidence interval, and an interactive to display how changes in standard deviation and sample size can affect the interval size.

Desmos Interactive
A decrease in Standard Deviation narrows the confidence interval.

Data visualization tools

Tools like Tableau, Power BI, and Google Data Studio enable students to create interactive and dynamic visualizations, helping them understand how to represent and interpret data effectively.  Available to any student is Google Sheets, which provides statistical functions to create your own confidence interval calculator and plot the outcome in a graph.  These activities lead to a conceptual and procedural understanding of the topic.

Google Sheets with Functions

For example, Google Sheets has the T.INV function, which gives the t value for sample sizes of less than 30.  With Google Sheets, a student can create their own confidence interval calculator.
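
For reference, such a calculator implements the standard t-based confidence interval:

$$\bar{x} \pm t_{\alpha/2,\, n-1} \cdot \frac{s}{\sqrt{n}}$$

Here the mean, sample standard deviation, and sample size come from the AVERAGE, STDEV, and COUNT functions, and T.INV(0.975, n-1) supplies the t value for a 95% interval.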

Real data sets

Kaggle is one site that provides real data sets from a variety of studies.  An example would be a data set of students with both test scores and reported numbers of study hours.   The online instructor can devise an assignment that requires students to calculate a correlation coefficient between study hours and scores.

Pearson’s Correlation Coefficient Formula
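
For readers without the image, the standard formula is:

$$r = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_i (x_i - \bar{x})^2}\,\sqrt{\sum_i (y_i - \bar{y})^2}}$$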

Or the instructor can ask students to take samples from subgroups and calculate intervals that give us confidence that the true mean falls within a range of the sample mean.  Both of these activities reinforce the concepts and procedures.

Kaggle Dataset

Word Problems

As stated in How Learning Works2, to develop mastery, students must acquire component skills, practice integrating them, and know when to apply what they have learned.

Here is where interactive word problems can excel. With tools like Storyline, Captivate and LodeStar, an instructor can easily set up word problems that are randomly generated and present a different problem to the learner on each visit. 

I believe online instructors should learn an authoring tool and deepen their toolchest of strategies.  I write this not because I build authoring tools.  Rather, I build authoring tools so that instructors can deepen their toolchest of strategies.

In this example, using LodeStar, the author follows an elaboration strategy.  The target skill is the ability to calculate standard error given a dataset.  In the first problem, the dataset is limited to three numbers. The mean and standard deviation are given.  The student must divide the standard deviation by the square root of the sample size and present a number with two digits of precision.

In the second problem, the author hides the mean and standard deviation.  The student must calculate those values from the dataset.

In the third problem, the author presents a slightly larger dataset with no other values shown.  The student must calculate the mean, then the standard deviation and then the standard error.

When a student answers incorrectly, the problem is shown again with a new set of numbers.
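
As a quick check of the arithmetic, take a hypothetical three-number dataset of 4, 6, and 8.  The mean is 6 and the sample standard deviation is 2, so the standard error is:

$$SE = \frac{s}{\sqrt{n}} = \frac{2}{\sqrt{3}} \approx 1.15$$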

LodeStar’s Word Problem with data set

There is a variety of additional resources that an instructor can organize online.  They can include links to online statistical tools, online statistics community forums, OER textbooks, podcasts, online statistics quizzes, and games.  I’ll dive into the last one.

Gamification

The screenshot below shows a mock-up of a game that places the learner on a pharmaceutical board where they must decide what experimental diet drug to support and fund.  Each drug is tagged with an information icon that displays a set of data from a clinical trial.  The learner must apply statistics to determine which drugs have been effective in reducing weight.  In the end, the learner’s statistical analysis is compared to an expert’s analysis.  The game can offer three levels, each level representing an increase in challenge.

The game can be entertaining, but it can also be an opportunity to integrate concepts that include central tendency, variance, standard deviation, confidence intervals, comparison of means, p-value, and more. 

In summary, online modules can practice students on component skills and then offer gamified environments or simulations that combine component skills.

LodeStar’s Interactive Graphic with dataset

Conclusion

Our natural experiment showed that online learning students performed more poorly than face-to-face students in the technical area of statistical analysis.  I outlined online strategies that can close the gap in performance. 

We must continue to help faculty categorize student outcomes and decide what activities are best suited to promote those outcomes.  One video might help us get a bird’s-eye view of a statistical analysis: its purpose, its related concepts, and procedures.  Another video might help students walk through a calculation of the statistical measurement.  The two videos align to different student outcomes.

We must also reinforce some basic principles of how learning works.  One activity might practice a student on a component skill; another activity might help integrate several component skills.  One activity might be a word problem with changing values; another activity might be a case study that involves multiple calculations and a holistic understanding of statistical concepts.

Online learning attracts students for a variety of reasons, including its flexibility.  I look forward to online learning fulfilling its original promise of individualizing instruction and adapting to learner needs. 

Ultimately, online learning versus face-to-face may prove to be a false dichotomy.   Real-time lectures are being zoomed into the home; online strategies are already supporting flipped classrooms.  The strategies presented in this post will certainly improve online learning and, possibly, all modes of delivery.

References

  1. Galope, R., Bilyk, R., & Woldeab, D. (2023). Exploiting a natural experiment in assessing student learning outcomes in public and nonprofit administration: A demonstration. Teaching Public Administration. https://doi.org/10.1177/01447394231223071
  2. Ambrose, S., Bridges, M., & Lovett, M. (2010). How learning works: Seven research-based principles for smart teaching. John Wiley and Sons.

H5P

Robert N. Bilyk
President, LodeStar Learning Corporation

Introduction

LodeStar Learning now supports the import and playback of H5P activities in its LodeStar eLearning authoring tool.  This is a new feature that is made possible mainly by the ‘openness’ of H5P and its creators. 

This article will unpack the importance of this announcement and tease out all the instructional technologies that make it possible.  I’ll talk about H5P, its openness, its current uses and briefly describe all of the related technologies.  Then I’ll dig into how you can use LodeStar and H5P together.

H5P accordion and Image Collage displayed on LodeStar page

H5P:  What is it?

In their introduction to H5P, the H5P Group (Joubel) acknowledges that

‘There are lots of barriers for creating, sharing, and reusing interactive content. There are copyright issues, technical issues, complex authoring tools and huge problems with compatibility between different types of content formats, authoring tools and publishing plattforms. H5P is breaking down these barriers.’ 1

We’ve heard altruistic rhetoric before from Ed Tech vendors but, in the case of H5P, their claims hold water.  The H5P Group has truly broken down a lot of barriers and that’s evident in the open nature of their practices.

For those new to H5P, here is a quick background: 

H5P is interactive HTML5 content presented in the form of quizzes, timelines, accordions, videos, drag and drop, ordering, interactive videos – in total, more than fifty different activities.

H5P playback is supported by plugins for Drupal, Moodle, WordPress and other platforms.  H5P content can be housed in the H5P.com software as a service (SaaS) for a fee or, with limitations, content can be authored and housed in H5P.org for free.   

Many universities subscribe to H5P.com and support a Learning Tools Interoperability (LTI) connection to the Learning Management System.  If you’re new to the concept of LTI, don’t worry.  Your learning management system administrator will be well-versed on that technology.  It is the primary way that learning management systems integrate publisher material and other add-ons to their system.

H5P captures and communicates learner performance through xAPI.  (I’ll describe xAPI in more detail in the next section.) As of this writing, H5P.com doesn’t include a Learning Record Store (LRS) to capture the xAPI statements, but it passes those statements to platforms that may have their own LRS.

About the connecting technologies  

If you’re new to the concept of xAPI, H5P is a great way to learn more.  For more than twenty years authoring systems like LodeStar, Storyline and Captivate sent learner performance data to their host learning management systems through SCORM.  SCORM is a set of specifications that allow authoring systems to a) package the content so that it will run in a variety of LMSs, b) capture learner performance in pre-defined data categories and send that information to the LMS, and c) sequence content. 

xAPI, on the other hand, is a far more versatile specification.  First, the activity does not need to be housed in the Learning Management System.   It can sit in a repository.  Activities can communicate learning performance data from anywhere in the internet.  Most importantly, xAPI can capture and send anything about the learner performance and not just what fits in pre-defined data categories.  xAPI can capture what the learner experienced, attempted, attended, answered, started, completed, passed, interacted, created and more — in short, anything that the learner experienced. And, of course, xAPI can capture and transmit details of how the learner performed.
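
To make this concrete, here is a simplified example of the kind of xAPI statement an activity might send.  The actor, verb, object, and result are the core parts of a statement; the IDs and values below are illustrative.

{
  "actor": { "name": "Learner", "mbox": "mailto:learner@example.com" },
  "verb": { "id": "http://adlnet.gov/expapi/verbs/answered", "display": { "en-US": "answered" } },
  "object": { "id": "http://example.com/activities/question-1", "objectType": "Activity" },
  "result": { "success": true, "score": { "raw": 1, "scaled": 1.0 } }
}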

H5P uses LTI and xAPI technologies.

The openness and modularity of H5P

Much of the code for many of the H5P activities actually sits in GitHub and can be publicly viewed.  The significance of this is that H5P is being built not only by a single entity but by the instructional community at large.

The H5P architecture is modular.  I’ll illustrate the modularity by describing two different activities and their dependencies.  One of the activities is a simple accordion display of content.  To play back that activity one needs some core libraries plus the following modules:

  • Accordion
  • AdvancedText
  • Fonts

In addition to the core, the libraries weigh in at under 1 MB.

A sorting exercise is a little more complex.  It needs:

  • Sort Paragraph
  • Fonts
  • JoubelUI  (for pop-up boxes and other user interface elements)
  • Transition
  • Question

The two activities together weigh in at 1.6 MB. 

When an author imports an H5P activity into LodeStar, LodeStar handles all of the dependencies automatically.

In addition to the playback of H5P activities, the H5P package also includes the code that supports the editing of the activity.  As of this writing, LodeStar only supports the playback of H5P activities and therefore discards the editing code.  Authors can easily make changes to their H5P activities in the H5P cloud and re-import into LodeStar.

The H5P activity follows a very specific open format.  The H5P.json file lists the main library of code and all of its dependencies (code that the main library needs).  A content folder holds media such as images and the content.json file that holds the content that loads into the activity. In the case of a set of questions, content.json holds each of the question stems and all of the answer options.

The combination of all of these things enables the community of developers to support H5P.  The code behind the activities sits in a publicly visible GitHub repository.  The .h5P file format is simply a zip file that follows a specific folder structure and includes the h5p.json and content.json that I mentioned earlier. 
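
As an illustration, the h5p.json for a simple accordion might look roughly like this (fields abbreviated; the exact values vary by activity and version):

{
  "title": "Sample Accordion",
  "language": "en",
  "mainLibrary": "H5P.Accordion",
  "embedTypes": ["div"],
  "preloadedDependencies": [
    { "machineName": "H5P.Accordion", "majorVersion": "1", "minorVersion": "0" },
    { "machineName": "H5P.AdvancedText", "majorVersion": "1", "minorVersion": "1" }
  ]
}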

Downloading an H5P

I’ve been at this game for a good number of years. So often I’ve listened to vendors describe the ‘openness’ of their systems or outline an ‘exit’ strategy should you choose to switch away from their platform, only to discover that content is, in effect, trapped.  

H5P.com and H5P.org offer a ‘Reuse’ button that sits just underneath the content player.  In this example of an interactive video found at Interactive Video | H5P, when I click on the Reuse button, a dialog offers the option to ‘Download an .h5p file.’  Again, the .h5p format is simply a zip file that contains all of the code that I need.

So let’s see what we can do with the Interactive Video Activity in the LodeStar Authoring tool.  In LodeStar 10 Build 14 (or later), I selected the ActivityMaker template, created a new page and then clicked on the Widgets sprocket in the HTML editor.

Since LodeStar 10 Build 14, the list of widgets includes the H5P widget.

I select the widget and get this dialog:

The H5P widget dialog enables me to import a downloaded .h5p activity.  I click on the ‘Get H5P Activity’ button and select the previously downloaded interactive video activity.

LodeStar then extracts the contents of the H5P file and looks for the list of dependencies in the h5p.json file.  Our program then transfers all of the needed dependencies into a common library so that multiple H5P activities can draw from the common library.  In other words, only the files that are needed are added to the LodeStar content package payload.

Once LodeStar has imported the .h5p activity, I set the following properties as needed:

Point Value: 10

Preferred Height: 500

xAPI Verb: completed

Scoring Type: scaled

Selecting the correct xAPI verb and scoring type needs a little explanation.  First, it is important to note that all xAPI statements generated by the H5P activity are forwarded to the Learning Record Store.  Everything is captured and forwarded.

In addition, for scoring purposes, LodeStar listens to the xAPI events that use the verbs ‘answered’ and ‘completed’.    LodeStar then integrates xAPI performance data into its own reporting mechanism.

For each of these, there are two choices for scoring method: raw (meaning raw score) or scaled (meaning a proportion from 0 to 1).  If raw is chosen, the raw score is added to the accumulated points.  If scaled is chosen, a percentage of the total points is assigned.

Appendix 1 dives deeper into two scenarios that capture H5P scores differently.

Benefits

H5P offers a wide variety of activities.  LodeStar offers a rich framework that supports branching, SVG graphics, animation, a library of functions, and a variety of page types and widgets.  The combination of the two technologies is truly synergistic.

A second benefit applies to those whose institution does not support an LTI connection between H5P.com and their learning management system.  As of this writing, H5P does not support SCORM.   With LodeStar, an author can import an H5P activity and export the entire package in any flavor of SCORM as well as CMI5.

Lastly, a LodeStar activity with imported H5P can be placed in a repository and made freely available to Open Education Resource libraries without the dependency on H5P.com.

Conclusion

LodeStar’s import and playback of H5P activities expands LodeStar’s repertoire of activities.  As importantly, authors benefit from placing H5P activities in LodeStar’s rich framework enhanced by page types, widgets, LodeStar script, media, animation, themes, designs, characters, backgrounds and more.

We look forward to future collaborations and further exploring how LodeStar Learning can leverage the openness of H5P for the benefit of students.

H5P Interactive Video imported into a LodeStar page and displayed with LodeStar background, themes and design style applied.

Appendix 1

Here are the two typical scoring scenarios involving an activity with 5 questions and a ‘Finished’ button:

Scenario 1:

 (In LodeStar the verb and scoring type are set to Answered – Raw)

Total Points is set to 5.

If the learner answered all questions correctly, except for question 3, this is what LodeStar would record.

Question 1 – 1 point

Question 2 – 1 point

Question 3 – 0 points ❌

Question 4 – 1 point

Question 5 – 1 point

Finished – 0 points (finished generally sends an xAPI statement with the ‘completed’ verb.  ‘Completed’ does not match ‘answered’, which is what LodeStar is listening for.)

Score is 4/5

Scenario 2:

(In LodeStar the verb and scoring type are set to Completed – Scaled)

Total Points is set to 10.

If the learner answered all questions correctly, except for question 3, this is what LodeStar would record.

Question 1 – 0 points (individual items generally send an xAPI statement with the ‘answered’ verb.  ‘Answered’ does not match ‘completed’, which is what LodeStar is listening for.)

Question 2 – 0 points

Question 3 – 0 points ❌

Question 4 – 0 points

Question 5 – 0 points

Finished – 4 points (finished generally sends an xAPI statement with the ‘completed’ verb.  ‘Completed’ does match the ‘completed’ setting in the LodeStar scoring method.)

Four points out of five is a scaled score of 0.8, and 0.8 multiplied by 10 total points is 8.

Score is 8/10

Reach out to supportteam@LodeStarLearning.com with any questions.

1  https://h5p.org/about-the-project#

State Response Engine

Robert N. Bilyk
President, LodeStar Learning Corporation

Kjrsten Holt
Faculty, Minneapolis Community and Technical College

Introduction

I’ve written about a ‘Thinking Activities’ continuum that categorizes the skills needed by an instructor or trainer to develop various levels of thinking activities.

At one end of the continuum, trainers and instructors can promote thinking with reading assignments and discussions; at the other end, they can use realistic simulations that immerse students in a real-world environment.

Creating a realistic simulation can be daunting.  It requires time, effort, and a specialized skill.

Therefore, in my opinion, we need to uncover and discover more thinking activities that fall in the middle range – activities that trainers and instructors can create given their time constraints and skill level.   This article proposes one such activity.

An introduction to the State Response Engine

A “State Response Engine” (SRE) randomly selects a “state” or situation and presents responses to learners in the form of actions. This approach is designed to promote higher-order thinking by presenting learners with different scenarios and challenges based on the randomly chosen state. The SRE allows for a variety of responses, some of which can be correct in one situation but incorrect in another. It introduces an element of unpredictability and adaptability to the learning process, encouraging learners to think critically and make decisions based on the presented context.

In a composting problem example, the SRE randomly selects states like “You are presented with bags and bags of brown leaves” or “You are presented with bags and bags of vegetable produce” and then provides corresponding response options. This dynamic and adaptable approach to learning is what defines the State Response Engine, making it an appropriate name for this type of activity. It offers a state-based framework for presenting challenges and responses to learners, promoting engagement and problem-solving skills.

Let’s see one in development from start to finish:

The need

Kjrsten Holt is a faculty member at Minneapolis Community and Technical College.  She teaches web and graphic design.  Originally, she created a ShortSim1 script for her class on web design fundamentals, part of a two-year program in graphic and web design.  This article takes her original design and translates it into a State Response Engine activity.

A little more background in Kjrsten’s own words:

Students in the course are a mix of print design and web design majors, ranging from high-school age to adults with previous degrees. One of the course’s learning objectives is to be able to successfully optimize images for various publishing formats. Upon completing this learning experience, students should be able to choose the correct image size and export format for print images and web images.

The ShortSim strategy was an excellent choice for Kjrsten’s learning outcome because students would be presented with a situation and required to make the right choices related to technique, image size, filetype, compression type, and so forth.

I proposed that we try the SRE strategy to accomplish the same objective and Kjrsten agreed.  The SRE isn’t a better strategy than ShortSim.  It simply offers the following:

  • Easily generated – a little more difficult to set up than a multiple-choice question, but not much.
  • Quickly generated.
  • Largely text-based, thus saving time, although images can be used.
  • Randomly chooses a situation.  The students’ selections are judged right or wrong based on the randomly chosen situation, thus creating another element of challenge.
  • Uses delayed response, which means that students’ responses are evaluated after they’ve completed the entire scenario.  They then receive feedback and are encouraged to read and remember it to improve their score on a second round.

Why is this important?

We need strategies that make students think but don’t require a disproportionate amount of time to generate.    Students get tired of discussions that seem perfunctory.  They also tire of strategies that have worn out their welcome from repeated use.  We need new strategies!

How do we use the SRE strategy?

We’ll use the LodeStar eLearning authoring tool to illustrate the SRE.  LodeStar makes configuring an SRE easy, but with a little ingenuity the same can be accomplished in Storyline or Captivate.

The first step in LodeStar is to choose the Challenger Page Type in the ActivityMaker Template.  The Challenger Page type is based on the SRE strategy.

The Situations (States)

The next step is to construct the randomly selected states.

In our example, the State Response Engine randomly chooses a state from two possible situations: 

State 1:  Your task is to take a portrait of a musical band for a print poster.  Decide what device to take and, if needed, set it to an appropriate resolution.

State 2:  Your task is to take a photograph for a social media post for a band.  Decide what device to take and, if needed, set it to an appropriate resolution.

This is what it looks like from the author’s point of view:

The ‘Situation’ category will either display “Your task is to take a photograph for a print poster.” or “Your task is to take a photograph for a social media post.” 

The ‘Situation’ category can be assigned two or any number of randomly selected states.  This is what it looks like to the student:

In the screenshot, the student has chosen the situation category and is presented with the task as well as some choices to make that are related to type of device and resolution needed for the task.

Based on the situation, the student chooses the appropriate photographic device, sets the correct resolution, and then conducts the photoshoot and editing accordingly.

For example, for a print poster, the correct actions are:

  • Choose an 18-megapixel digital SLR.
  • Set device to shoot at 5184 x 3456 pixel resolution at 300 dpi.
  • Shoot a lot of images (always true).
  • Bring extra batteries or a charge cord (always true).
  • Consider the purpose and goals of the photoshoot. (always true)
  • Convert Image to TIFF.
  • Change color space to CMYK (optional).

The author sets the SRE options from one of the following:

  • Always correct
  • Never correct
  • Correct if dependency exists
  • Not Applicable

This is how it looks:

The selection option is the title:  Choose an 18-megapixel digital SLR.

The option will be displayed under the “Situation” category.  Categories can be steps in a procedure, phases, or any other grouping.

We’ve already crafted our random states and so we select ‘No’ or leave the option blank.

Choosing an 18-megapixel camera depends on the task.  The prerequisite task must be to create a print poster.  This option is only correct if we’re shooting for a print poster. Therefore, the dependency is set to “Take a portrait of a band for a print poster…”

We then assign points.  If the student correctly selects this option, they gain 10 points.

We select the condition for which this choice is indeed the correct response. It is correct if the dependency exists.  The dependency is “Take a portrait of a band for a print poster…”

We indicate any resources that should be made available to support the student in making the right selection.  The resource is another page in the project that displays when we click a resource button.  In the screenshot above, “Resolutions” appears as a resource.

Finally, we provide feedback.  Feedback A is feedback that is displayed if the item is incorrectly selected.  Feedback B is feedback that is displayed if the item is incorrectly omitted.  (The tooltip shown on mouse-hover reminds us of this.)  In our example, if the student doesn’t select an 18-megapixel camera, we provide feedback that the student needs a high-resolution camera.

In our example we kept it simple.  We have 18 selections, which include the two random states.

Optionally, we can add more random states to the other categories.  For example, the SRE can randomly choose good or bad weather conditions.   The choice to use artificial lighting is correct only if a) we are doing a photoshoot for a print poster and b) we’re forced indoors because of the weather.  Here is how we configure the option:

In the above example, using artificial lighting is dependent on the task of taking a portrait for a print poster and shooting inside because of the weather.  If those two dependencies are not present, the selection is judged incorrect.
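
Under the hood, the evaluation amounts to a simple rule check.  Here is a hypothetical JavaScript sketch (not LodeStar’s actual code) of how an option might be judged against the randomly chosen states:

// Each option carries a rule and, optionally, the states it depends on
var option = {
  title: "Use artificial lighting",
  rule: "correctIfDependency",   // or "alwaysCorrect", "neverCorrect", "notApplicable"
  dependencies: ["print poster", "indoors because of weather"],
  points: 10
};

// activeStates holds the randomly selected situation(s) for this run
function isCorrectChoice(option, activeStates) {
  switch (option.rule) {
    case "alwaysCorrect":
      return true;
    case "neverCorrect":
      return false;
    case "correctIfDependency":
      // correct only if every dependency is among the active states
      return option.dependencies.every(function (d) {
        return activeStates.indexOf(d) !== -1;
      });
    default:
      return null;               // not applicable: excluded from scoring
  }
}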

The artificial lighting option was just an example that we did not include.  We kept our prototype simple.   The emphasis was really on resolutions and file types according to the stated learning outcome.

In our example, once the student completes all three categories – ‘Situation’, ‘Photoshoot’, and ‘Editing’ – they are shown the ‘Submit’ button.  When the student submits, the SRE offers feedback.  Here is a snippet.

You incorrectly omitted ‘Reduce image size so that it fits on your monitor’. Reducing file size is useful when handling images for the web.

You incorrectly omitted ‘Compress file with software and select JPG format’. JPG is a good choice when you need a smaller file size.

Also, in our example, we display the score and give the option for the student to try again.

Conclusion

The skill level needed to configure the State Response Engine is just slightly higher than constructing multiple choice or fill-in-the-blank questions in an authoring system.   The added value comes from mixing it up – putting students into different situations so that they exercise their thinking with different propositions or rules. 

If A, then…  

The SRE practices students on what Smith and Ragan2 call procedural rules and relational rules or principles.   

These relationships are often described in the form of an if-then or cause-effect relationship.

If we want students to think, we need to place them in situations where they can practice thinking.  The SRE makes it possible for trainers and instructors to construct these situations with a reasonable level of skill, time, and effort.

To view the prototype, visit Photoshoot

References

  1. ShortSim is a term coined by Clark Aldrich in his book ShortSims
    Aldrich, Clark (2020) ShortSims, CRC Press
  2. Smith, P. L., & Ragan, T. J. (1999). Instructional design.

Thinking Activities

Robert N. Bilyk
President, LodeStar Learning Corporation

Introduction

We want students to think.  That’s the common refrain among educators.  Ideally, we move students beyond the classroom and textbook and put them in situations where they apply their learning.

As instructors and trainers, we are also desperately strapped for time.   We seek resources from publishers and Open Education Repositories but seldom find the challenging activity that helps students put it all together for our specific course and intended outcomes.

If we consider home-growing a ‘thinking’ activity, we are presented with a continuum of possibilities.  In this discussion, we look at different types of activities along the continuum.  In my next post, I’ll suggest a new type of activity that can be developed in a short time frame with skill that is within reach of instructors and trainers.  I call it the State Response Engine.  But for now, let’s look at the “Thinking Activities” continuum.

Thinking Activities Continuum

By continuum, I mean a sequence, series or system in which adjacent elements are not perceptibly different from each other, although the extremes are quite distinct. 

In this case the continuum represents categories of learning experiences that promote thinking.  The ends of the continuum (in this case, Readings and Discussion, and Context-Challenge-Activity-Feedback) seem extremely different but the activities in the middle vary imperceptibly at least in terms of the skill needed to design and develop them.

Reading and Discussions

On the easier end, we educators can assign readings.  By easy, I mean technically easy.  We know our content, or we can work with a subject matter expert.  With a little ingenuity we can develop prompts to promote discussions, team-based problem-solving, role-playing and so forth.  The technology can be as simple as a discussion board or a content page in a learning management system.  The technology is easily mastered by instructors and trainers.

Quizzes and Puzzles

In addition to assigned readings and discussions, we can challenge trainees and students with multiple choice questions, true or false, fill-in-the-blanks, matching, crosswords, word games, and drag and drop exercises.  This implies a new skill level: the knowledge required to construct these interactions and form good questions.

As Ben Clay wrote, “Technically correct and content-appropriate multiple choice and true-false test items require an extensive amount of time to write and revise.”1   This category can also include flashcards, quick knowledge checks, click-and-reveal, and other memory-recall exercises. There is a significant skill involved in developing memory games and good assessment items, to be sure.  But on the technical side, instructors can quickly master their learning management system’s quizzing tool or choose a good authoring tool with templates.

However, our goal isn’t to prepare students to answer multiple choice questions.  We want students and employees to assess their current situation, recall relevant concepts, principles, rules, and procedures and apply them to the situation.    To be fair, in the view of the authors of Make it Stick 2, it’s important for students ‘to get it out in order to get it in’ – to make it stick.  Any recall activity is helpful in that regard.  But, in my view, we need additional strategies to simulate real-world situations to help students draw from their knowledge and apply it appropriately.  And that’s why we progress to a higher level.

Simple Simulations

Further down the continuum we have simple simulations such as Clark Aldrich’s ShortSim3 that I’ve written about in the past.  In terms of the skill involved, the technical difference from developing online multiple-choice assessments is almost imperceptible.

If an instructor can use an authoring tool and devise multiple choice questions, fill-in-the-blanks, and so forth, then they will be able to do the things needed for a ShortSim: write narrative, add graphics, present decision options, and advance the narrative.  Related to the latter, the instructor can choose a linear path or branch.  Branching allows students to experience something different as a consequence of their decisions.  Authoring systems like LodeStar, Captivate, Storyline, and BranchTrack make it relatively easy to set up branches.  But it is a new skill level, and it takes a little time to master.

This category can also include simple virtual tours, which require branching and media like 360-degree panoramas and Photospheres.

Interactive Case Studies

Interactive case studies help students put their knowledge into practice. They help students develop the ability to analyze, judge, and make decisions.  Interactive case studies often include information that a student must consult to make the right decision.  In nursing, the information might come from a simulated electronic health record.  In economics, it might include a Dow Jones Index for a given time period. 

Akin to interactive case studies, decision-making scenarios display content and test the knowledge of learners by challenging them to make decisions and observe the consequences. Each screen presents new situations and new choices. 

In a lesson on patient management, the student is stepped through the patient history, an initial evaluation, physical examination, patient report and diagnosis.   The student is evaluated on time and budget management and on patient care management.

The interactive case studies and decision-making scenarios that belong to this category are highly developed.  They might include multiple resources, graphs, charts, fact sheets, data sets, videos, biographies, etc.  They may include people to interview and glean information from.  They may include programmatic branches that send students down instructional paths based on the choices they made.

Technically, interactive case studies and decision-making scenarios are just a step up from a ShortSim.  They involve more time and effort in deciding the level of detail and developing all of the resources to immerse the student in the case or decision-making situation.  In the Patient Management example, the additional technical challenge is in presenting an animated graphic that shows the student’s performance in time and patient management.  The Patient Management lesson now requires us to store student performance data in variables and then control a graphic based on the variables.
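As a rough illustration of that last point, here is a minimal sketch in plain JavaScript (the variable names, element ids, and thresholds are all invented; an authoring tool would expose this through its own scripting layer):

// Track performance in variables as the learner works through the case.
var timeScore = 70;    // 0-100, hypothetical
var budgetScore = 45;  // 0-100, hypothetical

// Control a simple bar graphic by resizing an element to match each score.
function updateMeter(elementId, score) {
  var bar = document.getElementById(elementId);
  bar.style.width = score + "%";  // bar length reflects the score
  bar.style.backgroundColor = score < 50 ? "firebrick" : "seagreen"; // warn when low
}

updateMeter("timeMeter", timeScore);
updateMeter("budgetMeter", budgetScore);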

This category can also include interactive fiction/non-fiction, problem-based scenarios, historical case analyses, crisis management simulations, and role-playing scenarios.

Context, Challenge, Activity, Feedback

Context, Challenge, Activity, and Feedback (CCAF) is a design model used to create interactive learning activities.  Designers like Allen Interactions, Rivertown Communications, TrueUp, and KDG Interactive build media-rich CCAF-type activities that include not only graphics, animation, video, and other media but performance indicators, branched instruction, resources, coaching, and simulations – all beautifully illustrated and intended to immerse the student in a real-life context.

CCAF activities are generally free-form.  They don’t appear to be produced from templates; rather, each represents an original treatment of an environment that sets the challenge for the learner.  Underlying these treatments may be re-purposed models (interaction types), but the presentation itself doesn’t look repurposed or template-based.

This category can also include simulated experiments, virtual laboratories, software simulations, equipment operation simulations, process simulations.

Virtual Reality/Augmented Reality

Virtual reality and Augmented reality applications can span the entire continuum of thinking activities, but they require specialized knowledge.  VR and AR can range from simple multiple-choice types of activities to simulations that are simple or quite sophisticated.   Applying VR and AR to thinking activities requires knowledge of platforms such as Unity and 3D creation tools like Blender.

Conclusion

The Thinking Activities continuum represents the technical skill needed by a trainer or instructor to develop activities at different levels.  At one end of the continuum, the instructor needs to be familiar with the learning management system to post discussions and collaborative problem-solving narratives.  To create quiz items and different types of exercises like drag and drops and matching, the instructor must be familiar with either the native tools of the learning management system or an authoring tool like Storyline, Captivate or LodeStar.  Then we get into the category of Simple Sims and the idea of branched instruction.   That carries upward to interactive case studies and decision-making scenarios that present resources and media and show the consequences of decisions.  Depending on the design, case studies and decision-making scenarios may introduce more sophisticated branching.  At the highest level in terms of time/expense and skill are activities that follow the CCAF model.  They often recreate real-world situations that blend media with branching logic and data. 

The challenge for instructors and trainers is often time.   Activities at the higher end of the continuum require more time to develop.  Instructors and trainers are smart people.  With today’s authoring tools, they can acquire the skills as they work their way from quizzes to short sims to interactive case studies.  I’ve observed many instructors and trainers move along the continuum.  But at some point, we all need a return on our investment of time or money.   That ROI might be supported by the numbers of students that we teach or the significance of the skill in the curriculum.  For example, the instructor might invest time in a project that helps students through a significant and known stumbling block.   In math that might mean difficulty with word problems and translating real-world situations into mathematical equations.  In chemistry, understanding chemical bonding.   In computer science, understanding data structures and algorithms. In machining, the use of the sine bar and basic trig. In interpersonal relations, recognizing unconscious biases.  Every subject has its stumbling points.   Time invested in helping students through those stumbling blocks is time well invested.

From here

That brings us back to the midpoint of the continuum, which includes ShortSims.  In my view we need to uncover and invent more types of activities that are easily within the technical reach of the intrepid instructor and producible in a short period of time.

I have developed several models that fit the bill.  The next post will re-introduce the State Response Engine (SRE).  The SRE randomly picks a situation and then challenges students to choose the right responses or actions for that situation.
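The core idea behind such an engine is small enough to sketch.  Here is a minimal illustration in plain JavaScript with invented situations and responses (the actual SRE offers far more):

// Each situation lists the responses that are appropriate for it.
var situations = [
  { text: "The fire alarm sounds during a lab session.", correct: ["evacuate"] },
  { text: "A student reports a minor cut.", correct: ["firstAid", "reportIncident"] }
];

// Randomly pick a situation to present to the learner.
function nextSituation() {
  return situations[Math.floor(Math.random() * situations.length)];
}

// Score the learner's chosen response against the current situation.
function isAppropriate(situation, response) {
  return situation.correct.indexOf(response) !== -1;
}

var current = nextSituation();
console.log(current.text);
console.log(isAppropriate(current, "evacuate"));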

References

1 Clay, B. (2001). Is This a Trick Question? A Short Guide to Writing Effective Test Questions. Kansas Curriculum Center.

2 Brown, P. C., Roediger, H. L., & McDaniel, M. A. (2014). Make It Stick: The Science of Successful Learning. Belknap Press of Harvard University Press.

3 ShortSim is a term coined by Clark Aldrich in his book ShortSims.
Aldrich, C. (2020). ShortSims. CRC Press.

The Teaching Machine – Past, Present and Future

Robert N. Bilyk
President, LodeStar Learning Corporation

I’ve been reading Teaching Machines: The History of Personalized Learning by Audrey Watters1, who may be best known for her “Hack Education” blog.  Teaching Machines is a great read.   It was published in 2021.  Since then, Audrey Watters published her last post on ‘Hack Education’ and, according to that post, no longer writes about education technology. 

The Past

A good part of Teaching Machines is dedicated to B.F. Skinner’s Teaching Machine.  By today’s standards, it’s a crude mechanical device.  The early versions were wood boxes that displayed questions through a small window and required students to write answers on a paper strip.  Once students submitted their answers, they pulled a lever to display the answer and advance the paper strip.  The questions and answers were printed on a paper disk partitioned into pie wedges.

For fun, I simulated an early prototype of B.F. Skinner’s Teaching Machine.  I used our LodeStar Authoring tool to simulate how the early machine worked.

An early prototype of B.F. Skinner’s Teaching Machine

See the simulation at https://lodestarlearning.github.io/Teaching-Machine/index.htm

Films from that era (the 50s) conjure up rows of students working on the machines – a setting not unlike a factory floor with row upon row of workers at their sewing machines.

And yet, as Watters writes, the impetus – the motivation – behind the design of these teaching machines was largely, in the opinion of their inventors, altruistic and intended for the benefit of the student.

Watters recalls how Skinner visited his daughter’s 4th grade math class.  The teacher wrote math problems on the board.  Some students finished quickly and squirmed in their seats as they waited for the class to progress.  Other students struggled to complete the problem and perhaps never successfully finished.

Fast forward thirty years.  I was a high school language arts teacher in the 80s.  One of my classes consisted of students who could not read or write very well.  I’ve spoken about this a dozen times.  The underlying reasons had to do with who they were:  some students were newly immigrated from non-English speaking countries; other students had learning disabilities; and others hated school.  I was handed one curriculum that treated the class as one homogeneous body.  I really needed materials and strategies that addressed the needs of at least three populations.  I needed some form of teaching machine.

Skinner, in response to his daughter’s class, declared that “something must be done”.  I approached our Instructional Materials Center Coordinator and asked if, in any way, the Apple IIe machine in the library could help me ‘individualize’ the instruction.  For Skinner, the realization led to his idea of a teaching machine.  For me, it began a career-long journey into the world of technology-assisted personalized learning.

Minnesota Educational Computing Consortium’s (MECC) Oregon Trail on an Apple II.

From time to time, I’ve challenged myself to take a hard look at the principles that drove my pursuit to help students with technology.  As a starting point, I was interested in the following ideas:

For the student:

  • Go at your own pace, where the learning objective is constant, but time is variable.
  • Get quick feedback.
  • Have fun.
  • Pursue lesson branches based on your interest, choices and performance.
  • Be challenged.

For the instructor:

  • Free up time for individualized help.
  • Get information on what questions students are consistently getting wrong. 
  • Inform future lesson plans.

I used computer programs on the Apple IIe, but I wanted to write my own.  Several years later, I was headed home on a dark country road, listening to Minnesota Public Radio.   I was listening to a story about interactive videodiscs – and I became smitten.  At this point in my career, I worked at a college.  I asked my dean (who had a background in instructional systems) what I needed to learn to program interactive discs.  He answered, “Learn C”.  And so I began my study of both learning theory and computer programming and camped out at their crossroads.

At the University of Minnesota, I studied curriculum and instruction.  At Saint Paul College, where I worked, I took classes from programmers who made their living at Unisys, Honeywell and other companies.  As I soon discovered, my dean’s advice was both bad and good.  It was bad because, by the 90s, one did not need to learn a low-level programming language like C to control a videodisc player; the TenCore Language Authoring System could do that with a much higher-level (and thus easier) language.   It was good because I became proficient at programming and could extend authoring systems with lower-level code. In short, I was never limited by the authoring tool, only by my imagination. Later I worked on teams to design a content management system, several instructional products and two authoring systems.

The TenCore Language Authoring System (LAS) was a TUTOR (PLATO) derivative developed by Paul Tenczar.

The interactive videodisc was a laser-readable, random-access disc that contained both audio and analog video. The videodisc’s full-motion video was displayed on a computer with the help of a video overlay card that superimposed the analog video on a digital monitor.

Authoring systems like TenCore, Authorware and Toolbook enabled designers to program the computer to send commands to the player via serial communications.

In the early 90s, the University of Minnesota introduced me to Course of Action, which was later renamed Authorware.  Authorware was a game-changer.  With this system, I could finally efficiently realize all the ideas I had started out with and that now took the form of:

  • Individualized instruction
  • Immediate feedback, or strategically delayed feedback, based on the design.
  • Fun interaction with analog video, high-fidelity sound, dithered graphics (256 color displays), three-dimensional animation (yes, in the early 90s), simulations, games….
  • Branched, non-linear instruction.
  • Data collection and reporting. (The installed computer program could gather data and, using an early standard from the aviation industry (AICC), report it to the instructor.)

It was a great start and a step toward the present. 

Authorware, designed by Dr. Michael Allen and his amazing team, changed everything. The work shifted from arcane computer commands to the design of learning experiences. Authors dragged icons onto a flowline that commanded the computer to present graphics and text, animate, play audio and video, branch instruction and execute simple scripts.

The Present

The promise of personalized instruction is partially realized today.  Our current teaching machines are programmed by smart authoring systems that can present and control media, animate, branch instruction, simulate real-world conditions in various degrees of fidelity, and report student performance through standards like SCORM, xAPI and CMI5.  We also have virtual reality, which can immerse learners in some ‘real-world’ or imaginary context, and augmented reality, which can add a digital layer over one’s interaction with the physical world.

We also benefit from gamification, adaptive learning, learning experience platforms, learning management systems, microlearning, digital story-telling, three-dimensional graphics, and 360 imagery and video.  In short, Teaching Machines have come a long way from questions displayed in a tiny window.

And yet there is still something significantly missing that would have benefited both Skinner’s daughter and my high school students…or any learner.

The Future

When it comes to the future, I’m on thin ice.  I offer this up more as an invitation to hear from others who have greater insights than I. There is so much happening in academia, industry and in academic-commercial partnerships that I’m just not aware of.  But I’ll briefly touch on what has been nagging me for years – and return to this theme in a later post with, perhaps, your contributions.

The problem, in my view, is in the finiteness of our present-day teaching machines.  As an example, years ago, my colleagues put out a math series on interactive videodisc, then later on CD-ROM, and then later on the web.  The lessons were well presented and followed by numerous exercises in various topics like college-level algebra.

I observed students working through the math programs in Saint Paul College’s Instructional Technology Center.  If students didn’t ‘get it’ from the presentation or the exercise feedback, they had little recourse.  Fortunately, in some settings, a qualified instructor would then take over and help the student with a different tactic.  I observed settings, however, where similar programs were being used but with underqualified tutors.  If the students didn’t comprehend the lesson, that was the end of the line.  The lab monitor was unable to help them.

A lot of topics rely on a student’s prior knowledge, which may or may not be present.  In short, prerequisite skills. 

Several years ago, a fleet of ‘adaptive learning’ platforms appeared on the market that proposed to remedy this shortcoming (examples include CogBooks, Acrobatiq, and Knewton).  A lot of investment and effort went into assessing students’ performance and confidence with the subject matter and providing alternative instruction.  Some of the platforms catalogued open educational resources and used what they called semantic engines to match students with open resources to help them.

Adaptive learning systems are software platforms that adapt content to the learner’s goals and current state of knowledge. They are designed to address challenges such as slow pace, difficulty, lack of practice, and insufficient mastery of content. In traditional e-learning courses, students linearly follow the path an instructor creates. They watch videos, read articles, take quizzes, and practice interactive modules, often in a predetermined order. An adaptive learning system contains the same types of material, but the order, pacing, and content may change for each learner. The system decides which content to show the learner based on the learner’s goal, performance and confidence level.  If the system determines the current path is too easy for the learner, it can branch to more challenging material. If it finds the current path is too difficult, it may intervene and review prerequisite content, reduce the challenge, or slow down the pace. 
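In pseudo-form, the decision logic at the heart of such a system can be sketched in a few lines of plain JavaScript (the thresholds and content names are invented for illustration; commercial engines use far more sophisticated statistical models):

// Choose the next activity from the learner's estimated mastery and confidence.
function nextActivity(masteryEstimate, confidence) {
  if (masteryEstimate > 0.9 && confidence > 0.8) {
    return "challenge-problem";    // path is too easy: branch to harder material
  }
  if (masteryEstimate < 0.4) {
    return "prerequisite-review";  // path is too hard: review prerequisite content
  }
  return "practice-set";           // otherwise, continue practicing at this level
}

console.log(nextActivity(0.35, 0.5)); // "prerequisite-review"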

Most of these systems are proprietary and operate through subscription or a paywall.  Presumably, students who use them benefit from them, but the modern teaching machine (i.e., authoring systems, learning management systems, learner experience platforms) needs to tap into the same technology and benefit from open systems.

Work is being done to open-source systems that use such approaches as Bayesian Knowledge Tracing.  Bayesian Knowledge Tracing is used in a number of Cognitive Tutoring systems.  This approach develops a model of a student’s knowledge in a given domain and constantly updates that model based on the student’s performance.  Probability of skill mastery can be computed from the student’s current knowledge and the proposed learning material.  Efforts are underway to use these systems to link learning management systems with open content repositories.
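To give a flavor of how Bayesian Knowledge Tracing updates its model, here is a minimal sketch in plain JavaScript of the standard update step (the parameter values are illustrative; real systems fit them from student data):

// Standard BKT parameters (illustrative values).
var pLearn = 0.1;   // p(T): chance the skill is learned at each opportunity
var pSlip  = 0.1;   // p(S): chance of a wrong answer despite knowing the skill
var pGuess = 0.2;   // p(G): chance of a right answer without knowing the skill

// Update the estimate that the student knows the skill, given one observation.
function updateMastery(pKnow, answeredCorrectly) {
  var posterior;
  if (answeredCorrectly) {
    posterior = (pKnow * (1 - pSlip)) /
                (pKnow * (1 - pSlip) + (1 - pKnow) * pGuess);
  } else {
    posterior = (pKnow * pSlip) /
                (pKnow * pSlip + (1 - pKnow) * (1 - pGuess));
  }
  // Account for the chance the student learned the skill on this opportunity.
  return posterior + (1 - posterior) * pLearn;
}

var mastery = 0.3;                       // prior estimate
mastery = updateMastery(mastery, true);  // the student answers correctly
console.log(mastery.toFixed(2));         // the estimate rises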

A possibility in the future is that the entire web becomes the new teaching machine.  In the past, several concepts and specifications rose and fell in popularity.  We may see their renaissance.  One concept was Tim Berners-Lee’s Semantic Web.

The goal of the semantic web as it relates to education is to make content on the web machine readable – and therefore categorizable and discoverable.  To support the description of educational resources, several specifications exist, including the Resource Description Framework and the Web Ontology Language. 

A recently published article titled ‘Investigating the potential of the semantic web for education: Exploring Wikidata as a learning platform’2  explores the application of the semantic web in academic pursuits. 

In short, if one could accurately describe educational content (which may reside in an open educational repository), then authoring tools, Learning Management Systems, adaptive systems, etc. could discover and propose the content to a student who is struggling to achieve an objective.

The key is in an accurate and meaningful description so that its appropriateness can be evaluated for a given student and a given objective.

In the past, we haven’t done well with descriptions, ontologies, taxonomies, metadata and the like.  (By we, I mean the teaching profession.)  Even the concept of the re-usable learning object (RLO) fell to criticism.  A learning object was described by IEEE Learning Object Metadata specification.  The LOM standard defined the attributes required to describe learning content. Unfortunately, learning objects were created without adequate descriptions (after all, what was the point?) and their reusability was called into question.  In general, it’s possible that content authors don’t have the knowledge or the means to properly describe their content in a way that is meaningful to authoring, tutoring or adaptive systems.

The game-changer will probably be machine learning.  Blogs have been filled with examples of how artificial intelligence has supported content generation.  Another important use of AI will be to scan and describe open learning content.

For fun, I asked ChatGPT to parse some educational content on an electrical circuit and then provide me with a description that followed both the Resource Description Framework and then IEEE Learning Object Metadata.   I then asked ChatGPT to come up with its own machine-readable schema that included keywords and reading level.   The simple schema (markup in XML) is shown in the appendix.

The future Teaching Machine may be the semantic World Wide Web

Conclusion

In my view, to personalize learning, the future Teaching Machine needs to tap into the World Wide Web as a resource to shore up prerequisite knowledge needed to achieve an educational objective.  The systems can’t be proprietary.  They must be open and available to a variety of platforms through an application programming interface (API).  The platforms include authoring systems like LodeStar eLearning authoring system, Captivate, and Storyline as well as learning experience platforms, adaptive learning systems, cognitive tutors and learning management systems. 

As importantly, the resources should remain in their repositories, but still report performance to a learning management system or learning experience platform.  That is now possible through standards like CMI5, but — as I’ve lamented in the past — few learning management systems support CMI5.

The teaching machine began with limited content printed on paper disks.  It progressed to include all of the affordances of the modern computer.  Its future promise might lie in open standards that connect learner needs to appropriate content scattered across the globe.

References

1 Watters, A. (2021). Teaching Machines: The History of Personalized Learning. The MIT Press.

2 Evenstein Sigalov, S., & Nachmias, R. (2023). Investigating the potential of the semantic web for education: Exploring Wikidata as a learning platform. Education and Information Technologies, 28, 12565–12614. https://doi.org/10.1007/s10639-023-11664-1

Appendix

ChatGPT generated XML schema for a learning resource on electrical fundamentals.

<educational_resource>
    <title>Understanding Electrical Fundamentals: Voltage, Current, and Resistance</title>
    <keywords>
        <keyword>Voltage</keyword>
        <keyword>Current</keyword>
        <keyword>Resistance</keyword>
        <keyword>Electricity</keyword>
        <keyword>Electrical Power</keyword>
        <keyword>Ohm’s Law</keyword>
        <keyword>Circulatory System</keyword>
        <keyword>Electrical Concepts</keyword>
    </keywords>
    <reading_level>High School / Early College</reading_level>
    <content>
        Just as your heart produces the pressure to make blood circulate, a battery or generator produces the pressure or force to push electrons around a circuit. …..(the remaining content omitted for the sake of brevity)
    </content>
</educational_resource>

Beyond Presentations and Puzzles … to Problems

Introduction

The thrill of learner experience design is in finding the tools, techniques and strategies to engage learners with designs that go well beyond presentations and puzzles.  Often learners must solve problems to achieve a performance or academic goal and it’s in the nature of the problem that we find the best-suited strategies.

I use the term ‘problem’ very broadly to encompass many different things.  The late David H. Jonassen presented a typology of problems in his work, ‘Toward a design theory of problem solving’1.  He wrote that ‘learning to solve problems is too seldom required in formal educational settings.’  Even outside of formal education, we’re often asked to create presentations with assessments that cover some area of compliance whether it be data security, diversity in the workplace, or health and safety.  But often we’re challenged with a training problem that can’t be addressed without engaging learners with the problem.

David H. Jonassen published many books on constructivism, problem-solving and learning with technology.

The Challenge

Our challenge is to develop the right thing that gets the best results with the least expenditure of time and money.  We need strategies that are well-suited to the problem. Seasoned designers have a wide repertoire of strategies, templates, and models that can be used given a level of learning, type of learning, and type of problem.  However, this industry sees new entrants every year2 and the casual developer (an instructor, for example) may not have uncovered all of their options or know when to apply them.

What are the activity types that provide a situation, a problem to solve, and an activity that elicits student performance and constructive feedback?  What are the activities that help designers and instructors engage students at a level that involves challenge, activity and feedback?  More generally, what activities promote higher order thinking and allow us to integrate component skills into a coordinated response that enables us to complete tasks, analyze data, solve problems, and create things?

We have limited time and money.  The quick solution to any higher order learning objective is to simply talk about it, present on it, show a video, hear from an expert, ask some questions, and then, forgive me, throw the learners in the deep end.  Moreover, if we take the time to create an activity that makes the learner do things, we may ask ourselves:  was the result commensurate with the time and effort?   In other words, given a learning objective and educational goal, the online instructor or trainer might wonder about the return on investment for a given strategy.  For example, an instructor may ask, ‘Is it worth my time to create an activity rather than a multiple-choice test?’   And what would that activity be?

Our industry doesn’t make it easy.  It turns out that the industry has many names for things.  There is no standard taxonomy with precise attributes.  And yet we can start with a few widely recognizable terms:  Interactive case study.  Decision-making scenario.  ShortSims3.  Simulation. Branching interaction.  Interactive Non-Fiction. 

Matching Strategies to the Problem

Our first clue that we need more than a presentation is that we’re asking learners to do more than just recall.  We recognize a need to engage learners’ ability to analyze, apply, synthesize, evaluate, make decisions, create things – all of which requires thinking beyond simple recall. Of course, the learner may need to recall the right fact, principle, concept, or rule that is useful in a given situation.  But it is the application of that information that is the key. We recognize that we need a story, a context, possibly varying levels of difficulty, and possibly a challenge in the form of incomplete information, confusion and stress. 

So what’s the solution?  The solution is suggested by the problem.

We can be guided by the nature of the problem that we want learners to solve or engage in.  M. David Merrill tells us that ‘effective instruction is problem-centered’.  Robert Gagné asserted that the point of education was to help learners become better problem solvers. Michael Allen emphasizes the importance of context (‘a problem-solving environment’) and challenge. Richard Mayer wrote that ‘a major challenge of education is improving students’ minds–a goal that is reflected in people being able to solve novel problems they encounter’. 

In ‘Toward a Design Theory of Problem Solving’, David H. Jonassen creates a typology of problems – a useful tool that might help in the selection of instructional activities to promote problem solving.

Categorizing Problems

David Jonassen gave us a way to categorize problems based on problem variation.  Does the problem have multiple solutions or only one?  How many steps are included in the solution?  Does the problem solution draw from multiple principles, concepts, or even disciplines?  Does it require research beyond the content offered in a program of study?  Is the problem domain specific – can it only be solved by learners who are schooled in a specific area of study — or is it general?  Do learners need varying stages of support or scaffolding as they engage in the problem?

If we’re tackling one aspect of the problem, we can think of a key principle from ‘How Learning Works’4. Loosely, learners need practice on component skills and then practice with integrating them.  Is our learning objective focused on one component skill or does it involve integration of multiple skills?   If the component skill is simple enough, we might prescribe an activity like a word problem or matching type of exercise.  If we engage students in a synthesis of skills, we’re going to need something more – something that has story (context) and multiple challenges. Jonassen recommends embedding instruction in some authentic context. Context is also a key element of the CCAF model.

Now we’re in the territory of ShortSims (generically, simple sims), decision making scenarios, interactive non-fiction, interactive case studies and other forms of interactions that place learners in realistic settings, challenge them, and make them do things.

A document management system simulation, built with the LodeStar eLearning authoring tool, challenged new employees to find key documents based on varying search criteria

One way of rationalizing the design and development time relates to the complexity of thinking and acting …and the complexity of the problem.

In a ShortSim the learner is presented with choices.  The choices can cause the narrative to branch or simply advance the narrative in a linear progression.  Making better choices might cause the learner to recall past learning, read and analyze information while looking for clues.  The learner might wonder ‘Is this an instance of, for example, the unconscious bias that I’ve been told about?  If I make the wrong choice, am I compromising data security, risking my safety, … making my supervisor unhappy?’

A ShortSim can engage the learner in some pretty advanced stuff with the clever use of graphics, text and choices.

Clark Aldrich wrote a how-to book, ShortSims, that walks designers through the process of planning and producing a simple decision-making simulation. Full reference below

Further down the continuum of simulations, our learning objective might require the learner to make decisions in an environment that includes many variables.  The underlying model might not be a finite set of choices but a complex interplay of multiple variables.  The activity might require a coordinated, intelligent response to an infectious disease outbreak, for example.  The underlying model might be the SEIR model, which partitions a population into those who are susceptible to a specific disease, exposed, infected, and recovered.   The learning program might vary the number of treatment days, incubation period, contacts per day, and fatality rate.  The learner might need to observe the data and then prescribe a response.  The computer environment can present one scenario from a nearly infinite number of possible scenarios. 
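To make that concrete, here is a minimal sketch in plain JavaScript of one step of a classic SEIR model (the parameter values are invented for illustration; a real activity would tune them to the scenario):

// SEIR compartments: Susceptible, Exposed, Infected, Recovered.
var state = { S: 9990, E: 0, I: 10, R: 0 };
var N = state.S + state.E + state.I + state.R;

var beta  = 0.3;  // transmission rate (contacts per day x infection chance)
var sigma = 0.2;  // 1 / incubation period in days
var gamma = 0.1;  // 1 / infectious period in days

// Advance the model by one day (simple Euler step).
function step(s) {
  var newExposed   = beta * s.S * s.I / N;
  var newInfected  = sigma * s.E;
  var newRecovered = gamma * s.I;
  return {
    S: s.S - newExposed,
    E: s.E + newExposed - newInfected,
    I: s.I + newInfected - newRecovered,
    R: s.R + newRecovered
  };
}

for (var day = 0; day < 30; day++) { state = step(state); }
console.log(Math.round(state.I) + " infected after 30 days");

The learner never sees this code, of course; they see dashboards, news reports, and consequences, and they must decide how to respond.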

The choice of one strategy over the other can relate to the complexity of the problem we are wanting learners to solve.  Jonassen helps us here. The level of realism and variability of the problem will require increasingly more sophisticated (and time-consuming) designs.

Logical problems that require students to recall information might suggest a word game, a crossword puzzle, or a multiple-choice quiz.

Algorithmic problems require learners to progress through a series of procedural steps and this might suggest a ShortSim or decision-making scenario. 

Story problems with embedded procedures might suggest word problems or interactive fiction or non-fiction that houses word problems in a longer narrative.

Rule-Using Problems and Decision-Making problems that involve defined procedures and methods can be handled with ShortSims, other forms of interactive branching and decision-making scenarios, and role-playing.

Trouble-shooting problems can begin with interactive trouble-shooting guides or low-fidelity simulations.  But when the problem state has variation and complexity, a virtual lab or full-blown simulation might be needed, one that can a) generate variable states; b) represent those states to the learner in the form of dials, instruments, changes in the environment, changes in characters, etc.; c) elicit a learner response; and d) provide feedback in the form of dialog, pop-ups, and/or changes in the environment.  (A minimal sketch of this loop follows the list of problem types below.)

Case-analysis problems can be presented through problem-based learning scenarios, historical case analyses, interactive case studies in which the learner is presented with case descriptions and data and asked to accomplish a goal.

Additional forms of problems in Jonassen’s typology include design problems and dilemmas. The types of problems can certainly be expanded to include additional categories of problems: for example, strategic performance problems that involve business simulations and strategy games. And the list goes on …. all beyond presentations and puzzles.
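As promised above, here is a minimal sketch of the trouble-shooting loop in plain JavaScript (the fault states and readings are invented; a real simulation would drive instruments and graphics rather than the console):

// a) Generate a variable problem state.
var faults = [
  { name: "blown fuse",    voltage: 0,  temp: 20 },
  { name: "failed sensor", voltage: 12, temp: -999 },
  { name: "overheating",   voltage: 12, temp: 95 }
];
var fault = faults[Math.floor(Math.random() * faults.length)];

// b) Represent the state to the learner (dials, instruments, readouts).
console.log("Voltmeter: " + fault.voltage + " V, Temperature: " + fault.temp + " C");

// c) Elicit a response and d) provide feedback.
function diagnose(guess) {
  return guess === fault.name
    ? "Correct. Work order closed."
    : "The symptom persists. Check your readings again.";
}

console.log(diagnose("blown fuse"));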

Conclusion

After the designer has looked at the learner, the learner’s environment, the performance goal, the gap analysis, etc., the performance might be whittled down to a problem type that the learner may need to solve.  The problem can range from providing performance feedback to evaluating equations, from delegating authority to applying statistics to improve production.   The type of problem often suggests the treatment.  The fun of being a learning experience designer is in exploring strategies and applying them where appropriate in a cost-effective, time-saving manner.

In past articles, I have covered some of these strategies. You’re welcome to visit them.

ShortSims

Interactive Storytelling

Interactive Case Studies

Decision-Making Scenarios

Simulations

References

1 Jonassen, D. H. (2000). Toward a design theory of problem solving. Educational Technology Research and Development, 48, 63–85. https://doi.org/10.1007/BF02300500

2 In a 2021 survey held by Devlin Peck (n=615) (Instructional Designer Full Report 2021 | Devlin Peck), 44% of the respondents had 0–3 years’ experience.

3 Aldrich, C. (2020). ShortSims. CRC Press.

4 Ambrose, S. A., Bridges, M. W., DiPietro, M., Lovett, M. C., & Norman, M. K. (2010). How Learning Works: Seven Research-Based Principles for Smart Teaching. Jossey-Bass.

Designing with Animation

Animation can enrich the learning experience and, when used appropriately, improve learning outcomes.  New standards and new technology under the hood of the modern browser give learning experience designers a whole new set of tools and techniques to apply to their designs.  I’ll parse out the different types of animations and their uses, discuss the underlying technologies and present examples. 

To start, let’s reprise Richard Mayer, the author of Multimedia Learning. 

Mayer defines a concise narrated animation.  He explains that concise refers to a focus on the essential steps in the process. If the objective is related to understanding how a four-cycle engine works, then a concise animation would include only the details that relate to the objective.  For example, I might include a simple animation of a piston traveling up and down a cylinder, compressing gas, power stroking from the combustion, exhausting the spent fuel and then refilling with fresh gas.

Wikimedia Commons (CC BY-SA 3.0)

Concise in this example means that we focus on the crankshaft, the traveling piston, the ignition, the intake and the exhaust.  The learner is not distracted by anything unrelated to the objective.

The image above is an animated GIF.  Most authoring tools can easily import an animated GIF. This technology, however, has its uses and its limitations.  It is difficult to use an animated GIF in a concise narrated animation because synchronizing the four cycles with the narration is difficult.  If you understand four-cycle engines, the GIF makes sense.  If not, then narration will help learners understand each cycle of the process.  Fortunately, there are easy ways to sync animation with narration.  This brings us to the timeline animation.

Timeline Animation

The basic idea behind a timeline is that you control animation effects according to time.  In tools like Storyline, each row of the timeline represents a different screen element.  You can then apply entrance and exit animations to the screen element at specific times.

The timeline in our authoring tool, LodeStar, works differently.   LodeStar displays one timeline per graphical element.  Each row represents a different property such as left, top, opacity, rotation and scale.   With these five properties you can control the position of an element on the x and y axis at a specific time and you can control fade-in and fade-out, rotation and size.  These are the properties that designers typically want to animate.
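Under the hood, modern browsers can play exactly this kind of keyframed timeline through the standard Web Animations API.  Here is a minimal, browser-level sketch of the idea in plain JavaScript (the element id is hypothetical, and this is not LodeStar’s internal format; I use a CSS transform to move and rotate rather than raw left/top values):

// Animate opacity, position, rotation and scale with keyframes over 5 seconds.
var el = document.getElementById("gearLeft");
el.animate(
  [
    { opacity: 0, transform: "translate(0px, 0px) rotate(0deg) scale(1)" },
    { opacity: 1, transform: "translate(200px, 0px) rotate(360deg) scale(1.5)" }
  ],
  { duration: 5000, iterations: 1, fill: "forwards" } // hold the final keyframe
);

A timeline editor simply generates keyframes like these for you and lets you scrub through them.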

In the screenshot below, you can see that I have two gears.  A left gear and a right gear.  When I select a gear or choose it from the lower right pull-down menu, I get its corresponding timeline.

As you can see on the timeline, each row represents a different property.

LodeStar Animation Editor with SVG graphic

To understand the positioning properties, we need to understand the difference between the SVG image and its sub-elements.  We also need to understand other types of images like PNGs, GIFs, and JPEGs.  These are called bitmapped or raster graphics because that is precisely what the graphic is:  a map of binary digit (bit) values for every colored pixel.  The rules for positioning bitmapped graphics and SVG elements differ.

To understand this, let’s first tackle the SVG graphic.

SVG

SVG stands for Scalable Vector Graphics. It is a vector image format that can be scaled up or down without losing any image quality. This is different from bitmapped or raster graphics, which are made up of pixels and can become pixelated or blurry when they are resized.

SVG graphics are great for logos, icons, and other types of graphics that need to be scalable and look crisp at any size. They can also be easily edited with text editors or graphic design software, and they support interactivity and animation through script.

We can think of an SVG graphic in two ways: as a whole image or as a collection of elements.  Examine the SVG image of an old Buick below.  The automobile can be animated from left to right, for example, like any bitmapped graphic.

SVG Graphic on a LodeStar page

We can do that with a timeline:

Timeline applied to graphic

The timeline shows a duration of 5 seconds.  At 0 seconds the car’s left property is 1.  That means 1% of the width of the window.  At 5 seconds, the left value is 100%.  This means that the left edge of the graphic will be at the right edge of the window – in other words the car rolls off the screen.

This animation would be less than satisfactory because the tires don’t rotate.  So, if we only animated the entire graphic we’d get an inferior result.

That now brings us to SVG elements.  Loosely described, these are the sub elements of the SVG graphic.  They consist of polygons, lines, rectangles, ellipses, paths, layers, groups, and more. In the screenshot below of LodeStar’s SVG editor, we see that the rear tire is selected.  It has a cryptic id, which we can change to an easier name.  Whatever the name, this element is both programmatically addressable (meaning we can change it with a simple script) and it is separately animate-able.  For example, we can rotate the tire.  Now we can move the whole graphic from left to right and rotate two of its sub-elements to improve the animation. (You will see this in an animation sampler introduced in the conclusion.)

SVG graphic with element selected
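Because each sub-element is addressable by id, a short script can animate it independently.  A minimal sketch in plain JavaScript, assuming the tire element has been given the hypothetical id ‘rearTire’:

// Rotate one SVG sub-element around its own center, independently of the whole graphic.
var tire = document.getElementById("rearTire");
tire.style.transformBox = "fill-box";   // measure the transform against the element's own box
tire.style.transformOrigin = "center";  // pin the rotation to the element's center
tire.animate(
  [ { transform: "rotate(0deg)" }, { transform: "rotate(360deg)" } ],
  { duration: 1000, iterations: Infinity } // spin continuously while the car moves
);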

Bitmapped Graphics

Bitmapped graphics, also known as raster graphics, are digital images made up of tiny colored squares called pixels. Each pixel represents a small portion of the image and can be assigned a specific color value. Generally, only programs like Photoshop allow us to manipulate bitmapped graphics at the pixel level.  Examples of bitmapped graphics are PNGs, JPEGs, and GIFs.

With bitmapped graphics we can animate the entire graphic’s position, opacity, rotation and scale.  But we can’t take one of its subparts (a small sub-section of pixels) and independently animate that section.  At least, not without sophisticated code. Nevertheless, bitmapped graphics have their advantages. Any photorealistic image is best captured in a bitmapped graphic.

The animate-able properties

In LodeStar, the meaning of left and top differ between SVG graphics and bitmapped graphics.  To best explain this, we need to place images in three categories:  the entire SVG graphic, an SVG element (sub-element) and bitmapped graphics that are not inside an SVG graphic.

For SVG elements, left means a translation or change along the x axis.  1 means that the graphic has shifted 1 pixel to the right.  100 means that the graphic has shifted 100 pixels to the right.

For images (including JPEGs, PNGs, GIFs and the SVG graphic as a whole), left means a percentage of the window.  0 means that the graphic is painted at the very left of the window.  50 means that the graphic is painted half-way across the window on the x axis.  The reason for the difference is that LodeStar projects maintain their responsiveness to different devices with different screen widths whenever possible.

Technically, when we reposition an image, we are removing it from its normal place in the HTML document.  When we assign a timeline to an image, we are removing it from the HTML flow and assigning it an absolute position. 

If you didn’t want to remove the image from the flow (its position in the document), then you can lock the image position in the image dialog.

An absolute position means a position relative to the element’s parent.  If its parent is 1000 pixels wide, a left value of 10 (percent) places the image 100 pixels from its parent’s left edge. If the image is positioned beyond its parent’s boundaries, it is hidden.

SVG elements are displayed inside a viewbox.  We transform the position, scale and rotation of these elements without removing them from the flow. They are painted or shown inside the viewbox.  If we shift the position beyond the boundaries of the viewbox, the element is clipped or hidden.

For SVG elements it is important, when adding a left, top or rotate keyframe, to also add a keyframe at the same time offset for the other two properties. SVG transformations (changes of position and rotation) are defined by all three properties together.

Controlling animations with script

LodeStar animations can be controlled by the timeline, as we’ve seen in the example above.  They can also be controlled by script or by a combination of timeline and script.

Let’s return to the gears example. In this example there are two SVG elements inside an SVG graphic.  In the example, the SVG graphic is not animated at all.  However, its elements (the gears) are positioned and rotated.  One gear is rotated from 0 to 360, the other gear is rotated from 35 to -325.  This causes the gears to rotate in opposite directions at a slight offset from one another so that they mesh.

A separate timeline for each graphic or SVG element

In the example we also positioned a rectangle with rounded corners at the bottom of the viewbox.  We are treating this graphic as a button. We added a branch option to the rectangle, which converts it into a button that responds to clicks.

The branch option that we applied to the rectangle is called a ‘Select Branch Option’.  When clicked, the button executes the following script:

appendValue("rate", 1);
var rate = getValue("rate");
updateAnimation("1681686408319", "play", "", "", 10, rate);

In this script, we are adding 1 (appending) to a stored value named “rate”.  We then get that value from storage and assign it to a variable named ‘rate’.  ‘var’ means variable. 

In the third line we use the variable in a function called updateAnimation().  This function allows us to

  1. Identify a page by a page unique identifier called a UID.
  2. Set the state of the animation:  play, pause, or reverse
  3. Optionally we can set the current time in the animation.  By default it starts at 0 seconds. That is why we use “” in the function and don’t bother setting the current time.
  4. Optionally, we can also state the duration of the animation.  By default, the duration is set by the timeline.  We therefore use “” in the function.  We could shorten or lengthen the duration.
  5. We set the number of iterations or the number of times the animation repeats.
  6. Lastly we set the rate. A rate of 1 is standard. A rate of 2 is twice as fast. Every time we click on the button, the animation speeds up.  The gears turn faster.
Controlling animation with LodeStar Script

Animation synced with narration

In the next example, graphics are synced to different points in the narration with the use of the timeline.

Modern browsers now offer us fast and efficient animations and audio support with the ability to synchronize the two.  This is a significant development in the web platform.

In the example below, the author added a narration and an SVG graphic to the same LodeStar page.  In the SVG graphic, the piston, connecting rod, crankshaft, valve, etc. are all SVG elements.  As you can see in the screenshot, the crankshaft is selected and shows an ID of ‘crank2’.  This helps to identify the element in the animation editor.

Synching animation to narration

After the audio narration was imported and the SVG graphic created, the author launched the animation editor.  The play button now plays the audio narration and the animation.  The author can pause the narration and add keyframes to control the position of the piston, the position and rotation of the connecting rod and so forth.
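This pairing is possible because both the audio element and a Web Animations API animation expose a controllable playhead.  A minimal sketch of the idea in plain JavaScript (the element ids are hypothetical; LodeStar wires this up for you):

// Keep an animation in lockstep with a narration's playhead.
var narration = document.getElementById("narrationAudio");
var anim = document.getElementById("piston").animate(
  [ { transform: "translateY(0px)" }, { transform: "translateY(120px)" } ],
  { duration: 30000, fill: "forwards" }  // one 30-second timeline
);
anim.pause(); // wait for the narration to start

narration.addEventListener("play",  function () { anim.play(); });
narration.addEventListener("pause", function () { anim.pause(); });
narration.addEventListener("seeked", function () {
  anim.currentTime = narration.currentTime * 1000; // audio seconds -> animation ms
});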

The pivot point or anchor point of the connecting rod is changed with the following buttons TL, TR, C, BL, BR.  These buttons place the pivot point top-left, top-right, center, bottom-left, and bottom-right respectively.  Essentially, we are pinning down the center or a corner so that the rotation happens around this point.  Under the hood, we are really changing the transformation origin.  The transformation origin is the point around which a transformation such as a rotation is applied.

Listening to narration in Animation Editor and adding keyframes to control position, opacity, scale and rotation

Finally, once a timeline is created for an element, it can be given a unique identifier and applied to any element with script. For example, if I rotated a rectangle with a timeline on one page, I could apply that animation to a triangle on another page with the use of script. The script function is webAnimate. (See Appendix A.)

Conclusion

In the sampler linked below, we can see multiple uses of animation.  On page one, we see a simple decorative animation of an attitude indicator or artificial horizon used in airplanes.  You can easily imagine how this can be applied to a simulation.

On page two, we illustrate how an SVG graphic is moved from left to right while its elements (the tires) are rotated.

On page three is the gear example.  Click on the faster button repeatedly to see a demonstration of how we controlled speed programmatically by changing the rate.

On page four we have a simple graphic with foreground and background synced to an audio file.

On page five, we immerse the viewer in a scene with the use of parallax.  Parallax is a visual effect where the background of a web page appears to stay still or move in the opposite direction of the foreground.

Finally, on page six we show a narration synced to an animation.  Pausing or replaying the narration causes the animation to pause or reset.  The narration and animation are synchronized.

(Best viewed in Chrome, Edge, and Safari)

Animation Sampler

https://lodestarlearning.github.io/Animations/index.htm

Author’s Note:

Animations were done in LodeStar 10 (Beta Build 5). The script for the four-cycle engine explanation was generated by ChatGPT, and the narration was text-to-speech using Amazon Polly.

Appendix A

webAnimate(elementid, timelineid, duration (optional), direction (optional), currentTime (optional), position (optional), callBack (optional))

animates the element’s CSS properties based on a timeline created with the animation editor, where elementid is the element ID (no hashtag), timelineid is the id of an existing timeline created in the editor, duration is the length of the animation in seconds, direction is the direction of play, currentTime is where to start the animation in seconds, position is the CSS position property (usually set to ‘absolute’ to support top and left movement), and callBack is the name of a page whose branch options will be called when the animation finishes.

The Role of Artificial Intelligence in Learning Experience Design

Introduction

I recently asked ChatGPT to list the five most influential instructional designers. 

ChatGPT, of course, is an artificial intelligence chatbot that has grabbed the attention of the world.  Locally, it was put to the test by the University of Minnesota Law School. It answered questions from law school exams in several legal subject areas and, when judged blindly, it earned a C+.  On every front, businesses and institutions are assessing opportunity – and risk. (New York City has blocked ChatGPT on school devices and networks*.)  Every news outlet is talking about it. Artificial intelligence has stepped off the silver screen and into our lives in a new way.

ChatGPT Lists 5 Influential Instructional Designers

I’ve asked ChatGPT several questions over the past two months.  But last evening, I turned its attention on instructional design and learning experience design.

So, I asked it: who are the five most influential instructional designers?

Here was ChatGPT’s reply:

Influential Instructional Designers:

  1. Richard Mayer – Known for his work on multimedia learning and cognitive theory of multimedia learning, Mayer has made significant contributions to the field of instructional design.
  2. Clark Aldrich – A pioneer in the field of simulation-based learning, Aldrich has designed numerous training programs and simulations that are used widely in corporate and military settings.
  3. Michael Allen – The founder of Allen Interactions, Allen has designed over 1,000 e-learning courses and is known for his work in instructional design and performance support.
  4. Julie Dirksen – Dirksen is a leader in the field of user experience (UX) design for learning, and has written several influential books on instructional design and UX.
  5. Will Thalheimer – A researcher and consultant, Thalheimer is known for his work on evidence-based instructional design and his writing on instructional design best practices.

The five designers listed are certainly influential. In our Learner Experience Design graduate courses at Metropolitan State University, we’ve included the writings and/or texts of Clark Aldrich, Michael Allen and Julie Dirksen.  In working with faculty, I’ve used research from both Richard Mayer and Will Thalheimer. 

And so, I liked its answer and my curiosity was piqued.  To explore further the role of AI in Learning Experience Design, I wondered if ChatGPT could provide content for a quick lesson on that very topic — Learning Experience Design. (It also gave me the opportunity to take the latest nightly build of LodeStar 10 on a little road test.)

The query produced interesting results.

Disclaimers

Before I show that content, a few admissions and disclaimers.  First, I’m not advocating for the use of AI in the instructional design process.  In the near future, I will certainly evaluate its utility.  Secondly, in my example I offered ChatGPT no directives.  I didn’t specify grade level, target audience, prerequisite knowledge, cultural or situational context.  I simply asked it to spit out information.

A further admission is that I’m very uneasy about the topic of artificial intelligence.  AI obviously presents both opportunity and challenge.  I have a dystopian dread over AI and yet I recognize the many opportunities that it presents.  I’ve long held out hope for truly adaptive learning systems that could individualize the scope and sequence of instruction for the benefit of learners.  Many systems have appeared on the market with ‘semantic’ engines that don’t quite measure up to a system that can learn from a student’s successes, missteps, preferences and confidence level, and make appropriate decisions. Adaptive learning is one area that benefits tremendously from AI. 

The other thing I recognize is that the information that ChatGPT prints out is, at best, raw material for an instructional designer.  The design must include learning experiences that help students engage with the material, think critically, make decisions, make meaning, and make memory.

David Wiley blogs about this more eloquently in AI, Instructional Design, and OER – improving learning (opencontent.org).  Here is an excerpt from his blog:

“What distinguishes an educational resource from an informational resource is that the latter shows some intentional application of knowledge about how people learn. I have previously argued that the minimum amount of effort you could invest to convert an informational resource into an educational resource was to add practice with feedback. That simple sounding task quickly explodes in complexity as you consider the research on what form that practice should take, how long it should last, when it should be scheduled, what kind of feedback should be provided, whether the feedback should come immediately or after some delay, etc.”

                              David Wiley 

That leads to my biggest disclaimer.  Just for fun, I wanted to see what ChatGPT would produce and how I could use it in a project with minimum effort. 

David Wiley writes:

“LLMs will dramatically increase the speed of creating the informational resources that comprise the content infrastructure. Of course the drafts of these informational resources will need to be reviewed and improvements will need to be made – just as is the case with all first drafts – to insure accuracy and timeliness. But it appears that LLMs can get us 80% or so of the way to reasonable first drafts orders of magnitude faster, eliminating the majority of the expense involved in this part of the process.” 

I took the ChatGPT output and sprinkled in some activities.  The disclaimer is that these are not activities that any of the above authors would endorse.  The design is just a simple present-and-check.  But I would hazard to say it’s not unlike the content that gets pumped out by course mills.  It’s not unlike the content that one consumes to meet a compliance check or earn continuing professional education credit.  It’s not unlike most of the stuff out there that our 5 influential instructional designers decry. 

In doing this short exercise, I was reminded of David Thornburg’s often quoted “Any teacher that can be replaced by a computer, deserves to be.”  Perhaps, any instructional designer who can be replaced by an artificial intelligence chatbot deserves to be.  Or perhaps the real line of enquiry should be:  can a chatbot be trained to make decisions that imitate an instructional designer?  Can a chatbot be trained to pass the LXD equivalent of a Turing test? (See appendix A for what happened when I gave ChatGPT more information about the audience.)

But disclaimers aside, it is intriguing to me to view what ChatGPT produced. 

Rules of Engagement

Here were my rules when applying the content:

  • Spend very little time (under two hours)
  • Retain all of ChatGPT’s text – even if there were errors (for example, Bloom’s Taxonomy was produced in 1956, not the sixties)
  • Format the content in different ways, including placing it in a timeline
  • Add videos or audio from external sources to supplement ChatGPT and provide some human connection.
  • Return ChatGPT’s content to the engine and ask ChatGPT to create a final quiz from that content

You can see this for yourself by clicking on the following link and/or joining me on a screen-by-screen tour (not all screens shown).

Link to content: Learner Experience Design ChatGPT (lodestarlearning.github.io)

Screen by Screen Commentary

First, in my title screen, I give full credit to ChatGPT.

I asked ChatGPT to explain instructional design versus learning experience design.  I simply presented the information as a choice between two explanations: one that matched Learning Experience Design and the other that matched Instructional Design. ChatGPT provided the content; I added this little warm-up learner engagement.

LodeStar Question Widget

I added ChatGPT’s definition of ‘What is Learning Experience Design’.

I then added a video from YouTube so that we have an LXD person describe the characteristics of LXD.  This reinforces the information and humanizes it.

LodeStar Video Widget

I asked ChatGPT to list the important events in the history of Instructional Design.  It put out a list of events.  Interestingly, the earliest event was the publishing of Didactica Magna in the 1600s. I took that information and presented it in an interactive timeline.

LodeStar Timeline Widget

ChatGPT output the importance of Learner’s Needs and Goals.  I added a video where the host really challenges his guests on how LXD designers actually translate goals and user research into meaningful experiences. 

ChatGPT output an explanation of instructional methodology.  I did a strike-through on key words and asked LodeStar to convert them into fill-in-blanks.  I then provided a randomized word list.   So rather than just reading, the learner has to think about the context and type in the appropriate word.

LodeStar Fill-in-the-Blank Page Type
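For the curious, the underlying technique is simple to sketch in plain JavaScript. This is an illustration of the general approach, not LodeStar's actual code; the function name and CSS class are hypothetical:

    // Marked words become input fields; the removed words are shuffled
    // into a word bank so their order gives no hints.
    function buildFillInTheBlank(text, markedWords) {
      const bank = [...markedWords];
      let html = text;
      for (const word of markedWords) {
        // Replace each marked word with an input that remembers its answer.
        html = html.replace(word, '<input class="blank" data-answer="' + word + '">');
      }
      // Fisher-Yates shuffle of the word bank.
      for (let i = bank.length - 1; i > 0; i--) {
        const j = Math.floor(Math.random() * (i + 1));
        [bank[i], bank[j]] = [bank[j], bank[i]];
      }
      return { html, bank };
    }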

To supplement ChatGPT’s explanation of Technology and Tools, I added a podcast of instructors talking about the use of technology and tools in response to COVID.

LodeStar Audio Widget

In the Assessment and Feedback section I added a word problem related to calculating the volume of a cylinder (as an example).  The dimensions of the cylinder (radius and height) are randomly generated from a defined range.  Students can work on these problems all day and always get a new set when they refresh the lesson.

LodeStar Word Problem Widget
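The randomization itself is easy to picture in JavaScript. This is a sketch of the technique, not the widget's actual code, and the ranges are hypothetical:

    // Draw radius and height from a defined range, so every refresh
    // yields a fresh set of numbers.
    function randomInRange(min, max) {
      return Math.floor(Math.random() * (max - min + 1)) + min;
    }

    const radius = randomInRange(2, 10);   // cm
    const height = randomInRange(5, 20);   // cm
    const volume = Math.PI * radius * radius * height;

    console.log(`A cylinder has a radius of ${radius} cm and a height of ${height} cm.`);
    console.log(`Volume: ${volume.toFixed(2)} cubic cm`);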

For User Experience and Interaction Design, ChatGPT spat out content and I added a slideshow viewer of three interesting designs.

LodeStar SlideViewer Widget

Finally, I gave ChatGPT’s content back to the engine and asked it to produce an assessment.  I then took the question stems and answer options and fed them into a Multiple Choice/Multiple Select Question Widget.

Most of the activities were assigned a point value.  I then added a report page to send the results off to either a Learning Management System or a Learner Record Store (CMI5/xAPI).

LodeStar Report Page supports SCORM and CMI5/xAPI reporting
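Under the hood, a SCORM report boils down to a handful of standard API calls on an object the LMS exposes in a parent frame. Here is a simplified SCORM 1.2 sketch (the report page automates all of this; the score value is illustrative):

    // Walk up the frame hierarchy to find the API object the LMS provides.
    function findAPI(win) {
      while (win && !win.API && win.parent !== win) {
        win = win.parent;
      }
      return win ? win.API : null;
    }

    const api = findAPI(window);
    if (api) {
      api.LMSInitialize("");
      api.LMSSetValue("cmi.core.score.raw", "200");      // report the score
      api.LMSSetValue("cmi.core.lesson_status", "passed");
      api.LMSCommit("");                                  // persist the data
      api.LMSFinish("");
    }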

Finally,

I submitted the content to OpenAI’s text classifier so that it could determine whether the content was AI-generated.

Here is the result:

OpenAI announced the AI Text Classifier to help determine whether content was AI-generated

Conclusion

OpenAI already offers specific application programming interfaces (APIs) to its language processing engine. For example, it offers autocompletion through its Text Completion API. OpenAI is now inviting developers to its waiting list to use ChatGPT in new and existing applications. One can easily imagine the day when toolmakers will invoke the API commands in order to, in David Wiley’s words, “dramatically increase the speed of creating the informational resources that comprise the content infrastructure”.
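For illustration, here is roughly what a call to the Text Completion API looks like from JavaScript. The model name and parameters are examples and will change as OpenAI's offerings evolve:

    // A sketch of requesting a draft from OpenAI's Text Completion API.
    async function draftContent(prompt, apiKey) {
      const response = await fetch("https://api.openai.com/v1/completions", {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          "Authorization": `Bearer ${apiKey}`
        },
        body: JSON.stringify({
          model: "text-davinci-003",   // illustrative model name
          prompt: prompt,
          max_tokens: 500
        })
      });
      const data = await response.json();
      return data.choices[0].text;     // the generated draft
    }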

We’ve just completed LodeStar 10 and continue to test it. We have no current plans to interface with AI but have been added to the waiting list by OpenAI to explore ChatGPT API capabilities. Who knows where that will lead? In the meantime, our focus is to help designers start with simple projects and then offer them more capability as their designs grow from simple present-and-checks to short simulations, storytelling, decision-making scenarios and CCAF (Context-Challenge-Activity-Feedback) projects that the five influential instructional designers would endorse. As far as AI is concerned, we have a lot to learn about the role of AI in learning experience design. I would love to read your comments.

Doubtless, the course mills will go into overdrive on this technology. Caveat emptor.

Appendix A

In a second attempt, I gave ChatGPT a little more information about the target audience.

Here is my prompt:

Write about Learning Experience Design for an audience of high school students who know very little about instructional design or cognitive science. Write at an eighth grade level.

Here is ChatGPT’s response:

Learning Experience Design is all about creating educational experiences that are engaging, effective, and enjoyable. It’s a field that combines ideas from instructional design and cognitive science to help people learn new things.

Think about it like this: have you ever taken a boring class where you couldn’t stay focused or retain information? That’s where Learning Experience Design comes in! The goal is to create lessons that grab your attention and make you remember what you’ve learned.

The designers use research from cognitive science to understand how people learn best. For example, they know that people remember information better when they’re actively engaged in the learning process, instead of just passively listening. They also understand that people remember information better when it’s presented in a way that makes sense to them and connects to their own experiences.

So, Learning Experience Design combines these ideas to create learning experiences that are tailored to the way people learn. This might include interactive activities, simulations, games, videos, and more. The goal is to make learning a fun and memorable experience that sticks with you, so you can use what you’ve learned in the future.

Overall, Learning Experience Design is a field that’s all about making learning better. If you’re interested in making education more exciting and effective, this might be a great field for you to explore!

When I asked ChatGPT to comment on the reading level of this example, it answered:

This content is appropriate for high school students or early college students.

When I applied the Flesch-Kincaid Reading Grade Level test, it scored 10.4. Just saying.

Top Influences on the Development of LodeStar 10

Introduction

Modern web pages offer designers a rich palette of media types and standards to create engaging learning experiences. The web page has become an amazing success story.  It started as a battleground of competing standards and self-interests and has arguably matured to become a meeting ground. 

If you aren’t on the bleeding edge, you’ll benefit from the convergence of standards. On caniuse.com, browser support for many enabling technologies such as SVG and WebGL (explained later) shows up as green tiles across the table of browsers.  Green means these technologies are commonly supported, which is good news: learning experience designers can put them to work.

caniuse.com by Alexis Deveria, available under a Creative Commons Attribution 4.0 License

Many of today’s eLearning projects are essentially webpage applications with additional standards that support communication to learning management systems or learner record stores. Many of the technologies that make the web interactive, responsive, accessible, and expressive are the same technologies used in eLearning applications.  Most of the major eLearning authoring systems are web page design systems for web pages that are hosted in learning management or content management systems.  There are many exceptions, of course, which include augmented reality systems, gaming engines and environments, and other virtual spaces that are not built on HTML5.  But let’s stay focused, for a moment, on the web.

For maturing standards, the web has become a place of agreement.  In the not-too-distant past, basic HTML markup and styling had to address the many differences between browsers and how they interpreted World Wide Web Consortium (W3C) standards.  Even a technology that most of us take for granted, the audio file, once lacked a single format that every browser could play.  Designers had to choose both an audio format and a fallback format. Thankfully, that has changed.  All browsers can now legally play the .mp3 file, and the open Ogg Vorbis format is widely supported as well.

caniuse.com by Alexis Deveria, available under a Creative Commons Attribution 4.0 License

Soon the .m4a audio file (AAC) will be supported by all browsers, offering even higher-quality audio at a lower data cost.

caniuse.com by Alexis Deveria, available under a Creative Commons Attribution 4.0 License

But audio is only the beginning.  All modern browsers support GIF and animated GIF, JPEG, and PNG images, as well as motion video in the MPEG-4/H.264 format; animated PNG is supported everywhere except IE 11. 

All browsers support the language features of the last major revision to JavaScript (ECMAScript 2015, commonly called ES6).  JavaScript is the code that makes the web interactive.  It is the code that makes eLearning projects interactive.  Standardization allows all of us to benefit from the interactions that eLearning authoring tools produce with less worry about browser and device differences.  (I emphasize less worry because there is always something to worry about.)

Interactive 3D has become a new frontier for eLearning.  All major browsers support WebGL, which is a method of generating 3D graphics using JavaScript and hardware acceleration.  In the early ’90s, when I first created 3D worlds, I needed an entire lab of computers dedicated to rendering three-dimensional meshes into an animation that we would transfer onto a laser disc.  Today, WebGL enables us to render a mesh into a rotatable, scalable image in real time, all in a browser.  If you’re not familiar with WebGL, please read on.

In short, learning experience designers, instructors, and trainers can now use audio, video, imagery, text, three-dimensional graphics, scalable vector graphics, math markup, interactivity, and logic to realize their grandest designs and create engaging experiences for their learners.

On the eve of LodeStar 10’s release, I am taking stock of these standards and other influences that had a strong bearing on where our product is headed.  Like all toolmakers, I am keeping an eye on effective strategies as well as emerging and maturing technologies and am imagining the opportunities for designers as we work to make these technologies practical and accessible.

Here is a list of standards and strategies that are central to LodeStar’s current development.

Scalable Vector Graphics

A lot of our development has focused on Scalable Vector Graphics.  SVG offers the designer many benefits.  Simple graphics such as the famous SVG tiger pictured here keep their sharpness regardless of the display size and the resolution. They are scalable.  They also offer more opportunity for accessibility.  Scaling can help learners with low vision.  The SVG title is readable by most screen readers. Also importantly, the SVG graphic is made up of individual elements whose properties can be changed by program code or user interaction.

LodeStar displays SVG graphic

In the screenshot below, the tiger graphic is opened in an SVG editor in LodeStar.  The author has right-clicked on an eyeball and can now choose branch options based on selection, deselection, drag, hover over and hover out.  All of LodeStar’s branching options and script can be executed based on any of the above events.  For example, based on the click of an eye, things can happen: the eye color changes, an audio description plays, an overlay appears with a complete description of a tiger’s vision and so on.

With LodeStar, designers edit SVG graphics and add interactivity
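To make the idea concrete, here is a plain JavaScript approximation of such an interaction. This is not LodeStar's generated code; the element id and audio file name are hypothetical:

    // On a click of the eye, change its color and play an audio description.
    // SVG elements are ordinary DOM nodes, so standard event handling applies.
    const eye = document.getElementById("eye-left");
    eye.addEventListener("click", () => {
      eye.setAttribute("fill", "#2e86c1");       // change the eye color
      new Audio("tiger-vision.mp3").play();      // play an audio description
    });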

Importing PowerPoint as SVG

We’ve never been huge fans of starting an eLearning project as a PowerPoint.  That hasn’t changed, but LodeStar 10 does support importing a single PowerPoint slide or an entire PowerPoint presentation as a series of SVG pages. 

PowerPoint supports exporting a slide or series of slides as SVG.

PowerPoint Presentation

LodeStar 10 adds support for importing a single SVG image or an entire folder of SVG images.  LodeStar interrogates each slide and looks for things like Base64-encoded images.  PowerPoint converts imported images to a long string of characters called Base64.  This is a great format for transporting images inside a single file, but browsers tend to load and render Base64-encoded images very slowly.  LodeStar detects the Base64 encoding and then translates the characters back into an image file that is loaded into the project. 
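The conversion itself is straightforward. A minimal JavaScript sketch of the technique, assuming a standard data URI such as "data:image/png;base64,...":

    // Decode the Base64 characters back into bytes and wrap them in an
    // image file (Blob) that the browser can load quickly.
    function dataURIToBlob(dataURI) {
      const [header, base64] = dataURI.split(",");
      const mime = header.match(/data:(.*);base64/)[1];
      const bytes = atob(base64);                 // Base64 to raw byte string
      const buffer = new Uint8Array(bytes.length);
      for (let i = 0; i < bytes.length; i++) {
        buffer[i] = bytes.charCodeAt(i);
      }
      return new Blob([buffer], { type: mime });
    }

    // The Blob can then be referenced by a short object URL instead of a
    // multi-thousand-character string:
    // img.src = URL.createObjectURL(dataURIToBlob(longDataURI));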

The result is that vector graphics are editable as SVG elements, and embedded images load and display quickly.  The designer can display the slide as is, edit elements and add branch options to elements.

Designer edits a PowerPoint slide in SVG editor

MathML

For a short while, all browsers supported the MathML markup language, enabling math display without the need for add-ons.

Rendered MathML in LodeStar HTML editor

But there have been setbacks. We’re looking forward to the day when MathML is once again available in all browsers. Because that day seems likely, LodeStar continues to support MathML.

Support for MathML

MathML (Mathematical Markup Language) is endorsed by the W3C as the preferred way of displaying mathematics on a web page or in an eLearning application. MathML describes the structure and content of mathematical notation and provides a higher level of accessibility than simply displaying an image.  Designers can quickly edit and manipulate the size of a MathML expression.  This is an improvement over taking a picture of an equation, for example, and pasting the image into a presentation.  In the past, LodeStar automatically converted expressions into images, or it used the MathJax library to convert expressions written in LaTeX to MathML.  But now we’re banking on full support for MathML in the near future.
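For readers who have never seen MathML, here is the quadratic formula expressed in MathML markup and inserted into a page with a few lines of JavaScript. The container id is hypothetical; browsers with MathML support render this as typeset math rather than an image:

    // MathML describes structure: a fraction (mfrac), a square root (msqrt),
    // a superscript (msup), identifiers (mi), numbers (mn), operators (mo).
    const mathml = `
      <math xmlns="http://www.w3.org/1998/Math/MathML" display="block">
        <mi>x</mi><mo>=</mo>
        <mfrac>
          <mrow>
            <mo>-</mo><mi>b</mi><mo>&#xB1;</mo>
            <msqrt><msup><mi>b</mi><mn>2</mn></msup><mo>-</mo><mn>4</mn><mi>a</mi><mi>c</mi></msqrt>
          </mrow>
          <mrow><mn>2</mn><mi>a</mi></mrow>
        </mfrac>
      </math>`;
    document.getElementById("equation").innerHTML = mathml;  // hypothetical container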

SimpleSim

For years, LodeStar offered the Interviewer Page Type to support what we called decision-making scenarios and simple simulations.  We continue to offer that page type but have expanded the number of layout options for interactive decision-making. 

For starters, we added a new page type called the SimpleSim.  This page type supports graphics, interactive widgets, text, and whatever else is needed to set the scene.  At center stage are the situational prompt and three decision options (as pictured below).   All of LodeStar’s branching options can be invoked based on the learner’s choice.   For example, the ‘Jump to Page’ branch option can bring up a scene that matches the choice and advances the narrative.   Branching options also allow us to add feedback, keep track of points, collect user responses, and so forth.
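Conceptually, a SimpleSim decision reduces to a small data structure: one situational prompt, three options, and a branch action per option. A JavaScript sketch, illustrative only and not LodeStar's actual file format:

    const decision = {
      prompt: "A key stakeholder asks you to cut the project timeline in half. What do you do?",
      options: [
        { text: "Agree and quietly drop the needs analysis", branch: { jumpToPage: "scene-shortcut" },   points: 0 },
        { text: "Negotiate scope before committing",          branch: { jumpToPage: "scene-negotiate" }, points: 10 },
        { text: "Escalate to your sponsor",                   branch: { jumpToPage: "scene-escalate" },  points: 5 }
      ]
    };

    function choose(index) {
      const option = decision.options[index];
      // In LodeStar, the matching branch option fires here: feedback,
      // points, and the jump to the scene that advances the narrative.
      console.log(`Jump to ${option.branch.jumpToPage}, award ${option.points} points`);
    }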

To style the scene shown below, the author used a palette for the color scheme, added a header graphic through Tools>Themes, selected a layout style that set the window width and navigation at the top, and added a background graphic.   The use of palettes, themes, layouts and page types enables the author to control every aspect of this simple simulation, including the interactivity.

Look and feel is controlled by Layout, Theme, and Palette

CCAF

It’s no secret that we are huge fans of Dr. Michael Allen’s Context-Challenge-Activity-Feedback model.  In a project intended to improve employees’ Social Selling Index (SSI) on LinkedIn, we set the context as a simulated LinkedIn.  For the challenge, the learner must improve the main character’s SSI score by providing the right advice and interacting with a simulated profile, notifications, messaging, etc. – just like LinkedIn!

LinkedIn Simulation

CCAF projects are not page-turners or present-and-checks.  They can be quite advanced.  To support a more sophisticated interaction than the display of content and multiple-choice questions, LodeStar offers LodeStarScript, which can be written in the Execute Command branch option.

LodeStarScript enables designers to change the properties of graphics on the fly, including SVG graphics.  Properties can include color, position, image source, rotation, opacity, etc.  LodeStarScript offers the designer the power of conditional logic, loops, local and global scoped variables, and a long list of functions.

In the simulation below, the learner can select a camera aperture and control exposure.  The effects of exposure are simulated with the simple change of the color and opacity properties of an SVG element.

Camera simulation with LodeStarScript
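In plain JavaScript terms, the exposure effect amounts to something like the following. LodeStarScript expresses the same idea in its own syntax; the element id and f-stop mapping here are hypothetical:

    // A dark SVG overlay whose opacity falls as the aperture opens.
    // Smaller f-numbers admit more light, so the overlay thins out.
    function setExposure(fStop) {
      const overlay = document.getElementById("exposure-overlay");
      const opacity = Math.min(1, Math.max(0, (fStop - 1.4) / (22 - 1.4)));
      overlay.setAttribute("opacity", opacity.toFixed(2));
    }

    setExposure(5.6);   // mid-range aperture: a partially darkened scene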

xAPI/CMI5

Megan Torrance, a veteran of learning design, authored a research paper sponsored by The Learning Guild.  I won’t steal her thunder; read the paper for yourself.  But I’ll cite two findings from her research that tell the story of xAPI.

In a survey of 368 respondents, the majority of whom belong to organizations that create or purchase learning solutions, 44.9% indicated that “We are interested in xAPI but have not used it at all.” 

Version 1.0 of xAPI was released way back in 2013, and yet 10 years later adoption is not widespread.

So what is xAPI, how does it relate to CMI5, and why are we so interested in it?  In short, xAPI and CMI5 are game changers.  They are not the same thing but they are close cousins.  An eLearning activity that uses CMI5 can generate an xAPI statement, which gets recorded in a Learner Record Store.  CMI5 can also tell the LMS whether the learner passed or failed. 

So, let me be a little more specific.

With these technologies, I can store my eLearning projects in my own repository (GitHub, for example).  I can then import a very lean and simple file into the Learning Management System, which tells the LMS where to launch the activity from.  The LMS then passes learner information and a token for secure communication to the activity.
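To make that concrete: a CMI5 activity reads its launch parameters from the URL the LMS constructs. The parameter names below come from the CMI5 specification; the parsing code is a minimal sketch:

    const params = new URLSearchParams(window.location.search);
    const endpoint     = params.get("endpoint");      // the xAPI endpoint (the LRS)
    const fetchUrl     = params.get("fetch");         // POST here once to retrieve the auth token
    const actor        = JSON.parse(params.get("actor"));  // the learner, as an xAPI agent
    const activityId   = params.get("activityId");    // this activity's id
    const registration = params.get("registration");  // the enrollment id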

CMI5 uses xAPI technology but it also understands the vocabulary that LMSs require.  Pass/Fail.  Incomplete/Complete.  xAPI reports to a learner record store any statement that the designer has added to the eLearning activity.  ‘Learner has reached Level Two.  Learner completed a video.  Learner attempted Level Three four times.’  CMI5 can generate any kind of xAPI statement in the form of learner actions.  In addition, CMI5 can tell the LMS whether the learner passed and/or completed the module.
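An xAPI statement itself is just structured data: an actor, a verb, and an object, posted to the Learner Record Store over HTTP. A sketch with placeholder ids, endpoint, and credentials:

    const statement = {
      actor:  { name: "Learner Name", mbox: "mailto:learner@example.com" },
      verb:   { id: "http://adlnet.gov/expapi/verbs/passed",
                display: { "en-US": "passed" } },
      object: { id: "https://example.com/activities/module-1",
                definition: { name: { "en-US": "Module One" } } }
    };

    // POST the statement to the LRS. The version header is required by xAPI.
    fetch("https://lrs.example.com/xapi/statements", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "X-Experience-API-Version": "1.0.3",
        "Authorization": "Basic " + btoa("user:password")
      },
      body: JSON.stringify(statement)
    });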

The second finding: among the reasons respondents don’t yet use xAPI are lack of knowledge, the absence of a Learner Record Store, and learning management systems that don’t support it.

I am extremely fortunate in that our Learning Management System is Prolaera.  It is designed for the CPA industry.  Prolaera can import a CMI5 activity.  As a result, I can do the following:

  1. Send a statement about the learner reaching Level 5 to the learner record store.
xAPI statement

  2. Read a list of learner experiences from the Learning Management System’s Learner Record Store (the learner’s name has been erased from the screenshot).

Learner Record Store

From the screenshot above, you can see that we can report on any learner experience.  For example, the learner first experienced the results page with a score of 200 points.  We can also see that the learner passed, satisfied the requirements, completed the module and terminated the activity.  These are all terms that the Learning Management System understands.

It may take time, but CMI5/xAPI will eventually be widely adopted.  These standards are incredibly important to the advancement of eLearning.  It begins with awareness. The more designers learn about these standards, the more they can encourage their learning management system vendors to support them.  In the meantime, we are ready!

3D

Glen Fox’s Littlest Tokyo is a great example of what is possible with three-dimensional objects viewed in a browser.  The model is beautifully detailed, with a running streetcar animation as an integral part of the 3D object.

Littlest Tokyo, by Glen Fox

Designers will be able to use free tools like Blender, TinkerCAD, SketchUp, or even their smartphones to produce 3D meshes.  

Smartphones like the iPhone 12 Pro come equipped with LIDAR.  LIDAR emits a laser pulse that reflects off of solid surfaces and returns to a sensor on the smartphone.  The round-trip duration is noted, and from that, the software can accurately position the solid surface in three-dimensional space. LIDAR has been available in specialty instruments for a long time, but for designers to use this technology practically, the software needed to improve.

However the 3D model gets created (3D graphics software, downloaded from a warehouse, or generated by LIDAR), it can then be loaded into a viewer and manipulated (scaled, rotated, navigated) by the learner.  Imagine vital organs, historical places, or complicated machines as manipulable objects. 

Currently, we’re working on a loader and viewer for 3D Models.  The first LodeStar 10 release won’t include a 3D model viewer, but we’ll introduce it later in a minor release.
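To give a sense of what such a viewer involves, here is a minimal sketch using the open-source three.js library. The model URL is a placeholder, and the import paths assume a standard three.js setup with an import map:

    import * as THREE from "three";
    import { GLTFLoader } from "three/addons/loaders/GLTFLoader.js";
    import { OrbitControls } from "three/addons/controls/OrbitControls.js";

    // Scene, camera, and hardware-accelerated WebGL renderer.
    const scene = new THREE.Scene();
    const camera = new THREE.PerspectiveCamera(60, innerWidth / innerHeight, 0.1, 100);
    camera.position.set(2, 2, 4);
    const renderer = new THREE.WebGLRenderer({ antialias: true });
    renderer.setSize(innerWidth, innerHeight);
    document.body.appendChild(renderer.domElement);

    new OrbitControls(camera, renderer.domElement);   // rotate, scale, navigate
    scene.add(new THREE.AmbientLight(0xffffff, 1));

    // Load a glTF model (placeholder URL) and start the render loop.
    new GLTFLoader().load("model.glb", (gltf) => {
      scene.add(gltf.scene);
      renderer.setAnimationLoop(() => renderer.render(scene, camera));
    });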

In the meantime, we do support photospheres.  Photospheres use the same underlying technology: WebGL. WebGL enables hardware-accelerated image processing and rendering onto the HTML5 canvas.  The hardware is a dedicated processor called the Graphics Processing Unit, or GPU.
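The photosphere technique itself is elegant: map an equirectangular photo onto the inside of a sphere and place the camera at its center. A sketch, again in three.js terms (scene, camera, and renderer as in the previous sketch; the image file is a placeholder):

    // Inverting the geometry on the x-axis turns the sphere inside out,
    // so its faces and the photo mapped onto them face the viewer.
    const geometry = new THREE.SphereGeometry(500, 60, 40);
    geometry.scale(-1, 1, 1);

    const texture = new THREE.TextureLoader().load("gallery.jpg");  // placeholder image
    const material = new THREE.MeshBasicMaterial({ map: texture });
    scene.add(new THREE.Mesh(geometry, material));  // camera at the origin looks out at the photo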

The photosphere that appears in the screenshot shows a distorted view of an art gallery.  The first art gallery image (shown below) was produced in Blender.  The second art gallery image was taken with an iPhone at the Minnesota Marine Art Museum in Winona.

Photosphere created in Blender
Photosphere created in iPhone

The images appear distorted – in fact, spherical.

Once in LodeStar, the designer can add images, markers, and hotspots to the photosphere.  All of these things get correctly positioned on the sphere.

In the LodeStar editor below, I am adding Lawren Harris’ paintings to the gallery as well as hotspots.  A hotspot click takes the learner to another room in the gallery.  A click on the painting brings up an image overlay.  A click on the video graphic starts a video. 

LodeStar editor adds interactivity to Photosphere

The end result:

Interactive Art Gallery on the Group of Seven

Conclusion

2023 marks the twentieth anniversary of LodeStar Learning.  We filed with the Minnesota Secretary of State on March 11, 2003.  I’m pleased that LodeStar has adapted to all of the technology changes over the years.  LodeStar began as code embedded in Lotus’ LearningSpace.  It then enabled instructors to create rich learning activities in ActionScript and Flash.  In 2013, LodeStar Learning pivoted to a whole new generation of software that used HTML5.  LodeStar 10 continues that progression and harnesses the power of HTML5, SVG, 3D, and so much more to help designers create great learning experiences.