Designing with Animation

Animation can enrich the learning experience and, when used appropriately, improve learning outcomes.  New standards and new technology under the hood of the modern browser give learning experience designers a whole new set of tools and techniques to apply to their designs.  I’ll parse out the different types of animations and their uses, discuss the underlying technologies and present examples. 

To start, let's revisit Richard Mayer, the author of Multimedia Learning.

Mayer defines the concise narrated animation. Concise, he explains, refers to a focus on the essential steps in the process. If the objective is related to understanding how a four-cycle engine works, then a concise animation includes only the details that relate to that objective. For example, I might include a simple animation of a piston traveling up and down a cylinder, compressing gas, power stroking from the combustion, exhausting the spent fuel and then refilling with fresh gas.

Wikimedia Commons (CC BY-SA 3.0)

Concise in this example means that we focus on the crankshaft, the traveling piston, the ignition, the intake and the exhaust. The learner is not distracted by anything unrelated to the objective.

The image above is an animated GIF.  Most authoring tools can easily import an animated GIF. This technology, however, has its uses and its limitations.  It is difficult to use an animated GIF in a concise narrated animation because synchronizing the four cycles with the narration is difficult.  If you understand four-cycle engines, the GIF makes sense.  If not, narration will help learners understand each cycle of the process.  Fortunately, there are easy ways to sync animation with narration.  This brings us to the timeline animation.

Timeline Animation

The basic idea behind a timeline is that you control animation effects according to time.  In tools like Storyline, each row of the timeline represents a different screen element.  You can then apply entrance and exit animations to the screen element at specific times.

The timeline in our authoring tool, LodeStar, works differently.   LodeStar displays one timeline per graphical element.  Each row represents a different property such as left, top, opacity, rotation and scale.   With these five properties you can control the position of an element on the x and y axis at a specific time and you can control fade-in and fade-out, rotation and size.  These are the properties that designers typically want to animate.

In the screenshot below, you can see that I have two gears: a left gear and a right gear.  When I select a gear or choose it from the lower right pull-down menu, I get its corresponding timeline.

As you can see on the timeline, each row represents a different property.

LodeStar Animation Editor with SVG graphic

To understand the positioning properties, we need to understand the difference between the SVG image and its sub-elements.  We also need to understand other types of images like PNGs, GIFs, and JPEGs.  These are called bitmapped or raster graphics because that is precisely what the graphic is:  a map of binary digit (bit) values for every colored pixel.  The rules for positioning bitmapped graphics and SVG elements differ.

To understand this, let’s first tackle the SVG graphic.

SVG

SVG stands for Scalable Vector Graphics. It is a vector image format that can be scaled up or down without losing any image quality. This differs from bitmap or raster graphics, which are made up of pixels and can become pixelated or blurry when they are resized.

SVG graphics are great for logos, icons, and other types of graphics that need to be scalable and look crisp at any size. They can also be easily edited with text editors or graphic design software, and they support interactivity and animation through script.

We can think of an SVG graphic in two ways: as a whole image or as a collection of elements.  Examine the SVG image of an old Buick below.  The automobile can be animated from left to right, for example, like any bitmapped graphic.

SVG Graphic on a LodeStar page

We can do that with a timeline:

Timeline applied to graphic

The timeline shows a duration of 5 seconds.  At 0 seconds the car’s left property is 1.  That means 1% of the width of the window.  At 5 seconds, the left value is 100%.  This means that the left edge of the graphic will be at the right edge of the window – in other words the car rolls off the screen.

This animation would be less than satisfactory because the tires don’t rotate.  So, if we only animated the entire graphic we’d get an inferior result.

That now brings us to SVG elements.  Loosely described, these are the sub-elements of the SVG graphic.  They consist of polygons, lines, rectangles, ellipses, paths, layers, groups, and more. In the screenshot below of LodeStar's SVG editor, we see that the rear tire is selected.  It has a cryptic id, which we can change to an easier name.  Whatever the name, this element is both programmatically addressable (meaning we can change it with a simple script) and separately animate-able.  For example, we can rotate the tire.  Now we can move the whole graphic from left to right and rotate two of its sub-elements to improve the animation. (You will see this in the animation sampler introduced in the conclusion.)

SVG graphic with element selected

Bitmapped Graphics

Bitmapped graphics, also known as raster graphics, are digital images made up of tiny colored squares called pixels. Each pixel represents a small portion of the image and can be assigned a specific color value. Generally, only programs like Photoshop allow us to manipulate bitmapped graphics at the pixel level.  Examples of bitmapped graphics are PNGs, JPEGs, and GIFs.

With bitmapped graphics we can animate the entire graphic’s position, opacity, rotation and scale.  But we can’t take one of its subparts (a small sub-section of pixels) and independently animate that section.  At least, not without sophisticated code. Nevertheless, bitmapped graphics have their advantages. Any photorealistic image is best captured in a bitmapped graphic.

The animate-able properties

In LodeStar, the meaning of left and top differs between SVG graphics and bitmapped graphics.  To best explain this, we need to place images in three categories: the entire SVG graphic, an SVG element (sub-element), and a bitmapped graphic that is not inside an SVG graphic.

For SVG elements, left means a translation or change along the x axis.  1 means that the graphic has shifted 1 pixel to the right.  100 means that the graphic has shifted 100 pixels to the right.

For images (including JPEGs, PNGs, GIFs and the SVG graphic as a whole), left means a percentage of the window.  0 means that the graphic is painted at the very left of the window.  50 means that the graphic is painted halfway across the window on the x axis.  The reason for the difference is that LodeStar projects maintain their responsiveness to devices with different screen widths whenever possible.
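In standard web terms, the two conventions look roughly like this (a sketch; the element ids are illustrative):

// An SVG element: left is a pixel translation along the x axis.
var gear = document.getElementById("gear1");
gear.setAttribute("transform", "translate(100, 0)"); // left = 100: shifted 100 pixels right

// A bitmapped image (or the SVG graphic as a whole): left is a percentage of the window.
var photo = document.getElementById("oldBuick");
photo.style.left = "50%"; // left = 50: painted halfway across the window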

Technically, when we assign a timeline to an image, we remove it from its normal place in the HTML flow and assign it an absolute position.

If you don't want to remove the image from the flow (its position in the document), you can lock the image position in the image dialog.

An absolute position means a position relative to the image's parent.  If the parent is 1000 pixels wide, a left position of 10 places the image 100 pixels from its parent's left edge. If the image is positioned beyond its parent's boundaries, it is hidden.

SVG elements are displayed inside a viewbox.  We transform the position, scale and rotation of these elements without removing them from the flow. They are painted or shown inside the viewbox.  If we shift the position beyond the boundaries of the viewbox, the element is clipped or hidden.

For SVG elements it is important, when adding a left, top or rotate keyframe, to also add keyframes at the same time offset for the other two properties. SVG transformations (changes of position and rotation) are defined by all three properties.

Controlling animations with script

LodeStar animations can be controlled by the timeline, as we’ve seen in the example above.  They can also be controlled by script or by a combination of timeline and script.

Let's return to the gears example. In this example there are two SVG elements inside an SVG graphic.  The SVG graphic itself is not animated at all.  However, its elements (the gears) are positioned and rotated.  One gear is rotated from 0 to 360 degrees; the other is rotated from 35 to -325 degrees.  This causes the gears to rotate in opposite directions at a slight offset from one another so that they mesh.

A separate timeline for each graphic or SVG element

In the example we also positioned a rectangle with rounded corners at the bottom of the viewbox.  We are treating this graphic as a button. We added a branch option to the rectangle, which converts it into a button that responds to clicks.

The branch option that we applied to the rectangle is called a ‘Select Branch Option’.  When clicked, the button executes the following script:

appendValue("rate", 1);
var rate = getValue("rate");
updateAnimation("1681686408319", "play", "", "", 10, rate);

In this script, we are adding 1 (appending) to a stored value named “rate”.  We then get that value from storage and assign it to a variable named ‘rate’.  ‘var’ means variable. 

In the third line we use the variable in a function called updateAnimation().  This function allows us to

  1. Identify a page by a page unique identifier called a UID.
  2. Set the state of the animation:  play, pause, or reverse
  3. Optionally we can set the current time in the animation.  By default it starts at 0 seconds. That is why we use “” in the function and don’t bother setting the current time.
  4. Optionally, we can also state the duration of the animation.  By default, the duration is set by the timeline.  We therefore use “” in the function.  We could shorten or lengthen the duration.
  5. We set the number of iterations or the number of times the animation repeats.
  6. Lastly we set the rate. A rate of 1 is standard. A rate of 2 is twice as fast. Every time we click on the button, the animation speeds up.  The gears turn faster.
Controlling animation with LodeStar Script
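Read back against the parameter list, the call from the button script can be annotated as follows (the comments are mine):

updateAnimation(
    "1681686408319", // the page's unique identifier (UID)
    "play",          // state: play, pause, or reverse
    "",              // current time: "" accepts the default start of 0 seconds
    "",              // duration: "" accepts the duration set by the timeline
    10,              // iterations: the animation repeats ten times
    rate             // rate: 1 is standard speed; each click increases it by 1
);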

Animation synced with narration

In the next example, graphics are synced to different points in the narration with the use of the timeline.

Modern browsers now offer us fast and efficient animations and audio support with the ability to synchronize the two.  This is a significant development in the web platform.

In the example below, the author added a narration and an SVG graphic to the same LodeStar page.  In the SVG graphic, the piston, connecting rod, crankshaft, valve, etc. are all SVG elements.  As you can see in the screenshot, the crankshaft is selected and shows an ID of 'crank2'.  This helps to identify the element in the animation editor.

Synching animation to narration

After the audio narration was imported and the SVG graphic created, the author launched the animation editor.  The play button now plays the audio narration and the animation.  The author can pause the narration and add keyframes to control the position of the piston, the position and rotation of the connecting rod and so forth.

The pivot point or anchor point of the connecting rod is changed with the following buttons: TL, TR, C, BL, BR.  These buttons place the pivot point top-left, top-right, center, bottom-left, and bottom-right, respectively.  Essentially, we are pinning down the center or a corner so that the rotation happens around this point.  Under the hood, we are really changing the transformation origin: the point around which a transformation such as a rotation is applied.
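In standard CSS terms, the TL button effectively does the following (a sketch; the element id is hypothetical):

var rod = document.getElementById("connectingRod"); // hypothetical id for the connecting rod
rod.style.transformBox = "fill-box";     // measure the origin against the element's own box
rod.style.transformOrigin = "top left";  // pin the pivot to the element's top-left corner
rod.style.transform = "rotate(25deg)";   // the rotation now happens around that pinned point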

Listening to narration in Animation Editor and adding keyframes to control position, opacity, scale and rotation

Finally, once a timeline is created for an element it can be given a unique identifier and applied to any element with script. For example, if I rotated a rectangle with a timeline on one page, I could apply that animation to a triangle on another page — with the use of script. The script function is webAnimate. (See appendix A)

Conclusion

In the sampler linked below, we can see multiple uses of animation.  On page one, we see a simple decorative animation of an attitude indicator or artificial horizon used in airplanes.  You can easily imagine how this can be applied to a simulation.

On page two, we illustrate how an SVG graphic is moved from left to right while its elements (the tires) are rotated.

On page three is the gear example.  Click on the faster button repeatedly to see a demonstration of how we controlled speed programmatically by changing the rate.

On page four we have a simple graphic with foreground and background synced to an audio file.

On page five, we immerse the viewer in a scene with the use of parallax. Parallax is a visual effect where the background of a web page appears to stay still or move in the opposite direction of the foreground.

Finally, on page six we show a narration synced to an animation.  Pausing or replaying the narration causes the animation to pause or reset.  The narration and animation are synchronized.

(Best viewed in Chrome, Edge, and Safari)

Animation Sampler

https://lodestarlearning.github.io/Animations/index.htm

Author’s Note:

Animations were done in LodeStar 10 (Beta Build 5). The script for the 4-cycle engine explanation was generated by ChatGPT, and the narration was produced with text-to-speech using Amazon Polly.

Appendix A

webanimate(elementid, timelineid, duration (optional), direction (optional), currentTime (optional), position (optional), callBack (optional))

Animates the element's CSS properties based on a timeline created with the animation editor, where elementid is the element's ID (no hashtag), timelineid is the ID of an existing timeline created in the editor, duration is the length of the animation in seconds, currentTime is where to start the animation in seconds, position is the CSS position property, usually set to 'absolute' to support top and left movement, and callBack is the name of a page whose branch options will be called when the animation finishes.
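A hypothetical call might look like this; every name below is a placeholder, and "normal" is an assumed direction value:

// Animate the element "triangle1" using the editor-built timeline "spinTimeline",
// over 5 seconds, starting at 0 seconds, positioned absolutely; the branch options
// of the page "wrapUp" run when the animation finishes.
webanimate("triangle1", "spinTimeline", 5, "normal", 0, "absolute", "wrapUp");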


The Role of Artificial Intelligence in Learning Experience Design

Introduction

I recently asked ChatGPT to list the five most influential instructional designers. 

ChatGPT, of course, is an artificial intelligence chatbot that has grabbed the attention of the world.  Locally, it was put to the test by the University of Minnesota Law School. It answered questions from law school exams in several legal subject areas and, when judged blindly, it earned a C+.  On every front, businesses and institutions are assessing opportunity – and risk. (New York City has blocked ChatGPT on school devices and networks*.)  Every news outlet is talking about it. Artificial intelligence has stepped off the silver screen and into our lives in a new way.

ChatGPT Lists 5 Influential Instructional Designers

I’ve asked ChatGPT several questions over the past two months.  But last evening, I turned its attention on instructional design and learning experience design.

So, I asked it: who are the five most influential instructional designers?

Here was ChatGPT’s reply:

Influential Instructional Designers:

  1. Richard Mayer – Known for his work on multimedia learning and cognitive theory of multimedia learning, Mayer has made significant contributions to the field of instructional design.
  2. Clark Aldrich – A pioneer in the field of simulation-based learning, Aldrich has designed numerous training programs and simulations that are used widely in corporate and military settings.
  3. Michael Allen – The founder of Allen Interactions, Allen has designed over 1,000 e-learning courses and is known for his work in instructional design and performance support.
  4. Julie Dirksen – Dirksen is a leader in the field of user experience (UX) design for learning, and has written several influential books on instructional design and UX.
  5. Will Thalheimer – A researcher and consultant, Thalheimer is known for his work on evidence-based instructional design and his writing on instructional design best practices.

The five designers listed are certainly influential. In our Learner Experience Design graduate courses at Metropolitan State University, we’ve included the writings and/or texts of Clark Aldrich, Michael Allen and Julie Dirksen.  In working with faculty, I’ve used research from both Richard Mayer and Will Thalheimer. 

And so, I liked its answer and my curiosity was piqued.  To explore further the role of AI in Learning Experience Design, I wondered if ChatGPT could provide content for a quick lesson on that very topic — Learning Experience Design. (It also gave me the opportunity to take the latest nightly build of LodeStar 10 on a little road test.)

The query produced interesting results.

Disclaimers

Before I show that content, a few admissions and disclaimers.  First, I’m not advocating for the use of AI in the instructional design process.  In the near future, I will certainly evaluate its utility.  Secondly, in my example I offered ChatGPT no directives.  I didn’t specify grade level, target audience, prerequisite knowledge, cultural or situational context.  I simply asked it to spit out information.

A further admission is that I'm very uneasy about the topic of artificial intelligence.  AI obviously presents both opportunity and challenge.  I have a dystopian dread over AI and yet I recognize the many opportunities that it presents.  I've long held out hope for truly adaptive learning systems that could individualize the scope and sequence of instruction for the benefit of learners.  Many systems have appeared on the market with 'semantic' engines that don't quite measure up to a system that can learn from a student's successes, missteps, preferences and confidence level, and make appropriate decisions. Adaptive learning is one area that benefits tremendously from AI.

The other thing I recognize is that the information that ChatGPT prints out is, at best, raw material for an instructional designer.  The design must include learning experiences that help students engage with the material, think critically, make decisions, make meaning, and make memory.

David Wiley blogs about this more eloquently in AI, Instructional Design, and OER – improving learning (opencontent.org).  Here is an excerpt from his blog:

“What distinguishes an educational resource from an informational resource is that the latter shows some intentional application of knowledge about how people learn. I have previously argued that the minimum amount of effort you could invest to convert an informational resource into an educational resource was to add practice with feedback. That simple sounding task quickly explodes in complexity as you consider the research on what form that practice should take, how long it should last, when it should be scheduled, what kind of feedback should be provided, whether the feedback should come immediately or after some delay, etc.”

                              David Wiley 

That leads to my biggest disclaimer.  Just for fun, I wanted to see what ChatGPT would produce and how I could use it in a project with minimum effort. 

David Wiley writes:

“LLMs will dramatically increase the speed of creating the informational resources that comprise the content infrastructure. Of course the drafts of these informational resources will need to be reviewed and improvements will need to be made – just as is the case with all first drafts – to insure accuracy and timeliness. But it appears that LLMs can get us 80% or so of the way to reasonable first drafts orders of magnitude faster, eliminating the majority of the expense involved in this part of the process.” 

I took the ChatGPT output and sprinkled in some activities.  The disclaimer is that these are not activities that any of the above authors would endorse.  The design is just a simple present-and-check.  But I would hazard to say it's not unlike the content that gets pumped out by course mills.  It's not unlike the content that one consumes to meet a compliance check or earn continuing professional education credit.  It's not unlike most of the stuff out there that our five influential instructional designers decry.

In doing this short exercise, I was reminded of David Thornburg’s often quoted “Any teacher that can be replaced by a computer, deserves to be.”  Perhaps, any instructional designer who can be replaced by an artificial intelligence chatbot deserves to be.  Or perhaps the real line of enquiry should be:  can a chatbot be trained to make decisions that imitate an instructional designer?  Can a chatbot be trained to pass the LXD equivalent of a Turing test? (See appendix A for what happened when I gave ChatGPT more information about the audience.)

But disclaimers aside, it is intriguing to me to view what ChatGPT produced. 

Rules of Engagement

Here were my rules when applying the content:

  • Spend very little time (under two hours)
  • Retain all of ChatGPT's text – even if there were errors (for example, Bloom's Taxonomy was produced in 1956, not the sixties)
  • Format the content in different ways, including placing it in a timeline
  • Add videos or audio from external sources to supplement ChatGPT and provide some human connection.
  • Return ChatGPT’s content to the engine and ask ChatGPT to create a final quiz from that content

You can see this for yourself by clicking on the following link and/or joining me on a screen-by-screen tour (not all screens shown).

Link to content: Learner Experience Design ChatGPT (lodestarlearning.github.io)

Screen by Screen Commentary

First, in my title screen, I give full credit to ChatGPT.

I asked ChatGPT to explain instructional design versus learning experience design.  I simply presented the information as a choice between two explanations: one that matched Learning Experience Design and the other that matched Instructional Design. ChatGPT provided the content; I added this little warm-up learner engagement.

LodeStar Question Widget

I added ChatGPT’s definition of What is Learning Experience Design.

I then added a video from YouTube so that we have an LXD person describe the characteristics of LXD.  This reinforces the information and humanizes it.

LodeStar Video Widget

I asked ChatGPT to list the important events in the history of Instructional Design.  It put out a list of events.  Interestingly, the earliest event was the publishing of Didactica Magna in the 1600s. I took that information and presented it in an interactive timeline.

LodeStar Timeline Widget

ChatGPT output the importance of Learner’s Needs and Goals.  I added a video where the host really challenges his guests on how LXD designers actually translate goals and user research into meaningful experiences. 

ChatGPT output an explanation of instructional methodology.  I did a strike-through on key words and asked LodeStar to convert them into fill-in-blanks.  I then provided a randomized word list.   So rather than just reading, the learner has to think about the context and type in the appropriate word.

LodeStar Fill-in-the-Blank Page Type

To supplement ChatGPT’s explanation of Technology and Tools, I added a podcast of instructors talking about the use of technology and tools in response to COVID.

LodeStar Audio Widget

In the Assessment and Feedback session I added a word problem related to calculating the volume of a cylinder (as an example).  The dimensions of the cylinder (radius and height) are randomly generated from a defined range.  Students can work on these problems all day and always get a new set when they refresh the lesson.

LodeStar Word Problem Widget

For User Experience and Interaction Design, ChatGPT spat out content and I added a slideshow viewer of three interesting designs.

LodeStar SlideViewer Widget

Finally, I gave ChatGPT's content back to it and asked it to produce an assessment.  I then took the question stems and answer options and fed them into a Multiple Choice/Multiple Select Question Widget.

Most of the activities were assigned a point value.  I then added a report page to send this off to either a Learning Management System or a Learner Record Store (CMI5/xAPI).

LodeStar Report Page supports SCORM and CMI5/xApi reporting

Finally,

I submitted the content to Open AI’s text classifier so that it could determine whether or not the content was possibly AI-generated.

Here is the result:

*OpenAI announced the AI Text Classifier to help determine whether or not content was AI-generated

Conclusion

OpenAI already offers specific application programming interfaces (APIs) to its language processing engine. For example, it offers auto-completion in its Text Completion API. OpenAI is now inviting developers on its waiting list to use ChatGPT in new and existing applications. One can easily imagine the day when toolmakers will invoke the API commands in order to, in David Wiley's words, "dramatically increase the speed of creating the informational resources that comprise the content infrastructure".

We’ve just completed LodeStar 10 and continue to test it. We have no current plans to interface with AI but have been added to the waiting list by OpenAI to explore ChatGPT API capabilities. Who knows where that will lead? In the meantime, our focus is to help designers start with simple projects and then offer them more capability as their designs grow from simple present-and-checks to short simulations, story-telling, decision-making scenarios and CCAF (Context Challenge Activity Feedback) projects that the five influential instructional designers would endorse. As far as AI is concerned, we have a lot to learn about the role of AI in learning experience design. I would love to read your comments.

Doubtless, the course mills will go into overdrive on this technology. Caveat emptor.

Appendix A

In a second attempt I gave ChatGPT a little more information about the target audience.

Here is my prompt:

Write about Learning Experience Design for an audience of high school students who know very little about instructional design or cognitive science. Write at an eighth grade level.

Here is ChatGPT’s response:

Learning Experience Design is all about creating educational experiences that are engaging, effective, and enjoyable. It’s a field that combines ideas from instructional design and cognitive science to help people learn new things.

Think about it like this: have you ever taken a boring class where you couldn’t stay focused or retain information? That’s where Learning Experience Design comes in! The goal is to create lessons that grab your attention and make you remember what you’ve learned.

The designers use research from cognitive science to understand how people learn best. For example, they know that people remember information better when they’re actively engaged in the learning process, instead of just passively listening. They also understand that people remember information better when it’s presented in a way that makes sense to them and connects to their own experiences.

So, Learning Experience Design combines these ideas to create learning experiences that are tailored to the way people learn. This might include interactive activities, simulations, games, videos, and more. The goal is to make learning a fun and memorable experience that sticks with you, so you can use what you’ve learned in the future.

Overall, Learning Experience Design is a field that’s all about making learning better. If you’re interested in making education more exciting and effective, this might be a great field for you to explore!

When I asked ChatGPT to comment on the reading level of this example, it answered:

This content is appropriate for high school students or early college students.

When I applied the Flesch-Kincaid Reading Grade Level test, it scored 10.4. Just saying.

Top Influences on the Development of LodeStar 10

Introduction

Modern web pages offer designers a rich palette of media types and standards to create engaging learning experiences. The web page has become an amazing success story.  It started as a battleground of competing standards and self-interests and has arguably matured into a meeting ground.

If you aren't on the bleeding edge, you'll benefit from the convergence of standards. On CanIUse.com, browser support for many enabling technologies such as SVG and WebGL (explained later) shows up as green tiles across the table of browsers.  Green means these technologies are commonly supported, which is good news because learning experience designers can put them to work.

caniuse.com by Alexis Deveria, available under a Creative Commons Attribution 4.0 License

Many of today’s eLearning projects are essentially webpage applications with additional standards that support communication to learning management systems or learner record stores. Many of the technologies that make the web interactive, responsive, accessible, and expressive are the same technologies used in eLearning applications.  Most of the major eLearning authoring systems are web page design systems for web pages that are hosted in learning management or content management systems.  There are many exceptions, of course, which include augmented reality systems, gaming engines and environments, and other virtual spaces that are not built on HTML5.  But let’s stay focused, for a moment, on the web.

For maturing standards, the web has become a place of agreement.  In the not-too-distant past, basic HTML markup and styling had to address the many differences between browsers and how they interpreted World Wide Web Consortium (W3C) standards.  Even a technology that most of us take for granted, the audio file, once lacked a single format that every browser could play.  Designers had to choose both an audio format and a fallback format. Thankfully, that has changed.  All browsers can now legally play the .mp3 file or the HTML5 Ogg Vorbis audio format.

caniuse.com by Alexis Deveria, available under a Creative Commons Attribution 4.0 License

Soon the .m4a audio file (AAC) will be supported by all browsers and offer even higher quality audio at a lower data cost.

caniuse.com by Alexis Deveria, available under a Creative Commons Attribution 4.0 License

But audio is only the beginning.  All modern browsers (IE 11 excluded) support GIF, animated GIF, JPEG, PNG images, animated PNG, and motion video in the MPEG-4/H.264 video format. 

All browsers support the language features of the last major revision to JavaScript.  JavaScript is the code that makes the web interactive.  It is the code that makes eLearning projects interactive.  Standardization allows all of us to benefit from the interactions that eLearning authoring tools produce with less worry about browser and device differences.  (I emphasize less worry because there is always something to worry about.)

Interactive 3D has become a new frontier for eLearning.  All major browsers support WebGL, which is a method of generating 3D graphics using JavaScript and hardware acceleration.  In the early '90s, when I first created 3D worlds, I needed an entire lab of computers dedicated to rendering three-dimensional meshes into an animation of three-dimensional images that we would transfer onto a laser disc.  Today, WebGL enables us to render a mesh into a rotatable, scalable image in real-time, all in a browser.  If you're not familiar with WebGL, please read on.

In short, Learning Experience designers, instructors and trainers can now use audio, video, imagery, text, three-dimensional graphics, scalable vector graphics, math mark-up, interactivity, and logic to realize their grandest designs and create engaging experiences for their learners.

On the eve of LodeStar 10’s release, I am taking stock of these standards and other influences that had a strong bearing on where our product is headed.  Like all toolmakers, I am keeping an eye on effective strategies as well as emerging and maturing technologies and am imagining the opportunities for designers as we work to make these technologies practical and accessible.

Here is a list of standards and strategies that are central to LodeStar’s current development.

Scalable Vector Graphics

A lot of our development has focused on Scalable Vector Graphics.  SVG offers the designer many benefits.  Simple graphics such as the famous SVG tiger pictured here keep their sharpness regardless of the display size and the resolution. They are scalable.  They also offer more opportunity for accessibility.  Scaling can help learners with low vision.  The SVG title is readable by most screen readers. Also importantly, the SVG graphic is made up of individual elements whose properties can be changed by program code or user interaction.

LodeStar displays SVG graphic

In the screenshot below, the tiger graphic is opened in an SVG editor in LodeStar.  The author has right-clicked on an eyeball and can now choose branch options based on selection, deselection, drag, hover over and hover out.  All of LodeStar’s branching options and script can be executed based on any of the above events.  For example, based on the click of an eye, things can happen: the eye color changes, an audio description plays, an overlay appears with a complete description of a tiger’s vision and so on.

With LodeStar, designers edit SVG graphics and add interactivity

Importing PowerPoint as SVG

We've never been huge fans of starting an eLearning project as a PowerPoint.  That hasn't changed, but LodeStar 10 does support importing a single PowerPoint slide or an entire PowerPoint presentation as a series of SVG pages.

PowerPoint supports exporting a slide or series of slides as SVG.

PowerPoint Presentation

LodeStar 10 adds support for importing a single SVG image or an entire folder of SVG images.  LodeStar interrogates each slide and looks for things like Base64-encoded images.  PowerPoint converts imported images to a long string of characters called Base64.  This is a great format for transporting images inside a single file, but browsers tend to load and render Base64-encoded images very slowly.  LodeStar detects the Base64 encoding and then translates the characters back into an image file that is loaded into the project.
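The idea, in a minimal browser-JavaScript sketch (not LodeStar's actual code), is to turn a Base64 data URI back into a binary image file:

// Convert a Base64 data URI (e.g., "data:image/png;base64,....") into a binary image blob.
function dataUriToBlob(dataUri) {
    var parts = dataUri.split(",");                   // separate the header from the data
    var mime = parts[0].match(/data:(.*);base64/)[1]; // extract the MIME type, e.g. image/png
    var bytes = atob(parts[1]);                       // decode the Base64 characters
    var buffer = new Uint8Array(bytes.length);
    for (var i = 0; i < bytes.length; i++) buffer[i] = bytes.charCodeAt(i);
    return new Blob([buffer], { type: mime });        // an image file the browser renders quickly
}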

The result is that vector graphics are editable as SVG elements, and embedded images load and display quickly.  The designer can display the slide as is, edit elements and add branch options to elements.

Designer edits a PowerPoint slide in SVG editor

MathML

For a short while, all browsers supported the MathML markup language, enabling math markup without the need for add-ons.

Rendered MathML in LodeStar HTML editor

But there have been setbacks. We’re looking forward to when MathML is once again available in all browsers. Given the likelihood of that, LodeStar continues to support MathML.

Support for MathML

MathML (Mathematical Markup Language) is supported by the W3C as the preferred way of displaying mathematics on a web page or in an eLearning application. MathML describes the structure and content of mathematical notation and provides a higher level of accessibility than simply displaying an image.  Designers can quickly edit and manipulate the size of a MathML expression.  This is an improvement over taking a picture of an equation, for example, and pasting the image into a presentation.  In the past, LodeStar automatically converted expressions into images or used the MathJax library to convert expressions written in LaTeX to MathML.  But now we're banking on full support for MathML in the near future.
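For reference, here is a small MathML fragment that renders the fraction x over 2; the markup describes the structure (a fraction with a numerator and a denominator) rather than a picture of it:

<math xmlns="http://www.w3.org/1998/Math/MathML">
  <mfrac>
    <mi>x</mi>
    <mn>2</mn>
  </mfrac>
</math>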

SimpleSim

For years, LodeStar offered the Interviewer Page Type to support what we called decision-making scenarios and simple simulations.  We continue to offer that page type but have expanded the number of layout options for interactive decision-making. 

For starters, we added a new page type called the SimpleSim.  This page type supports graphics, interactive widgets, text and whatever else is needed to set the scene.  At center stage is the situational prompt and three decision options (as pictured below).   All of LodeStar’s branching options can be invoked based on the learner’s choice.   For example, the ‘Jump to Page’ branch option can bring up a scene that matches the choice and advances the narrative.   Branching options also allow us to add feedback, keep track of points, collect user responses and so forth.

To style the scene shown below, the author used a palette for the color scheme, added a header graphic through Tools>Themes, selected a layout style that set the window width and navigation at the top, and added a background graphic.   The use of palettes, themes, layouts and page types enables the author to control every aspect of this simple simulation, including the interactivity.

Look and feel is controlled by Layout, Theme, and Palette

CCAF

It’s no secret that we are huge fans of Dr. Michael Allen’s Context-Challenge-Activity-Feedback model.  In a project that was intended to improve employees’ social sales index (SSI) in LinkedIn, we set the context as a simulated LinkedIn.  For the challenge, the learner must improve the main character’s SSI score by providing the right advice and interacting with a simulated profile, notifications, messaging, etc. – just like LinkedIn!

LinkedIn Simulation

CCAF projects are not page turners or Present-and-Checks.  They can be quite advanced.  To support a more sophisticated interaction than the display of content and multiple-choice questions, LodeStar offers LodeStarScript, which can be written in the Execute Command branch option.

LodeStarScript enables designers to change the properties of graphics on the fly, including SVG graphics.  Properties can include color, position, image source, rotation, opacity, etc.  LodeStarScript offers the designer the power of conditional logic, loops, local and global scoped variables, and a long list of functions.

In the simulation below, the learner can select a camera aperture and control exposure.  The effects of exposure are simulated with the simple change of the color and opacity properties of an SVG element.

Camera simulation with LodeStarScript
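In standard browser terms, the exposure effect boils down to something like this (a sketch; the element id is hypothetical, and LodeStarScript's own syntax differs):

var overlay = document.getElementById("exposureOverlay"); // hypothetical SVG overlay element
overlay.style.fill = "#000000"; // a dark overlay covering the scene
overlay.style.opacity = "0.6";  // smaller aperture -> higher opacity -> darker simulated exposure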

xAPI/CMI5

Megan Torrance, a veteran of learning design, authored a research paper sponsored by the Learning Guild.  I won’t steal her thunder and encourage you to read the paper for yourself, but I’ll cite two statistics from her research that tell the story of xAPI.

In a survey of 368 respondents, the majority of whom belong to organizations that create or purchase learning solutions, 44.9% indicated that "We are interested in xAPI but have not used it at all."

Version 1.0 of xAPI was released way back in 2013, and yet 10 years later adoption is not widespread.

So what is xAPI, how does it relate to CMI5, and why are we so interested in it?  In short, xAPI and CMI5 are game changers.  They are not the same thing but they are close cousins.  An eLearning activity that uses CMI5 can generate an xAPI statement, which gets recorded in a Learner Record Store.  CMI5 can also tell the LMS whether the learner passed or failed. 

So, let me be a little more specific.

With these technologies, I can store my eLearning projects in my own repository (GitHub, for example).  I can then import a very lean and simple file into the Learning Management System, which tells the LMS where to launch the activity from.  The LMS then passes learner information and a token for secure communication to the activity.

CMI5 uses xAPI technology but it also understands the vocabulary that LMSs require.  Pass/Fail.  Incomplete/Complete.  xAPI reports to a learner record store any statement that the designer has added to the eLearning activity.  ‘Learner has reached Level Two.  Learner completed a video.  Learner attempted Level Three four times.’  CMI5 can generate any kind of xAPI statement in the form of learner actions.  In addition, CMI5 can tell the LMS whether the learner passed and/or completed the module.
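For a sense of what gets recorded, here is a representative xAPI statement. The structure follows the xAPI specification; the actor and activity values are illustrative:

var statement = {
    actor:  { mbox: "mailto:learner@example.com", name: "Sample Learner" },
    verb:   { id: "http://adlnet.gov/expapi/verbs/completed",
              display: { "en-US": "completed" } },
    object: { id: "https://example.com/activities/level-two",
              definition: { name: { "en-US": "Level Two" } } }
};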

Among the reasons people don't yet use it are lack of knowledge, lack of a Learner Record Store, and lack of LMS support.

I am extremely fortunate in that our Learning Management System is Prolaera.  It is designed for the CPA industry.  Prolaera can import a CMI5 activity.  As a result, I can do the following:

  1. Send a statement about the learner reaching Level 5 to the learner record store.
xAPI statement

  2. Read a list of learner experiences from the Learning Management System's Learner Record Store (the learner's name has been erased from the screenshot).

Learner Record Store

From the screenshot above, you can see that we can report on any learner experience.  For example, the learner first experienced the results page with a score of 200 points.  We can also see that the learner passed, satisfied the requirements, completed the module and terminated the activity.  These are all terms that the Learning Management System understands.

It may take time but CMI5/xAPI will eventually be widely adopted.  These standards are incredibly important to the advancement of eLearning.  It begins with awareness. The more designers learn about it, the more they can encourage their learning management system vendors to support it.  In the meantime, we are ready for it!

3D

Glen Fox's Littlest Tokyo is a great example of what is possible with three-dimensional objects viewed in a browser.  The object is beautifully detailed, with a running streetcar animation as an integral part of the 3D model.

Littlest Tokyo, by Glen Fox

Designers will be able to use free tools like Blender, TinkerCAD, Sketchup or even their smartphones to produce 3D meshes.  

Smartphones like iPhone 12 come equipped with LIDAR.  LIDAR emits a laser pulse that reflects off of solid surfaces and returns to a sensor on the smartphone.  The round-trip duration is noted. From that, the software can accurately position the solid surface in three-dimensional space. LIDAR has been available in specialty instruments for a long time but for designers to be able to use this technology practically, the software needed to improve.

In whatever way the 3D model gets created (3D graphics software, downloaded from a warehouse, generated by LIDAR) it can then be loaded into a viewer and manipulated (scaled, rotated, navigated) by the learner.  Imagine vital organs or historical places or complicated machines as manipulable objects. 

Currently, we’re working on a loader and viewer for 3D Models.  The first LodeStar 10 release won’t include a 3D model viewer, but we’ll introduce it later in a minor release.

In the meantime, we do support photospheres.  Photospheres use the same underlying technology: WebGL. WebGL enables hardware-accelerated physics and image processing and rendering onto the HTML5 canvas.  The hardware is a dedicated processor called the Graphical Processing Unit or GPU.

The photosphere that appears in the screenshot shows a distorted view of an art gallery.  The first art gallery image (shown below) was produced in Blender.  The second art gallery image was taken with an iPhone at the Minnesota Marine Art Museum in Winona.

Photosphere created in Blender
Photosphere created in iPhone

The image appears distorted – in fact, spherical.

Once in LodeStar, the designer can add images, markers, and hotspots to the photosphere.  All of these things get correctly positioned on the sphere.

In the LodeStar editor below, I am adding Lawren Harris’ paintings to the gallery as well as hotspots.  A hotspot click takes the learner to another room in the gallery.  A click on the painting brings up an image overlay.  A click on the video graphic starts a video. 

LodeStar editor adds interactivity to Photosphere

The end result:

Interactive Art Gallery on the Group of Seven

Conclusion

2023 marks the twentieth anniversary of LodeStar Learning.  We filed with the Minnesota Secretary of State on March 11, 2003.  I'm pleased that LodeStar has adapted to all of the technology changes over the years.  LodeStar began as code embedded in Lotus' LearningSpace.  It then enabled instructors to create rich learning activities in ActionScript and Flash.  In 2013, LodeStar Learning pivoted to a whole new generation of software that used HTML5.  LodeStar 10 continues that progression and harnesses the power of HTML5, SVG, 3D and so much more to help designers create great learning experiences.

Lens Flare Stillwater and Augmented Reality

 By Robert Molenda

Stillwater Opera House destroyed by fire

Introduction

In this article, I write about augmented reality and geolocation storytelling. I dive into the details of how I created the latest augmented reality/geolocation story in celebration of the 150th anniversary of the Stillwater fire department. I purposefully cover this in great detail for the benefit of anyone who wants to get started with this new medium.

Augmented Reality or geolocation storytelling often begins with curiosity. My latest tour was inspired by questions like: ever wonder what it was like to be a firefighter? How were fires handled in 1880? How long did it take to get the horses ready to pull the steam pumper? What did it sound like back in 1890?

The Fire Run Tour of Stillwater is the most recent Augmented Reality tour of this quaint city in Minnesota. We will show you how this tour was done using the LodeStar ARMaker Template. This will get you started on making a historical tour of your own unique place.

First a little background

Lens Flare Stillwater is a “not for profit” project under the fiscal sponsorship of The St. Croix Valley Foundation. It is all volunteer and provides historical content about Stillwater for its web site and the App Store. It offers the first Augmented Reality tour in the Midwestern United States. Its goal has been to get people in the community to work together and have fun with history.

Why Stillwater?

Stillwater is the oldest city in Minnesota. It is rich with the history of the lumber industry, the St. Croix River, the railroad industry, the territorial prison, manufacturing of steamboats and steam engine tractors. The local library, the local historical society and the state historical society are great resources from which to draw content. In particular, the St. Croix Valley was blessed by the presence of an early photographer, named John Runk. Mr. Runk is responsible for a photographic record of the area that includes his own photographs as well as photographs taken by others before his time. More importantly, he documented all these photographs and left them to all who come to the valley.  Stillwater also has good data signal strength and receives precise GPS signals, important to geolocation storytelling.

The Five W’s of History

We start with the Who, What, When and sometimes the Why of a story.  History and photographs always take place at some location, which answers the "Where" question.  We will be using GPS coordinates to do some amazing things, using the ARMaker template of the LodeStar eLearning authoring tool, produced by the LodeStar Learning Corporation.

It turns out that only some of the historical local artifacts (the "What") are still actually present for anyone to enjoy. Most of the historical buildings, locations and artifacts are no longer here. What we can do, with a little work, is show others where these buildings and artifacts were located and what they looked like, then use some technical magic to excite the imagination of any visitor who walks nearby these unique locations.  Imagine walking by a location and having a historical story, picture and sound pop up from your smart device as you cross an invisible fence that surrounds the site.  This constitutes the "How" part of the adventure.

The How?

The "How" is where LodeStar Learning Corporation comes into the equation.  LodeStar Learning Corporation has written software that allows all these magical things to happen within your smart device. One way to look at this is that our team of local volunteers takes care of the content, while LodeStar Learning Corporation provides the "How", the software and technical pathway to bring the content to life on your device. LodeStar has made countless improvements to the product during the past six years. The reliability of the APP has been outstanding, even during times of heavy use.

Lens Flare Stillwater is using the ARMaker template of the software. The AR stands for “Augmented Reality”. This means that the coordinates of your device location will trigger a response that will provide enhanced information about that location. In the case of Lens Flare Stillwater, we want location to trigger the opening of graphic, text and audio files related to each historical site along the tour.

How To Organize a Tour

It is important that we organize the tour in a manner that allows users to navigate it by themselves. The tour will be available 24 hours per day, 7 days per week, and it can be used on-location or anywhere in the world on any device. This is scary stuff!  The power of the web always amazes me. You can go from nothing to worldwide activity.  This ability carries with it a lot of responsibility.

There is a mapping function in the software that shows the path of the tour along with map pins that identify the various points of interest along the tour route.  The pathway can be straight with branches or a circular route.  We have used bicycle trails (circular), historic streetcar lines and a pathway that takes a fire engine route to historic fires in Stillwater.

iPhone screen displays of the Fire Run Tour

The "Show Map" button is always located on the upper left part of any user's APP screen.  The tour can be taken virtually from your armchair anywhere in the world, and this map can be used to see the various points of interest before a visitor ever comes to Stillwater. For the "Augmented Reality" tour, when users physically visit an historical location, the information comes to their devices automatically.

What is Different about the APP?

The LodeStar software functions on any mobile device, tablet, or computer. The project can be played back in a browser on all devices (i.e., browser-based) or as a mobile app on an iOS device (i.e., iPad or iPhone).

All of the Tour APPs for Lens Flare Stillwater can be secured from portals that are FREE to anyone. Again, the advantage of browser-based APPs is that they function on all devices and can be secured directly from a web site. We can launch the site from a printed ticket with a QR code or an electronic ad.  Apps can also be saved to your home screen for future use.

Advertising your app and linking to it with QR code

In the case of the Fire Run Tour of Stillwater, a user can go directly to the APP by scanning the QR code, which is sourced from a “Free Ticket”, directly from a printed Ad, or even from the photo above that has the QR Code.

The real magic of the Lens Flare Stillwater Tours takes place when you are actually near the historic locations. Your mobile device already knows its location and whenever it enters a new historical site it signals the system to send information to the device that is related to the new location. If your mobile device is paired to your automobile radio, your passengers can take the tour from the comfort of your automobile while you safely drive them and listen to the sound files from your radio.  Another way to look at the tour is that the technology allows you to carry the history of Stillwater in your pocket or purse.
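Under the hood, this kind of triggering can be sketched with the standard browser Geolocation API and a distance check. The tour stop values below are hypothetical, and LodeStar's actual implementation may differ:

// Haversine distance between two points, in meters.
function distanceMeters(lat1, lng1, lat2, lng2) {
    var R = 6371000; // Earth's radius in meters
    var toRad = function (d) { return d * Math.PI / 180; };
    var dLat = toRad(lat2 - lat1), dLng = toRad(lng2 - lng1);
    var a = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
            Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) *
            Math.sin(dLng / 2) * Math.sin(dLng / 2);
    return 2 * R * Math.asin(Math.sqrt(a));
}

var site = { lat: 45.056, lng: -92.806, rangeMeters: 30 }; // hypothetical tour stop
navigator.geolocation.watchPosition(function (pos) {
    if (distanceMeters(pos.coords.latitude, pos.coords.longitude,
                       site.lat, site.lng) < site.rangeMeters) {
        // Crossed the "invisible fence": show this location's Page, photos and audio.
    }
});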

How Can Anyone Organize Such a Tour?

As mentioned, this year is the 150th Anniversary of the Stillwater Fire Department. In honor of this event, we put together a tour of Stillwater that features photographs of early fire stations, buildings, stories, sounds, building permits, fire reports, newspaper articles and photographs of fires that changed the city.  There are fifteen active locations on the Fire Run Tour. Each location has its own LodeStar Page. A Page can have many photographs, stories and links. Each Page is linked to its own unique location and sound characteristic.

Organizing Content

The first task is to organize our content and start to put together some kind of pathway/story through the city. This gives us a general outline of the map of the tour from which we can obtain more precise latitude and longitude coordinates for each of the historical locations.  These will be used to fill in the various fields that are part of the software. Our content will include photographs (JPEG, GIF or PNG files), text, audio files, location coordinates for the historical location points along the tour and links to web content such as YouTube videos or historical references.

Here is a list that we have used. Everything on the list can be directly loaded onto a Page.

  • Page photos. Each historic location has its own Page.
  • Each Page can have many:
    • Photographs, text stories and a unique sound characteristic
    • Links to other web sites
    • References, YouTube videos, fire videos, etc.
    • Jump Page links. Jump Pages are not part of the tour but contain more detailed information (for example, a list of all firefighters since 1872, or continuations of text and stories).
  • Our app also included:
    • Sanborn Fire Maps
    • Google Maps Street View references to today's location views
    • Fire Run Reports for insurance purposes
    • Ledgers
    • Crowd-sourced stories and photos, e.g., the St. Paul church fire and the Wheeler House
    • Fire House moves and building permits
    • Newspaper stories
    • Audio narratives
    • Narratives with music or characteristic sounds (e.g., horses neighing or trotting)

Instructions for Starting A Tour

The very first thing to do after launching LodeStar is to select the "ARMaker" template.  You are prompted to give the project a name. We used the name "Fire Run Tour".  This project name will show at the top of the work page and is how we find our project in the future as we add content.

The first page comes to our screen and this is where our work begins. The first page is blank at this point.  It is a good idea to plan our pages with some uniform format. This applies to background color, font choice, font size for various uses, and font color. This should not stop anyone from getting started on a project.  LodeStar is very flexible, and changes are easily made at a later time.  Everything we need to get started is on this first page.  Hovering your mouse over any of the icons in the tool bar will provide a function description.

For a given page, we do the photographs first, then the location coordinates, and the text narrative and other stories. The Audio Files (MP3) are generally done last. We used Apple Garage Band to produce our narratives. We used sound files from videvo.net for the music and sound effects.

The latitude and longitude coordinates are sourced from Google Maps for the Fire Run Tour.  Precision and detail are important in handling the location coordinates. The negative signs are really important; leaving one out produces a surprise that will help anyone understand navigation better.  It is always best to check your work by saving it frequently and displaying it in your browser. Always give each page an id.
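As an illustration, coordinates copied from Google Maps look like this (values approximate, for downtown Stillwater):

var stop1 = { lat: 45.0566, lng: -92.8055 }; // approximate downtown Stillwater coordinates
// Dropping the minus sign on the longitude (92.8055 instead of -92.8055) would place
// the point on the other side of the globe, which is why the negative signs matter.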

A photograph is inserted on a page by clicking on the image icon (for example on page 7 below). This will open up a screen that will help you get the image from your desktop to the LodeStar Page.

Example of Page 7 in Fire Runs of Stillwater

This is what the page looks like after you click on the image icon in the above screen.

Appearance of “Insert Image” Page in the LodeStar software

After changes are made, save your work by clicking on the icon on the lower right of the screen or by going to "File" and clicking "Save". It is always a good idea to take a look and see what our changes look like in the browser. This is especially important when we make changes to the GPS coordinates or ranges. (We have made a lot of errors with these little gremlins.) In our case we just click on the "Preview in Browser" button.

If we have many photographs on a Page, it may be a good idea to put them in a “Carousel”. We do this by going to the black gear-shaped icon and opening a series of “Widgets”, one of which would be the Image Slider or, as I call it, the “Carousel”. There are other interesting “Widgets” to explore for our reading pleasure. Among them would be the “Geolocation” widget, which is what we used for making the map pathway for the tour and the map ranges.

In the Fire Run Tour of Stillwater, we use many links. Some links show how important horses were to the early fire departments. Our links are all highlighted text.

A particularly good set of tools is the Sanborn Fire Maps, which not only were a good resource for fires and details on fire departments, but were made for countless cities across the United States between the 1880s and the 1920s. The maps were all hand drawn and show a top-down view of cities along with the dimensions and specific details of the various buildings in cities like Stillwater.  These are treasured pieces of history that are available to the public from the Library of Congress.  Newspaper articles were sourced from the Digital Newspaper Collection of the Minnesota Historical Society.  As a matter of courtesy, it is important to properly acknowledge our sources and provide proper attribution.

As we added new historical locations, we put each on a separate Page. We added Pages by clicking on the “+” (Add Page) button on the tool bar on the bottom of the Page. There are 25 Pages total in the Fire Runs Stillwater project. All Pages can be moved around and placed in a different order at a future time. This means that we do not have to start with the first tour location on the First Page.  Just get started!  For safety and legal reasons, our first pages feature instructions, safety tips and helpful hints. Below is an example of Page 6.

Example of New Page for Fire Runs of Stillwater

How are “Background Tiles” done for the Tour?

Background tiles are the images that appear behind all the Page images. They only appear in the browser screens. The tiles give the Pages a sense of dimension and frame the tour with a familiar appearance. We have found it best to use historic maps as background tiles, or drawings of one of the story themes. In the Fire Run Tour of Stillwater, we used the formal photograph of the old fire station showing the horses and early equipment and firefighters. Have some fun with this one! 

After we selected a photo to be used as a tile, we placed it on our desktop, then opened the LodeStar file with our project name. We then picked a page and next opened the “Tools” icon then selected “Layouts”. The picture below shows the Page file and where the “Layout Manager” screen appears along with some simple instructions.

Layout Screen View

After we have selected our background tile photo and saved everything, we see a small thumbnail of the tile on the bottom left of the Layout Manager screen. The only way we can view the background tile is on our Browser, so it is important to click “Save” and then click on the “Preview in Browser” button.  The screen view below is how your browser screen will look. Notice the background image and the shadow around the main image frame. This background image will now appear behind all Pages on the Tour.

Appearance of Background Tile on Page on the Browser


How are Map Markers Made?

Map Markers are the pins located on the map. Each is located by its coordinates and title.  The map pins are sourced from the Page Titles and Coordinates entered when we made each of the Location Pages. This is what appears on the Map Page when a user opens it.  The title appears when the user hovers over a map pin. It is also an active link that takes the user to that location page when clicked.

You do not have to do anything else, and the map pins will appear on the map page when it is opened. The only problem with the map page is that there is a lot of “Map Noise” on the page from commercial locations on the same map. We are doing two things to have our map pins stand out in this noisy environment. The first trick is to create a “Map Range”.  This will provide a colored circle around each of the Locations on our tour.  

How Do We Make Map Ranges?

These are made by opening the Fire Runs Stillwater file in LodeStar and going to one of the locations on the Fire Run Tour. Next, we go to one of the Pages. For this example, we have chosen the Page “First Fire Station in Stillwater”. On this page, you will notice the black gear-shaped icon in the tool bar. When you click on this icon, it opens a series of “Widgets”, one of which is called “Geolocation”.  When we click on Geolocation, it opens to the page below:

Activity Widget Page for “Geolocation”

There are some arrows that identify how we can make the “Range”.  There are choices for the color and transparency of the circle and for its radius. There are also two boxes for inputting the GPS coordinates that locate the center of the range; they should be the same as the coordinates used for the page itself.  After you have finished inputting the other items, click on “Add”, then scroll to the bottom of the page and click on “OK”, then “Save”. At this point, it is always a good idea to click on “Preview in Browser” and see how the changes look in the browser view.
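For the curious, a map range amounts to a colored, semi-transparent circle drawn at a coordinate. Here is a sketch of the underlying idea using the Leaflet web-mapping library (illustrative only; LodeStar’s Geolocation widget does this work for you and may be implemented differently):

var range = L.circle([45.056, -92.806], {
    radius: 150,        // radius in meters
    color: "red",       // outline color
    fillColor: "red",
    fillOpacity: 0.3    // transparency of the fill
});
range.addTo(map);       // 'map' is an existing Leaflet map instance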

How The “Map Page” Appears on Our Browser

On the Fire Run Tour, we use red map pins along with a transparent red circle that provides a range for each location on the map. Adding the ranges and colors has made our map page more distinctive.

How Do We Make Polylines (Pathways for Tours)?

The second way we create distinction on our map is to add “Polylines”. Polylines are the routes that get patrons to the various locations on the map. They show the broad extent of the tour and provide a pathway for users. We have had some interesting experiences with making polylines. It is sometimes helpful to show everyone the wrong way to do something so that it becomes more memorable; that is the example we will serve up for this occasion.  Just like the previous example, our starting point is the “Widgets” icon on a page.  Again, we go to the Geolocation button and open that page just as before. This time we change the option at the top left of the Activity Widgets page to “Polylines”.  Instead of entering only one set of latitude and longitude coordinates, we enter many sets.  These sets of coordinates mark points between which a transparent, colored “Polyline” is drawn.   They usually create a pathway along a street or trail.  This is how the completed polyline should look in a browser.

Example of Polylines on a Map and How They Should Appear on Our Browser

This involves a considerable amount of detail and precision. It is not for the faint of heart, because all this detail invites error and loss of attention. For example, it took six sets of coordinates in a specific order to draw the polyline above.  From our experience, all of these latitudes and longitudes look the same after thirty minutes of work. It is easy to make errors. The best advice to anyone is to use the “Preview in Browser” button every time you add a new location or make a change.
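Under the hood, a polyline is just an ordered list of coordinate pairs connected on the map. A sketch in the same Leaflet style as above (illustrative only; in LodeStar you simply enter the pairs in the Geolocation widget):

var path = L.polyline([
    [45.0563, -92.8088],
    [45.0571, -92.8065],
    [45.0580, -92.8049]   // ...our example required six pairs, in order
], { color: "red", opacity: 0.5 });
path.addTo(map);
// Note that every longitude carries a minus sign; one missing sign sends the line to Asia.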

If you forget to put the minus sign on the longitude coordinate, you might be interested to know that your new polyline map will take you to Mongolia, somewhere near the Chinese border. This will show up on your map (see the map below where the two lines head north to Mongolia) but it is not likely anyone will take the tour. On the other hand, it does make global geography/navigation kind of interesting.

Polyline map with error in longitude sign and order of locations

Staging and Going Live

Staging and going live with the Tour APP is done frequently while developing the tour. We do this to get a feel for how the tour is performing when we use our mobile devices.  We do this live, but do not publish the APP links to the public.  Sometimes we hide the link somewhere on our web site until we are ready to go public.  We are generally making changes, adding content, adding locations, correcting copy, changing coordinates or making the geo-fence ranges smaller or larger for each location on the tour.

Our “Staging” for Lens Flare Stillwater has two levels.  The first level involves sending the files that we developed to our staging person (John Moore). John takes our staged files for the Fire Run Tour of Stillwater and gets them into the proper form for interfacing with the servers that are used.

John Moore is a computer guy. The rest of us know little about code and computers but learn enough from him to be somewhat dangerous if left unattended.

What we do when we stage a tour file is compress the file and save it, then move it to a Dropbox location that John Moore and I share. We generally tell John what we want done with the file as it is sent to the server location. We also ask John to give us an address so that we may go to the site and check its performance after it has been staged. The server service that we use is called “Site 44”. It works well for us, although LodeStar Learning can recommend other services. “Site 44” is a service that publishes Dropbox folders of content and code as websites and serves the pages to users’ devices when they visit the tour site.

Conclusion

So, in summary, we have described our service, what it does, and how it works. We have shown examples of how we provided content for one of our Pages and how anyone can do something similar using the LodeStar ARMaker template. We have shown how to make our Pages distinctive by using a tile layout.  We have shown how we use the ARMaker template to make map marker ranges and polylines. Finally, we have shown how we stage our files.  If our team of inexperienced users can do this, anyone can do it even better. The LodeStar software is relatively easy to use and offers other features, good Help, and instructions. Enjoy history, get started, and have some fun with this great set of tools.

 You can experience the Virtual Tour of Stillwater by going to this link to load the Fire Run Tour of Stillwater on any of your devices.

https://lensflareapps.site44.com/fireruns/index.htm

Link for the Lens Flare Stillwater web site (scroll to the bottom of the Home Page for links to all of the tours):

                                https://www.lensflarestillwater.org

Learning Experiences in the 3rd Dimension

 

Introduction

Great learning experiences can be crafted from 3D technology. The simplest form of 3D technology is the photosphere. It is accessible to teachers and trainers and can be used quite effectively. In this article, I’ll show off a demonstration project and describe the use of 3D models, a photosphere, text and graphics, video, and audio.

Two years ago, I wrote about using photospheres in online courses. Today, ‘interactive’ photospheres are a critical strategy that designers of every stripe should master. Currently, the use of photospheres is supported by the proliferation of 3D models, photosphere projects, new services, improved technology, and new features in our own authoring software.

So, let me parse this mixed-media approach. To start, a photosphere is a 360-degree panoramic image that can be displayed in a viewer. Learners can ‘navigate’ the image by dragging the view in any direction and zooming in and out.  Google Street View is the best-known example, but photospheres abound in art museums, tourist bureaus, real estate sites, and social media.

The photosphere is deceptively simple and hides a more profound change in the web.  As we all know, browsers support the trinity:  HTML, JavaScript, and CSS.  All three technologies have been evolving.  Recently, JavaScript began supporting a variety of new technologies, including WebGL.  WebGL makes 3D rendering possible in a browser without the need for plug-ins.  In short, WebGL (Web Graphics Library) renders 3D and 2D graphics. Because of WebGL, browsers can benefit from hardware graphics acceleration to display (render) complicated graphics.  The key is hardware acceleration: the processing of graphics in a dedicated graphics processing unit (GPU) is many times faster than in the main CPU.

The photosphere uses WebGL and, with it, hardware acceleration.   To display a photosphere, a distorted image is mapped onto the inside of a 3D sphere.  Our perspective is from the center of the sphere with a narrow field of view.  By dragging the image, we pan the sphere and bring hidden parts of the image into view.
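Here is a minimal sketch of how a photosphere viewer sets up that sphere, using the three.js library (an assumption for illustration; LodeStar’s own viewer may be implemented differently):

import * as THREE from "three";

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(60, window.innerWidth / window.innerHeight, 0.1, 1100);
camera.position.set(0, 0, 0);   // our perspective: the center of the sphere

// A large sphere, inverted so its inner surface faces the camera
const geometry = new THREE.SphereGeometry(500, 60, 40);
geometry.scale(-1, 1, 1);       // flip the geometry: we view the image from inside

// The distorted (equirectangular) image is mapped onto the sphere
const texture = new THREE.TextureLoader().load("photosphere.jpg");
const sphere = new THREE.Mesh(geometry, new THREE.MeshBasicMaterial({ map: texture }));
scene.add(sphere);

// Dragging rotates the camera, panning hidden parts of the image into view.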

With the help of LodeStar, an eLearning authoring tool, we can add interactivity.

To best illustrate interactive photospheres, I created this demonstration project based on one of my loves, the Group of Seven.

A little background:  I went to school in Canada. Until the thirteenth grade, the study of history was the study of British, American, and Russian history.  The study of literature was primarily European and British literature. The study of art was primarily of British and French art.  In grade 13, that all changed.  We studied Canadian history, literature, and art.  For me, that was transforming.  Central to Canadian art was the Group of Seven.  Their subject was primarily the Canadian landscape. Until recently, I could find Group of Seven paintings only in the McMichael Canadian Art Collection in Kleinburg, Ontario. So, I decided to create a gallery of my own.  Just a small one for demo purposes, featuring two of the artists associated with the Group of Seven.

Visit this link and if your curiosity is piqued, I will share the details of how I made this learning experience. Launch the demo and on the second page drag your mouse across the scene.

Art Gallery (lodestarlearning.github.io)

Virtual Group of Seven Gallery Demonstration Project

The details

First, I needed a model of an art gallery. I went to TurboSquid and bought one for $19.  I could have found a photosphere on Flickr or elsewhere, but I wanted control of the objects in my gallery.  I could have built a 3D model from the ground up – but I wanted a shortcut.

The model came in the form of a DAE, which is a 3D interchange format.  The DAE format is based on the COLLADA (COLLAborative Design Activity) XML schema.  (This is a standard format that can describe 3D objects, effects, physics, animation, and other properties. All the major 3D modeling tools can import it.)   I then brought the model into Blender.             

Blender is a free 3D modeling tool and it is quite incredible.           

3D Model in Blender

In Blender, I edited the model and added my own camera.  To render a photosphere, I made the camera panoramic and then equirectangular. Equirectangular is a projection type used for mapping spheres onto a two-dimensional plane. This results in a very distorted image when viewed normally.  Viewed in a photosphere viewer, the image looks spectacular.
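For the mathematically curious, the equirectangular projection simply maps longitude and latitude linearly onto the image’s x and y axes, which is why the flat image looks stretched. A sketch (the function and variable names are mine):

// Map a direction on the sphere (longitude, latitude in radians)
// to equirectangular texture coordinates in the range 0..1.
function directionToUV(longitude, latitude) {
    const u = (longitude + Math.PI) / (2 * Math.PI); // the full 360° spans the image width
    const v = (Math.PI / 2 - latitude) / Math.PI;    // pole to pole spans the image height
    return { u: u, v: v };
}

console.log(directionToUV(0, 0)); // { u: 0.5, v: 0.5 } – straight ahead lands in the image center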

Next, I imported the image into LodeStar. With the help of LodeStar’s interactive image editor, I drew hotspots over the doors and imported images of paintings that I positioned in the art gallery. Technically, the images become image overlays. As the viewer moves the image up and down and across, the imported images adjust accordingly by scaling, skewing, and repositioning.

Interactive Image Editor in LodeStar

In the scene above, the imported images appear above the benches.  A hotspot sits over the doorway.  When a learner clicks on the doorway, LodeStar executes a branching option.  In this case, that means a jump to the next gallery.

In the example, two gallery rooms are featured. The first gallery exhibits two paintings by Lawren Harris.  The video icon displays a YouTube presentation on Harris’ work. The second gallery exhibits two paintings by Emily Carr, along with a wonderful YouTube presentation on her work.

Conclusion


Photospheres are but one part of 3D technology.  Browser support for WebGL makes it possible for us to use 3D models interactively. Students can view 3D models from any perspective and manipulate them. The possibilities are endless. LodeStar and other tool makers must make it easier to load these models and make them useful for educational and training purposes.  Just as we support functions that can change an image or element’s rotation, position, opacity, and color, we must provide functions that can manipulate 3D objects.

We are currently working on some prototypes and would love to hear from you and what would most benefit students. Please send us your comments.

Meeting the CCAF Challenge

By Robert “Bob” Bilyk

Introduction

I recently watched Ethan Edwards present ‘Cracking the e-Learning Authoring Challenge’.  This post is my attempt at cracking the e-Learning authoring challenge.

But first a little background.

As many of you have the privilege of knowing, Ethan Edwards is the Chief Instructional Strategist for Allen Interactions.  Cracking the challenge is all about building interactivity in an authoring tool – specifically, CCAF interactivity.  CCAF is an acronym for Context-Challenge-Action-Feedback.  The four components of CCAF are part of Michael Allen’s CCAF Design Model for effective learning experiences.  Michael Allen is the founder of Allen Interactions, the author of numerous books on eLearning, and the chief architect of Authorware and ZebraZapps.  Both authoring systems were designed for people with little technical expertise to be able to build – you guessed it — CCAF learning experiences.

In Ethan’s presentation, he demonstrates building a CCAF activity with Articulate Storyline.  In a nutshell, the CCAF learning experience is the experience of “doing”.  Rather than reading or viewing content, the learner experiences first-hand the application of principles, concepts, strategies, and problem-solving in completing a task and succeeding at a challenge.

In Ethan’s demo, his task is to detect a refrigerant leak.  The learner is shown refrigeration equipment and given a leak detector.  The learner doesn’t at first read a PDF or watch a video but performs an action.  In CCAF activities, text and videos might come in the form of feedback to a learner’s action.

Some of the CCAF learning experiences that I designed include running a multiple hearth wastewater incinerator, troubleshooting a cable network, supporting the adoption of a special needs child, designing an online class, assessing risk of recidivism, and, most recently, searching for documents in a document management system.  In all cases, most of the learning came from being immersed in a ‘real world’ setting, presented with a challenge, and getting feedback as a result of learner actions.

Ethan’s presentation piqued my curiosity and a bit of self-reflection.  He lists things that are essential in an authoring tool to enable the design of a CCAF learning experience.  As a toolmaker, I explored each of the items on his list and I applied them to a small project built with our own LodeStar eLearning authoring tool. 

As we explore each item on Ethan’s list, I’ll illustrate with LodeStar.  If you follow along, you’ll see the development of a simple CCAF application.  You’ll learn about the components of CCAF.  And you’ll also learn a little about LodeStar and its capabilities.

But first an important caveat. CCAF comes in all forms, shapes and sizes. Ethan’s example and my example happen to be very simple simulations. The principles of CCAF are not limited to simulations. They can be applied to anything that requires action on the part of the learner — which includes making a decision, crafting a plan, analyzing and solving a problem — a host of things.

This is but one example of CCAF to illustrate its principles and test whether or not our authoring tool is up to the challenge.

Introduction to the Demo Application

The objective of the application is for learners to test an electrical outlet and determine which wires are hot or ‘energized’.  In completing this task, the learner must turn on an electrical multimeter and connect its probes to the various wires in an electrical outlet.  A multimeter is a measuring instrument that typically measures voltage, resistance, and current.  Once someone has learned the difference between these things, the practical skill is in choosing the right setting for the task and safely using the meter to complete the task. 

So that’s the challenge:  find the hot wire with a multimeter.  The context is a simple residential electrical outlet. 

Typical eLearning applications would use text, graphics, and video to illustrate the use of the multimeter and explain underlying concepts.  CCAF applications challenge learners to complete the task in a manner that is an educational approximation of the ‘real thing’.  Text, graphics, and video can still offer explanations, not in lieu of the real-world task, but often as a form of feedback.

A LodeStar Application: Testing an Electrical Circuit

Basic Capabilities

But let’s start with an overview of the basic requirements.  To paraphrase Ethan, an authoring tool must have these capabilities:

  • Complete visual freedom
  • Variables
  • Alternative branching
  • Conditional logic
  • Action/response structures

I’ll elaborate on each of these requirements in my demonstration. 

Complete Visual Freedom

LodeStar combines HTML flow layout and SVG layout.  Images imported into the HTML editor are placed in the HTML flow and are laid out according to the rules of HTML.  Images can also be taken out of the flow and applied with a CSS rule so that text flows around the image.

In addition, LodeStar authors can use the Scalable Vector Graphics (SVG) canvas to lay out graphics freely in any position on the x and y axis.

LodeStar’s SVG Canvas

In other words, the graphical elements on the SVG canvas are laid out freely.  The SVG canvas itself is just another HTML element.  Depicted below is a flow of HTML elements like text, images, divs, tables, etc.  The SVG canvas is in the ‘flow’ right along with them. Inside the canvas, graphical elements can be positioned anywhere, but the canvas itself follows the HTML document flow, shrinking and expanding as needed.

The visual freedom comes from LodeStar combining the benefits of a responsive HTML flow with the precise positioning of an SVG canvas.

HTML elements are laid out on the page in a flow. If the page width narrows, the element isn’t by default clipped. It’s just bumped to the next line. The SVG canvas flows right along with the other elements. Its contents, however, are positioned with local XY coordinates.
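A skeletal sketch of the arrangement in script (illustrative only; the element names and coordinates are mine):

// An SVG canvas flows with the HTML document, while its contents
// use local x/y coordinates.
const svg = document.createElementNS("http://www.w3.org/2000/svg", "svg");
svg.setAttribute("viewBox", "0 0 800 600");  // local coordinate system

const dial = document.createElementNS("http://www.w3.org/2000/svg", "image");
dial.setAttribute("href", "dial.png");
dial.setAttribute("x", "104");               // positioned freely inside the canvas
dial.setAttribute("y", "162");
svg.appendChild(dial);

document.body.appendChild(svg);              // the canvas itself joins the HTML flow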

I started with a multimeter image that I took from Pexels.com, a repository of free stock photos.  I used Photoshop to cut out the dial and imported it into the SVG canvas as a separate image.  I did this because I wanted the learner to be able to rotate the switch to place the multimeter in the right mode.  I also imported the image of an electrical box so that I could draw wires overtop.

Variables

As I wrote in The Humble Variable (The Humble Variable | LodeStar Web Journal (wordpress.com)), variables are critical to some eLearning designs.  In this example, I need to store the position of the multimeter switch.  That’s what variables do: they are storage places in computer memory.  As the learner clicks on the switch, the dial rotates.  As an author, I must store the value of that rotation.  If the value of the rotation is 40 degrees, the code judges the switch to be in the right position.

To enter the code that uses the variable, I right-click on the switch and select ‘Select Branch Options’.  Branch Options are basically things that happen as a result of displaying a page, clicking on a button, choosing a multiple-choice option, or doing one of many things.

Branch Options can be as simple as turning a page or as complex as executing a whole list of instructions. The following is a basic example of the latter:

The Multimeter code

var rotation = getValue("dialRotation");
rotation += 10;
setValue("dialRotation", rotation);
changeRotation("dial", rotation, 13, 27);
if (rotation % 360 == 40) {
    changeOpacity("display", 1);
    appendValue("actions", "Turned on multimeter. <br>");
} else {
    changeOpacity("display", 0);
}

This code looks complicated to a non-programmer.  But it is not.  It just takes practice to write.  It’s on the same difficulty level as an Excel formula.

Here is the same code but with an explanation (in italics) underneath:

var rotation = getValue("dialRotation");

get the value of dialRotation from long-term memory and assign it to a local or temporary variable named ‘rotation’

rotation += 10;

add 10 degrees to the value of rotation.  In other words, rotation = the old value of rotation plus 10.

setValue("dialRotation", rotation);

store the new value in long-term memory in a location called ‘dialRotation’

changeRotation("dial", rotation, 13, 27);

change the property of a graphic with the ID of ‘dial’.  All LodeStar graphics can be assigned an ‘ID’.

More specifically, set the rotation property to the new value of rotation.  Pivot the rotation at the precise point that equals 13% of the width of the SVG canvas and 27% of the height of the canvas.  That point is the center of the dial in its current position on the canvas.  If the dial were in the dead center of the canvas, we would use 50, 50.

if (rotation % 360 == 40) {

This line could be simplified to if (rotation == 40).  I used the modulo operator (that is, ‘%’) in case the learner kept rotating the dial around and around.  If rotation = 400, then 400 % 360 would equal 40: 360 divides into 400 once with a remainder of 40.  So, if the remainder is equal to 40, then do the following:

changeOpacity("display", 1);

change the opacity of a graphic with the id of ‘display’.  This is the text box used to show the voltage.

appendValue("actions", "Turned on multimeter. <br>");

store the learner’s actions in long-term memory in a place called ‘actions’

}

else {

changeOpacity("display", 0);

if the rotation of the dial does not equal 40, then shut off the display by changing its opacity to 0.

}

The Probe Code

I won’t explain the probe code in as much detail.  Basically, when you drag the red or black probe, the following code is executed.  It checks whether or not the probes are in the right spots.  If they are, the multimeter display shows 110 volts.

var condition1 = isOverlap("RedProbeTip", "BlackWireBTarget");  // is the red probe tip over the hot wire?
var condition2 = isOverlap("BlackProbeTip", "box");             // is the black probe grounded to the box?
if (condition1 == true && condition2 == true) {
    changeText("display", "110.0");
    appendValue("actions", "Moved red probe to correct position. Black probe in correct position.<br>");
} else if (condition1 == true) {
    changeText("display", "0");
    appendValue("actions", "Moved red probe to correct position.<br>");
} else {
    changeText("display", "0");
    appendValue("actions", "Moved red probe to incorrect position.<br>");
}

These are the drag branch options that are tied to an object with a specific ID. 

Red probe in place; black probe is not. Therefore the meter shows ‘0’.
Red probe in place. Black probe in place. Meter shows 110 volts.

Alternative branching

Once the learner has tested the wires with the probes, with one probe connected to the wire and the other grounded, then the learner must select A, B, C, or D.  Here’s where alternative branching comes in.  (The above scenario is as easy as it gets.)  Learners who select the right answer might go on to a more difficult scenario; perhaps they must do a continuity test to detect where there is a break in the circuit.  Learners who select the wrong answer can be branched to a simpler circuit or given an explanation that one black wire is coming directly from the power source, and the second black wire is passing on that power to the next outlet or switch.

CCAF applications accommodate the differences in learners.  The application can alter the sequence of experiences based on learner performance.  This is a profoundly different thing than typical eLearning applications where every learner reads the same text, watches the same videos, and completes the same quiz.

Conditional Logic

Ethan also lists conditional logic as a basic requirement of CCAF applications.  Conditional logic comes in the form of if-else statements, as evidenced by the code.  Conditional logic also comes in the form of alternative branching.  Select the wrong answer and then get help.  In LodeStar, conditional logic is supported not only by its language and branch options but also by logic gates.

In the display below, we see what happens when the learner reaches a gate.  (Incidentally, learners don’t actually see a gate.  When they page forward, the application checks the gate’s logic and then branches them according to some condition.)  In this example, the author might configure the gate with a pass threshold.  Let’s say 80%.  If the learner meets or exceeds a score of 80%, they are branched to the ‘Results’ page.  If not, they may be routed to Circuit Basics. Follow the dotted lines.

Branches at the ‘page’ level are visualized in the Branch View.
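Conceptually, the gate’s check boils down to a conditional branch. A sketch in the style of LodeStar’s branch-option code (the function name branchToPage and the variable name quizScore are mine, for illustration; the actual gate is configured in the editor, not coded):

var score = getValue("quizScore");   // score stored earlier by the quiz pages
if (score >= 80) {
    branchToPage("Results");         // met the threshold: move on
} else {
    branchToPage("Circuit Basics");  // below the threshold: remediate
}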

Action/response structures

In our example, the learner moves the probes around.  If the multimeter is turned on, the learner sees a voltage display.  The action is moving the probe. The response is a voltage display. 

First, this is a ‘real-world’ action and a ‘real-world’ response.  I write ‘real-world’ in contrast to what happens in a typical multiple-choice question.  In a multiple-choice question, the learner clicks on a radio button and possibly sees a checkmark.  That’s only ‘real-world’ to an educational institution.  The world doesn’t present itself as a set of multiple-choice questions.

Second, when the learner sees a voltage display, that is feedback in the CCAF sense of the word.  The learner does something and then gets feedback.   Now, in our example, we did choose to combine ‘real-world’ feedback with a multiple-choice question.  Ultimately, the learner is asked to choose the letter next to the ‘hot’ wire.  In our example, we logged the learner’s actions and can unravel how they arrived at their final decision.  Did they connect the red probe to the right wire and did they ground the black probe?  If they selected the right answer but didn’t perform the correct actions that would lead to the right answer, we know they haven’t learned anything at all.

Conclusion

Authoring tools that enable one to create CCAF must have these capabilities: complete visual freedom, variable support, alternative branching, conditional logic, and action/response structures.

The hot wire example is a very simple simulation.  But, as I wrote, the concept of CCAF isn’t restricted to this type of simulation.  CCAF can be found in decision-making scenarios, for example. The learner might be placed in a situation and challenged to make the right decision or say the right thing.  That too is CCAF.  CCAF lies at the heart of effective learning experiences.

eLearning Strategies to Support Memory Recall

Introduction

At the university where I worked for eight years, occasionally I observed non-traditional students in class well into the evening, struggling to stay alert, struggling to soak it in, trying to make something better for themselves. Several years earlier, I watched a new employee at a software company resign in utter defeat. Nothing he had studied before in terms of software language, database, and mathematics prepared him for a new domain of knowledge.  It was all foreign, and it was disheartening, and it was delivered in a manner that was all too much.

Late evening classes or eight-hour training days push more and more information at the learners, until they literally break down, quit, or somehow miraculously hang on to fight another day.

The tremendous tax on learners is not unusual in either the corporate or the academic environment.  Both schools and companies place a heavy demand on the learner’s ability to remember things. 

The constraints of human memory!  Our lack of understanding of memory would be almost humorous if it weren’t for the wasted effort of students and employees alike.  In this vacuum of understanding, myths and falsehoods and deceptive practices have filled in.   Fortunately we have people like Will Thalheimer (The Debunker Club : Debunking Resources – The Debunker Club) and the authors of The Urban Myths of Learning and Education to help set us straight.

The Forgetting Curve

What we do know, and what research supports, is that we are wired to forget.  Many of us cite Hermann Ebbinghaus’ ‘Forgetting Curve’.  The forgetting curve is real and, in some cases, very steep depending on a number of factors, but as Dr. Thalheimer points out, you just can’t put a number on it.  You can’t say with any certainty, for example, that learners will forget 70% of what they have learned within a day.

Let’s consider the forgetting curve just for a moment, and then we’ll turn to eLearning.

The forgetting curve was the outcome of research done in the late 1800s by Hermann Ebbinghaus.  He scientifically observed his own recall of nonsense syllables.  He made up lists of three-letter nonsense words and committed them to memory.  Once he successfully memorized 100% of a list, he attempted to recall the list after varying intervals.  The forgetting curve shows that he forgot 42% of the words within 20 minutes.  After a day, he retained only 33% of this list of nonsense.

Hermann Ebbinghaus’ Forgetting Curve
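The curve is often approximated as exponential decay. A sketch (illustrative; the exact functional form of Ebbinghaus’ data is still debated):

// Retention R = e^(-t/s), where t is elapsed time and s is the
// relative strength of the memory.
function retention(t, s) {
    return Math.exp(-t / s);
}

console.log(retention(1, 1).toFixed(2)); // "0.37" – about 37% retained after one unit of time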

We know that people forget, perhaps at disheartening rates, but the rate of forgetfulness is based on dozens of factors.  Are these new employees who are being introduced to something new to them, or are they seasoned employees?  Do they have any prior knowledge that will help them organize new information?  Are they paying attention or are they distracted?  Are they motivated to learn – intrinsically or with an external reward?  Is there a threat if they don’t learn?  Is there too much of a threat that inhibits their learning?  Are they just trying to earn CPE credit?  Are they taught how to recall the information in the right place at the right time for the right reason?  Is the material difficult?  Are they asked to recall the information? How many times?

Try placing those variables in a formula.  It’s impossible. 

We know that the forgetting curve is real.  It has been replicated recently (Replication and Analysis of Ebbinghaus’ Forgetting Curve (nih.gov)) and it will accurately mirror our students’ or employees’ rate of forgetfulness if we do not:

  • Help learners recall prior knowledge
  • Help learners organize new knowledge
  • Provide storage and retrieval cues that will help them use the information in the right context
  • Practice retrieval of the new knowledge
  • Space the retrieval over time
  • Integrate the new knowledge with other knowledge
  • Apply the new knowledge before forgetting

This is where eLearning plays a role. Oftentimes, trainers are busy workers or busy teachers who can’t address deficits in prior knowledge, for example, or even assess prior knowledge, or fit spaced practice or simulated application into their training.

That is where I think eLearning can shine. 

I know, I know.  I’m an eLearning developer and an eLearning authoring toolmaker.  But there are reasons why I chose this field.  This is one of them.

The design of eLearning experiences can help improve the training experience, even if the latter is traditional face-to-face teaching.  As I’ve observed, many people dread eLearning because of the page-turner drudgery they’ve been subjected to.  Medical workers, lawyers, and accountants, and anyone with continuing education demands, have had too many bad self-study experiences.   In my current company, group-live (face-to-face) instruction is preferred over eLearning. That doesn’t, however, eliminate the option of eLearning. As a pre-training preparation or a post-training reinforcement and application, eLearning can still play a role.

Against this backdrop, here are some strategies or designs that can help:

Plan the training or academic curriculum to include pre-training activities and post-training reinforcements.  Make room for recalling prior knowledge in the training or lesson plans of future courses.

Flip the training.  That means, use eLearning (or self-studies) to present the content and use face-to-face training time to observe student performance and provide feedback. Data from 317 studies shows that flipped classroom interventions produced positive gains across all three learning domains (To Flip or Not to Flip? A Meta-Analysis of the Efficacy of Flipped Learning in Higher Education – Carrie A. Bredow, Patricia V. Roehling, Alexandra J. Knorp, Andrea M. Sweet, 2021 (sagepub.com))

Pre-training

Let the post-training assessments for the last course or training session be the pre-training assignments for the new thing — not as assessments, but as highly scaffolded activities with prompts and hints and feedback and textbook references and video helps and whatever.  The point is to help recall and to prepare learners for what lies ahead. 

Design activities that help learners recall vocabulary, basic concepts, laws, principles and procedures.  Activities can help prompt that recall and reduce the cognitive load of the new stuff.  If an accounting teacher makes references to cash or accrual accounting, do you want students struggling to recall the terms or do you want them paying attention to the new information?  It’s hard for them to do both.

Use flashcards, crosswords, matching, categorization, and other activities.  They’re not as sophisticated as things I’ve discussed in past posts, but they can play a useful role in helping recall.

Embed a video or a short Powtoon presentation.

Use quizzes with circular queues (missed questions get repeated) or variable interval queues (missed questions get repeated at spaced intervals).
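A circular queue is simple to picture in code. A sketch (it assumes an array of questions and a hypothetical askQuestion function that returns true on a correct answer):

const queue = questions.slice();       // copy the question list
while (queue.length > 0) {
    const question = queue.shift();    // take the next question
    if (!askQuestion(question)) {
        queue.push(question);          // missed: cycle it back to the end
    }
}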

Make it fun.  Gamify it.

Post-training

All of the pre-training suggestions apply to post-training as well.  But you can do even more.


Interactive Storyboards

This strategy walks the learner through the presented content in a storyboard fashion.  In the interactive storyboard, however, the learner must fill in the missing pieces. Recently, a presenter from our HR department spoke about employee feedback and the different roles that in-charges, supervisors, and talent advisors play in giving feedback to accountants and auditors.  She talked about a process that included feedback in review notes, one-on-one meetings with supervisors, and regular meetings with talent advisors.  The post-training activity can follow along in the life of an accountant but leave blanks for the learner to fill in or questions for the learner to answer. It causes the learner to retrieve important elements of the presentation and become an active participant in reconstructing the information. When the learner gets it wrong, that’s an opportunity for feedback!

An interactive Storyboard, created with the LodeStar eLearning authoring tool

An added benefit to the activity is that we can see how learners experienced the post-training activity through the xAPI statements that the (CMI5-conformant) activity generated.  In the following screenshot from the Learning Record Store, we can see that this employee missed the point that there is a connection between one-on-one meetings and talent development meetings.  We also see that this employee did hit the results page with a decent score the first time around.  The employee satisfied the requirements of the assignable unit (AU) and completed the course. That tells us a lot.  If we were to analyze all of the items that employees missed, we could either improve the presentation or improve the questions.

xAPI statements, generated by an activity authored in LodeStar

Embedded Discussions

Higher education instructors often invite students to discuss topics online after a presentation.  There is a reason for this. At the most elemental level, it forces recall of the presentation. At a higher level, it generates new knowledge as students hear differing perspectives.

In my time in higher ed, I’ve seen this done well and I’ve seen it done poorly.  My poster child for doing it right was a marketing instructor who simulated product advertising pitches in a discussion forum.  My hunch is that online discussion in corporate training environments is rarer.  To my point, our corporate Learning Management System (LMS) doesn’t even offer a discussion board. 

The following screenshot depicts an activity prototype with an embedded discussion board.  For this prototype, we used Tribe from Tribe | A Customizable Community Platform.   Tribe allows you to create and embed your discussion board.   (I’m not necessarily endorsing Tribe.) The strategy is to refresh employees on the fundamental principles of giving and receiving feedback and then ask them to discuss what works for them.

The key idea is to immerse learners in the content with enough information to prompt their recall of the training.  Then we invite them to share their insights or strategies with others.  They don’t need to leave the activity and log in to another service.  They can share their thoughts right there and then. 

This is an important idea in a general strategy that we’ve been working out called 3Di.  That means delivery of interactive content, discussion, and then decision.  Students apply what they have both learned and discussed to make a decision. 

A discussion forum embedded in an eLearning activity

Staged Journals

We first developed this strategy for a literature teacher.  She taught students how to be analytical of fairy tales.  She instructed them on the Propp analysis, based on the work of Vladimir Propp.  In the staged journal technique, students would be presented with one step or stage of the analysis.  They would complete the step and go on to the next.  In the end, they had a journal compiled from all of the steps.

The screenshot below depicts an employee who types in his greatest difficulty when asked to give a subordinate corrective feedback.  The learner brainstorms difficulties, and then brainstorms remedies. 

Here is an excerpt from a journal that compiles it all together in a feedback summary.

A compiled journal

Conclusion

Face-to-face instruction may have its supporters, but even this delivery type should include pre-training and post-training eLearning activities.  We know from research and from our own surveys that students and employees forget too much of what we teach.  The amount and rate of forgetfulness may not precisely follow Ebbinghaus’ curve but unless we address forgetfulness, students won’t achieve the desired outcomes of the training. 

More in-depth activities might include decision-making scenarios and simulations.  I’ve written about those in past articles but, in this post, I have featured activities that can be quickly and easily generated.  All three activities represent strategies that can help in the reflection and recall of training.   

Learner Experience Design

Introduction

Learner Experience Design has captured the attention and the imagination of just about everybody.  Some have cast learner experience design (LXD) as a discipline in direct opposition to instructional design; others consider LXD as a rebranded instructional design.

My own perspective comes directly from my community of practice.  For one, I worked as an instructional designer for creative studios that practiced learner experience design well before it became a thing.  We worked in teams that blended the disciplines of user experience design, cognitive psychology, learning technology, and design thinking, which included ideation and prototyping.  LXD as a discipline captures the very best of the principles espoused in the CCAF (Context-Challenge-Action-Feedback) Model and the processes of design that include situational and user analysis, successive approximations, sketches, quick prototypes, a focus on the user, and a focus on doing.  The process of creating Allen Interactions’ ZebraZapps, an eLearning authoring tool, included the best of design thinking and user experience design.

So what is Learner Experience Design?

So for me, LXD is what we’ve been doing for years, and that is:

  • Centering on the learner versus the content (Dee Fink)
  • Focusing on the experience of the learner — on the doing (CCAF, problem-based learning)
  • Applying how people learn (cognitive science)
  • Empathizing, defining, idea-generating, prototyping, and testing (Design Thinking)
  • Following the principles of User Experience Design (Human Factors)
  • Collecting and analyzing data (Data analytics with the help of SCORM, and now xAPI, CMI5)
  • Using learning technology as enablers or affordances
  • Recognizing that formal training is but one part of improving human performance

In my view,  LXD is the power of all of these things combined under one label.  To illustrate the interplay of the learner, experience, cognition, behavior, UX, Design Thinking, data, technology, and human performance, I’ll draw upon a current project. 

An Example

The project goal is to help supervisors act more like coaches than formal evaluators.  The context is public accounting.  CPAs require deep technical skills and, as they progress in their careers, a host of success skills that include business development, leadership, supervision, and more.  In Minnesota, for example, CPAs complete 120 credits every three years to maintain their license.  They must also routinely attend trainings and updates related to changes in the law, technology, and business practices. 

In addition to this continuous training, the company seeks to improve employee retention, maintain good morale, and continue to grow rapidly.  To achieve its goals, the company adopted an employee engagement system that, among other things, helps supervisors collect feedback on employees from their tax reviewers or audit in-charges. More importantly, the company is switching from an annual review to monthly meetings that help supervisors and their reports improve their work.

There’s already a lot going on.  Learner Experience Design recognizes that all of these factors come into play:

  • Employees train a lot
  • New technology is in place
  • Industry is experiencing high turnover of staff
  • Company wants supervisors to be good coaches
  • Company is shifting from annual review to monthly meetings

At the heart of all of this lies a set of experiences shared between supervisors and their reports:

  • Requesting, providing, and organizing feedback with the employee engagement platform
  • Delivering effective feedback
  • Receiving feedback effectively

Let’s focus on one experience to illustrate the power of LXD.  Let’s focus on ‘giving feedback’.

There are underlying psychological principles as well as best and poor practices related to giving feedback.  Giving feedback might elicit a perception of threat in the receiver and can easily be dismissed.  The feedback provider must use concrete examples, remain non-judgmental, draw from different perspectives, work toward a positive outcome, and on and on.

As designers, we can treat the topic of giving feedback in many different ways.  We can explain the function of the amygdala in the human brain and underscore its importance in decision making and emotional responses.  Feedback triggers those emotional responses and evokes a fight or flight response.  We could show video clips of good and bad practice or cartoon strips or excerpts from medical journals or any media that conveys information.  Our design might include this type of information sharing and then some form of assessment – a quiz or essay.

In contrast, LXD tends to favor placing the experience at the heart of the lesson.  In this case, the experience is the giving of feedback.  One design treatment might place the learner in a first-person scenario or simulation.  The context is the office with a new employee who is not performing well.  The learner acts as supervisor and selects the best thing to say in a conversation with the employee.  If the learner’s choices disagree with the principles and best practices of providing feedback, then the instruction may come in the form of an employee thought bubble, a pop-up outlining best practices, references to a text or a video, and other visual indicators of success or failure. 

In the prototype below some of these ideas come together.  The learner has selected one of three options.  The choice causes a change in the employee’s outward expression (full figure on the left), inward expression and thoughts, and in the information that is collected on the interaction.  In this prototype, the learner can access a transcript or review it at the end.  At this point in the scenario, the employee came in with the expectation of being coached only to be confronted by the reality that she is being evaluated (because of what the learner chose).  She outwardly smiles while inwardly expressing her concern about being evaluated.  A meter shows generally how things are going.

At the bottom of the screenshot, the learner has access to feedback given about the employee from two sources.  Just as in real life, the learner can consult that feedback to get different perspectives on the employee’s performance.

Giving Feedback prototype authored with the LodeStar eLearning authoring tool

The Design Thinking that led to this prototype included, to start, an analysis.  We must know something about the audience, their situation, and the processes that were in place in the past.  In fact, while thinking about the actual problem we are trying to solve, we placed feedback ‘training’ on the back burner.  Other things needed to be in place first: clear processes and role definitions among supervisors, audit in-charges, tax reviewers, and other personnel.  We also needed to work out how the workplace engagement platform would be used optimally to solicit and collect feedback in preparation for the one-on-one meetings between supervisors and their employees.

As we continue to think about people and processes, we’ll come up with new ideas, build new prototypes and test them out. 

Well…admittedly, to a point.  For a mid-sized company the return on time and effort is calculated quite differently than for a creative agency that plans training for thousands.  Design thinking still plays a role, but perhaps at a smaller scale.

The cognitive aspects of this training relate to how we can help the learners acquire and retain new knowledge without overload, how they can assimilate that new knowledge, and how they can apply the knowledge to their daily lives.  Human Performance Improvement considers any job aids or prompts that support the learner’s application of the principles and procedures.  User Experience Design challenges us to think about a lot of things on the screen (fonts, colors, layout, flow, navigation,  interactive elements, accessibility, desire paths) and off (cognitive overload, attention, memory, and more).

All of these things interplay and intersect.  Cognitive load might cause us to scaffold or plan out the curriculum differently (instructional design), or create a job aid (human performance), or map out the experience (UX) so that it doesn’t overwhelm the learner.  As we build prototypes or test the product, we collect data and analyze it.  Learning technology (xAPI, CMI5, SCORM) helps us collect the data from the learning experience.  xAPI and CMI5 are standards that are centered on experience.  (As I’ve written in the past, the x in xAPI is ‘experience’.)    Statistical methods help us make sense of the data.  For example, are learners benefiting from one design over another?

Conclusion

Since the term Learner Experience Design was first introduced, it has become part of our vocabulary and a rallying cry against content-centric designs, training-centric human performance improvement, and ineffective user interfaces.  LXD may not be anything new and yet it feels new and it feels exciting.

CMI5: A Call to Action

Introduction

Since 2000 a lot has changed. Think airport security, smart phones, digital television, and social media. In 2000, the Advanced Distributed Learning (ADL) Initiative gathered a set of eLearning specifications and organized them under the name of SCORM. In 2021, in a time of tremendous technological change, SCORM still remains the standard for how we describe, package, and report on eLearning.

However, finally, we are on the eve of adopting something new and something better: CMI5.

We no longer have landlines, but we still have SCORM

CMI5 Examples

To many, CMI5 is another meaningless acronym. To understand the power and benefit of CMI5, consider these very simple examples:


A Learning and Development specialist creates a learning activity that offers managers several samples of readings and videos from leadership experts. The activity allows the managers the freedom to pick and choose what they read or view; however, the specialist wants to know what they choose to read or watch as well as how they fare on a culminating assessment.

CMI5 enables the activity to capture both the learner experience (for example, the learner read an excerpt from Brené Brown’s Dare to Lead) and the test score. CMI5 can generate a statement on virtually any kind of learner experience as well as the traditional data elements such as score, time on task, quiz questions, and student answers. In this sense, CMI5 supports both openness and structure.

Let’s consider another example:

An instructor authors a learning activity that virtually guides students to places in Canada to observe the effects of climate change. She wants students to answer questions, post reflections and observe the effects of climate change on glaciers, Arctic ice, sea levels and permafrost. She sets a passing threshold for each activity. Once students have completed all of the units, then the learning management system registers that the course was mastered.

Let’s go further:

The instructor wants the learning activity to reside in a learning object repository or website outside of the learning management system – but still report to the learning management system. In fact, she wishes that no content reside on the learning management system. Regardless of where the content resides, she wants to know what sites students visited, how they scored on short quizzes, and how students reacted to the severe impact of climate change on Canada.

For students with disabilities, the instructor makes an accommodation and requests that the LMS administrator adjust the mastery score without editing the activity.

As the course becomes more and more popular, she anticipates placing the website and its activity onto CloudFlare or some content distribution network so that students all around the world can gain faster access to the learning activities.

The instructor works as adjunct for multiple universities and wants each of their learning management systems to get the content from a single location. In some cases, she wants the content locked for anyone who circumvents the Learning Management System and in other cases she openly lists the unlocked content with OER libraries like Merlot and OER Commons.


Before CMI5 much of this was difficult to achieve, if not impossible. So, let’s review what CMI5 offers us.


CMI5 captures scores in the traditional sense. But it also records data on learning experiences, such as students virtually observing the change in the permafrost. CMI5 allows instructors and trainers to set the move-on criteria for each unit in a course (i.e., the passing score required before the student moves on to the next unit).

CMI5 activities can reside anywhere – on one’s own website, for example – and still report to the learning management system. CMI5 enables an LMS administrator to change the mastery score from the LMS for the benefit of students who need accommodations, essentially trumping what is set in the unit.

LodeStar’s CMI5 Implementation allows
authors to indicate where the content resides
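To picture the move-on idea, here is the gist of a cmi5 course structure expressed as a JavaScript object (the real artifact is an XML file, cmi5.xml; the moveOn vocabulary – Passed, Completed, CompletedAndPassed, CompletedOrPassed, NotApplicable – comes from the cmi5 specification, while the IDs here are invented):

const course = {
    id: "https://example.com/courses/climate-change",
    assignableUnits: [
        { id: "https://example.com/au/glaciers",   moveOn: "CompletedAndPassed", masteryScore: 0.8 },
        { id: "https://example.com/au/permafrost", moveOn: "Completed" }
    ]
};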


CMI5 is a game changer. And yet for many – learning and development leaders, instructional designers, technologists and students – it doesn’t seem that way in 2021. CMI5 seems like a non-event. It feels like something we all talked about – a welcome change of weather on the horizon –and then nothing. Not a drop of rain.


We have been talking about and anticipating CMI5 for a long time – and yet, major learning management systems both in the corporate and academic worlds still don’t support it. CMI5 was envisioned in 2010, released to developers in 2015, and then released to the public in its first edition in 2016. We are now in the waning days of 2021—with limited adoption.


But that is likely to change.


For one, Rustici Software and ADL delivered on their promise of Catapult. Catapult is likely to accelerate adoption of CMI5. It provides many benefits to developers, including the ability to test if a CMI5 package conforms to the standard.

In my view, the learning technology architects have done their part. They brought us a meaningful set of specifications. They brought us the tools to test learning packages and to test the learning management system’s implementation of CMI5. Now it’s up to learning and development specialists and the instructional design community to cheer CMI5 on. It is my belief that once the community understands CMI5, spreads the word, and imposes its collective will on the LMS providers, CMI5 will become an important part of our tool bag. I urge you to share this article and others like it.


In the meantime, let’s take a deeper dive into CMI5’s potential.


Benefit One: Freedom to capture and report on any learner experience.


With CMI5 you can report on scores, completion status, and just about anything else. You can report on standard assessment results and on not-so-standard learning experiences.


To understand this, we need to look back at SCORM.


One should consider CMI5 a replacement for SCORM – an improved specification. Conforming to SCORM was useful because a learning object or learning activity could be imported into just about any modern learning management system. As an instructor, if you created a game, quiz, presentation, simulation – whatever – and exported it as a SCORM package, your activity could be imported into Moodle, BrightSpace, Canvas, Cornerstone, Blackboard, or any other learning management system that supported SCORM. So the benefit of SCORM was that it was a set of standards that most LMSs understood. The standards that fell under the SCORM umbrella included metadata, a reporting data model, and standard methods for initializing an activity, reporting scores, reporting on interactions, and reporting passing or failing and completion status.

The data model included dozens of elements. One example of a data element is cmi.core.score.min. Related to score, SCORM-conformant activities reported the minimum score, the maximum score, the raw score (an absolute number), and the scaled score (a value between 0 and 1).
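
As a concrete illustration, here is a minimal sketch of how a SCORM 1.2 activity reported a score. The score values are hypothetical; the API object is the adapter that the LMS exposes to the content window, and its shape is declared here only for illustration.

```typescript
// A minimal sketch of SCORM 1.2 score reporting. The adapter object ("API")
// is supplied by the LMS; this declaration only describes its shape.
declare const API: {
  LMSInitialize(arg: string): string;
  LMSSetValue(element: string, value: string): string;
  LMSCommit(arg: string): string;
  LMSFinish(arg: string): string;
};

API.LMSInitialize("");
API.LMSSetValue("cmi.core.score.min", "0");    // minimum possible score
API.LMSSetValue("cmi.core.score.max", "100");  // maximum possible score
API.LMSSetValue("cmi.core.score.raw", "85");   // the learner's actual score (hypothetical)
API.LMSCommit("");                             // ask the LMS to persist the data
API.LMSFinish("");                             // end the session
```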


SCORM supported a lot of different data elements, and a SCORM-conformant activity could report on a variety of things. The limitation of SCORM, however, was that, despite the large number of elements, it was still a finite list. Take a geolocation storytelling activity or an eBook reading as an example. If I wanted to capture and report that the student virtually or physically visited location A, then B, and then C, I would have to work around the limitations of SCORM. I could not generate a statement such as ‘Student visited the Amphitheater in Arles’. If I wanted to capture a student’s progress through an eBook, SCORM would be problematic.


At this point, you might be protesting: but xAPI does that! xAPI? Another acronym! Yes. xAPI, or the Experience API, is a newer specification that makes it possible to report on a limitless range of things that a learner has experienced: completing a chapter of an eBook, watching a video, touring a museum, and on and on. So, if we have this thing called xAPI, why CMI5?
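
To make this concrete, here is a hedged sketch of the kind of xAPI statement I could not express in SCORM. The verb and activity IRIs are hypothetical placeholders chosen for illustration, not identifiers from an official vocabulary.

```typescript
// A sketch of an xAPI statement: actor, verb, object.
// The IRIs below are illustrative placeholders.
const statement = {
  actor: {
    objectType: "Agent",
    name: "Example Student",
    mbox: "mailto:student@example.com",
  },
  verb: {
    id: "https://example.com/verbs/visited",  // hypothetical verb IRI
    display: { "en-US": "visited" },
  },
  object: {
    objectType: "Activity",
    id: "https://example.com/places/arles-amphitheater",  // hypothetical activity IRI
    definition: { name: { "en-US": "The Amphitheater in Arles" } },
  },
};
```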


The benefit of xAPI is that it supports the reporting of anything. The downside is that, by itself, it doesn’t have a vocabulary that the LMS understands, such as launched, initialized, scored, passed, and completed. That is what CMI5 offers. CMI5 is, in fact, an xAPI profile that includes a vocabulary the LMS understands. In addition, CMI5 can report on any type of learner experience. Here is the definition of CMI5 from the Advanced Distributed Learning Initiative:


cmi5 is a profile for using the xAPI specification with traditional learning management (LMS) systems

(Advanced Distributed Learning).


With CMI5, you can have your cake and eat it too. You can report on learner activity in a way that the LMS understands, and you can report on just about anything else, which gets stored in a Learning Record Store. The Learning Record Store, or LRS, is a database populated by statements about what the learner experienced.

xAPI statements can capture any learner experience, including reading the instructions


Benefit Two: Freedom to put the learning activity anywhere


With CMI5, you can place a learning activity in a repository, in GitHub, on a web server, in a Site44 Dropbox site, in SharePoint, in a distributed network, wherever, without restricting its ability to connect with a learning management system. CMI5 content does not need to be imported. A CMI5 package can contain as little as one XML file which, among other things, tells the LMS where to find the content.
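
To give you a feel for how little is required, here is a minimal sketch of that course structure file (cmi5.xml). The element and attribute names follow the cmi5 course structure schema, but the IDs, titles, and URL are placeholders; consult the specification for the full schema.

```xml
<?xml version="1.0" encoding="utf-8"?>
<courseStructure xmlns="https://w3id.org/xapi/profiles/cmi5/v1/CourseStructure.xsd">
  <course id="https://example.com/courses/climate-change-canada">
    <title><langstring lang="en-US">Climate Change in Canada</langstring></title>
    <description><langstring lang="en-US">A virtual field course.</langstring></description>
  </course>
  <au id="https://example.com/courses/climate-change-canada/glaciers"
      moveOn="CompletedAndPassed" masteryScore="0.8">
    <title><langstring lang="en-US">Glaciers</langstring></title>
    <description><langstring lang="en-US">Observing glacial retreat.</langstring></description>
    <url>https://example.com/activities/glaciers/index.html</url>
  </au>
</courseStructure>
```

Notice the url element: it can point to content hosted anywhere, which is the whole point.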


To appreciate this, we need to look back at SCORM once more (as if it were ancient history).


I’ll start with a pseudo-technical explanation and then follow with why it matters.
The way SCORM works is that the learning activity sits in a window. The learning activity uses a simple looping algorithm to find the Learning Management System’s SCORM adapter: it checks its parent window for a special object. If the parent window doesn’t contain the object, the activity looks to the parent’s parent, and so on. In other words, somewhere in that chain of parents, there must be that special object. Typically, the SCORM activity can only communicate with the learning management system if it is a child window of that system or if some server-side technology is used. A sketch of the discovery loop appears below.
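
Here is a sketch of that discovery loop, assuming a SCORM 1.2 adapter named API (SCORM 2004 names the object API_1484_11):

```typescript
// Walk up the chain of parent windows looking for the LMS's SCORM adapter.
function findAPI(win: Window): unknown {
  let attempts = 0;
  let current: Window = win;
  while (!(current as any).API && current.parent && current.parent !== current) {
    if (++attempts > 7) return null;  // give up after a reasonable depth
    current = current.parent;
  }
  return (current as any).API ?? null;
}

// Content launched in a pop-up may need to check the opener's chain instead.
const api = findAPI(window) ?? (window.opener ? findAPI(window.opener) : null);
```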

CMI5 works quite differently. CMI5 gives us freedom to leave our parents’ home. Whereas SCORM uses a JavaScript Application Programming Interface to communicate, CMI5 uses xAPI to reach across the internet and call a web service’s methods. Loosely, it’s like the difference between a landline and a cellular phone service. To use the landline you must be in the house; to use a cell phone, you must be in the network.

Benefit Three: A simplified sequencing model.

SCORM supported Simple Sequencing, which many say is not so simple. CMI5’s move-on property, in contrast, is very easy. A CMI5 course can contain one or more Assignable Units (AUs). The instructor spells out what the learner must achieve in an assignable unit before the learner can move on. The move-on property has one of the following values:


• Passed
• Completed
• Completed Or Passed
• Completed And Passed
• Not Applicable


Once the student has ‘moved on’ through all of the assignable units, the LMS notes that the course has been satisfied by that student.


Benefit Four: An assignable unit passing score can be overridden


In SCORM, the mastery score is hard-coded in the activity. In a SCORM activity, the instructor can base completion status on a passing score. But what if that hard-coded score were inappropriate for a group of students, for whatever reason? The CMI5 specification enables an LMS to pass the mastery score to the Assignable Unit upon launch. So the LMS launches the AU and sends it the student name and mastery score (among other things). By specification, the AU cannot ignore the mastery score; it must either use it to trump what is hard-coded in the unit or refuse to run.
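
How does the AU receive that mastery score? Per the specification, the LMS writes it into a state document called LMS.LaunchData, which the AU retrieves from the LRS after launch. Here is a hedged sketch; the endpoint, actor, activityId, registration, and authToken parameters come from the launch sequence described in Appendix A, and the exact Authorization header format is defined by the specification.

```typescript
// A sketch: retrieve the LMS.LaunchData state document, which carries
// masteryScore (along with launchMode, moveOn, returnURL, and more).
async function getMasteryScore(
  endpoint: string, actor: object, activityId: string,
  registration: string, authToken: string
): Promise<number | undefined> {
  const query = new URLSearchParams({
    stateId: "LMS.LaunchData",
    activityId,
    agent: JSON.stringify(actor),
    registration,
  });
  const response = await fetch(`${endpoint}/activities/state?${query}`, {
    headers: {
      Authorization: authToken,             // token obtained at launch (see Appendix A)
      "X-Experience-API-Version": "1.0.3",
    },
  });
  const launchData = await response.json();
  return launchData.masteryScore;           // e.g., 0.8 (a scaled value)
}
```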


Benefit Five: Theoretically, CMI5 isn’t hamstrung by pop-up blockers.

When an LMS launches a SCORM activity, it either embeds the activity in an iframe or launches a pop-up window. Both scenarios are problematic. The content may not be well suited for an iframe, and a pop-up blocker can obstruct the launched window.


Theoretically, a CMI5 AU can replace the LMS page with its own content. It’s not in an embedded iframe and it’s not a pop-up window. When the LMS launches the AU, along with the student name and mastery score, the LMS sends the AU a return URL. When ended, the AU returns the student to that return URL, which is the address of the LMS.


I write “theoretically” because an LMS should not, but may, ignore this requirement.

Benefit Six: CMI5 activities securely communicate with the Learning Record Store


As I wrote, the activity can send information about learner experiences clear across the internet to the Learning Record Store. But how does the AU have the authorization to do this from, let’s say, a website? And how does it happen securely?


This is the marvel of 2021 technology versus 2000 technology. Before 2000, we had difficult-to-use protocols for passing information securely across the internet; oftentimes, special rules needed to be added to internet routers. Then along came a simpler protocol that the first version of CMI5 used (SOAP). Then came an even better way (OAuth and REST). After launch, the LMS hands the AU a security token (kind of like a key that dissolves in time). The AU uses that key to gain access and to post information to the Learning Record Store.
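
Here is a hedged sketch of that token exchange. At launch, the LMS hands the AU a one-time ‘fetch’ URL; the AU POSTs to it and receives a JSON object containing an auth-token, which the AU then uses in the Authorization header of its LRS requests (consult the specification for the exact header format).

```typescript
// POST to the one-time fetch URL supplied by the LMS at launch.
// The response body contains an "auth-token" property.
async function fetchAuthToken(fetchUrl: string): Promise<string> {
  const response = await fetch(fetchUrl, { method: "POST" });
  const body = await response.json();
  return body["auth-token"];  // used to authorize subsequent LRS calls
}
```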

Conclusion

CMI5 returns power to the instructor and to the L&D specialist. CMI5 allows one to choose where the content resides and to choose what the content reports. CMI5 captures learner experiences more completely and yet it communicates with Learning Management Systems with a vocabulary that LMSs understand. CMI5 supports accommodations for a special group of students without needing to change the code of the Assignable Unit. Finally, CMI5 uses current technology to send data over the internet.

The implications of this emerging specification are tremendous. It is better suited to mobile learning, and it is better suited to the learner experience platforms that are emerging (e.g. LinkedIn Learning’s Learning Hub). Soon instructors may be able to organize content from a variety of providers (like LinkedIn Learning, Khan Academy, or OER Commons) but retain the learning management system as an organizer of content, data collector, and credentialing agent. Now instructors, average instructors, may be able to participate in that content market from their own GitHub repositories and websites.

But many LMSs have yet to adopt CMI5. The architects have done their part. Now it’s on us to understand this technology and advocate for it. Start by sharing this article. Thank you.

Appendix A — How it Works (A simplified flow)

For those interested in a deeper dive, let’s walk through the CMI5 process flow step-by-step. (See diagram)

To begin, the author (instructor, L&D specialist) exports content as a CMI5 package. The package can be a simple file that instructs the LMS where to find the content or it can include the content itself.

(1) When a student needs the content, the Learning Management System (LMS) launches the Assignable Unit (a course can contain one or more Assignable Units) and (2) sends it information that includes the student name, a fetch URL, and the activity ID. A sketch of what the AU receives appears below.
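
A sketch of what the AU sees at launch: the LMS appends the launch parameters to the AU’s URL as a query string, with parameter names defined by the cmi5 launch specification.

```typescript
// Parse the cmi5 launch parameters from the AU's own URL.
const params = new URLSearchParams(window.location.search);
const endpoint = params.get("endpoint");          // the LRS endpoint
const fetchUrl = params.get("fetch");             // one-time URL for the auth token
const registration = params.get("registration");  // ties statements to this enrollment
const activityId = params.get("activityId");      // identifies this AU
const actor = JSON.parse(params.get("actor") ?? "{}");  // the learner, as an xAPI Agent
```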

(3) The Assignable Unit (AU) uses the fetch URL to retrieve a security token. The security token enables the AU to communicate securely with the Learning Record Store (LRS).

(4) As the student interacts with the content, the AU can optionally send Experience API (xAPI) statements to the LRS. (5) At some point, the AU reports that the student passed and/or completed the unit, as sketched below.
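
Here is a hedged sketch of step (5). The verb IRI and the cmi5 category activity are defined by the specification; the helper’s parameters come from the launch and token steps above.

```typescript
// Post a cmi5 "passed" statement with a scaled score to the LRS.
async function reportPassed(
  endpoint: string, actor: object, activityId: string,
  registration: string, authToken: string, scaledScore: number
): Promise<void> {
  const statement = {
    actor,
    verb: { id: "http://adlnet.gov/expapi/verbs/passed", display: { "en-US": "passed" } },
    object: { objectType: "Activity", id: activityId },
    result: { score: { scaled: scaledScore }, success: true },
    context: {
      registration,
      contextActivities: {
        // Marks this as a cmi5-defined statement.
        category: [{ id: "https://w3id.org/xapi/cmi5/context/categories/cmi5" }],
      },
    },
  };
  await fetch(`${endpoint}/statements`, {
    method: "POST",
    headers: {
      Authorization: authToken,
      "X-Experience-API-Version": "1.0.3",
      "Content-Type": "application/json",
    },
    body: JSON.stringify(statement),
  });
}
```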

(6) The LMS uses the ‘move-on’ information to determine whether or not the student can move on to the next assignable unit. The move-on options are passed, completed, passed and completed, passed or completed, or not applicable.

Finally, when all of the assignable units within a course are completed, the course is marked as satisfied for the specific learner.

A simplified process flow that starts with the launch of the CMI5 Assignable Unit by the LMS

Geolocation Storytelling Revisited

We’ve observed an uptick in interest in Geolocation Storytelling. We’ll revisit the subject both for those who know little about this medium and for those who want to design a project on paper (i.e. in Word) or go all the way and use the LodeStar authoring tool to complete a working project.

To reach all audiences at some level, this article starts from the general and ends with the specific. Hop on and off at any point.

Introduction

Every place hides its own unique, rich story. Have you visited an unfamiliar town or area and wondered about its history,  geography, and points of interest? Have you ever wanted to connect to a place on a level deeper than a quick drive-by?

A new form of storytelling—geolocation storytelling—combines technology and traditional storytelling to connect visitors at a deeper level. With the help of an app, the place you’ve entered or are visiting suddenly comes alive with narrative and imagery. You may hear about the past or be guided to an unusual rock formation or the vantage point of a famous painter. Geolocation stories can work on-site, guiding you from point to point, or they can help you discover a place from the comfort of your home. Geolocation stories can be both informative and entertaining. They can involve the visitor in discovering why a place got put on the map, solving a challenge, or even solving a murder mystery. In short, geolocation stories can be about anything that piques the visitor’s interest about a place.

The Inspiration

Places inspire people to learn more about them.

A group of history buffs, known as Lensflare Stillwater, was inspired by the many untold stories of Stillwater, a Minnesota river town. Stillwater was a lumber town with connections to Minnesota and Wisconsin pine lands by river and connections to Saint Paul by stage road and later by rail.

Stillwater inspired a number of geolocation stories. The first stories were guided  tours of Stillwater’s historical downtown.   A subsequent story helped cyclists learn about the rich history from the vantage point of a bicycle trail.  Even later, another story recovered the lost memory of Stillwater’s streetcars.   

Thousands of miles from Stillwater, a geolocation project told the story of Vincent Van Gogh’s year in Arles, France, and what went horribly wrong for him.   Its authors first visited Arles to learn more about Van Gogh but were disappointed in the local tour booklets, which didn’t sufficiently tell the story. 

If your town or place has points of interest, a rich history, or geographical features, you will want to consider creating a geolocation story to help others see the place from a new point of view.  Visitors can walk to the specific places of interest and hear audio, see imagery, read text, scroll through time lines and learn more about this special place.

How it works

Typically the visitor launches a geolocation story (a web-based application) from a web address on a smartphone. The first page of the story provides instructions and a starting point. When the visitor reaches that point, she crosses an invisible geofence. Geofence is just a metaphor. Actually, the visitor’s location is calculated from the signals of three or more satellites. Most modern smartphones are equipped with the hardware to detect these signals. Global positioning satellites constantly emit signals; the GPS receiver in the visitor’s phone listens for them. Once the receiver calculates its location from these satellites, it provides that information to the application. The logic of the application constantly checks whether the location matches a place of interest. If yes, then content in the form of audio, text, and imagery is called up and presented. A simplified sketch follows.
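
For the technically curious, here is a simplified sketch of that logic using the browser’s Geolocation API. The coordinates, the radius, and the showContent function are illustrative placeholders; a real story would track many points of interest.

```typescript
// One hypothetical point of interest and a geofence radius of roughly 50 feet.
const POINT = { lat: 45.056745, lon: -92.805510 };
const RADIUS_METERS = 15;

declare function showContent(): void;  // hypothetical: presents audio, text, imagery

// Haversine formula: great-circle distance between two coordinates, in meters.
function distanceMeters(lat1: number, lon1: number, lat2: number, lon2: number): number {
  const R = 6371000;  // Earth's mean radius in meters
  const toRad = (deg: number) => (deg * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a = Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}

// Watch the visitor's position; trigger content inside the geofence.
navigator.geolocation.watchPosition(
  (position) => {
    const { latitude, longitude } = position.coords;
    if (distanceMeters(latitude, longitude, POINT.lat, POINT.lon) <= RADIUS_METERS) {
      showContent();
    }
  },
  (error) => console.warn(error.message),
  { enableHighAccuracy: true }
);
```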

Getting more specific: Best practices

If you already understand the power of the geolocation story and wish to get started, you’ll want to consider a few things.  These are not hard and fast guidelines.  As we gain more and more experience, we’ll learn about what works and what doesn’t.

  1. First, geolocation storytelling works best when the audience is on foot and out of doors. Smartphones can’t receive satellite GPS signals inside buildings. The technology works best outside with a clear line of sight to the sky.
  2. Geolocation projects must be housed on a website that supports HTTPS. Smartphones don’t reveal their locations to applications served from web addresses that begin with http://. The address must begin with https://; the ‘s’ means secure. Information transported over HTTPS is encrypted to increase the security of data transfer.
  3. There is a limit to the distance people will walk on a tour and to the length of a tour in time. Limit yourself to two miles completed within one hour. Of course, this is a very loose rule of thumb. Consider your audience when setting the limits. Young adults will have no difficulty with 3- to 5-mile hikes; time and attention span, however, will remain a factor. Senior citizens with mobility issues will find two miles too long. The steepness of the terrain will also be a factor. Use your discretion, but keep it as short as possible.
  4. Some people’s interest may wane quickly. A two-mile tour should have at least a dozen points of interest. Limit the distance and the length of time between geolocation points.
  5. Present narrations in audio and text formats.  People like to hear a recorded narration but, without headphones, the narration could easily be drowned out by traffic or a rushing river. On the flipside, audio narration often works in situations (e.g. bright sun) where the screen is difficult to see. You’ll need to use your judgement.
  6. Consider the format of the tour.  Will you guide your audience from point to point or will you cluster points so that the audience will simply wander about and come upon points of interest? 
  7. Audio should be cleanly recorded.  The audience should not hear background noise or a muffled narration.
  8. Text must be spelled correctly, grammatically correct, and short.
  9. Favor more points of interest and shorter narration/text rather than fewer points of interest and narration that drones on.
  10. Have fun creating this story. You’ll learn a lot!

Get your Geolocations

Even if you’re starting with Word to capture your text, find the locations first. You can use Google Maps, which is a very accurate way of finding locations. For example, if I wanted the location of the intersection of Myrtle and Water Streets in Stillwater, I would do the following:

  1. Go to https://www.google.com/maps
  2. Search for Myrtle Street, Stillwater.
  3. Move the map to the location of interest.
  4. Click on the intersection.
  5. Either write down the location coordinates or click on them.  The coordinates will now appear in the address field at the top and can be copied and pasted into your Word document or directly onto a LodeStar page (see below).
Google Maps reveals latitude and longitude

About the Location Coordinates

In the example above, the coordinates were 45.056745, -92.805510. The first coordinate (45.056745) is the latitude. The second coordinate (-92.805510) is the longitude. Always use coordinates with six digits of precision (six digits to the right of the decimal point). Six digits can pinpoint a location to within a few inches, but never rely on that. In other words, allow the technology a slop factor: use precise coordinates, but allow for imprecision in the device’s ability to calculate its location. Never create a geolocation story that relies on an accuracy of a few inches. You control the tolerance by typing numbers into the latitude and longitude proximity fields. These numbers spell out how close one needs to be to the precise location to trigger an event. In our geolocation stories, we trigger something (e.g. show content) when the user is within 25 to 50 feet of a location. We call that crossing the geofence.

The minus sign is important. In latitude, the minus sign denotes the southern hemisphere (south of the equator). In longitude, the minus sign denotes west of the prime meridian (Greenwich) and east of the antimeridian (roughly where the international date line resides).

If you want to grab your location while physically on the spot, use your smartphone’s Google Maps app. 

Current Location Arrow in Google Maps
  1. In Google Maps, click on the arrow to show your current location.
  2. Scroll down until you find the marker and the location.  See screenshot below.
  3. Copy and paste the coordinate into your notes so that you can transfer the coordinate to LodeStar.

Getting a location from Google Maps while on site

Preparing a Geolocation Story in Word

Your role might be to prepare the content. When you’ve completed the preparation, you can hand off the content in the form of a Word file. In Word, each location should be on a separate page. At the top of each page, key in the title and the latitude and longitude coordinates of the location. Add your text, graphics, images and narration. If your version of Word doesn’t support audio narration, use a free tool like Audacity to generate an MP3 audio file.

Even More Specific: Authoring a Geolocation Story with LodeStar

To create a geolocation tour in LodeStar, do the following:

Launch LodeStar and select the ARMaker template.  (AR stands for augmented reality.)

LodeStar’s ARMaker template
  1. Title your project. The project will now reside on your hard drive in a folder with the same title. It will be found in the LodeStar/Projects/[your title] directory.
  2. Add your title to the first page.
  3. Add a page by clicking on the + button at the bottom of the app. Ensure that the new page is a Text Page Type. Examine the screenshot below. The page should have a place to enter a latitude and a longitude.
  4. Add your content. You can insert a widget (e.g. Image Layout Widget), text, audio, and more.
  5. Add a page to add more content.
  6. Preview in Browser (find the button at the top).
  7. When you are ready to publish, export as a SCORM 1.3 package and import it into a Learning Management System, or simply copy the LodeStar/Projects/[your title] directory to a web server.
LodeStar authoring tool with ARMaker template

Below is what this page looks like in Preview. Notice the audio control and the Show Map button at the top left. Notice the navigation buttons at the top right (depending on layout). Notice how the image slider, created by the PWG Image Slider Widget, appears.

Previewing a Geolocation story

If your audience clicks on the ‘Show Map’ button, a Google Map appears with all of the locations marked with red markers.  Again, each location represents a separate page in LodeStar. 

Each location (marked by red marker) matches a LodeStar page

Controlling the User Experience

If you allow users both to show the map and to navigate to content by clicking on a marker, then you need not adjust project settings. If you want to restrict users’ access to the map and/or their ability to access pages of content from the map, select Tools > Project Settings. Change the settings according to your needs. (The important settings are marked with arrows. See the screenshot below.)

Project settings in LodeStar allow control of application

Publishing your project

As a SCORM object

If you use a Learning Management System (LMS) and want to control access to your geolocation story, then, with your project opened in LodeStar, click on Export and export to SCORM 1.3.    Go to your LMS and import the story as a SCORM object.

As a website

If you have access to a web server, copy the project folder to the web server and use the index.htm file in your URL.  Once again, location services will only work on web servers that support https://

If you don’t have access to a web server, then read the following article that explains how you can use GitHub as a web server.

https://lodestarlearn.wordpress.com/2020/05/14/seven-steps-that-will-change-how-you-share-elearning/

Alternatively, you can use Site44 to convert your Dropbox folder to a published website:

See https://www.site44.com/

(We are not endorsing Site44 but LodeStar Learning has successfully used it on a number of projects.)

As an Open Education Resource (OER)

Publish the geolocation story as a web site, then register the URL (address) of that site with OER Commons, Merlot, or whatever OER repository you prefer.


Additional Details

If you are new to Geolocation Storytelling and want to learn more, visit:

Geolocation Storytelling: Van Gogh In Arles | LodeStar Web Journal (wordpress.com)

To see an example of a finished product as OER, visit:

https://www.oercommons.org/courses/vincent-van-gogh-s-arles/view

Or view the app at:

Van Gogh In Arles on the App Store (apple.com)

Conclusion

Geolocation stories are a great way to help visitors uncover the hidden wonders of a place. Google Maps and the LodeStar authoring tool are indispensable aids for authoring stories and publishing them either to learning management systems or to the web.

If you complete a project, share your project. Drop a comment or drop a line to supportteam@lodestarlearning.com.