Top Influences on the Development of LodeStar 10

Introduction

Modern web pages offer designers a rich palette of media types and standards to create engaging learning experiences. The web page has become an amazing success story. It started as a battleground of competing standards and self-interests and has arguably matured to become a meeting ground.

If you aren’t on the bleeding edge, you’ll benefit from the convergence of standards. On CanIUse.com, browser support for many enabling technologies such as SVG and WebGL (explained later) shows up as green tiles across the table of browsers. Green means these technologies are commonly supported, which is good news because learning experience designers can put these technologies to work.

caniuse.com by Alexis Deveria, available under a Creative Commons Attribution 4.0 License

Many of today’s eLearning projects are essentially web page applications with additional standards that support communication with learning management systems or learner record stores. Many of the technologies that make the web interactive, responsive, accessible, and expressive are the same technologies used in eLearning applications. Most of the major eLearning authoring systems are design systems for web pages that are hosted in learning management or content management systems. There are many exceptions, of course, which include augmented reality systems, gaming engines and environments, and other virtual spaces that are not built on HTML5. But let’s stay focused, for a moment, on the web.

For maturing standards, the web has become a place of agreement. In the not-too-distant past, basic HTML markup and styling had to address the many differences between browsers and how they interpreted the World Wide Web Consortium (W3C) standards. Even for a technology that most of us take for granted, the audio file, there was once no single file format that every browser could play. Designers had to choose both an audio format and a fallback format. Thankfully that has changed. All browsers can now legally play the .mp3 file or the Ogg Vorbis audio format.

caniuse.com by Alexis Deveria, available under a Creative Commons Attribution 4.0 License

Soon the .m4a audio file (AAC) will be supported by all browsers, offering even higher quality audio at a lower data cost.

caniuse.com by Alexis Deveria, available under a Creative Commons Attribution 4.0 License

But audio is only the beginning. All modern browsers (IE 11 excluded) support GIF, animated GIF, JPEG, PNG, and animated PNG images, as well as motion video in the MPEG-4/H.264 format.

All browsers support the language features of the last major revision to JavaScript.  JavaScript is the code that makes the web interactive.  It is the code that makes eLearning projects interactive.  Standardization allows all of us to benefit from the interactions that eLearning authoring tools produce with less worry about browser and device differences.  (I emphasize less worry because there is always something to worry about.)

Interactive 3D has become a new frontier for eLearning. All major browsers support WebGL, which is a method of generating 3D graphics using JavaScript and hardware acceleration. In the early ’90s, when I first created 3D worlds, I needed an entire lab of computers dedicated to rendering three-dimensional meshes into an animation of three-dimensional images that we would transfer onto a laser disc. Today, WebGL enables us to render a mesh into a rotatable, scalable image in real time, all in a browser. If you’re not familiar with WebGL, please read on.

In short, Learning Experience designers, instructors and trainers can now use audio, video, imagery, text, three-dimensional graphics, scalable vector graphics, math mark-up, interactivity, and logic to realize their grandest designs and create engaging experiences for their learners.

On the eve of LodeStar 10’s release, I am taking stock of these standards and other influences that had a strong bearing on where our product is headed.  Like all toolmakers, I am keeping an eye on effective strategies as well as emerging and maturing technologies and am imagining the opportunities for designers as we work to make these technologies practical and accessible.

Here is a list of standards and strategies that are central to LodeStar’s current development.

Scalable Vector Graphics

A lot of our development has focused on Scalable Vector Graphics.  SVG offers the designer many benefits.  Simple graphics such as the famous SVG tiger pictured here keep their sharpness regardless of the display size and the resolution. They are scalable.  They also offer more opportunity for accessibility.  Scaling can help learners with low vision.  The SVG title is readable by most screen readers. Also importantly, the SVG graphic is made up of individual elements whose properties can be changed by program code or user interaction.

LodeStar displays SVG graphic

In the screenshot below, the tiger graphic is opened in an SVG editor in LodeStar.  The author has right-clicked on an eyeball and can now choose branch options based on selection, deselection, drag, hover over and hover out.  All of LodeStar’s branching options and script can be executed based on any of the above events.  For example, based on the click of an eye, things can happen: the eye color changes, an audio description plays, an overlay appears with a complete description of a tiger’s vision and so on.
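Outside of LodeStar, the same idea can be expressed in a few lines of standard JavaScript, since every SVG element is a live node in the page. This is only an illustration of the underlying mechanism, not LodeStar’s own API, and the element id is hypothetical:

const eye = document.getElementById('eye'); // illustrative id for one SVG element
eye.addEventListener('click', () => {
  eye.setAttribute('fill', 'green'); // change the element's fill color on click
});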

With LodeStar, designers edit SVG graphics and add interactivity

Importing PowerPoint as SVG

We’ve never been huge fans of starting an eLearning project as a PowerPoint. That hasn’t changed, but LodeStar 10 does support importing a single PowerPoint slide or an entire PowerPoint presentation as a series of SVG pages.

PowerPoint supports exporting a slide or series of slides as SVG.

PowerPoint Presentation

LodeStar 10 adds support for importing a single SVG image or an entire folder of SVG images. LodeStar interrogates each slide and looks for things like Base64-encoded images. PowerPoint converts imported images to a long string of characters called Base64. This is a great format for transporting images inside a single file, but browsers tend to load and render Base64-encoded images very slowly. LodeStar detects the Base64 encoding and then translates the characters back into an image file that is loaded into the project.
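A conversion of this kind can be sketched with standard browser APIs. This is only an illustration of the general technique, not LodeStar’s actual code, and the names are hypothetical:

// Convert a Base64 data URI back into a binary image file.
async function dataUriToFile(dataUri, fileName) {
  const response = await fetch(dataUri); // fetch() accepts data: URIs
  const blob = await response.blob();    // decoded binary image data
  return new File([blob], fileName, { type: blob.type });
}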

The result is that vector graphics are editable as SVG elements, and embedded images load and display quickly.  The designer can display the slide as is, edit elements and add branch options to elements.

Designer edits a PowerPoint slide in SVG editor

MathML

For a short while, all browsers supported the MathML markup language, enabling the display of mathematics without the need for add-ons.

Rendered MathML in LodeStar HTML editor

But there have been setbacks. We’re looking forward to when MathML is once again available in all browsers. Given the likelihood of that, LodeStar continues to support MathML.

Support for MathML

MathML (Mathematical Markup Language) is supported by the W3C as the preferred way of displaying mathematics on a web page or in an eLearning application. MathML describes the structure and content of mathematical notation and provides a higher level of accessibility than simply displaying an image. Designers can quickly edit and manipulate the size of a MathML expression. This is an improvement over taking a picture of an equation, for example, and pasting the image into a presentation. In the past, LodeStar automatically converted expressions into images, or it used the MathJax library to convert expressions written in LaTeX to MathML. But now we’re banking on full support for MathML in the near future.
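For designers who haven’t seen it, MathML is ordinary markup. A small hand-written fragment such as the following renders the quadratic formula (in practice the markup is usually generated by an editor rather than typed):

<math xmlns="http://www.w3.org/1998/Math/MathML">
  <mi>x</mi><mo>=</mo>
  <mfrac>
    <mrow><mo>−</mo><mi>b</mi><mo>±</mo>
      <msqrt><msup><mi>b</mi><mn>2</mn></msup><mo>−</mo><mn>4</mn><mi>a</mi><mi>c</mi></msqrt>
    </mrow>
    <mrow><mn>2</mn><mi>a</mi></mrow>
  </mfrac>
</math>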

SimpleSim

For years, LodeStar offered the Interviewer Page Type to support what we called decision-making scenarios and simple simulations.  We continue to offer that page type but have expanded the number of layout options for interactive decision-making. 

For starters, we added a new page type called the SimpleSim.  This page type supports graphics, interactive widgets, text and whatever else is needed to set the scene.  At center stage is the situational prompt and three decision options (as pictured below).   All of LodeStar’s branching options can be invoked based on the learner’s choice.   For example, the ‘Jump to Page’ branch option can bring up a scene that matches the choice and advances the narrative.   Branching options also allow us to add feedback, keep track of points, collect user responses and so forth.

To style the scene shown below, the author used a palette for the color scheme, added a header graphic through Tools>Themes, selected a layout style that set the window width and navigation at the top, and added a background graphic.   The use of palettes, themes, layouts and page types enables the author to control every aspect of this simple simulation, including the interactivity.

Look and feel is controlled by Layout, Theme, and Palette

CCAF

It’s no secret that we are huge fans of Dr. Michael Allen’s Context-Challenge-Activity-Feedback model. In a project that was intended to improve employees’ Social Selling Index (SSI) in LinkedIn, we set the context as a simulated LinkedIn. For the challenge, the learner must improve the main character’s SSI score by providing the right advice and interacting with a simulated profile, notifications, messaging, etc. – just like LinkedIn!

LinkedIn Simulation

CCAF projects are not page turners or Present-and-Checks.  They can be quite advanced.  To support a more sophisticated interaction than the display of content and multiple-choice questions, LodeStar offers LodeStarScript, which can be written in the Execute Command branch option.

LodeStarScript enables designers to change the properties of graphics on the fly, including SVG graphics. Properties include color, position, image source, rotation, opacity, etc. LodeStarScript offers the designer the power of conditional logic, loops, local and global scoped variables, and a long list of functions.

In the simulation below, the learner can select a camera aperture and control exposure.  The effects of exposure are simulated with the simple change of the color and opacity properties of an SVG element.

Camera simulation with LodeStarScript

xAPI/CMI5

Megan Torrance, a veteran of learning design, authored a research paper sponsored by the Learning Guild. I won’t steal her thunder, so I encourage you to read the paper for yourself, but I’ll cite two statistics from her research that tell the story of xAPI.

In a survey of 368 respondents, the majority of whom belong to organizations that create or purchase learning solutions, 44.9% of the respondents indicated that ‘We are interested in xAPI but have not used it at all.’

Version 1.0 of xAPI was released way back in 2013, and yet 10 years later adoption is not widespread.

So what is xAPI, how does it relate to CMI5, and why are we so interested in it?  In short, xAPI and CMI5 are game changers.  They are not the same thing but they are close cousins.  An eLearning activity that uses CMI5 can generate an xAPI statement, which gets recorded in a Learner Record Store.  CMI5 can also tell the LMS whether the learner passed or failed. 

So, let me be a little more specific.

With these technologies, I can store my eLearning projects in my own repository — GitHub, for example. I can then import a very lean and simple file into the Learning Management System, which tells the LMS where to launch the activity from. The LMS then passes learner information and a token for secure communication to the activity.

CMI5 uses xAPI technology but it also understands the vocabulary that LMSs require.  Pass/Fail.  Incomplete/Complete.  xAPI reports to a learner record store any statement that the designer has added to the eLearning activity.  ‘Learner has reached Level Two.  Learner completed a video.  Learner attempted Level Three four times.’  CMI5 can generate any kind of xAPI statement in the form of learner actions.  In addition, CMI5 can tell the LMS whether the learner passed and/or completed the module.

Among the reasons why people don’t yet use it: lack of knowledge, lack of a Learner Record Store, and lack of LMS support.

I am extremely fortunate in that our Learning Management System is Prolaera.  It is designed for the CPA industry.  Prolaera can import a CMI5 activity.  As a result, I can do the following:

  1. Send a statement about the learner reaching Level 5 to the learner record store.

xAPI statement

  2. Read a list of learner experiences from the Learning Management System’s Learner Record Store (the learner’s name has been erased from the screenshot).

Learner Record Store

From the screenshot above, you can see that we can report on any learner experience.  For example, the learner first experienced the results page with a score of 200 points.  We can also see that the learner passed, satisfied the requirements, completed the module and terminated the activity.  These are all terms that the Learning Management System understands.

It may take time but CMI5/xAPI will eventually be widely adopted.  These standards are incredibly important to the advancement of eLearning.  It begins with awareness. The more designers learn about it, the more they can encourage their learning management system vendors to support it.  In the meantime, we are ready for it!

3D

Glen Fox’s Littlest Tokyo is a great example of what is possible with three-dimensional objects viewed in a browser. The object is beautifully detailed, with a running streetcar animation as an integral part of the 3D object.

Littlest Tokyo, by Glen Fox

Designers will be able to use free tools like Blender, TinkerCAD, SketchUp, or even their smartphones to produce 3D meshes.

Smartphones like the iPhone 12 come equipped with LIDAR. LIDAR emits a laser pulse that reflects off of solid surfaces and returns to a sensor on the smartphone. The round-trip duration is noted. From that duration, the software can calculate the distance (distance = speed of light × round-trip time ÷ 2) and accurately position the solid surface in three-dimensional space. LIDAR has been available in specialty instruments for a long time, but for designers to be able to use this technology practically, the software needed to improve.

In whatever way the 3D model gets created (3D graphics software, downloaded from a warehouse, generated by LIDAR), it can then be loaded into a viewer and manipulated (scaled, rotated, navigated) by the learner. Imagine vital organs or historical places or complicated machines as manipulable objects.

Currently, we’re working on a loader and viewer for 3D Models.  The first LodeStar 10 release won’t include a 3D model viewer, but we’ll introduce it later in a minor release.

In the meantime, we do support photospheres. Photospheres use the same underlying technology: WebGL. WebGL enables hardware-accelerated physics, image processing, and rendering onto the HTML5 canvas. The hardware is a dedicated processor called the Graphics Processing Unit, or GPU.

The photosphere that appears in the screenshot shows a distorted view of an art gallery.  The first art gallery image (shown below) was produced in Blender.  The second art gallery image was taken with an iPhone at the Minnesota Marine Art Museum in Winona.

Photosphere created in Blender
Photosphere created in iPhone

The image appears distorted – in fact, spherical.

Once in LodeStar, the designer can add images, markers, and hotspots to the photosphere.  All of these things get correctly positioned on the sphere.

In the LodeStar editor below, I am adding Lawren Harris’ paintings to the gallery as well as hotspots.  A hotspot click takes the learner to another room in the gallery.  A click on the painting brings up an image overlay.  A click on the video graphic starts a video. 

LodeStar editor adds interactivity to Photosphere

The end result:

Interactive Art Gallery on the Group of Seven

Conclusion

2023 marks the twentieth anniversary of LodeStar Learning.  We filed with the Minnesota Secretary of State on March 11, 2003.  I’m pleased that LodeStar has adapted to all of the technology changes over the years.  LodeStar began as code embedded in Lotus’ LearningSpace.  It then enabled instructors to create rich learning activities in ActionScript and Flash.  In 2013, LodeStar Learning pivoted to a whole new generation of software that used HTML5.  LodeStar10 continues that progression and harnesses the power of HTML5, SVG, 3D and so much more to help designers create great learning experiences.


Learning Experiences in the 3rd Dimension.

 

Introduction

Great learning experiences can be crafted from 3D technology. The simplest form of 3D technology is the photosphere. It is accessible to teachers and trainers and can be used quite effectively. In this article, I’ll show off a demonstration project and describe the use of 3D models, a photosphere, text and graphics, video, and audio.

Two years ago, I wrote about using photospheres in online courses. Today, ‘interactive’ photospheres are a critical strategy that designers of every stripe should master. Currently, the use of photospheres is supported by the proliferation of 3D models, photosphere projects, new services, improved technology, and new features in our own authoring software.

So, let me parse this mixed-media approach. To start, a photosphere is a 360-degree panoramic image that can be displayed in a viewer. Learners can ‘navigate’ the image by dragging the view in any direction and zooming in and out. Google Street View is the best-known example, but photospheres abound in art museums, tourist bureaus, real estate sites, and social media.

The photosphere is deceptively simple and hides a more profound change in the web. As we all know, browsers support the trinity: HTML, JavaScript, and CSS. All three technologies have been evolving. In recent years, browsers added support for a variety of new JavaScript technologies, including WebGL. WebGL makes 3D rendering possible in a browser without the need for plug-ins. In short, WebGL (Web Graphics Library) displays 3D and 2D graphics. Because of WebGL, browsers can benefit from hardware graphics acceleration to display (render) complicated graphics. The key is hardware acceleration. The processing of graphics in a dedicated graphics processing unit is many times faster than in the main CPU.

The photosphere relies on WebGL and hardware acceleration. To display a photosphere, a distorted image is mapped onto the inside of a 3D sphere. Our perspective is from the center of the sphere with a narrow field of view. By dragging the image, we pan the sphere and bring hidden parts of the image into view.
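A minimal sketch with the popular three.js WebGL library shows the idea. LodeStar’s own viewer may be implemented differently; the image path and sizes here are illustrative:

import * as THREE from 'three';

// Map the distorted (equirectangular) image onto the inside of a sphere.
const geometry = new THREE.SphereGeometry(500, 60, 40);
geometry.scale(-1, 1, 1); // invert the sphere so the texture faces inward
const texture = new THREE.TextureLoader().load('gallery.jpg');
const sphere = new THREE.Mesh(geometry, new THREE.MeshBasicMaterial({ map: texture }));

// Our perspective is from the center of the sphere, with a narrow field of view.
const scene = new THREE.Scene();
scene.add(sphere);
const camera = new THREE.PerspectiveCamera(75, innerWidth / innerHeight, 0.1, 1000);
camera.position.set(0, 0, 0);

const renderer = new THREE.WebGLRenderer();
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);
renderer.render(scene, camera); // dragging would update the camera rotation per frame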

With the help of LodeStar, an eLearning authoring tool, we can add interactivity.

To best illustrate interactive photospheres, I created this demonstration project based on one of my loves, the Group of Seven.

A little background: I went to school in Canada. Until the thirteenth grade, the study of history was the study of British, American, and Russian history. The study of literature was primarily European and British literature. The study of art was primarily of British and French art. In grade 13, that all changed. We studied Canadian history, literature, and art. For me, that was transformative. Central to Canadian art was the Group of Seven. Their subject was primarily the Canadian landscape. Until recently, I could find Group of Seven paintings only in the McMichael Canadian Art Collection in Kleinburg, Ontario. So, I decided to create a gallery of my own. Just a small one for demo purposes, featuring two of the artists associated with the Group of Seven.

Visit this link and if your curiosity is piqued, I will share the details of how I made this learning experience. Launch the demo and on the second page drag your mouse across the scene.

Art Gallery (lodestarlearning.github.io)

Virtual Group of Seven Gallery Demonstration Project

The details

First, I needed a model of an art gallery. I went to TurboSquid and bought one for $19. I could have found a photosphere from Flickr or elsewhere, but I wanted control of the objects in my gallery. I could have built a 3D model from the ground up – but I wanted a shortcut.

The model came in the form of a DAE, which is a 3D interchange format.  The DAE format is based on the COLLADA (COLLAborative Design Activity) XML schema.  (This is a standard format that can describe 3D objects, effects, physics, animation, and other properties. All the major 3D modeling tools can import it.)   I then brought the model into Blender.             

Blender is a free 3D modeling tool and it is quite incredible.           

3D Model in Blender

In Blender, I edited the model and added my own camera.  To render a photosphere, I made the camera panoramic and then equirectangular. Equirectangular is a projection type used for mapping spheres onto a two-dimensional plane. This results in a very distorted image when viewed normally.  Viewed in a photosphere viewer, the image looks spectacular.

Next, I imported the image into LodeStar. With the help of LodeStar’s interactive image editor, I drew hotspots over the doors and imported images of paintings that I positioned in the art gallery. Technically, the images become image overlays. As the viewer moves the image up and down and across, the imported images adjust accordingly by scaling, skewing, and repositioning.

Interactive Image Editor in LodeStar

In the scene above, the imported images appear above the benches.  A hotspot sits over the doorway.  When a learner clicks on the doorway, LodeStar executes a branching option.  In this case, that means a jump to the next gallery.

In the example, two gallery rooms are featured. The first gallery exhibits two paintings of Lawren Harris.  The video icon displays a YouTube presentation on Harris’ work. The second gallery exhibits two paintings of Emily Carr, and a wonderful YouTube presentation on her work.

Conclusion


Photospheres are but one part of 3D technology.  Browser support for WebGL makes it possible for us to use 3D models interactively. Students can view 3D models from any perspective and manipulate them. The possibilities are endless. LodeStar and other tool makers must make it easier to load these models and make them useful for educational and training purposes.  Just as we support functions that can change an image or element’s rotation, position, opacity, and color, we must provide functions that can manipulate 3D objects.

We are currently working on some prototypes and would love to hear from you about what would most benefit students. Please send us your comments.

Meeting the CCAF Challenge

By Robert “Bob” Bilyk

Introduction

I recently watched Ethan Edwards present ‘Cracking the e-Learning Authoring Challenge’.  This post is my attempt at cracking the e-Learning authoring challenge.

But first a little background.

As many of you have the privilege of knowing, Ethan Edwards is the Chief Instructional Strategist for Allen Interactions. Cracking the challenge is all about building interactivity in an authoring tool – specifically, CCAF interactivity. CCAF is an acronym for Context-Challenge-Activity-Feedback. The four components of CCAF are part of Michael Allen’s CCAF Design Model for effective learning experiences. Michael Allen is the founder of Allen Interactions, the author of numerous books on eLearning, and the chief architect of Authorware and ZebraZapps. Both authoring systems were designed for people with little technical expertise to be able to build – you guessed it – CCAF learning experiences.

In Ethan’s presentation, he demonstrates building a CCAF activity with Articulate Storyline.  In a nutshell, the CCAF learning experience is the experience of “doing”.  Rather than reading or viewing content, the learner experiences first-hand the application of principles, concepts, strategies, and problem-solving in completing a task and succeeding at a challenge.

In Ethan’s demo, his task is to detect a refrigerant leak. The learner is shown refrigeration equipment and given a leak detector. The learner doesn’t at first read a PDF or watch a video but performs an action. In CCAF activities, text and videos might come in the form of feedback to a learner’s action.

Some of the CCAF learning experiences that I designed include running a multiple hearth wastewater incinerator, troubleshooting a cable network, supporting the adoption of a special needs child, designing an online class, assessing risk of recidivism, and, most recently, searching for documents in a document management system. In all cases, most of the learning came from being immersed in a ‘real-world’ setting, presented with a challenge, and getting feedback as a result of learner actions.

Ethan’s presentation piqued my curiosity and a bit of self-reflection.  He lists things that are essential in an authoring tool to enable the design of a CCAF learning experience.  As a toolmaker, I explored each of the items on his list and I applied them to a small project built with our own LodeStar eLearning authoring tool. 

As we explore each item on Ethan’s list, I’ll illustrate with LodeStar.  If you follow along, you’ll see the development of a simple CCAF application.  You’ll learn about the components of CCAF.  And you’ll also learn a little about LodeStar and its capabilities.

But first an important caveat. CCAF comes in all forms, shapes and sizes. Ethan’s example and my example happen to be very simple simulations. The principles of CCAF are not limited to simulations. They can be applied to anything that requires action on the part of the learner — which includes making a decision, crafting a plan, analyzing and solving a problem — a host of things.

This is but one example of CCAF to illustrate its principles and test whether or not our authoring tool is up to the challenge.

Introduction to the Demo Application

The objective of the application is for learners to test an electrical outlet and determine which wires are hot or ‘energized’.  In completing this task, the learner must turn on an electrical multimeter and connect its probes to the various wires in an electrical outlet.  A multimeter is a measuring instrument that typically measures voltage, resistance, and current.  Once someone has learned the difference between these things, the practical skill is in choosing the right setting for the task and safely using the meter to complete the task. 

So that’s the challenge:  find the hot wire with a multimeter.  The context is a simple residential electrical outlet. 

Typical eLearning applications would use text, graphics and video to illustrate the use of the multimeter and explain underlying concepts.  CCAF applications challenge learners to complete the task in a manner that is an educational approximation of the ‘real thing’.  Text, graphics and video can offer explanations but not in lieu of the real-world task and often as a form of feedback. 

A LodeStar Application: Testing an Electrical Circuit

Basic Capabilities

But let’s start with an overview of the basic requirements.  To paraphrase Ethan, an authoring tool must have these capabilities:

  • Complete visual freedom
  • Variables
  • Alternative branching
  • Conditional logic
  • Action/response structures

I’ll elaborate on each of these requirements in my demonstration. 

Complete Visual Freedom

LodeStar combines HTML flow layout and SVG layout. Images imported into the HTML editor are placed in the HTML flow and are laid out according to the rules of HTML. Images can also be taken out of the flow and given a CSS rule so that text flows around the image.

In addition, LodeStar authors can use the Scalable Vector Graphics (SVG) canvas to lay out graphics freely in any position on the x and y axes.

LodeStar’s SVG Canvas

In other words, the graphical elements on the SVG canvas are laid out freely. The SVG canvas itself is just another HTML element. Depicted below is a flow of HTML elements like text, images, divs, tables, etc. The SVG canvas is in the ‘flow’ right along with them. Inside the canvas, graphical elements can be positioned anywhere, but the canvas itself follows the HTML document flow, shrinking and expanding as needed.

The visual freedom is that LodeStar combines the benefits of a responsive HTML flow with the precise positioning of an SVG canvas.

HTML elements are laid out on the page in a flow. If the page width narrows, the element isn’t by default clipped. It’s just bumped to the next line. The SVG canvas flows right along with the other elements. Its contents, however, are positioned with local XY coordinates.

I started with a multimeter image that I took from Pexels.com, a repository of free stock photos. I used Photoshop to cut out the dial and imported it into the SVG canvas as a separate image. I did this because I wanted the learner to be able to rotate the switch to place the multimeter in the right mode. I also imported the image of an electrical box so that I could draw wires on top of it.

Variables

 As I wrote in the Humble Variable (The Humble Variable | LodeStar Web Journal (wordpress.com)), variables are critical to some eLearning designs.  In this example, I need to store the position of the multimeter switch.  That’s what variables do.  They are storage places in the computer memory.  As the learner clicks on the switch, the dial rotates.  As an author, I must store the value of that rotation.  If the value of the rotation is 40 degrees, the code judges the switch to be in the right position.

To enter the code that uses the variable, I right-click on the switch and select ‘Select Branch Options’. Branch Options are basically things that happen as a result of displaying a page, clicking on a button, choosing a multiple-choice option, or doing one of many things.

Branch Options can be as simple as turning a page or as complex as executing a whole list of instructions. The following is a basic example of the latter:

The Multimeter code

var rotation = getValue("dialRotation");
rotation += 10;
setValue("dialRotation", rotation);
changeRotation("dial", rotation, 13, 27);

if (rotation % 360 == 40) {
    changeOpacity("display", 1);
    appendValue("actions", "Turned on multimeter. <br>");
}
else {
    changeOpacity("display", 0);
}

This code looks complicated to a non-programmer.  But it is not.  It just takes practice to write.  It’s on the same difficulty level as an Excel formula.

Here is the same code but with an explanation (in italics) underneath:

var rotation = getValue("dialRotation");

Get the value of dialRotation from long-term memory and assign it to a local or temporary variable named ‘rotation’.

rotation += 10;

Add 10 degrees to the value of rotation. In other words, rotation = the old value of rotation plus 10.

setValue("dialRotation", rotation);

Store the new value in long-term memory in a location called ‘dialRotation’.

changeRotation("dial", rotation, 13, 27);

Change the property of a graphic with the ID of ‘dial’. All LodeStar graphics can be assigned an ID.

More specifically, change the rotation property to the value of rotation. Pivot the rotation at the precise point that equals 13% of the width of the SVG canvas and 27% of the height of the canvas. That point is the center of the dial in its current position on the canvas. If the dial were in the dead center of the canvas, we would use 50, 50.

if (rotation % 360 == 40) {

This line can be simplified to if (rotation == 40). I used the modulo operator (that is, ‘%’) in case the learner kept rotating the dial around and around. If rotation = 400, then 400 % 360 would equal 40. 360 divides into 400 once with a remainder of 40. So, if rotation is equal to 40, then do the following:

changeOpacity("display", 1);

Change the opacity of a graphic with the id of ‘display’. This is the text box used to show the voltage.

appendValue("actions", "Turned on multimeter. <br>");

Store the learner’s actions in long-term memory in a place called ‘actions’.

}

else {

changeOpacity("display", 0);

If the rotation of the dial does not equal 40, then shut off the display by changing its opacity to 0.

}

The Probe Code

I won’t explain the probe code in as much detail. Basically, when you drag the red or black probe, the following code is executed. It essentially checks whether or not the probes are in the right spot. If they are, the multimeter display shows 110 volts.

var condition1 = isOverlap("RedProbeTip", "BlackWireBTarget");
var condition2 = isOverlap("BlackProbeTip", "box");

if (condition1 == true && condition2 == true) {
    changeText("display", "110.0");
    appendValue("actions", "Moved red probe to correct position. Black probe in correct position.<br>");
}
else if (condition1 == true) {
    changeText("display", "0");
    appendValue("actions", "Moved red probe to correct position.<br>");
}
else {
    changeText("display", "0");
    appendValue("actions", "Moved red probe to incorrect position.<br>");
}

These are the drag branch options that are tied to an object with a specific ID. 

Red probe in place; black probe is not. Therefore the meter shows ‘0’.
Red probe in place. Black probe in place. Meter shows 110 volts.

Alternative branching

Once the learner has tested the wires with the probes, with one probe connected to the wire and the other grounded, then the learner must select A, B, C, or D.  Here’s where alternative branching comes in.  Learners who select the right answer might go on to a more difficult scenario.  The above scenario is as easy as it gets.  Perhaps they must do a continuity test to detect where there is a break in the circuit.  Learners who select the wrong answer can be branched to a simple circuit or given an explanation that one black wire is coming directly from the power source, and the second black wire is passing on that power to the next outlet or switch.

CCAF applications accommodate the differences in learners.  The application can alter the sequence of experiences based on learner performance.  This is a profoundly different thing than typical eLearning applications where every learner reads the same text, watches the same videos, and completes the same quiz.

Conditional Logic

Ethan also lists conditional logic as a basic requirement of CCAF applications.  Conditional logic comes in the form of if-else statements as evidenced by the code.  Conditional logic also comes in the form of alternative branching.  Select the wrong answer and then get help.  In LodeStar, conditional logic is supported by not only its language and branch options but also by logic gates. 

In the display below, we see what happens when the learner reaches a gate. (Incidentally, learners don’t actually see a gate. When they page forward, the application checks the gate’s logic and then branches them according to some condition.) In this example, the author might configure the gate with a pass threshold. Let’s say 80%. If the learner meets or exceeds a score of 80%, they are branched to the ‘Results’ page. If not, they may be routed to Circuit Basics. Follow the dotted lines.

Branches at the ‘page’ level are visualized in the Branch View.

Action/response structures

In our example, the learner moves the probes around.  If the multimeter is turned on, the learner sees a voltage display.  The action is moving the probe. The response is a voltage display. 

First, this is a ‘real-world’ action and a ‘real-world’ response. I write ‘real-world’ in contrast to what happens in a typical multiple-choice question. In a multiple-choice question, the learner clicks on a radio button and possibly sees a checkmark. That’s only ‘real-world’ to an educational institution. The world doesn’t present itself as a set of multiple-choice questions.

Second, when the learner sees a voltage display, that is feedback in the CCAF sense of the word.  The learner does something and then gets feedback.   Now, in our example, we did choose to combine ‘real-world’ feedback with a multiple-choice question.  Ultimately, the learner is asked to choose the letter next to the ‘hot’ wire.  In our example, we logged the learner’s actions and can unravel how they arrived at their final decision.  Did they connect the red probe to the right wire and did they ground the black probe?  If they selected the right answer but didn’t perform the correct actions that would lead to the right answer, we know they haven’t learned anything at all.

Conclusion

Authoring tools that enable one to create CCAF must have these capabilities: complete visual freedom, variable support, alternative branching, conditional logic, and action/response structures.

The hot wire example is an example of a very simple simulation.  But, as I wrote, the concept of CCAF isn’t restricted to this type of simulation.  CCAF can be found in decision making scenarios, for example. The learner might be placed in a situation and challenged to make the right decision or say the right thing.  That too is CCAF.  CCAF lies at the heart of effective learning experiences.

CMI5: A Call to Action

Introduction

Since 2000 a lot has changed. Think airport security, smart phones, digital television, and social media. In 2000, the Advanced Distributed Learning (ADL) Initiative gathered a set of eLearning specifications and organized them under the name of SCORM. In 2021, in a time of tremendous technological change, SCORM still remains the standard for how we describe, package, and report on eLearning.

However, finally, we are on the eve of adopting something new and something better: CMI5.

We no longer have landlines, but we still have SCORM

CMI5 Examples

To many, CMI5 is another meaningless acronym. To understand the power and benefit of CMI5, consider these very simple examples:


A Learning and Development specialist creates a learning activity that offers managers several samples of readings and videos from leadership experts. The activity allows the managers the freedom to pick and choose what they read or view; however, the specialist wants to know what they choose to read or watch as well as how they fare on a culminating assessment.

CMI5 enables the activity to capture both the learner experience (for example, the learner read an excerpt from Brené Brown’s Dare to Lead) and the test score. CMI5 can generate a statement on virtually any kind of learner experience as well as the traditional data elements such as score, time on task, quiz questions and student answers. In this sense, CMI5 supports both openness and structure.

Let’s consider another example:

An instructor authors a learning activity that virtually guides students to places in Canada to observe the effects of climate change. She wants students to answer questions, post reflections and observe the effects of climate change on glaciers, Arctic ice, sea levels and permafrost. She sets a passing threshold for each activity. Once students have completed all of the units, then the learning management system registers that the course was mastered.

Let’s go further:

The instructor wants the learning activity to reside in a learning object repository or website outside of the learning management system – but still report to the learning management system. In fact, she wishes that no content reside on the learning management system. Regardless of where the content resides, she wants to know what sites students visited, how they scored on short quizzes, and how students reacted to the severe impact of climate change on Canada.

For students with disabilities, the instructor makes an accommodation and requests that the LMS administrator adjust the mastery score without editing the activity.

As the course becomes more and more popular, she anticipates placing the website and its activity onto CloudFlare or some content distribution network so that students all around the world can gain faster access to the learning activities.

The instructor works as an adjunct for multiple universities and wants each of their learning management systems to get the content from a single location. In some cases, she wants the content locked for anyone who circumvents the Learning Management System, and in other cases she openly lists the unlocked content with OER libraries like MERLOT and OER Commons.


Before CMI5 much of this was difficult to achieve, if not impossible. So, let’s review what CMI5 offers us.


CMI5 captures scores in the traditional sense. But it also records data on learning experiences, such as students virtually observing the change in the permafrost. CMI5 allows instructors and trainers to set the move-on criteria for each unit in a course (i.e., a passing score required before the student moves on to the next unit).

CMI5 activities can reside anywhere – on one’s own website, for example, and still report to the learning management system. CMI5 enables an LMS administrator to change the mastery score from the LMS for the benefit of students who need accommodations and essentially trump what is set in the unit.

LodeStar’s CMI5 Implementation allows
authors to indicate where the content resides


CMI5 is a game changer. And yet for many – learning and development leaders, instructional designers, technologists and students – it doesn’t seem that way in 2021. CMI5 seems like a non-event. It feels like something we all talked about – a welcome change of weather on the horizon – and then nothing. Not a drop of rain.


We have been talking about and anticipating CMI5 for a long time – and yet, major learning management systems both in the corporate and academic worlds still don’t support it. CMI5 was envisioned in 2010, released to developers in 2015, and then released to the public in its first edition in 2016. We are now in the waning days of 2021, with limited adoption.


But that is likely to change.


For one, Rustici Software and ADL delivered on their promise of Catapult. Catapult is likely to accelerate adoption of CMI5. It provides many benefits to developers, including the ability to test if a CMI5 package conforms to the standard.

In my view, the learning technology architects have done their part. They brought us a meaningful set of specifications. They brought us the tools to test learning packages and to test the learning management system’s implementation of CMI5. Now it’s up to learning and development specialists and the instructional design community to cheer CMI5 on. It is my belief that once the community understands CMI5, spreads the word, and imposes its collective will on the LMS providers, CMI5 will become an important part of our tool bag. I urge you to share this article and others like it.


In the meantime, let’s take a deeper dive into CMI5’s potential.


Benefit One: Freedom to capture and report on any learner experience.


With CMI5 you can report on scores, completion status, and just about anything else. You can report on standard assessment results and the not-so-standard learning experiences.


To understand this, we need to re-look at SCORM.


One should consider CMI5 as a replacement for SCORM – an improved specification. Conforming to SCORM was useful because a learning object or learning activity could be imported into just about any modern learning management system. As an instructor, if you created a game, quiz, presentation, simulation, whatever and exported it as a SCORM package, your activity could be imported into Moodle, BrightSpace, Canvas, Cornerstone, Blackboard, and any learning management system that supported SCORM. So, the benefit of SCORM was that it was a set of standards that most LMS systems understood. The standards that fell under the SCORM umbrella included metadata, a reporting data model, and standard methods for initializing an activity, reporting scores, reporting on interactions, and reporting passing or failing and completion status.

The data model included dozens of elements. One example of a data element is cmi.core.score.min. Related to score, SCORM-conformant activities reported on the minimum score, the maximum score, the raw score (an absolute number) and the scaled score (a value between 0 and 1).


SCORM supported a lot of different data elements. A SCORM conformant activity could report on a variety of things. The limitation of SCORM, however, was that, despite the large number of elements, it was still a finite list. Take a Geolocation Storytelling activity as an example or an eBook reading. If I wanted to capture and report that the student virtually or physically visited location A, then B, and then C, I would have to work around the limitations of SCORM. I could not generate a statement such as, for example, ‘Student visited the Amphitheater in Arles’. If I wanted to capture a student’s progress through an eBook, SCORM would be problematic.


At this point, you might be protesting: but xAPI does that! xAPI? Another acronym! Yes. xAPI, or the Experience API, is a newer specification that makes it possible to report on a limitless range of things that a learner has experienced: completed a chapter of an eBook; watched a video; toured a museum; and on and on. So, if we have this thing called xAPI, why CMI5?


The benefit of xAPI is that it supports the reporting of anything. The downside to xAPI is that, by itself, it doesn’t have a vocabulary that the LMS understands, such as launched, initialized, scored, passed, completed. That is what CMI5 offers. CMI5 is, in fact, an xAPI profile that includes a vocabulary that the LMS understands. In addition, CMI5 can report on any type of learner experience. Here is the definition of CMI5 from the Advanced Distributed Learning Initiative:


cmi5 is a profile for using the xAPI specification with traditional learning management (LMS) systems

(Advanced Distributed Learning).


With CMI5, you can have your cake and eat it too. You can report on learner activity in a way that the LMS understands, and you can report on just about anything else that the Learning Management System stores in a Learner Record Store. The Learner Record Store, or LRS, is a database populated by statements about what the learner experienced.

xAPI statements can capture any learner experience, including reading the instructions
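For the curious, an xAPI statement is a simple JSON record with an actor, a verb, and an object. A minimal example might look like the following (the names, verb, and activity IDs are illustrative):

{
  "actor": { "name": "Jane Learner", "mbox": "mailto:jane@example.com" },
  "verb": {
    "id": "http://adlnet.gov/expapi/verbs/experienced",
    "display": { "en-US": "experienced" }
  },
  "object": {
    "id": "http://example.com/activities/permafrost-tour",
    "definition": { "name": { "en-US": "Permafrost Virtual Tour" } }
  }
}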


Benefit Two: Freedom to put the learning activity anywhere


With CMI5, you can place a learning activity in a repository, in GitHub, on a web server, in a Site44 Dropbox site, in SharePoint, in a distributed network, wherever… without restricting its ability to connect with a learning management system. CMI5 content does not need to be imported. A CMI5 package can contain as little as one XML file, which, among other things, tells the LMS where to find the content.
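A minimal course structure file might look like the following sketch. The element names follow the CMI5 specification’s course structure format, but the IDs and URLs here are purely illustrative:

<courseStructure xmlns="https://w3id.org/xapi/profiles/cmi5/v1/CourseStructure.xsd">
  <course id="https://example.com/courses/climate-change">
    <title><langstring lang="en-US">Climate Change in Canada</langstring></title>
    <description><langstring lang="en-US">A virtual field course.</langstring></description>
  </course>
  <au id="https://example.com/aus/glaciers" moveOn="CompletedAndPassed" masteryScore="0.8">
    <title><langstring lang="en-US">Observing Glaciers</langstring></title>
    <description><langstring lang="en-US">Observe glacial retreat.</langstring></description>
    <url>https://example.github.io/glaciers/index.html</url>
  </au>
</courseStructure>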


To appreciate this, we need to look back at SCORM once more (as if it were ancient history).


I’ll start with a pseudo-technical explanation and then follow with why it matters.
The way SCORM works is that the learning activity sits in a window. The learning activity uses a simple looping algorithm to find the Learning Management System’s SCORM adapter. It checks its parent window for a special object. If the window’s parent doesn’t contain the object, the activity looks to the parent’s parent, and so on. In other words, somewhere in that chain of parents, there must be that special object. Typically, the SCORM activity can only communicate with the learning management system if it is a child window of that system or if some server-side technology is used.
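The classic discovery loop looks something like this sketch (adapted from the widely published SCORM 1.2 API-finding algorithm; simplified here for illustration):

function findAPI(win) {
  let attempts = 0;
  // Climb the chain of parent windows looking for the SCORM adapter object.
  while (win.API == null && win.parent != null && win.parent != win) {
    if (++attempts > 7) return null; // give up after a reasonable depth
    win = win.parent;
  }
  return win.API; // null means no LMS adapter was found
}

const api = findAPI(window);
if (api) {
  api.LMSInitialize(""); // SCORM 1.2 initialization call
}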

CMI5 works quite differently. CMI5 gives us the freedom to leave our parents’ home. Whereas SCORM uses a JavaScript Application Programming Interface to communicate, CMI5 uses xAPI to reach across the internet and call a web service’s methods. Loosely, it’s like the difference between a landline and a cellular phone service. To use the landline you must be in the house; to use a cell phone, you must be in the network.

Benefit Three: A simplified sequencing model.

SCORM supported simple sequencing, which many say is not so simple. CMI5’s ‘move on’ property, in contrast, is very easy. A CMI course can contain one or more Assignable Units (AUs). The instructor spells out what the learner must achieve in an assignable unit before being able to move on. The move on property has one of the following values:


• Passed
• Completed
• Completed Or Passed
• Completed And Passed
• Not Applicable


Once the student has ‘moved on’ through all of the assignable units, the LMS notes that the course has been satisfied by that student.


Benefit Four: An assignable unit passing score can be overridden


In SCORM, the mastery score is hard-coded in the activity. In a SCORM activity, the instructor can base completion status on a passing score. But what if that hard-coded score were inappropriate for a group of students, for whatever reason? The CMI5 specification enables an LMS to pass the mastery score to the Assignable Unit upon launch. So the LMS launches the AU and sends it the student name and mastery score (among other things). By specification, the AU cannot ignore the mastery score; it must use it to trump what is hard-coded in the unit or refuse to run.


Benefit Five: Theoretically, CMI5 isn’t hamstrung by pop-up blockers.

When an LMS launches a SCORM activity, it either embeds the activity in an Iframe or launches a window. Both scenarios are problematic. The content may not be well suited for an iFrame and a pop-up blocker can obstruct the launched window.


Theoretically, a CMI5 AU can replace the LMS page with its own content. It’s not in an embedded iFrame and it’s not a pop-up window. When the LMS launches the AU, along with the student name and mastery score, the LMS sends the AU a return URL. When ended, the AU returns the student to that return URL, which is the address of the LMS.


I write “theoretically” because the LMS should not, but may, ignore this requirement.

Benefit Six: CMI5 activities securely communicate to the Learner Record Store


As I wrote, the activity can send information about learner experiences clear across the internet to the learner record store. But how does the AU have the authorization to do this from, let’s say, a web site? And how does it happen securely?


This is the marvel of 2021 technology versus 2000 technology. Before 2000, we had difficult-to-use protocols for passing information securely across the internet. Oftentimes, special rules needed to be added to internet routers. Then along came a simpler protocol that the first version of CMI5 used (SOAP). Then came an even better way (OAuth and REST). After launch, the LMS hands the AU a security token (kind of like a key that dissolves in time). The AU uses that key to gain access and to post information to the Learner Record Store.
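In rough outline, the exchange looks like the sketch below. The ‘fetch’ and ‘endpoint’ launch parameters come from the CMI5 specification, but the code itself is only an illustration, not a complete implementation:

// A sketch of the CMI5 launch handshake from inside an AU.
async function connectToLrs(statement) {
  const params = new URLSearchParams(window.location.search);
  const fetchUrl = params.get('fetch');     // one-time URL for the token
  const endpoint = params.get('endpoint');  // the LRS endpoint

  // POSTing to the fetch URL returns a short-lived auth token.
  const res = await fetch(fetchUrl, { method: 'POST' });
  const { 'auth-token': token } = await res.json();

  // The token authorizes xAPI calls to the Learner Record Store.
  await fetch(endpoint + '/statements', {
    method: 'POST',
    headers: {
      'Authorization': 'Basic ' + token,
      'Content-Type': 'application/json',
      'X-Experience-API-Version': '1.0.3'
    },
    body: JSON.stringify(statement) // an xAPI statement, as shown earlier
  });
}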

Conclusion

CMI5 returns power to the instructor and to the L&D specialist. CMI5 allows one to choose where the content resides and to choose what the content reports. CMI5 captures learner experiences more completely and yet it communicates with Learning Management Systems with a vocabulary that LMSs understand. CMI5 supports accommodations for a special group of students without needing to change the code of the Assignable Unit. Finally, CMI5 uses current technology to send data over the internet.

The implications of this emerging specification are tremendous. It is better suited to mobile learning and it is better suited to the learner experience platforms that are emerging (e.g. LinkedIn Learning’s Learning Hub). Soon instructors may be able to organize content from a variety of providers (like LinkedIn Learning, Khan Academy, or OER Commons) but retain the learning management system as an organizer of content, data collector, and credentialing agent. Now instructors, average instructors, may be able to participate in that content market from their own GitHub repositories and web sites.

But many LMSs have yet to adopt CMI5. The architects have done their part. Now it’s on us to understand this technology and advocate for it. Start by sharing this article. Thank you.

Appendix A — How it Works (A simplified flow)

For those interested in a deeper dive, let’s walk through the CMI5 process flow step-by-step. (See diagram)

To begin, the author (instructor, L&D specialist) exports content as a CMI5 package. The package can be a simple file that instructs the LMS where to find the content or it can include the content itself.

(1) When a student needs the content, the Learning Management System (LMS) launches the content and (2) sends the Assignable Unit (a course can contain one or more Assignable Units) information that includes the student name, a fetch URL, and the activity ID.

(3) The Assignable Unit (AU) uses the fetch URL to retrieve a security token. The security token enables the AU to communicate securely to the Learner Record Store (LRS).

(4) As the student interacts with the content, the AU can optionally send Experience API (xAPI) statements to the LRS. (5) At some point, the AU reports that the student passed and/or completed the unit.

(6) The LMS uses the ‘move-on’ information to determine whether or not the student can move on to the next assignable unit. The move-on options are passed, completed, passed and completed, passed or completed, or not applicable.

Finally, when all of the assignable units within a course are completed, the course is marked as satisfied for the specific learner.

A simplified process flow that starts with the
launch of the CMI5 Assignable Unit by the LMS

Geolocation Storytelling Revisited

We’ve observed an uptick in interest in Geolocation Storytelling. We’ll revisit the subject for those who know little about this medium as well as those who either want to design a project on paper (i.e. Word) or who want to go all the way and use the LodeStar Authoring tool to complete a working project.

To reach all audiences at some level, this article starts from the general and ends with the specific. Hop on and off at any point.

Introduction

Every place hides its own unique, rich story. Have you visited an unfamiliar town or area and wondered about its history,  geography, and points of interest? Have you ever wanted to connect to a place on a level deeper than a quick drive-by?

A new form of storytelling—geolocation storytelling—combines technology and traditional storytelling to connect visitors at a deeper level.  With the help of an app, the place where you’ve entered or visited on a map suddenly comes alive with narrative and imagery.  You may hear about the past or be guided to an unusual rock formation or the vantage point of a famous painter.   Geolocation stories can work on-site, guiding you from point to point or they can help you discover a place from the comfort of your home.  Geolocation stories can be both informative and entertaining.  They can involve the visitor in discovering why a place got put on the map, or solving a challenge, or even solving a murder mystery.  In short, geolocation stories can be about anything that piques the visitor’s interest about a place.

The Inspiration

Places inspire people to learn more about them.

A group of history buffs, known as Lensflare Stillwater, was inspired by the many untold stories of Stillwater, a Minnesota river town. Stillwater was a lumber town with connections to Minnesota and Wisconsin pine lands by river and connections to Saint Paul by stage road and later by rail.

Stillwater inspired a number of geolocation stories. The first stories were guided tours of Stillwater’s historical downtown. A subsequent story helped cyclists learn about the rich history from the vantage point of a bicycle trail. Even later, another story recovered the lost memory of Stillwater’s streetcars.

Thousands of miles from Stillwater, a geolocation project told the story of Vincent Van Gogh’s year in Arles, France, and what went horribly wrong for him.   Its authors first visited Arles to learn more about Van Gogh but were disappointed in the local tour booklets, which didn’t sufficiently tell the story. 

If your town or place has points of interest, a rich history, or geographical features, you will want to consider creating a geolocation story to help others see the place from a new point of view.  Visitors can walk to the specific places of interest and hear audio, see imagery, read text, scroll through time lines and learn more about this special place.

How it works

Typically, the visitor launches a geolocation story (a web-based application) from a web address on a smartphone. The first page of the story provides instructions and a starting point. When the visitor reaches that point, she crosses an invisible geofence. The geofence is just a metaphor. Actually, the visitor’s location is calculated from the signals of three or more satellites. Most modern smartphones are equipped with the hardware to detect these signals. Global positioning satellites constantly emit signals. The GPS receiver in the visitor’s phone listens for these signals. Once the receiver calculates its location from these satellites, it provides that information to the application. The logic of the application is constantly checking to see if the location matches a place of interest. If yes, then content in the form of audio, text and imagery is called up and presented.
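As a sketch, that logic might look like this in a browser, using the standard Geolocation API. The point of interest comes from the Google Maps example later in this article; the radius and the showContent function are illustrative:

// One point of interest with a 50-foot geofence.
const pointsOfInterest = [
  { name: 'Myrtle & Water Streets', lat: 45.056745, lon: -92.805510, radiusFt: 50 }
];

// Distance between two coordinates in feet (haversine formula).
function distanceFt(lat1, lon1, lat2, lon2) {
  const toRad = (d) => d * Math.PI / 180;
  const dLat = toRad(lat2 - lat1), dLon = toRad(lon2 - lon1);
  const a = Math.sin(dLat / 2) ** 2 +
            Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  const meters = 2 * 6371000 * Math.asin(Math.sqrt(a)); // Earth radius in meters
  return meters * 3.28084;
}

// Watch the visitor's position; show content when a geofence is crossed.
navigator.geolocation.watchPosition((pos) => {
  for (const poi of pointsOfInterest) {
    if (distanceFt(pos.coords.latitude, pos.coords.longitude, poi.lat, poi.lon) <= poi.radiusFt) {
      showContent(poi.name); // hypothetical function that presents the content
    }
  }
});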

Getting more specific: Best practices

If you already understand the power of the geolocation story and wish to get started, you’ll want to consider a few things.  These are not hard-and-fast guidelines; as we gain more experience, we’ll learn what works and what doesn’t.

  1. First, geolocation storytelling works best when the audience is on foot and out of doors.  Smartphones can’t receive satellite GPS signals inside buildings; the technology works best outside with a clear line of sight to the sky.
  2. Geolocation projects must be housed on a website that supports HTTPS.  Smartphones don’t reveal their locations to applications served from websites that begin with http://.  The web address must begin with https://; the ‘s’ stands for secure.  Information transported over HTTPS is encrypted, which increases the security of data transfer.
  3. There is a limit to the distance people will walk on a tour and to its length in time.  Limit yourself to two miles, completed within one hour.  Of course, this is a very loose rule of thumb; consider your audience when setting the limits.  Young adults will have no difficulty with 3- to 5-mile hikes, though time and attention span will remain factors.  Senior citizens with mobility issues will find two miles too long, and the steepness of the terrain is also a factor.  Use your discretion, but keep the tour as short as possible.
  4. Some people’s interest may wane quickly.  A two-mile tour should have at least a dozen points of interest.  Limit the distance and the time between geolocation points.
  5. Present narrations in both audio and text formats.  People like to hear a recorded narration but, without headphones, the narration can easily be drowned out by traffic or a rushing river. On the flip side, audio narration often works in situations (e.g. bright sun) where the screen is difficult to see. You’ll need to use your judgment.
  6. Consider the format of the tour.  Will you guide your audience from point to point or will you cluster points so that the audience will simply wander about and come upon points of interest? 
  7. Audio should be cleanly recorded.  The audience should not hear background noise or a muffled narration.
  8. Text must be correctly spelled, grammatically correct, and short.
  9. Favor more points of interest with shorter narration and text over fewer points of interest with narration that drones on.
  10. Have fun creating this story. You’ll learn a lot!

Get your Geolocations

Even if you’re starting in Word to capture your text, find the locations first.  Google Maps is a very accurate way of doing so.  For example, if I wanted the location of the intersection of Myrtle and Water Streets in Stillwater, I would do the following:

  1. Go to https://www.google.com/maps.
  2. Search for Myrtle Street, Stillwater.
  3. Move the map to the location of interest.
  4. Click on the intersection.
  5. Either write down the location coordinates or click on them.  The coordinates will now appear in the address field at the top and can be copied and pasted into your Word document or directly onto a LodeStar page (see below).
Google Maps reveals latitude and longitude

About the Location Coordinates

In the example above, the coordinates were 45.056745,-92.805510.  The first coordinate (45.056745) is the latitude; the second (-92.805510) is the longitude.  Always use coordinates with six digits of precision (six digits to the right of the decimal point).  Six digits pin down a location to within a few inches, but never rely on that.  In other words, allow the technology a slop factor: use precise coordinates, but allow for imprecision in the device’s ability to calculate its location.  Never create a geolocation story that relies on an accuracy of a few inches.  You control this by typing numbers into the latitude and longitude proximity fields; these numbers spell out how close one needs to be to the precise location to trigger an event.  In our geolocation stories we trigger something (e.g. show content) when the user is within 25 to 50 feet of a location.  We call that crossing the geofence.

The minus sign is important.  In latitude, a minus sign denotes the southern hemisphere (south of the equator).  In longitude, a minus sign denotes west of the prime meridian (Greenwich) and east of the antimeridian (roughly where the International Date Line resides).
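To make the slop factor concrete, here is a hedged sketch of a proximity check in JavaScript. It uses the haversine formula to compute the distance between the visitor and a point of interest in feet, triggering within 50 feet. This is an illustration of the idea, not LodeStar’s internal code.

    const EARTH_RADIUS_FEET = 20902231; // mean Earth radius (~6,371 km) in feet

    // Great-circle distance between two coordinates, in feet.
    function distanceInFeet(lat1, lng1, lat2, lng2) {
      const toRad = (deg) => (deg * Math.PI) / 180;
      const dLat = toRad(lat2 - lat1);
      const dLng = toRad(lng2 - lng1);
      const a = Math.sin(dLat / 2) ** 2 +
                Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLng / 2) ** 2;
      return 2 * EARTH_RADIUS_FEET * Math.asin(Math.sqrt(a));
    }

    // Trigger content when the visitor is within 50 feet of Myrtle and Water Streets.
    const poi = { lat: 45.056745, lng: -92.805510, radiusFeet: 50 };
    const inside = distanceInFeet(visitorLat, visitorLng, poi.lat, poi.lng) <= poi.radiusFeet;

Here, visitorLat and visitorLng would come from the Geolocation API shown earlier.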

If you want to grab your location while physically on the spot, use your smartphone’s Google Maps app. 

Current Location Arrow in Google Maps
  1. In Google Maps, click on the arrow to show your current location.
  2. Scroll down until you find the marker and the location.  See screenshot below.
  3. Copy and paste the coordinate into your notes so that you can transfer the coordinate to LodeStar.

Getting a location from Google Maps while on site

Preparing a Geolocation Story in Word

Your role might be to prepare the content. When you’ve completed the preparation, you can hand off the content in the form of a Word file. In Word, each location should be on a separate page. At the top of each page, key in the title and the latitude and longitude coordinates of the location. Then add your text, graphics, images, and narration. If your version of Word doesn’t support audio narration, use a free tool like Audacity to generate an MP3 audio file.

Even More Specific: Authoring a Geolocation Story with LodeStar

To create a geolocation tour in LodeStar, do the following:

Launch LodeStar and select the ARMaker template.  (AR stands for augmented reality.)

LodeStar’s ARMaker template
  1. Title your project.  The project will now reside on your hard drive in a folder with the same title.  It will be found in the LodeStar/Projects/[your title] directory.
  2. Add your title to the first page.
  3. Add a page by clicking on the + button at the bottom of the app.  Ensure that the new page is a Text Page Type; examine the screenshot below.  The page should have a place to enter a latitude and longitude.
  4. Add your content.  You can insert a widget (e.g. Image Layout Widget), text, audio, and more.
  5. Add a page to add more content.
  6. Preview in Browser (find the button at the top).
  7. When you are ready to publish, export as a SCORM 1.3 package and import it into a Learning Management System, or simply copy the LodeStar/Projects/[your title] directory to a web server.
LodeStar authoring tool with ARMaker template. Click on image to view.

Below is what this page looks like in Preview.  Notice the audio control and the Show Map button at the top left, and the navigation buttons at the top right (depending on layout).  Notice how the image slider appears, created by the PWG Image Slider Widget.

Previewing a Geolocation story

If your audience clicks on the ‘Show Map’ button, a Google Map appears with all of the locations marked with red markers.  Again, each location represents a separate page in LodeStar. 

Each location (marked by red marker) matches a LodeStar page

Controlling the User Experience

If you allow users both to show the map and to navigate to content by clicking on a marker, then you need not adjust project settings.  If you want to restrict users’ access to the map and/or their ability to access pages of content from the map, select Tools > Project Settings and change the settings according to your needs.  (The important settings are marked with arrows; see the screenshot below.)

Project settings in LodeStar allow control of application

Publishing your project

As a SCORM object

If you use a Learning Management System (LMS) and want to control access to your geolocation story, then, with your project opened in LodeStar, click on Export and export to SCORM 1.3.    Go to your LMS and import the story as a SCORM object.

As a website

If you have access to a web server, copy the project folder to the web server and use the index.htm file in your URL.  Once again, location services will only work on web servers that support https://

If you don’t have access to a web server, then read the following article that explains how you can use GitHub as a web server.

https://lodestarlearn.wordpress.com/2020/05/14/seven-steps-that-will-change-how-you-share-elearning/

Alternatively, you can use Site44 to convert your Dropbox folder to a published website:

See https://www.site44.com/

(We are not endorsing Site44 but LodeStar Learning has successfully used it on a number of projects.)

As an Open Education Resource (OER)

Publish the geolocation story as a web site, then register the URL (address) of that site with OER Commons, Merlot, or whatever OER repository you prefer.

 

Additional Details

If you are new to geolocation storytelling and want to learn more, visit:

Geolocation Storytelling: Van Gogh In Arles | LodeStar Web Journal (wordpress.com)

To see an example of a finished product as OER, visit:

https://www.oercommons.org/courses/vincent-van-gogh-s-arles/view

Or view the app at:

‎Van Gogh In Arles on the App Store (apple.com)

Conclusion

Geolocation stories are a great way to help visitors uncover the hidden wonders of a place. Google Maps and the LodeStar authoring tool together make it practical to author stories and publish them either to Learning Management Systems or to the web.

If you complete a project, share your project. Drop a comment or drop a line to supportteam@lodestarlearning.com.

Serious eLearning: Use Interactivity to Prompt Deep Engagement

Elements of Interactivity

The Serious eLearning Manifesto challenges us to move beyond typical eLearning to the values and principles of serious eLearning.  One of those principles is, to quote the manifesto, ‘Use Interactivity to Prompt Deep Engagement’.  The sky is the limit in terms of what that actually means, but we know that it means something beyond page turners and rollovers.  Authoring tools offer us templates with interactivity logic baked in; their form-based interfaces allow us to provide information that feeds the template.  To do something original, outside the constraints of a page-turner presentation or even an interaction template, requires a bit of code.  Few authoring tools allow you to realize your design fully without the knowledge and application of some basic coding.

ZebraZapps is one of the notable exceptions.  ZebraZapps enables you to build complex interactions by wiring objects together: a click, hover, drag, or collision on one object can change the properties of another.  Dragging the earth and moon along their orbital path can cause the rise and fall of a tide graphic.  Authors express this relationship by wiring the drag event of one object (constrained to a path) to the height property of another.  This expressiveness through the action of wiring is rare.  Most systems enable this expressiveness through language.  In other words, code.

If you google “should instructional designers learn to code” you’ll get more than 37 million results and many opinions.  My own view relates to the situation that many instructional designers find themselves in.  Whether they support a university department or mid-sized firm, they lack access to a programmer.  They are limited to what they know and how well they can work an authoring tool like Storyline or Captivate.  For them, a little knowledge of code can go a long way.  With a little knowledge, they can realize some pretty sophisticated designs.  They can do more than ‘click and present’. 

In the late 80s I was driving down a dark, country road listening to MPR.  The story was on Interactive Video.  Laserdiscs.  I was enthralled by the possibilities.  I asked my dean who was completing an advanced degree at the time in computer-based learning, what I needed to learn to control an interactive video laserdisc.  He answered “C”.  C was a programming language and his answer, which was actually incorrect, sealed my fate.  I began studying my first programming language oblivious to tools like TenCore and Course of Action (progenitor of Authorware) that afforded a much simpler way to control the laserdisc.

To finish this anecdote: I also began to study instructional design at the University of Minnesota.  At my first Wisconsin Distance Teaching and Learning Conference, I attended a pre-conference cracker barrel session.  Sitting around drinking wine were a bunch of researchers from Alberta’s Athabasca University.  I posed the question to them: “Should instructional designers learn to code?”  The answer from at least one was unequivocal: become an instructional designer or a programmer.  You can’t do both; there is too much to learn in either discipline.

So, I don’t necessarily take issue with that.  There is so much to learn in either discipline.  But modern authoring systems give us a way forward where we don’t have to totally geek out.  With just a few coding skills we can go a long way toward realizing the serious eLearning principle: “Use Interactivity to Prompt Deep Engagement.”

So let’s explore the basic prerequisites of interactivity.  There are three parts to this post.  First, this post discusses the relationship between computer code and this thing called interactivity.  Second, this video (LodeStar 9 — Elements Of Interactivity – YouTube) demonstrates a simple interaction that is made possible with the LodeStar eLearning authoring tool and its script (code) editor. Lastly, this DIY tutorial (Making your projects interactive and interesting with a little bit of code | LodeStar Help (wordpress.com)) walks through the video example step by step.

But first we need to look at ‘interactivity’ and understand where we benefit from some knowledge of coding.

The Serious eLearning Manifesto states that “We will use elearning’s unique interactive capabilities to support reflection, application, rehearsal, elaboration, contextualization, debate, evaluation, synthesization, et cetera”.   When we examine this list of strategies/activities and consider the unique interactive capabilities that will support them, we start with the following:

  • Ability to store information about the learners and their behavior.
  • Ability to offer something different and individualized based on this information.
  • Ability to create a visual, manipulatable, and functional learning environment that suggests an authentic (if not totally realistic) context.

That’s not an exhaustive list.  It’s a start.  It promises more than page turners and roll-overs.  Now, we need to match these capabilities with the authoring tool and the required code.

 

Ability to store information about the learners and their behavior.

Variables are used in code to store information.  The information can range from a number to a sentence to a list to a full essay.  Variables provide a human-friendly way to store and retrieve information.  They represent addresses in the computer’s memory.  As instructional designers we don’t need to know anything about those gobbledygook addresses or how the information is stored physically in the computer.  We usually need to know only whether the variable is intended to store a number or a string of characters.  (See Appendix A)

So what can we store in a variable?  The answer is many things. 

  • Points scored
  • Type of question answered incorrectly
  • Number of tries
  • Learner’s journal entry
  • Bookmarked page where the learner left off
  • Much much more

In a recent eLearning program, our objective was to help the learners use LinkedIn effectively to promote their professional brand.  Their eLearning task was to help a fictitious character build up his Social Selling Index.  The index is made up of four components: brand, people, insights and relationships.  Successful completion of the activities increased the character’s brand index, people index, insights index, and relationships index.  We created four variables and, you guessed it, they were:  brand, people, insights, and relationships.  Each activity was categorized and affected one of these indices.  In other words, we increased the numerical value in the corresponding variable.

Variables included in a LodeStar authored eLearning module
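In plain JavaScript (LodeStar’s script editor has its own syntax, so treat this as a sketch of the idea rather than the actual project code), the four-variable design looks something like this:

    // One variable per component of the Social Selling Index.
    const ssi = { brand: 0, people: 0, insights: 0, relationships: 0 };

    // Each activity is categorized; completing it increases the matching index.
    function completeActivity(category, points) {
      if (category in ssi) {
        ssi[category] += points;
      }
    }

    completeActivity('brand', 5);    // the character's brand index rises
    completeActivity('insights', 3); // ...and so does the insights index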

This contributed to what the Serious eLearning Manifesto calls authentic context.  The performance objective was to help employees increase their SSI.  The activities in the eLearning module increased the character’s SSI.  We could have designed a presentation and a quiz.  We didn’t.  But to achieve that authentic context, we needed to store values in variables. 

To learn more about variables, complete the hands-on exercise shown in the video (mentioned above) and the accompanying tutorial.  You can download LodeStar 9 from LodeStar Learning Corporation and use it at no charge to complete the exercise.

Ability to offer something different and individualized based on this information.

In another recent project, we created a simple simulation of a workplace engagement platform.  The simulation helped guide employees through the steps of requesting feedback from their supervisor, co-workers, or direct reports. A future simulation will focus less on the procedural and more on the best practices of soliciting and giving feedback.  The first simulation was a post-training exercise. Our HR Director conducted the training, and the post-training exercise helped refresh participants’ memory of the basic steps.  The strategy was to add points for correct choices and subtract points for incorrect choices.  In response to wrong choices, feedback steered participants in the right direction.  A counter in the bottom left corner showed the result of correct and incorrect choices.  It was a bit of gamification, but always with the intent to guide participants to the right choice.  In other words, guided practice.

So what role does code play?

This simple simulation wasn’t built from a template with some sort of pre-defined logic.  It was custom built for our purposes.  But it was a very simple construction. We began with a blank screen, uploaded screenshots and defined click/touch areas.

As a result of a click, we wanted to a) add or subtract points and b) branch to a new screen or display an overlay.  We never subtracted points multiple times in response to repeated clicks on the same thing, but we always showed feedback.

Code can help us to:

  • Check whether the item has been clicked before.  If not and the choice is correct, add points and then branch.  If not and the choice is incorrect, subtract points and provide corrective feedback.  If it has been clicked and the choice is incorrect, increment a counter to provide another level of feedback with more urgency.
  • Store a value that enables us to check if item has been clicked.

These rules are simple, but they can become complex.  In this simple example, we use variables and conditional logic (i.e. ‘if’ statements).  We also use branching, which in this case means displaying an overlay or a new screen with hotspots and more code that executes when an invisible hotspot is clicked.
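Expressed as a sketch in JavaScript (again, not LodeStar’s own syntax), the rules above come down to a couple of variables and an if statement. branchTo and showFeedback are hypothetical helpers standing in for the branching and overlay behavior.

    let score = 0;
    const clicked = {};    // remembers which hotspots have been clicked
    const wrongTries = {}; // counts repeated wrong clicks for escalating feedback

    function handleClick(itemId, isCorrect) {
      if (!clicked[itemId]) {
        clicked[itemId] = true;
        if (isCorrect) {
          score += 10;
          branchTo('nextScreen');         // hypothetical: advance the simulation
        } else {
          score -= 5;
          showFeedback(itemId, 'gentle'); // hypothetical: corrective feedback
        }
      } else if (!isCorrect) {
        wrongTries[itemId] = (wrongTries[itemId] || 0) + 1;
        showFeedback(itemId, 'urgent');   // another level of feedback with more urgency
      }
    }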

A Simple eLearning Simulation

To be true to this section heading (i.e. offer something individualized), we could have gone further.  If the participant breezed through a scenario, we could have used conditional logic to increase the difficulty of the scenario.  If the participant stumbled through, we could have kept the level of difficulty the same (i.e. a plateau).  The same tools apply: variables and if-then statements.  I’m tempted to say that this approach is simpler than trying to shoehorn a pre-programmed template to your needs.

Ability to create a visual, manipulatable, and functional learning environment that suggests an authentic (if not totally realistic) context.

The screenshot below shows the beginnings of a tutorial on automatic direction finding (ADF), an older navigational method for airplane pilots.  There is just enough detail to make the panel somewhat realistic, yet the panel is a simple composition of ellipses, paths, rectangles, and text.  The Scalable Vector Graphic (SVG) is composed of these elements, and each element can generate a click event that results in the execution of some code.  In the screenshot we are highlighting a switch with the id g2423.  When this switch is clicked, a bit of code can cause something to happen.  The graphical element is tied to a LodeStar branch option.  The branch option executes commands that relate to an NDB (Non-Directional Beacon) that the pilot can tune in; in this case, audio playback of the Morse code that identifies the beacon.  As I’ve heard Ethan Edwards from Allen Interactions say many times, you just need enough realism to accomplish your learning objective.  Any more and you’re wasting your time, your client’s money, or both.
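Because every SVG element is a live node in the page, wiring it up takes one DOM call. A minimal sketch, assuming the switch really carries the id g2423 and that playMorseIdent is a hypothetical stand-in for the branch option’s audio playback:

    // Listen for clicks on the SVG switch and play the beacon's identifier.
    const adfSwitch = document.getElementById('g2423');
    adfSwitch.addEventListener('click', () => {
      playMorseIdent(); // hypothetical: audio playback of the NDB's Morse code
    });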

Automatic Direction Finding — eLearning Module

To show another example, in the video and tutorial link referenced in the conclusion, I walk through a simple example of how to make Scalable Vector Graphics interactive.  I walk through an example of a traffic light switch.   I chose this example because it is a little easier to understand than the ADF on an airplane.

A LodeStar Learning tutorial on variables, conditional statements, functions, and SVG graphics

 

Conclusion

In the pursuit of serious eLearning and meaningful interactivity, I’ve noted LodeStar’s ability to support variables, conditional statements, branch options, and the ability to change the properties of objects.  Other authoring systems also support these concepts and require the author to understand the basics behind variables, conditional statements, and logic in general.

Allen Learning Technologies’ ZebraZapps requires no coding, but it does require the instructional designer to think logically.  Wiring replaces code, but logical reasoning is still required.

Articulate Storyline has the concept of triggers and supports events such as clicks, hovers, and drags.  Those events can be tied to property changes of Storyline’s native vector format.  Storyline also supports variables and has an easy-to-use interface for building sophisticated conditional statements.

Adobe Captivate supports the association of actions with graphics.  For example, the learner can click on a rectangle associated with an action such as show/hide or increment/decrement.  Captivate also supports an interface that can apply conditional logic to an action.  For example, a variable might keep track of slide states.  Each state can house different text.  As the learner clicks a rectangle, an ‘if’ condition displays the matching text based on the current value of the variable.

In short, Storyline and Captivate support the idea of variables, events, conditional statements, and the ability to dynamically change the properties of graphics.  ZebraZapps has the same ability but without requiring a line of code.

Whatever the authoring tools’ approach, the ability to store information about the learners, to offer something different and tailored for the learner, and the ability to create a visual, manipulatable, and functional learning environment relies on the instructional designer’s logical thinking and the authoring tools’ ability to store values, change course based on conditions, and modify the visual environment in some way.

These resources can help you get started.  The first two, I’ve already mentioned.  The third is a terrific resource to learn the basics of coding.

LodeStar 9 — Elements Of Interactivity – YouTube

Making your projects interactive and interesting with a little bit of code | LodeStar Help (wordpress.com)

Learn to Code – for Free | Codecademy

Appendix A

To illustrate the concept of data type in variables, examine the following table:

Name                    Rank

Joe                         11

Anna                      2

Kim                        1

In the preceding table, Kim came in first place, Anna in second, and Joe in eleventh place.    A variable stores a person’s rank.  If we interpreted the information in the variable as a number, then this would be the sorted order:

Kim     1

Anna   2

Joe       11

If we treated the variable as a string of characters, this would be the sorted order:

Kim     1

Joe       11

Anna  2

In the second case, the value stored in the variable is treated as a string of characters.  In the computer’s character table, ‘1’ is assigned the numerical value 49 and ‘2’ the value 50.  The computer compares the first character of 11 to the first character of 2.  It looks up the character values and processes the comparison as 49 versus 50.  49 is lower; therefore, the computer places 11 before 2.  And that’s practically all there is to the complexity.  Variables store information, and it matters whether we interpret the information as numbers or as characters. This is known as the data type of the variable.
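You can watch this happen in one line of JavaScript. The default sort compares strings character by character; supplying a numeric comparator restores the order a human expects.

    const ranks = ['11', '2', '1'];
    console.log([...ranks].sort());                       // ['1', '11', '2'] (string order)
    console.log(ranks.map(Number).sort((a, b) => a - b)); // [1, 2, 11] (numeric order)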

Using Photospheres in Online Courses

Introduction

If you read my last post, you’ll know that I love technology but am wary of it.  As an instructional designer and toolmaker, I’m selective about the educational technologies I choose to learn and integrate into our authoring tool, LodeStar.  My basic rule is that a little investment must pay large dividends.  My second rule is that instructors and trainers should easily be able to envision how the technology will apply to student learning.

One technology in particular has tempted me down the rabbit hole in the past:  virtual reality.  Until recently, I kept away from integrating VR into LodeStar.  Now, I concede that there are solid stepping stones to instructors using VR in eLearning applications.  The investment can be small; the dividends, with the right design, could be huge. One example of a stepping stone is the ‘photosphere.’

Photospheres

The photosphere is more commonly known as a 360-degree panoramic image, VR photo, or interactive panorama. A photosphere is essentially a 360-degree scene: viewed through a special viewer, the two-dimensional, distorted image is transformed into something magical.

Once upon a time, photospheres were difficult, time-consuming or expensive to produce.  Instructors needed special equipment and/or software to ‘stitch’ together many photographs into one viewable image.

Today, smartphone apps step instructors through the process of taking multiple images that are automatically mapped onto a sphere.  The sphere, when projected onto a two-dimensional plane, looks distorted.  When shown through a viewer, it offers an undistorted 360-degree view of a scene.
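If you are curious what the ‘special viewer’ amounts to in web terms, open-source libraries will turn an equirectangular image into a browser-based photosphere with a few lines of markup. Here is a minimal sketch using A-Frame (my example, not necessarily the viewer LodeStar uses; kitchen-pano.jpg is a placeholder for your own image):

    <script src="https://aframe.io/releases/1.5.0/aframe.min.js"></script>
    <a-scene>
      <!-- a-sky wraps the equirectangular image around the inside of a sphere -->
      <a-sky src="kitchen-pano.jpg" rotation="0 -90 0"></a-sky>
    </a-scene>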

The Hermitage museum is a wonderful example of the use of photospheres (panoramas) to give visitors a virtual tour of the museum. 

https://www.hermitagemuseum.org/wps/portal/hermitage/panorama/virtual_visit/panoramas-m-1/?lng=

Now a photosphere can be created by any eLearning instructor with a dozen or so clicks.

I’ll suggest three simple ways that online instructors can get started using photospheres in their courses and conclude with a fourth, more sophisticated, example.  Each of these is illustrated in a LodeStar Learning activity found at:

https://lodestarlearning.github.io/VR-Demo/index.html

Suggestion One: Link to VR sites

An instructor can simply link to VR (360-degree panorama) sites.  Here are some examples:

Louvre
https://www.youvisit.com/tour/louvremuseum

Iceland
https://www.iceland360vr.com/map/

Rome
https://www.youvisit.com/tour/rome

Suggestion Two: Find and download images

Finding and downloading images for education is a bit of a challenge at present.  You’ll find photospheres on Facebook, Instagram, Flickr, virtual tour companies’ sites, museums, and tourist bureaus.  But you will be hard-pressed at the moment to find photospheres in Open Education Resource (OER) repositories.  We might be a little ahead of the curve.  I suspect that, for a variety of reasons, we’ll see an uptick in educationally useful photospheres in the most popular repositories like Merlot, OER Commons, and Curriki.

In the meantime, view and download examples from the following sites.

https://www.flickr.com/vr

https://commons.wikimedia.org/wiki/Category:Photo_Sphere

https://pixexid.com/search/360 panoramic

Suggestion Three: Use tablet or smartphone to generate an image

Photospheres are now easy to create.  As I mentioned, they were once difficult to produce.  Today, free software on a smartphone guides users by displaying dots on the screen.  The user moves the camera until a dot falls within a circle target, following the dots around a 360-degree ring of photos and then upward and downward, in igloo-building fashion, until all space is covered with images.  The software stitches all of the images together and produces what appears to be a distorted image when viewed without a photosphere viewer.

Using Google Street View to Produce a Photosphere
Under the Golden Gate Bridge — a photosphere

Suggestion Four: Use Blender or other 3D software to generate a scene and render it as a photosphere

One of the more sophisticated uses of Photospheres is in creating them with 3D software.  In the early 90s when I worked with 3D software, the price tag was in the thousands of dollars.  Some complicated scenes required a room of twenty computers all working on some aspect of the image delegated to them by a rendering manager – a kind of orchestral conductor.

Today, students can download powerful software, like Blender (https://www.blender.org/), for free.  Typically, instructors wouldn’t have the time to learn the software and build 3D models.  Some students, on the other hand, might be eager to support their teachers by learning the software and generating useful models.  Building 3D models is a lot of fun and tremendously educational.

In this example, I used a model produced by Marcin Lubecki.  Here is what the Blender environment looks like:

Blender 3D Software

Next I positioned a camera in the center of the kitchen.

Positioning a camera in a 3D Model created by Marcin Lubecki in Blender

Then I set Blender’s rendering engine to ‘Cycles’, set the camera type to ‘panoramic’, and set the panorama type to ‘equirectangular’.  I then set the latitude range from -90 to 90 and the longitude range from -180 to 180.  I made a few more adjustments and then rendered the image.

The process renders one tile at a time to produce the equirectangular projection, essentially stitching the whole thing together before your eyes.

Rendering a model created by Marcin Lubecki

The result is in the linked LodeStar example found above.

Conclusion

This post focused mostly on finding, creating, and viewing photospheres. The first releases of LodeStar 9 will support the viewing of photospheres.  A near-future version of LodeStar will enable instructors to add markers to a photosphere and connect the image to all of LodeStar’s branching options.

Photospheres are easy to create.  Hopefully, in the future, they will also be easy for instructors to find and to put to an instructional purpose.  One can easily imagine the applications: virtual tours of places and items of interest in every discipline.  In a future post, I’ll tease out some of the possibilities and opportunities for the adventuresome online instructor.

Technology and Great Learning Experiences

Introduction:

As instructional designers, we understand that technology (even cool technology) can never substitute for the elemental motivations and emotions of a student engaged in a meaningful eLearning interaction.  Curiosity, exploration, challenge, suspense, resolution and revelation are all examples of experiences one strives to conjure when designing interactions.  Technology alone, once the novelty has worn off, doesn’t cut it.  Technology is just a means to an end – what researchers like to call an affordance.  Technology affords us the opportunity to create experiences that stimulate curiosity, present challenges and encourage learning.  Technology might take the form of videos, animations, audio, elaborate layouts, interactive maps, virtual worlds, and on and on.  But if it doesn’t motivate or result in an emotional experience or elicit the triumph of winning a challenge, or an ‘aha’ moment, the technology will soon leave learners cold. 

I learned that lesson from a computer game I played in the 80s.  It was called Space Quest and it was tremendously fun.  The first versions of the game were in black and white with simple graphics.  You had to solve a series of challenges to stay alive.  Those were addictive.  A group of our friends tried to solve the challenges together.  When it became too late to play any longer, our friends went home, only to return the next day.

Later versions of Space Quest began using a 256-color palette.  The graphics and animation became more colorful but often left you in this passive mode, more like watching a movie than playing an interactive game.  The first exposure to new technology was kind of exciting – but then the ‘movies’ lost their appeal. 

I think about a very exciting technology, geolocation storytelling, in the same way.  The technology is becoming more and more seductive.  Interactive maps can now feature 3D buildings, customized maps, and most recently, game objects.  You can create 3D models of dinosaurs, for example, and have them suddenly appear when you reach a location, like Central Park.  Imagine it: dinosaurs in Central Park or on the Mississippi river, for that matter.  Just as interesting, you can move around in real space and see your location updated on a fictional map.  But what does this all mean to the busy instructor?

The answer is, typically, very little. Certainly, instructors and students can purchase or subscribe to off-the-shelf, ready-made products that use these technologies.  The benefits, however, will only outweigh the costs if the technology satisfies a significant instructional goal.  Often there isn’t a good fit, and that’s why I am more interested in the homespun.  I am interested in the instructor as creator and in what the instructor can create.  I am more interested in how instructors can use sophisticated technology simply, and get students to explore, complete a challenge, or experience that ‘aha’ moment in a manner that precisely matches a course objective.

A simple but effective example

The following example illustrates how instructors can use basic geolocation technology while avoiding two pitfalls: spending time without a commensurate return on investment, and failing to get students to think, solve problems, explore, or gain a new insight or perspective. You will need to use your imagination to see how the underlying principle applies to your situation.

The example will show how you can draw on a map and relate that to content that will help students solve a problem. 

The example is inspired by Blue Zones, places where people live longer.  Blue Zones was developed by Dan Buettner whose work (e.g. AfricaQuest, MayaQuest, Blue Zones, etc.)  typically fosters the experiences that I’m discussing:  curiosity, exploration, decision-making, and problem-solving.  Visit https://www.bluezones.com/ for more information on his latest project.

To make our example come alive, I’ll choose two of the original five blue zones: Okinawa, Japan, and Sardinia, Italy.  In a real application, I would choose five or more locations.  Our objective is to get students to visit the sites, look around with the help of Google Street View, collect statistics, compare and contrast the information, and then propose a theory of why people live longer in these zones.  Dan Buettner, of course, summarizes this information in his books, but in our hypothetical application, we want students to think for themselves.

Herein lies the crux of our strategy.  We could simply present the information.  The geolocation technology would then serve as another form of page turner.  If, instead, we get students to explore, collect data and attempt to solve a problem, we have caused students to think and experience firsthand the thrill of discovery.

Please note that we’ve covered geolocation storytelling in the past.  If you’re not familiar with this technology, I encourage you to visit the links below:

Geolocation Storytelling:  Van Gogh in Arles  (an application)
https://www.oercommons.org/courses/vincent-van-gogh-s-arles/view

Geolocation Storytelling:  Van Gogh in Arles  (a mobile app)
https://apps.apple.com/us/app/van-gogh-in-arles/id1489831732?ls=1

Geolocation Storytelling:  Van Gogh in Arles  (an article)
https://lodestarlearn.wordpress.com/2019/11/07/geolocation-storytelling-van-gogh-in-arles/

Geolocation Storytelling (an article)
https://lodestarlearn.wordpress.com/2017/05/14/geo-location-storytelling/

The Van Gogh in Arles application supports students visiting Arles and discovering the places where Vincent Van Gogh lived and worked.  It also supports students visiting Arles from the comfort of their desks.  The example below is more like the latter: students do not need to visit the location.  From their desks, they explore a map, collect information, and visit the locations virtually.

How it’s done

So, let’s use the LodeStar eLearning authoring tool to set this up step by step.  (Full disclosure: I have been the chief architect of LodeStar and president of LodeStar Learning for the past two decades. LodeStar Learning offers a free trial of this tool at https://www.lodestarlearning.com so that you can immediately start a geolocation project.)

For this application I chose the ARMaker template.  The ARMaker template is geolocation aware.  The technology is baked right into the template.

LodeStar eLearning Authoring Tool (Version 8.0) Template Viewer

Typically in geolocation applications, one would type in a latitude and longitude of a location and then organize the page with text, graphics, imagery, audio and/or video.  When the student visits the location or, optionally, clicks on its marker on the map, the student is presented with the content.

Content on Text Pages can be tied to geographic locations by latitude and longitude

In our application, we don’t want students jumping from the map into the content.  Rather, we want the content to display on the map. 

In other words, our first page features instructions, but the instructions are not associated with a latitude or longitude.  Because these instructions are on the first page, they display when the application launches.

A page as it appears to the instructor

So, after I chose a layout, a theme, and a background image, our application looks like this when I preview it in a browser.

A page as it appears to the student

The astute LodeStar user will immediately notice some things are different.  I used Tools > Layouts to change the layout and background image.  I used Tools > Project settings to make other changes.

In Tools > Project Settings, I hid the navigation buttons; I allowed students to see the map; and I disabled students’ clicking on a marker to jump from map to content.

Here is where a different approach comes in.  The ‘Branches’ view and screenshot below begin to reveal the strategy.  I add a page with more background detail and link to it.  In LodeStar, any text on a Text page can link to any other page.  When students click on the words ‘click here’, they are taken to an information page.

I also linked to a Long Answer page.  That is where students will input their findings and their theory and submit their work to the instructor.

Also pictured is a Wall page and two more Text pages, on Sardinia and Okinawa.  The purpose of the wall is literally to wall off content: walled-off content can only be accessed with a link, a branch, or a third method that I’ll soon reveal.

Links can take students to other pages or external URLs.

Now here comes the fun part.

The Okinawa and Sardinia pages feature pie charts created by Blue Zones that show the percentages in an Okinawan or Sardinian diet that are made up of meat, fish, and poultry; legumes; added sugar; added fats; fruits; whole grains; and dairy.   In this application, I don’t make any statements.  I simply show the percentages.  I can also supply other information such as population density, family size, pollution index, climate data, and anything else that will enable students to make educated guesses about what contributes to longevity.

In our application, I’ll mark the Blue Zones.  When students click on a blue circle, the data will pop up.

Here is how I set it up:

  1. First, I added a Geolocation widget to a text page.  (LodeStar supports a variety of widgets that can be added to Text pages.)
  2. Second, I added a circle map object and set its properties (stroke color, fill color, radius, etc.) I could also add polygons, polylines, and rectangles.
  3. Third, I assigned a latitude and longitude to the circle to locate it on the map.

The Geolocation widget allows instructors to create circles, polygons, polylines, and rectangles, and display them on a map with precise coordinates

  4. Finally, I associated a click on the circle with content.  The content could be housed on any page, not only the page that houses the Geolocation widget.

Map objects can be connected to page content
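Under the hood, a clickable circle like this maps onto the Google Maps JavaScript API. LodeStar generates the equivalent for you; the sketch below is only an illustration, with showContent standing in for whatever displays the page content, and the center coordinates an approximation for Sardinia.

    // A Blue Zone circle: blue stroke and fill, radius sized in meters.
    const blueZone = new google.maps.Circle({
      map: map, // an existing google.maps.Map instance
      center: { lat: 40.12, lng: 9.01 }, // approximate center of Sardinia
      radius: 50000,
      strokeColor: '#0000ff',
      fillColor: '#0000ff',
      fillOpacity: 0.35,
    });

    // Clicking the circle calls up the matching content.
    blueZone.addListener('click', () => showContent('sardinia')); // showContent is hypothetical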

As pictured below, I also added latitude and longitude coordinates to the page.  This was not absolutely necessary.  Adding the coordinates at the page level (rather than the widget level) causes the red markers to display.  In Tools > Project Settings, I disabled the markers’ jump-to-content behavior; their only function here is to set the bounds of the map.  In our example, the markers conveniently set the boundaries around Okinawa and Sardinia.

(In normal geolocation applications, you would create content on a page and then set the latitude and longitude to mark the location on the map.  As I’ve mentioned, when students click on the marker or walk near the location, they are transported to the page.)

Pages can be tied to red markers by latitude and longitude

Here is what it looks like when the student clicks on ‘Show Map’.

Here is what it looks like, when the student clicks on a blue circle (i.e. a Blue Zone).

Now to explore further, the student drags the icon over Sardinia, and gets this:

The student has landed in a ‘street’ view of Sardinia and can look around.  Observant students will notice the water, the fishing boat, and the uneven terrain, all of which relate to factors that contribute to long life.

Once the student has made her observations and drawn some conclusions, she can submit her information to the instructor with the help of the long answer page.

Conclusion

One could easily imagine an application that simply displays the Blue Zones on a map with information on each site.  Our hypothetical application gives students something to do.  We challenge students to solve the mystery of long life that challenged Dan Buettner and the demographers Gianni Pes and Michel Poulain before him.  To present students with this challenge, we don’t need a degree in computer science or in art or in 3D modeling.  We need to boil things down to the essential elements of curiosity, exploration, challenge, suspense, resolution, and revelation.  An instructor’s efforts should be focused on organizing the background information, the data, the locations, and the assignment to make the most of what this technology affords us as educators.  As importantly, we want the technology to bend to our educational objective, and not the other way around.

You can picture using maps, graphical objects, and information in your own discipline. When applications are set up in meaningful, problem-solving contexts in biology, geology, social sciences, history, or whatever, the possibilities are, dare I say, boundless.

Seven Steps That Will Change How You Share eLearning

Introduction:

These steps might not rise to the level of the seven articles of the US Constitution but, hype aside, these seven steps will change how you store, version, publish, and share your work with the eLearning community.  If you attempt these seven steps, you might get frustrated and even fail at first.  But if you persist, in time you will become comfortable with the process and never do things the ‘old’ way again.

The Problem

Traditionally, instructors have worked on interactive learning activities and then published them to learning management systems like Moodle, Brightspace, and Blackboard.  The project sitting on the instructor’s hard drive lacks an easy-to-retrieve backup, and the project uploaded to the learning management system remains siloed.

By siloed, I mean the project is locked inside the LMS.  Normally, you can’t share a project that is sitting in an LMS with a broader audience or register it in learning object repositories like Merlot, OER Commons, and Curriki.  If you wish to publish to an Open Educational Resources (OER) repository, you must solve a number of problems:

Where does the project get stored? 

Most OER repositories are referential.  They don’t store; they reference material that is stored on the web somewhere outside the repository.  As an instructor who wishes to share with a larger community, you need a website.

How does the project get backed up? 

You need some sort of backup solution.

How does the project get versioned?

You need a version control system.  With a version control system you can revert changes,  create different versions of the same project, and much more.

How does the project get shared with other instructors? 

You could use Dropbox, Google Drive, or OneDrive, but none of these systems allows you to publish websites directly from their shared drives.

One solution doesn’t address all of these problems.  You need a combination of things — or, you need GitHub.

Introducing GitHub

GitHub offers you a place to store, secure, version-control, publish and share your project with others.

GitHub allows you to publish your projects through the web and, optionally, share your project for collaboration with other instructors.

In GitHub, you can store anything that you can create with tools like LodeStar, including learning activities, geolocation stories, interactive fiction, interactive case studies, WebQuests, and eBooks, all for a nominal subscription fee payable to GitHub.

Collaboration

For more advanced users, you can invite collaborators to your project.  With the GitHub Pro plan, you can keep your authoring files private but still publish the project as a website for your students, colleagues, and OER repositories to see.  Your project files stay private, the public sees only the end result (the HTML), and invited collaborators can still help you work on the project.

What is GitHub?

GitHub has traditionally been a place for computer programmers to store, secure, manage and share versions of their code.  It has been the place for openly sharing code.

The very mechanisms that enable programmers to share their code will enable  instructors to publish their projects to the internet, and secure, store, backup and, optionally, share their work with other collaborators.   By default, under the GitHub Pro plan, projects are secure and private.  The instructor then has control over whether or not the project is published to the internet as a website.

Technically, GitHub is a repository hosting service: cloud storage for code. That code can include projects created in LodeStar.  GitHub hosts your project and keeps track of the changes made in every submission or, in technical speak, commit. The service does this by using Git, a popular revision control system.

So GitHub is both powerful and sort of geeky sounding.  But, if instructors follow some very basic steps, they will harness the power of GitHub to store, publish, and optionally share their projects just like any computer programmer.

So how do I get started?

LodeStar 8.0 build 4 and later support GitHub.  This build is now available.

In broad terms, you create projects such as Interactive Case Studies in LodeStar.  Each project is matched with a GitHub local repository (folder).  As the project is being developed, you export the project to the local GitHub repository.  You use GitHub Desktop to commit the project to the master branch and then push the project to the repository in the cloud.  When you’re ready, you publish your project to the web.

It looks like this:

The workflow: export from LodeStar to a local repository, commit in GitHub Desktop, then push to the cloud repository
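GitHub Desktop hides the underlying Git commands. If you ever want to see what the app is doing for you, the equivalent command-line session looks roughly like this (the path is a placeholder):

    cd ~/Documents/GitHub/my-project   # the local repository folder
    git add .                          # stage the files LodeStar exported
    git commit -m "Initial commit"     # commit to the master branch
    git push origin master             # push the commit to the cloud repository

Nothing in the seven steps requires the command line; GitHub Desktop performs each of these actions with a button click.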

Getting Started in Seven Steps

Step 1. Install and sign into GitHub Desktop

Download GitHub Desktop from https://desktop.github.com/

GitHub Desktop supports both Windows and Mac.

Launch GitHub Desktop and follow the initial welcome screen to sign into your GitHub account. You’ll see a “Configure Git” step, where you can set your name and email address.  Be very careful when selecting a name: the name will appear in the web address for your projects.

Step 2. Create a new local repository

You’ll see a “Let’s get started!” view with several options, including creating a new repository or adding an existing one.

Select ‘Create a New Repository on your Hard Drive’

Remember our diagram?  You first create a local repository on your hard drive and then push the contents of that repository to the cloud.

Fill out the fields:

  • “Name” defines the name of your repository both locally and on GitHub in the cloud.
  • “Description” is an optional field that you can use to provide more information about the purpose of your project.
  • “Local path” sets the location of your repository on your computer. By default, GitHub Desktop creates a GitHub folder inside your Documents folder to store your repositories, but you can choose any location on your computer. Do not choose a LodeStar directory; you will want to keep LodeStar projects and your repositories separate until you are ready to export.  Write down the location of the local repository.  You will need to point LodeStar to that repository in a later step.
  • Your new local repository will be a folder inside the chosen location. For example, if you name your repository myEBook, a folder named myEBook is created inside the folder you selected for your local path.
  • Don’t worry about more advanced topics like Readme files, licensing and the ‘Ignoring files’ selection. Let’s stick to the basics.

Click Create repository.

When you have been working with GitHub for a while, you can add a new repository by selecting the ‘Add’ drop-down menu to the right of the current repository.

The ‘Add’ drop-down menu in GitHub Desktop

So that you can follow along, I will create a repository for the web version of the Arles Geolocation Story that I’ve written about in past blogs.

Here is what the dialog box looks like.  I’ll click on ‘Create Repository’ to create the folder.

The Create a New Repository dialog

Side note. Understand GitHub Desktop

Below the menu is a bar that shows the current state of your repository in GitHub Desktop:

Current repository shows the name of the repository you’re working on. You can click Current repository to switch to a different repository in GitHub Desktop.   Pictured below is the repository I was working on before transferring my Arles project to a repository.

The current repository bar in GitHub Desktop

In the screen shot above, I am working on a project named ‘CRM’.  That is the current repository that is selected.

If I clicked on the words ‘Current repository’, this is what I would see:

The repository list in GitHub Desktop

The Arles entry in the listing is my Arles mobile app.  What I am about to demonstrate is the creation of a repository for my Arles web app.  In the list are all my projects, each matched to its own local repository.  If I wanted to work with a different local repository, like Composter, I would click on its title to make it the current local repository.

Side note.  Ignore the concept of Branch right now.

Branches is a term used in versioning systems like Git and has nothing to do with LodeStar branches.  Essentially, you can clone your project and make independent changes to the clone (the branch) and the original.  For now, our current branch will always be master.  If you choose to become more skillful at using GitHub, you can learn all about branches and forks and pull requests.  But you don’t need to go there; making changes to the current branch labeled ‘master’ is sufficient.

The current branch, labeled ‘master’

Step 3.  Publish Repository – but not quite yet

You will see the Publish repository button on the right, but let’s leave that alone for a while.

The Publish repository button

You are done with the initial set up.  Now, we’ll get into the regular flow of exporting a project and then pushing the local repository to the cloud.

 

Step 4. Set up a LodeStar project to export to the local repository

You will need LodeStar 8.0 Build 4 or later for this step.

Open an existing LodeStar project or start a new one.  Once you are in the project, select Tools > Repository Option.

In the screenshot below, I chose the directory that I created in Step Two: Create a new local repository.  In my case it is c:\git\Arles-Web but more typically it will be [username]/Documents/Git/repository name.

By selecting the repository directory, you are associating the LodeStar project with this repository.  Click on the ‘Save Repository Directory’ button.

Selecting the repository directory in LodeStar

Please note: Each project is associated with its own repository directory.

Step 5: Work on your LodeStar Project then Export it to the Repository

You do not need to complete your project before exporting it to the repository.  Exporting to the repository, then pushing the changes to the cloud will serve as a backup of your project.  At this point, no one will see it but you.

Once you have done some work on your project, then select Export > Repository.

Fill in the fields and click on ‘Create Export’.

You are essentially copying your project to the local repository associated with this project.

LodeStar’s Export to Repository dialog

Disregard the exports directory that you see in the dialog above.  That is a more advanced topic.  The destination is the Repository Directory. You will see a confirmation that you are exporting to the repository directory in the following dialog.

Confirmation of the repository directory

After the export, go to GitHub desktop.

Step 6:  View the Changes in GitHub Desktop

The Changes view in GitHub Desktop will now show all of the files in your LodeStar project.

The Changes view listing the project files

I’m not displaying all of the files in the screenshot above.  There are 189 of them.

In future exports, only the files that have changed will be listed.  The Changes view shows changes you’ve made to files in your current branch but haven’t committed to your local repository. At the bottom, you’ll also notice a box with “Summary” and “Description” text boxes and a ‘Commit to master’ button.

Type in a sentence for ‘Summary’, and a detailed explanation in ‘Description’.  Your first commit might be labelled as ‘Initial Commit’.  You can repeat that in the description or be more descriptive about the project.

Initially there are 189 files in this project, which includes all of the data files, html, css, scripts, audio files, and imagery that LodeStar manages in a project.

Again, fill in the summary and description.

Filling in the commit summary and description

Click on the ‘Commit to master’ button.   This commits the files to the master branch in the local repository.  I know that I haven’t explained the concept of ‘master’,  but just know that, for our purposes, committing to the master is a good and necessary thing.

After all of the changes are processed, click on the Publish Repository button to send a copy of your local repository to the cloud.

The Publish repository button after committing

You will see this dialog:

The Publish repository dialog

Review the name and description.  Keep the code private.  That means we are keeping the cloud version of this project private.   If you subscribe to GitHub at the Pro level, you can keep your repository private, but still publish to the web.  You cannot do this with the free version.   You must make your repository public in order to publish your web page.

Please note:  If you make your repository public, anyone can copy your project to their own.

 The Pro plan allows you to have your cake and eat it too.  You can keep your repository private, but still publish your project to the web.  In other words you can create a website from your private repository.  Specifically, you can create a public website from the master branch of your private repository.

You can create a private repository with the free plan, and then, when you are ready, upgrade the free plan to the pro plan.  (I’ll show you how at the end of this article.)   At the time of this writing, the Pro Plan is $4 per month.

Step 7:  Publish the index.html page

The index.html page is the launch page for your project.  It is currently private.

To see your project in the cloud, click on the ‘View on GitHub’ button as seen below.

image15

This is what you will see when you get to the cloud:

image16

Pictured above is the typical appearance of a GitHub project in the cloud repository.  It is starting to look really geeky and spooky, but don’t worry.  It’s just heads on stakes.  Ignore everything for now and just click on ‘Settings’.

In Settings, scroll down until you see GitHub Pages.  If you are on the Pro plan, you can now select the master branch as the source for your GitHub Pages site.  This means that GitHub will publish the index.html file that LodeStar automatically committed to master.  Remember, ‘master’ is good.  If you’re not on the Pro plan, I’ll show you how to upgrade at the end of this article.

Publication takes a while the first time.  The message reads:

Your site is ready to be published at https://bbilyk1234.github.io/Arles-Web/

Update:  the location is now

https://lodestarlearning.github.io/Arles-Web/index.html

 

Once the site is ready, the message will change.  The site will be slo-o-o-w the first time you access it, but that will change once GitHub caches your files for quicker access.

image17

How to upgrade from GitHub Free to GitHub Pro

At the time of this writing, GitHub Pro users are billed $4 per month.  With GitHub Free, you can create private repositories but not publish them to the web.  You can publish public repositories, but then any GitHub subscriber can copy your project.

To upgrade, log in to GitHub in the cloud at:

https://github.com

Click on the rightmost menu.  See the arrow on the far right in the picture below.  Then select ‘Settings’.

image18

Select Billing from the menu on the left, then click on the green Upgrade button.   GitHub Pro is likely all that you need.  It enables you to keep your project repositories private, but still publish them to the web.

image19

Uploading Changes

Once you’ve committed a project and uploaded it to the cloud repository, you are bound to make changes.

In my example, after I uploaded the Arles-Web project, I decided to add a link to the mobile app version.

After making changes to your project, do the following:

  1. Export to the Repository again.

image20

  2. Open GitHub Desktop and make your project the current repository.  I’ll make Arles-Web the current repository.  View the changes, but be patient; it might take a couple of minutes for the changes to appear in the repository.  The list of changed files will update.

image21

  3. Fill in the summary and description for this commit.  You do this to describe every commit.

image22

  4. Click on ‘Commit to master’.
  5. Now here is a new step! Click on ‘Push origin’, either at the top of the window or via the blue button; both are pictured below.  Technically, this is called pushing the commit to the origin, but basically you are copying the changed files in the local repository to the cloud repository.  If you published your project to the web in a previous step, your changes will be published almost instantly.  (A scripted equivalent of this step appears after the screenshot below.)

image23
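For the curious, ‘Push origin’ is also ordinary git work behind a button.  A minimal sketch of the same step with the simple-git package, again an illustration under my own assumptions rather than a required step; the path is my example repository:

```javascript
// Sketch only: the scripted equivalent of GitHub Desktop's 'Push origin' button.
// Assumes Node.js, `npm install simple-git`, and a repository that was already
// published (so the 'origin' remote exists). The path below is my example repository.
const simpleGit = require('simple-git');

simpleGit('c:/git/Arles-Web')
  .push('origin', 'master') // copy the local commits to the cloud repository
  .then(() => console.log('Pushed; a published GitHub Pages site updates shortly.'))
  .catch(console.error);
```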

Conclusion

Seven steps will change your life.  At least they will change your approach to sharing eLearning.  You will be in control of your work like never before.  You will be able to safely back up your files, place them under version control, keep them private, publish them, and share them with other instructors – all in one amazing platform, GitHub.

Once you are confident that you have mastered the basic steps, you can read dozens of articles and watch dozens of YouTube tutorials on how to do the fancy stuff in GitHub.  Remember, however, that if you accomplish these seven steps, you’ve accomplished a lot.  Those seven steps alone will change how you work and interact with the eLearning community.

 

Geolocation Storytelling: Van Gogh In Arles

Introduction:

Because this is so personal, I’ll introduce myself.  I am Robert “Bob” Bilyk,  founder of LodeStar Learning.  I am passionate about the project I am about to describe and a proponent of instructional technology in general.

I recently heard an interview with Christopher Kimball, formerly of America’s Test Kitchen.  Two things he said stuck with me: First, he described himself as a home cook rather than a chef.  Second, he talked about introducing recipes to other home cooks that were slightly out of reach of their comfort zone and knowledge but not way out of reach.

My efforts are a modest version of that.  I’m interested in helping online instructors reach out and embrace new ways of interacting with their students.  I’m trying to connect to that inner instructional designer in all online teachers. And I’m trying to introduce strategies that are within reach but may require a stretch.

Geolocation storytelling is one such strategy.  It’s an incredible strategy that, I believe, is within reach of all online instructors.  Geolocation storytelling works for a broad range of disciplines: literature, history, biology, environmental studies, communications, urban planning, and on and on – wherever location is relevant. I use the term storytelling very loosely.  It can be fiction or non-fiction.

Geolocation storytelling reveals something about a location when the student visits the site either physically or virtually.   The student can see or hear the narrative on her smartphone when she physically visits a site or clicks on a map marker.
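Under the hood, the physical-visit mode can lean on the browser’s standard Geolocation API.  Here is a minimal sketch of watching a student’s position; the target coordinate is only an example, and the actual trigger logic is hinted at in a comment:

```javascript
// Sketch only: follow the student's position with the standard Geolocation API.
// The target coordinate is an example; a real tour would compare each reading
// to the target's geofence (see the distance sketch later in this article).
const target = { lat: 43.678610, lng: 4.630738 };

navigator.geolocation.watchPosition(
  (position) => {
    const { latitude, longitude } = position.coords;
    console.log(`Student is at ${latitude}, ${longitude}; target is ${target.lat}, ${target.lng}`);
  },
  (error) => console.warn('GPS unavailable:', error.message),
  { enableHighAccuracy: true } // request the best fix the device can give
);
```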

In this article I intend to share a project that I’m currently working on.  I intend to disclose the inspiration of the project, the brainstorming, and the nuts and bolts of how I am putting it all together.  It’s not completed. It truly is a work in progress.

 

Screenshot of a Geolocation project.

Screenshot of one page of a LodeStar Geolocation storytelling project situated in Arles, France, and focused on Vincent Van Gogh, the Dutch painter.

 

The idea

Recently my wife and I traveled to Iceland and France.  We had several ideas in mind for geolocation stories — ideas that would match up to educational needs.  Some of our ideas turned out to be impractical because of cell phone coverage issues. But one of our ideas hit the jackpot.

The keys to a good geolocation story are:

a) locations with a strong cellular signal;
b) exterior locations with line of sight to the sky for the Global Positioning System (GPS) signal;
c) a strong educational objective tied to location; and
d) somewhere to house the project, such as a learning management system.

For us, all of the elements came together in Arles, France.  Before arriving in Provence, in southern France where Arles is located, I imagined a GPS-guided walking tour of all the places that Vincent Van Gogh painted and sketched in Arles.  But I didn’t know whether or not it would be practical.

As it turned out, it was not only a practical idea (cell phone coverage was great and the buildings didn’t obstruct the satellite signal) but one that needed to be done.

The need

I’m sure there are dozens of guidebooks, brochures, and pamphlets on Van Gogh’s Arles.  We didn’t immediately find any.  The tourist office had a nicely illustrated guide in French, which we didn’t buy.  Instead, we thought we’d begin at the obvious starting point: Fondation Vincent Van Gogh.

The mission of Fondation Vincent Van Gogh is wonderful — but it houses only a few of Van Gogh’s paintings.   If you are fresh off the train, boat or motorway, full of anticipation of all things Van Gogh, the Fondation is a bit of a disappointment.  (They do sell rubber ear erasers, however.)

We then sought out the next thing we knew.  The Yellow House!  That’s where Van Gogh stayed and painted and decorated in anticipation of the arrival of a fellow artist, Paul Gauguin.

As we soon learned, the Yellow House doesn’t exist.  We asked around. No Yellow House.

Arles is a wonderful place.  But it is difficult, at first, to make that Van Gogh connection.   If you know where to go, you’ll find panels of Van Gogh’s work at the locations where he painted some of his most famous works.  However, you need a guide to find them. Arles is a big place. The panels are helpful but you need to know something about Van Gogh to really appreciate them.

The opportunity

So here is the crux of the thing.  Van Gogh painted in locations. Location — with its people, rooted in the farms and neighborhoods, its colors, patterns, streets, trees, and flora — is an important part of the story.  As important is the perspective and knowledge of the educator. What the educator can bring to the story, superimposed on location, is the opportunity.   In our project, visitors to Arles would be guided by the story to important places and then presented with information related to the places.

The intrepid educator

I’m not a Vincent Van Gogh scholar.  In contrast, I think of the scholarship of educators with whom I have worked.  I think of educators like Dr. Carolyn Whitson, at Metropolitan State University, who recently published an eBook titled ‘Understanding Medieval Last Judgment Art’*, and I imagine what they could do with geolocation storytelling.  This strategy is within reach of educators like Dr. Whitson because she teaches online, she uses technology, and she has already embraced eBook technology (and other technologies) to make her text and photography accessible to a wide range of students.  (The link to her book can be found at the end of this post.)

I’m not a Van Gogh scholar, but I am an enthusiast.  Since I was a teen, I’ve been drawn to his sketches, paintings and personal life.  His ministry in the Borinage coal mining district, ‘The Potato Eaters’ and the sketch ‘Sorrow’ with its accompanying tortured love story hooked me from an early age.   His hope of renewal in Arles and the vibrancy of his paintings and the eventual devastation of his dreams and aspirations, in various ways, inspired me. I carved wood, painted, and wrote stories under the same melancholic humor as the artist.

And so it was with much enthusiasm that I approached this geolocation storytelling project.  But recognizing that I am not a Van Gogh scholar, I limited myself to these few simple elements: location, Van Gogh’s own words and paintings, photography, and (sparingly) some shared insights from an art historian, the late Jean Leymarie.  I added a few details to help bring significance to the location but kept those to a minimum.

Less is More

From an instructional perspective, less is more.  Writers like Leymarie can bring boatloads of insight to the subject, but what do the paintings and locations evoke in students?  Too much information in geolocation storytelling cuts off the blood supply.  The student needs to be aware of her surroundings, with a modicum of interpretive assistance.  At several of the Arles locations, what is interesting is the contrast between the scene and the paintings.  How might students account for the contrast?  In places, like the Rhone River, the scene is not nearly as interesting as the painting.  In other places, life imitated art: the hospital garden (now the library garden) and the Café Van Gogh had to be decorated to match the paintings.  In short, geolocation storytelling can be the convergence of location, media, the educator’s perspective, and the students’ own thinking and imagination.

The Nuts and Bolts

The coordinates

To produce the geolocation tour of Arles, I used the ARMaker template in LodeStar 7.3.  Other tools are available that will create similar projects, but I’ll describe the tool that I designed and know.

Each page produced with the ARMaker template includes a rich text editor and geolocation fields that I’ll explain in a minute.  In the authoring tool, a page looks like this:

 

YellowHousePage

 

Note where the content sits, and where the coordinates are held.

To the student, the page will look like this:

 

yellowhousepageforstudents

 

The images that appear as thumbnails in the authoring tool are now rendered in full size in a slide viewer.  The coordinates now appear as markers on a map.

map

The student can either walk to the site and have the page content called up automatically or, if the instructor allows, simply click on a marker to bring up the content associated with it.

In other words, geolocation storytelling can require students to visit sites, or it can organize content into a virtual tour that students can take from the comfort of the library or their homes.
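In the virtual mode, a marker just needs a click handler.  ARMaker is built on Google’s map technology, so a rough sketch with the Google Maps JavaScript API might look like the following; showPage() and the ‘map’ element id are hypothetical placeholders, and the coordinates are examples:

```javascript
// Sketch only: a clickable marker with the Google Maps JavaScript API.
// showPage() and the 'map' element id are hypothetical placeholders.
const map = new google.maps.Map(document.getElementById('map'), {
  center: { lat: 43.676600, lng: 4.627800 }, // Arles, approximately
  zoom: 15
});

const marker = new google.maps.Marker({
  position: { lat: 43.678610, lng: 4.630738 }, // an example coordinate
  map: map,
  title: 'A Van Gogh site'
});

// Clicking the marker brings up the page content associated with it.
marker.addListener('click', () => showPage('van-gogh-site'));
```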

In our project, we actually traveled to Arles to see the sights first hand and designed the application for a guided walking tour.  We meandered the streets, took photographs, took GPS readings, and absorbed the sights and sounds.  But a lot of this can be assembled by the instructor without leaving her office.

The GPS readings can just as accurately be obtained from Google Maps.  In the screenshot below, I invoked the popup by pressing and holding the mouse button on a location.

If you are interested in this approach, bring up a Google Map, click and keep your mouse button down.  If nothing pops up, click on a street away from any existing Google markers or building outlines.

The number that appears at the bottom of the popup is a coordinate pair.  For example, 43.678610, 4.630738 means roughly 43.6 degrees latitude and 4.6 degrees longitude.  These coordinates have six digits to the right of the decimal point.  You need this level of precision so that your coordinates fall within a few feet of your target location.  Click on the coordinate and it appears at the top left of the screen, in a format that is easy to copy to your clipboard.

GoogleMap

Google map with the coordinates popup. Incidentally, La Maison Jaune is not the Yellow House and we only encountered Gilets Jaunes once and not in Arles.
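If you are curious why six decimal places are enough: one degree of latitude spans roughly 111 kilometers, so a step in the sixth decimal place is about 11 centimeters.  A quick back-of-the-envelope check (the meters-per-degree figure is approximate):

```javascript
// Sketch only: why six decimal places of latitude land within a few feet.
const METERS_PER_DEGREE_LAT = 111320; // approximate; it varies slightly with latitude
const sixthDecimalStep = 0.000001;

console.log(METERS_PER_DEGREE_LAT * sixthDecimalStep); // ≈ 0.11 meters, or about 4 inches
```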

 

The following is a screenshot of the LodeStar page with the coordinates pasted in.  The next thing to add is proximity: how close students need to be to the location before they cross an invisible geofence that triggers the display of content.

coordinates
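LodeStar performs the geofence test for you, but the underlying arithmetic is the standard haversine distance between two coordinates.  The following is my own illustration of that test, not LodeStar’s code; the student’s position and the 20-meter radius are hypothetical:

```javascript
// Sketch only: a haversine geofence test (not LodeStar's actual code).
const EARTH_RADIUS_M = 6371000; // mean Earth radius in meters

function distanceMeters(lat1, lng1, lat2, lng2) {
  const toRad = (deg) => (deg * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLng = toRad(lng2 - lng1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLng / 2) ** 2;
  return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(a));
}

// Hypothetical GPS reading a short walk from the example coordinate.
const studentLat = 43.678700;
const studentLng = 4.630800;
const proximityMeters = 20; // the geofence radius entered in the proximity field

const d = distanceMeters(studentLat, studentLng, 43.678610, 4.630738);
console.log(d <= proximityMeters ? 'Show the content' : 'Keep walking');
```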

The content

The content can be in the form of audio, imagery, text, timelines, questions, and other assessment exercises.

In the following screenshot, the page features text and an inserted widget.  I clicked on the black sprocket icon, which brought up all of the widgets that can be inserted into the text.  I chose the image slider widget.

LodeStarWidgetDropDown

 

From there I could insert my images, caption them and dictate how they would be displayed – with a display list or without.

LodeStarImageWidget

 

The result could be something like this:

 

LodeStarScreenShot.png

 

Audio can be added with the help of the audio icon at the top right of a text page and the audio dialog, which supports the import of MP3 files. (Note that autoplay policies in browsers prevent sound files from playing automatically unless the user has interacted with the application first.  Browser policies differ.)

 

audiodialog
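Given those autoplay policies, the safe pattern is to start narration from a user gesture and to handle a refusal gracefully.  A minimal sketch, with a hypothetical button id and file name:

```javascript
// Sketch only: start audio from a user gesture to satisfy browser autoplay policies.
// The button id and file name are hypothetical placeholders.
const narration = new Audio('narration.mp3');

document.getElementById('play-button').addEventListener('click', () => {
  narration.play().catch((err) => {
    // Some browsers may still refuse; warn rather than fail silently.
    console.warn('Playback blocked:', err.message);
  });
});
```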

 

Finally, ARMaker (our template) is built on Google technology, and so it supports what Google has afforded us, including the ability to map our location and mark it.  In this case, I zoomed way out to a global view.  My current position is the black dot.  Arles is the red marker.  Normally, the student uses ‘My Location’ to see how close they are to one of the locations.  The screenshot below shows that I’m 28,073,020 feet away from the nearest location, which is the Langlois Bridge, on the outskirts of Arles.  I have a bit of a walk ahead of me.

 

MyLocation

 

Google technology also allows us, in many locations, to switch to the satellite view or to drop down to the street view.

 

Satellite View

satelliteView

Satellite view of Arles

 

Street View

streetview

Street view of Place du Forum, in Arles

 

The red marker was placed on the street view by our coordinates in the LodeStar tool.  (LodeStar interacts with the Google Map.)  The white arrows and our mouse clicks enable us to navigate the streets.  In this view, we are in the Place du Forum, a plaza that dates back to Roman times.  We are facing the Café Van Gogh (the yellow building), the location of a very famous and wonderful Van Gogh painting, ‘Café Terrace at Night’, which the artist described in a letter to his brother.  The second story of the café recreates the scene of another famous painting, ‘The Night Café’.  The original site, the Café de la Gare, was near the Yellow House and is now gone.

Conclusion

All of this can be housed in the instructor’s learning management system: D2L Brightspace, Moodle, Blackboard, Canvas, Schoology, wherever.  In fact, in order for the application to receive location data, it must be launched from an address that begins with https://.  The ‘s’ means secure.  All learning management systems use this protocol to secure student data.
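You can even verify the requirement in a few lines of JavaScript, since browsers expose geolocation only in a secure context.  A quick sketch:

```javascript
// Sketch only: geolocation is available only over https (a secure context).
if (window.isSecureContext && 'geolocation' in navigator) {
  navigator.geolocation.getCurrentPosition(
    (pos) => console.log('Location available:', pos.coords.latitude, pos.coords.longitude),
    (err) => console.warn('Location denied or unavailable:', err.message)
  );
} else {
  console.warn('Launch this page from an https:// address, such as your LMS.');
}
```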

So, technical stuff aside, imagine the possibilities.  By combining location with the instructor’s perspective, or with prima facie information shared through text, imagery, and audio, educators can use geolocation storytelling to transport their students to another place, or to get online students out of the house and into a neighborhood location of scientific, social, historical, or artistic interest.

Again, the possibilities are endless.

As for ‘Van Gogh in Arles’, this project will be completed and published shortly after Thanksgiving, 2019.  You won’t need to go to Arles to view it  — but I highly recommend the trip.

 

References