Technology and Great Learning Experiences

Introduction:

As instructional designers, we understand that technology (even cool technology) can never substitute for the elemental motivations and emotions of a student engaged in a meaningful eLearning interaction.  Curiosity, exploration, challenge, suspense, resolution and revelation are all examples of experiences one strives to conjure when designing interactions.  Technology alone, once the novelty has worn off, doesn’t cut it.  Technology is just a means to an end – what researchers like to call an affordance.  Technology affords us the opportunity to create experiences that stimulate curiosity, present challenges and encourage learning.  Technology might take the form of videos, animations, audio, elaborate layouts, interactive maps, virtual worlds, and on and on.  But if it doesn’t motivate or result in an emotional experience or elicit the triumph of winning a challenge, or an ‘aha’ moment, the technology will soon leave learners cold. 

I learned that lesson from a computer game I played in the 80s.  It was called Space Quest and it was tremendously fun.  The first versions of the game were in black and white with simple graphics.  You had to solve a series of challenges to stay alive.  Those early versions were addictive.  A group of friends and I tried to solve the challenges together.  When it got too late to keep playing, everyone went home, only to return the next day.

Later versions of Space Quest began using a 256-color palette.  The graphics and animation became more colorful but often left you in this passive mode, more like watching a movie than playing an interactive game.  The first exposure to new technology was kind of exciting – but then the ‘movies’ lost their appeal. 

I think about a very exciting technology, geolocation storytelling, in the same way.  The technology is becoming more and more seductive.  Interactive maps can now feature 3D buildings, customized maps, and most recently, game objects.  You can create 3D models of dinosaurs, for example, and have them suddenly appear when you reach a location, like Central Park.  Imagine it: dinosaurs in Central Park, or on the Mississippi River for that matter.  Just as interesting, you can move around in real space and see your location updated on a fictional map.  But what does all this mean to the busy instructor?

The answer is, typically, very little. Certainly, instructors and students can purchase or subscribe to off-the-shelf, ready-made products that use these technologies.  The benefits, however, will only outweigh the costs if the technology satisfies a significant instructional goal.  Often, there isn't a good fit, and that's why I am more interested in homespun solutions.  I am interested in the instructor as creator and in what the instructor can create.  I am more interested in how instructors can use sophisticated technology simply and get students to explore, complete a challenge, or experience that 'aha' moment in a manner that precisely matches a course objective.

A simple but effective example

The following example illustrates how instructors can use basic geolocation technology while avoiding two pitfalls: spending time without a commensurate return on investment, and failing to get students to think, solve problems, explore, or gain a new insight or perspective. You will need to use your imagination to see how the underlying principle applies to your situation.

The example will show how you can draw on a map and relate that to content that will help students solve a problem. 

The example is inspired by Blue Zones, places where people live longer.  Blue Zones was developed by Dan Buettner, whose work (e.g. AfricaQuest, MayaQuest, Blue Zones) typically fosters the experiences that I'm discussing: curiosity, exploration, decision-making, and problem-solving.  Visit https://www.bluezones.com/ for more information on his latest project.

To make our example come alive, I'll choose two of the original five blue zones: Okinawa, Japan and Sardinia, Italy.  In a real application, I would choose five or more locations.  Our objective is to get students to visit the sites, look around with the help of Google Street View, collect statistics, compare and contrast the information, and then propose a theory of why people live longer in these zones.  Dan Buettner, of course, summarizes this information in his books, but in our hypothetical application, we want students to think for themselves.

Herein lies the crux of our strategy.  We could simply present the information.  The geolocation technology would then serve as another form of page turner.  If, instead, we get students to explore, collect data and attempt to solve a problem, we have caused students to think and experience firsthand the thrill of discovery.

Please note that we’ve covered geolocation storytelling in the past.  If you’re not familiar with this technology, I encourage you to visit the links below:

Geolocation Storytelling:  Van Gogh in Arles  (an application)
https://www.oercommons.org/courses/vincent-van-gogh-s-arles/view

Geolocation Storytelling:  Van Gogh in Arles  (a mobile app)
https://apps.apple.com/us/app/van-gogh-in-arles/id1489831732?ls=1

Geolocation Storytelling:  Van Gogh in Arles  (an article) https://lodestarlearn.wordpress.com/2019/11/07/geolocation-storytelling-van-gogh-in-arles/

Geolocation Storytelling (an article)
https://lodestarlearn.wordpress.com/2017/05/14/geo-location-storytelling/

The Van Gogh in Arles application supports students' visiting Arles and discovering the places where Vincent Van Gogh lived and worked.  It also supports students' visiting Arles from the comfort of their desks.  The example below is more like the latter.  Students do not need to visit the location.  From their desks, they explore a map, collect information, and visit the locations virtually.

How it’s done

So, let's use the LodeStar eLearning authoring tool to set this up step by step.  (Full disclosure: I have been the chief architect of LodeStar and president of LodeStar Learning for the past two decades. LodeStar Learning offers a free trial of this tool at https://www.lodestarlearning.com so that you can immediately start a geolocation project.)

For this application I chose the ARMaker template.  The ARMaker template is geolocation aware.  The technology is baked right into the template.

LodeStar eLearning Authoring Tool (Version 8.0) Template Viewer

Typically in geolocation applications, one would type in a latitude and longitude of a location and then organize the page with text, graphics, imagery, audio and/or video.  When the student visits the location or, optionally, clicks on its marker on the map, the student is presented with the content.

Content on Text Pages can be tied to geographic locations by latitude and longitude
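For readers who like to peek under the hood, here is a minimal sketch, in plain JavaScript, of the kind of association the template manages for you.  This is not LodeStar's internal format; the coordinates, the 'content' element, and the function name are purely illustrative.

// A hypothetical list of pages, each tied to a coordinate pair.
const pages = [
  { title: "Okinawa, Japan",  lat: 26.33, lng: 127.80, html: "<p>Okinawa diet data...</p>" },
  { title: "Sardinia, Italy", lat: 40.12, lng: 9.01,   html: "<p>Sardinia diet data...</p>" }
];

// When the student clicks a marker (or walks within range of it),
// display the content of the matching page.
function showPageForLocation(title) {
  const page = pages.find(p => p.title === title);
  if (page) {
    document.getElementById("content").innerHTML = page.html;  // assumes a 'content' element exists
  }
}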

In our application, we don’t want students jumping from the map into the content.  Rather, we want the content to display on the map. 

In other words, our first page features instructions, but the instructions are not associated with a latitude or longitude.  Because these instructions are on the first page, they display when the application launches.

A page as it appears to the instructor

So, after I chose a layout, a theme, and a background image, our application looks like this when I preview it in a browser.

A page as it appears to the student

The astute LodeStar user will immediately notice some things are different.  I used Tools > Layouts to change the layout and background image.  I used Tools > Project settings to make other changes.

In Tools > Project Settings, I hid the navigation buttons; I allowed students to see the map; and I disabled students’ clicking on a marker to jump from map to content.

Here is where a different approach comes in.  The ‘Branches’ view and screenshot below begin to reveal the strategy.  I add a page with more background detail and link to it.  In LodeStar, any text on a Text page can link to any other page.  When students click on the words ‘click here’, they are taken to an information page.

I also linked to a Long Answer page.  That is where students will input their findings and their theory and submit their work to the instructor.

Also pictured is a Wall page and two more Text pages, on Sardinia and Okinawa.  The purpose of the wall is literally to wall off content.  Walled-off content can only be accessed with a link, a branch, or a third method that I'll soon reveal.

Links can take students to other pages or external URLs.

Now here comes the fun part.

The Okinawa and Sardinia pages feature pie charts created by Blue Zones that show the percentages in an Okinawan or Sardinian diet that are made up of meat, fish, and poultry; legumes; added sugar; added fats; fruits; whole grains; and dairy.   In this application, I don’t make any statements.  I simply show the percentages.  I can also supply other information such as population density, family size, pollution index, climate data, and anything else that will enable students to make educated guesses about what contributes to longevity.

In our application, I’ll mark the Blue Zones.  When students click on a blue circle, the data will pop up.

Here is how I set it up:

  1. First, I added a Geolocation widget to a text page.  (LodeStar supports a variety of widgets that can be added to Text pages.)
  2. Second, I added a circle map object and set its properties (stroke color, fill color, radius, etc.) I could also add polygons, polylines, and rectangles.
  3. Third, I assigned a latitude and longitude to the circle to locate it on the map.

The Geolocation widget allows instructors to create circles, polygons, polylines, and rectangles, and display them on a map with precise coordinates

  4. Finally, I associated a click on the circle with content.  The content could be housed on any page, not only the page that houses the Geolocation widget.

Map objects can be connected to page content
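Because the template is built on Google's mapping technology, the circle that the widget creates corresponds roughly to a circle in the Google Maps JavaScript API.  The sketch below only illustrates that idea; it is not LodeStar's generated code.  The map variable, colors, and radius are assumptions, and showPageForLocation() is the stand-in function from the earlier sketch.

// 'map' is assumed to be an existing google.maps.Map instance.
const blueZone = new google.maps.Circle({
  map: map,
  center: { lat: 40.12, lng: 9.01 },   // Sardinia, approximately
  radius: 50000,                       // meters
  strokeColor: "#1565c0",
  strokeWeight: 2,
  fillColor: "#2196f3",
  fillOpacity: 0.35
});

// Associate a click on the circle with page content.
blueZone.addListener("click", () => showPageForLocation("Sardinia, Italy"));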

As pictured below, I also added latitude and longitude coordinates to the page.  This was not absolutely necessary.  Adding the coordinates at the page level (rather than the widget level) causes the red markers to display.  In Tools > Project Settings, I disabled the markers.  Their only function is to set the bounds of the map.  In our example, the markers conveniently set the boundaries around Okinawa and Sardinia.

(In normal geolocation applications, you would create content on a page and then set the latitude and longitude to mark the location on the map.  As I’ve mentioned, when students click on the marker or walk near the location, they are transported to the page.)

Pages can be tied to red markers by latitude and longitude

Here is what it looks like when the student clicks on ‘Show Map’.

Here is what it looks like when the student clicks on a blue circle (i.e., a Blue Zone).

Now to explore further, the student drags the icon over Sardinia, and gets this:

The student has landed in a 'street' view of Sardinia and can look around.  Observant students will notice the water, the fishing boat, and the uneven terrain, all of which relate to factors that contribute to long life.

Once the student has made her observations and drawn some conclusions, she can submit her information to the instructor with the help of the long answer page.

Conclusion

One could easily imagine an application that simply displays the Blue Zones on a map with information on each site.  Our hypothetical application gives students something to do.  We challenge students to solve the mystery of long life that challenged Dan Buettner and the demographers Gianni Pes and Michel Poulain before him.   To present students with this challenge, we don’t need a degree in computer science or in art or in 3D modeling.  We need to boil things down to the essential elements of curiosity, exploration, challenge, suspense, resolution and revelation.  An instructor’s efforts should be focused on organizing the background information, the data, the locations and the assignment to make the most out of what this technology affords us as educators.  As importantly, we want the technology to bend to our educational objective–and not the other way around.

You can picture using maps, graphical objects, and information in your own discipline. When applications are set up in meaningful, problem-solving contexts in biology, geology, social sciences, history, or whatever, the possibilities are, dare I say, boundless.

Online Learning After COVID

Robert N. Bilyk

As a Learning and Development Specialist, eLearning Toolmaker, former director of a Center for Online Learning, and founder of Cyber Village Academy, I’m observing education’s response to the current crisis with profound regret.

Introduction

I feel like the curator of an art gallery.  I've grown a collection of fine art and agonized over every detail of its presentation.  Then disaster strikes, and my exhibition space is overrun by a thousand people who evacuate to the art gallery seeking shelter.  The space is suddenly overcrowded, the toilets overflowing, and the art hidden behind sweltering bodies.  When the danger passes, those thousands will say that they've experienced my most treasured art gallery firsthand.

The COVID Crisis — Our Emergency Response

During the COVID crisis, thousands of teachers and tens of thousands of students evacuated to emergency online learning, and they can now say that they've experienced online learning firsthand.

Image Credit – Dobrislava (Wikimedia Commons)

But in a recent survey of 7,238 K12 teachers (Network of Public Educators, April 2020), here is what they had to say about it:

  • 56% of the teachers felt overwhelmed by distance teaching
  • 55% of teachers said that their students will be further behind than in the classroom
  • A month into the crisis, 25% of teachers (n = 7,238) hadn't determined how they would assess students' work.
  • 26% of teachers held video conferences with their students once per week and 36% did not video conference at all.
  • 30% struggled to adjust to distance learning
  • 56% of students struggled to adjust to distance learning
  • 64% used Google Classroom

The results are not good, and probably not unexpected.  The results in higher education are no better. In a recent survey of 826 faculty members conducted by the Babson Survey Research Group:

  • 55% of teachers (who had no previous online experience at the institution) lowered their expectations for the amount of work that students would be able to do
  • 34% of teachers (who had no previous online experience at the institution) lowered their expectations for the quality of work that students would be able to do

Educause writes:

Online learning carries a stigma of being lower quality than face-to-face learning, despite research showing otherwise. These hurried moves online by so many institutions at once could seal the perception of online learning as a weak option, when in truth nobody making the transition to online teaching under these circumstances will truly be designing to take full advantage of the affordances and possibilities of the online format. (Educause Review, March 2020)

The authors of the Educause article define what we are seeing. They propose a specific term for the type of instruction being delivered during the COVID crisis. They call it emergency remote teaching.

Online Learning versus Emergency Remote Teaching

I subscribe to that view.  Online learning and emergency remote teaching are not the same.  What we conclude about the one can’t be generalized to the other.

Effective and engaging online learning requires that a lot of things come together and work in harmony before we can hope for good outcomes.  It also has to start with inspiration and vision, not with an emergency measure.

My own inspiration comes from a deep appreciation of individualized instruction, adaptive learning, the power of interaction, the power of challenge, and the satisfaction of grasping new concepts.  Online learning has a place in every curriculum regardless of the primary modality, lecture or otherwise.

To underscore the distinction between online learning and emergency remote teaching, the Educause article cites Learning Online: What Research Tells Us about Whether, When and How.  In Learning Online, the authors identify nine dimensions, each with its own options, reflecting the complexity of the design and decision-making process.

The nine dimensions are modality, pacing, student-instructor ratio, pedagogy, instructor role online, student role online, online communication synchrony, role of online assessments, and source of feedback.

The authors also made another point that has stuck with me.

Yet an understanding of the important differences has mostly not diffused beyond the insular world of educational technology and instructional design researchers and professionals. 

What Lessons Have We Learned?

As a member of the instructional design community, I'm challenged by the question of how we can break out of this insular world and really make an impact.  Many are feeling the pain of this crisis and are dissatisfied with online teaching and learning.

From parents, I’ve heard:  Kids are getting stir crazy.  Parents are in a power struggle with their kids.  Both teachers and parents struggle to find good sites, good videos, and good activities.  Kids need organizers – from lockers to bulletin boards.

I’ve also heard a few triumphs.  Recently, a parent showed me an obstacle course that a seven year old built.  A required video recording showed the kid scrambling through this course three times.  To me this represented design, physical education, and communication all rolled into one activity.  To me, it seemed very clever and a credit to the teacher.

I have other observations.

Online learning needs an organizer: a place where students can find assignments, submit work, get feedback, and so on. It is amazing to me that K12 teachers adapted so quickly to such a variety of systems, but they were on a learning curve at the worst possible time.

In the survey, 64% of the teachers used Google Classroom.  Google Classroom wasn't around before May 2014.  The statistic surprised me, even though the reasons are obvious: Google Classroom is free and simple.  A fifteen-minute YouTube video can get teachers and students up and running.  In time, they can master slightly more sophisticated tasks like sharing editing rights to documents, collaborating, and integrating other Google applications.

Google Classroom lacks many of the features of learning management systems like Moodle, Schoology, and D2L Brightspace, but again, its simplicity is attractive, especially during an emergency response.

After the COVID crisis, will simple and free be sufficient or will K12 school leaders inventory what worked and what was missing?  Will teachers get the professional development and support that they deserve?  They certainly accomplished a lot with what little they had in terms of training and resources.

In higher education, the story is a little different.  Most higher education institutions have adopted very sophisticated learning management systems, and have invested in media libraries, web conferencing, assessment platforms, quality-control processes and so forth.

And yet, we know that there are only 13,000 instructional designers (source: Online Learning Consortium) with a wide range of roles spread over 5,500 higher education institutions.  Large institutions, like the University of Minnesota, have more than a dozen instructional designers; that means many smaller institutions are lucky to have even one specialist whose role it is to help faculty design and build online courseware.  And how about professional development?  Are faculty getting the training they require?  After the COVID crisis, will school leaders double down on online learning development, or dismiss it as a 'lower-quality' option?  Will online learning get properly funded, or will it be downsized in response to the enrollment crisis that appears to be hanging over institutions (https://www.bloomberg.com/opinion/articles/2020-05-11/what-colleges-must-do-to-survive-the-coronavirus-crisis)?

Just before the COVID crisis, higher education administrators participated in a survey administered by Quality Matters and Eduventures, titled “The Changing Landscape of Online Education, 2020” (CHLOE).  The majority of respondents said that they did not require students to complete an orientation before studying online.  Then suddenly, online learning became a necessity.  How well prepared were students to learn online?

Another survey statistic, although not surprising, also suggests room for improvement.  In the regional public universities, only 50% of the faculty who are approved to teach online received training on the learning management system.  Approximately 45% received training on resources and pedagogy.  The reason for that lack of participation is not known.  The authors of the survey write that more data is needed to understand the resistance of faculty to becoming more effective online teachers.

I’ll look forward to learning more after the crisis is over.  Did faculty simply substitute their lecture classes with Zoom, or did they take advantage of all that blended learning has to offer in synchronous and asynchronous environments?

It’s Time for a True Sea Change

I expect that a true sea change in online learning will require effort and resources at every level: from student and faculty development to school leaders supporting and rewarding that effort – and paying attention.

But a lot of responsibility also falls on the tech providers.  For one, we need to make it easier for instructors to find, select, adopt, adapt, and collect data from Open Educational Resources.

In a sense, we operate in a Balkanized environment.  Balkanization is the breakdown of a region into smaller autonomous units that are usually hostile to one another.  We can embed eLearning resources, but they don't interoperate at any level.  That makes things particularly challenging for teachers with so little time.

As an example, Open Educational Resources are wonderful, and yet faculty must expend quite an effort to find something that matches their objectives. What is the free equivalent of the expensive, proprietary systems that do a good job of mining OER and aligning it to standards and objectives?

Image Credit : Merridy Wilson-Strydom (Wikimedia Commons)

From the perspective of a toolmaker, I am impressed with the work that repositories like OpenStax, Merlot, OERCommons have done.  But OER repositories don’t offer a place to store, share and collaborate on learning objects.  (Although, in some cases, they do allow you to store materials created with their own authoring tool.)

That's why we began integrating our own tool with GitHub, which offers a place to store, publish, version control, and collaborate on projects (see "Seven Steps That Will Change How You Share eLearning").  What is the EdTech equivalent of GitHub?  The closest that I've seen is OpenStax.

Perhaps only the largest of our tech companies can solve the problem of interoperability.  We need to be able to store, share, version control, and plug in learning resources into our learning environments.

Along with the technology, our teachers need professional development and reward and recognition for their efforts.  A single teacher might be able to create one quality resource in one academic year.  That single resource should be shared with the broader community and the teacher rewarded for her contribution.  In higher ed, the resource might factor into tenure and promotion.  In K12,  it might mean a cash reward equivalent to a coaching assignment.

A higher power needs to organize a learning activity exchange where reward comes with contribution.  Each and every resource needs to be interoperable with learning management systems and learning record stores (LRS).  Every resource needs to be tagged by standards specialists so that it is easily discovered and aligned to standards.

Conclusion

A lot of factors contributed to the poor results of emergency remote teaching.  At the fore is the lack of teacher and student preparation for online learning.  School leaders can help with that.  We are also asking teachers to operate in an environment that is somewhat hostile and not interoperable.  The onus is on the tech community to do better and to think through how learning materials can be stored, shared, collaboratively worked on, and plugged into environments that can capture student responses and performance data.

We need to improve not only for the sake of emergency response but for the betterment of education…even at the best of times.

References:

Emergency Remote Learning Survey Results

Perspectives: COVID-19, and the Future of Higher Education
http://www.onlinelearningsurvey.com/covid.html

Means, B., Bakia, M., & Murphy, R. (2014). Learning online: what research tells us about whether, when and how. New York: Routledge, Taylor & Francis Group.

 

The Humble Variable

Introduction

Instructional Designers are skilled at using text, media and graphics to help meet learner objectives.  But design often extends beyond the visible into the functional.  Designs might require tracking user performance, branching to an appropriate level of instruction, saving state, and creating highly individualized, interactive, learning experiences.

At the root of this functionality is the humble variable.  Understanding the variable and all of its implications in a learning design may seem a little out of reach of instructional designers.  That seems like programming…and programming is the domain of specialists like programmers or instructional technologists who know and, perhaps, even enjoy things like mathematics and logic.

But most instructors and many designers don’t have such specialists as a resource.  With a little knowledge, designers can expand their designs on their own and create better experiences for learners.

The Variable

As a start, there are some basic things about the variable that all instructional designers should know – some basic things that will help designers think about their designs more clearly.

First, a bit of unlearning.

We learned about the variable in elementary school.  We were asked to solve for x, given this type of equation.

6 + x = 10

'x' was a challenge.  You had to manipulate things in your head, like x = 10 - 6.  You needed to learn about the dark art of algebra.

And so, something as arcane as this

image1

produced the graph below if you repeatedly plugged in a number for t, solved for x and scaled up the result:

image2

Probably not the traditional domain of instructional designers.

But, in instructional design, the variable isn’t a problem to solve.  It’s a tool.  It’s a tool like text and graphics and media.  And you can start simple.

The use of variables gives us the ability to save state (remember things like user performance on a question item) and to branch (go down one learning pathway versus another) and to evaluate performance (were the right things chosen and in the right order, perhaps).

So powerful is the variable that all major eLearning authoring systems not only use variables internally but give the author access to them.

Below is a screenshot from Storyline, a popular authoring tool.  The author of a game is tracking how many correct answers the learner has achieved (correctcounter), whether or not the learner has reached a fail condition (fail), and other things not pictured here, such as whether the learner has attempted the question once, twice, or three times, the overall score, and the number of seconds on the timer (timer).

The variable is a storage place.   Some people like to use the analogy of a bucket – a place to dump data into.  I like the analogy of a mailbox.  The mailbox has both an address and a place to store stuff. Like a mailbox, variables have an address in computer memory; they have an easy name that we can use to refer to that place in memory; and they store a value. That storage place can typically hold numerical values, characters (as in a name) or true/false states.  There are fancy names for all these things like integers, floats, strings and booleans – but we are only concerned about basic things such as the value being stored as a number, set of characters or true/false.

The distinction between numbers, characters, and true/false values matters because these types take up different amounts of computer memory, they enforce what kind of data can be stored in the variable so that coding mistakes aren't inadvertently made, and they are stored differently in the bowels of the computer.

2020-02-27_2050

The following screenshots also hint at another division between variables.  The first screenshot that follows shows user variables.  User variables, in this case, store information about the student name and id.

User Variables in Captivate

image4

In the next screenshot, system variables store program settings related to movie control.

System Variables in Captivate

image5

There is also another category often referred to as the user-defined or custom variable, not shown here.  In most programs, if you wanted to track something special, you would create your own variable.   For example, if I gave the learner a choice of tools to select in order to complete a task and wanted to track which tool was selected, I could create a variable called ‘toolSelected’ and assign the variable a value.

For example, toolSelected = ‘caliper’

Or, optionally, I could assign a number to the variable, as in  toolSelected = 1

image6

Alternatively, I could create a variable called ‘caliperSelected’ and set it to true or false. Or I could create a variable called ‘toolsSelected’ and in this case, set it to:

toolsSelected = “caliper; nippers”

In short, I have options.
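Expressed in plain JavaScript (only a sketch of the options above, not LodeStar syntax), those choices look like this:

let toolSelected = "caliper";        // a string value
let toolCode = 1;                    // or a numeric code
let caliperSelected = true;          // or a true/false flag

// A single variable can also hold several selections at once.
let toolsSelected = "caliper; nippers";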

So with that we are straying dangerously close to the wheelhouse of the computer programmer.  But for the instructional designer, what is important is an affordance — a capability.  We could give our learner a task and have the learner collect the appropriate tools.  Just knowing that variables can hold a bunch of values gives us a strategy to think about.  What if we placed learners in a situation where they could gather things to use in a problem-solving situation?  Thinking about variables and their capacity to store can inform our thinking – and give us a strategy or a way to accomplish our objective.

Let’s take this a bit further.

Conditional Statements

In my next example, I will use a custom variable and apply it to some branching logic.  To understand the example, we've already looked at the variable.  Now let's look at some logic.  Branching logic can be achieved either by a conditional statement like one finds in Microsoft Excel or, in the example that follows, by a 'Gate'.

Let’s think about logic.

In the spreadsheet below, we have scores in column B.  The logic is that if the score in column B is greater than 49, then the text in column C will show 'Pass'.  Otherwise, column C will show 'Fail'.

image7

The gobbledygook language part of this looks like:

=IF(B3 > 49, “Pass”, “Fail”)

B3 is the cell that lies at the intersection of column B and row 3.  So, if you think of the first part inside the parentheses as a condition, the second part as the value if the condition is true, and the third part as the value if the condition is false, then the gobbledygook reads like this:

If the condition is true, show the second value; else, show the third value.

The condition: is the value in B3 larger than 49?  If yes, show 'Pass'; if no, show 'Fail'.

eLearning authoring systems present different ways of using the same type of logic.  You can imagine a branching scenario.  If the learner score is greater than 80, proceed down the ‘enrichment’ path.  If not, proceed down the ‘remedial’ path.   Branching is just a series of else if statements, like the one shown on the spreadsheet.
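If it helps to see the same logic in code form, here is a small sketch in JavaScript.  The goToPage() function is hypothetical; it stands in for whatever branching mechanism your authoring tool provides.

let score = 85;

// The spreadsheet formula, expressed as a conditional:
const result = (score > 49) ? "Pass" : "Fail";

// The branching scenario is the same idea:
if (score > 80) {
  goToPage("enrichment");   // hypothetical branching call
} else {
  goToPage("remedial");
}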

So now, let’s show an example that combines the use of the variable and some branching logic.

An Example

In the following example, we'll introduce LodeStar 8 (which will be released soon).  In the activity, I will show six animals.  Three of the animals are critically endangered.

The object of the lesson is for students to understand what critically endangered means and, given some data,  to be able to identify some animals that are examples of critically endangered species.

Identifying critically endangered species is actually highly technical, involving population numbers, habitat area, habitat fragmentation, number of generations, and so forth.  Let's say, for the sake of our example, that we presented students with all of that information and then asked them to select the animals that are critically endangered.

If students correctly select a critically endangered species, they will earn 2 points.  Selecting an endangered species subtracts 1 point.  Selecting a vulnerable species subtracts 2 points.

Out of the six animals, three are critically endangered.  The best score is therefore 6.

Here is a screenshot of LodeStar 8 and the ActivityMaker template, which we used in our example.

image8

A screenshot of LodeStar 8, due to be released March 2020

ActivityMaker supports different page types.  I’ll select the “Text” page type.  This page type supports text, imagery, SVG graphics, and widgets.  (We’ll talk about widgets soon.)

On the first page, I’ll add six images and a page heading.

image9

Produced with LodeStar 8 ActivityMaker Template

Adding Branch Options to Images

First, to assign a Branch Option to an image, I click on the image and select the branch icon.  The branch icon is used throughout LodeStar.  (Please note:  You can only add branching logic to an image once it is loaded and appears on the page.)

image10

The Branch Option dictates what happens when a question is answered correctly or incorrectly, when a page is displayed, when a gate is reached and so forth.  In this case, the branch icon controls what happens when an image is selected.  There is a selected branch option and a deselected branch option.  This is new to LodeStar 8.

image11

To start, I load the image, select a scalable size (in percentage) and then click on OK.  I then click on the image and re-open the dialog box.

I click on the ‘Selected’ Branch for the Sumatran Rhino and launch the branch dialog.

I then set the Branch Option to 'Append Value' and fill in the variable name, which is 'score', and a value that will be appended to the variable, which is 2.

Appended, in this case, means that 2 will be added to whatever value the variable 'score' is currently storing.  Essentially this:

score = score + 2

Meaning

The new value of score is assigned the old value of score plus 2.

image12

For deselected, the opposite is true:

score = score + (-2)

Or

-2 will be appended to score, which is the same as

score = score - 2

image13

I then want to present the option for students to evaluate their selections.  I type in text ‘Check Answer’, highlight it, and then select the ‘Insert Link’ tool in the HTML editor.

LodeStar’s HTML editor is unlike any other editor.   The ‘Insert Link’ dialog presents multiple options including the ability to link to one of the LodeStar pages.  The Pages (UID) dropdown displays all of the available pages.  If the author forgets to give a page a human-friendly name, then only the computer-friendly UID number is shown.  In the screenshot below, you can see both.

image14

When the student clicks on ‘Check Answer’ they will jump to the ‘Evaluate’ page and see an Embedded Variable widget displayed on the page.

The purpose of the Embedded Variable widget is to display the values of variables.    The widget dialog is launched by clicking on the sprocket icon as pictured.  (Remember, the LodeStar HTML editor is not your everyday brand of HTML editor.)

image15

Insert a widget on a page

The widget dialog presents a menu of different widgets.

image16

Widgets enable authors to embed timelines, word problems, questions, drag and drop, and other items on a Text Page

The author inserts the ‘Embedded Variable’ widget wherever s/he wishes to display variables and then types in the following:

Your ability to identify critically endangered species ranks {score} out of 6.

‘score’ is a variable name.  It holds a value (the student performance).  When the student sees this sentence, they will see the value and not the variable name.  If the variable has not been initialized (given a starting value), they will see ‘undefined’.
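Conceptually, the widget performs a simple substitution.  The sketch below is only an illustration of the idea, not LodeStar's code; the 'variables' object is a stand-in for wherever the tool keeps its values.

const variables = { score: 4 };   // imagine the learner earned 4 points

function embedVariables(template) {
  return template.replace(/\{(\w+)\}/g, (match, name) =>
    name in variables ? String(variables[name]) : "undefined"
  );
}

embedVariables("Your ability to identify critically endangered species ranks {score} out of 6.");
// -> "Your ability to identify critically endangered species ranks 4 out of 6."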

I also added two links:  ‘Start Your Journey’ and ‘Go back’.

Students can go back and attempt to improve their scores, or they can move on.  The 'Start Your Journey' link goes to the 'Gate 1' page.  The 'Go back' link goes to the page with the animals.  The following diagram, found under 'Branches' on the left side, shows the branching connections from the Evaluate page to the preceding page and from the Evaluate page to the Gate. (I'll explain gates in a moment.)

image17

The Branches view

The following screenshot shows the 'Embedded Variable' widget editor.  Variables that have been used elsewhere in the program need only curly braces {} to be referenced.  Variables that don't yet exist can be declared here.  (They can hold the result of expressions written in JavaScript, which is a more advanced concept.)  'score' was used on an earlier page, so it can simply be referenced with the curly braces.

image18

Again, the two links on the page cause the learner either to move forward to the gate or backward to the animals.

Finally, we have the ‘Gate’, which is a LodeStar page type.  We use the gate in this case to branch the student.  If the student scored 5 or above, then we follow the ‘Pass Branch Options’.  If the student scored lower than 5 then we follow the ‘Fail Branch Options’.  ‘Pass’ and ‘Fail’ might not be appropriate terms, but students never see these terms.  They just imply one branch if the condition evaluates to true and another branch if the condition evaluates to false.

The condition is:

Pass only if Score Is >= 5

The variable that holds the score is the variable named ‘score’.  The variable name can be anything.  The author simply checks ‘Use Custom Score’ and identifies which variable will be used in the condition, as pictured below.

image19

The following two screens show the two branch options.  The ‘Pass’ option is set to ‘Jump to Page’ to a page that is titled ‘Enrichment’.  The ‘Fail’ option is set to ‘Jump to Page’ to a page that is titled ‘Remedial’.

image20

The following screenshot shows a page labeled 'Enrichment'.  Notice the 'Page ID'?  The Page ID was used in the gate.  This page represents the start of a whole series of pages that make up the enrichment sequence. Similarly, there is a remedial page, the start of a series of pages that make up the remedial sequence.

image21

Here is what the ‘fail’ branch dialog looks like.

image22

When I click on the 'Gate' in the Branches view (as opposed to the Pages view on the left side) and filter out the other pages, I can see the following.  Gate 1 branches to either 'Enrichment' or 'Remedial'.  If I turn off the filter, I will see all of the branches for all of the pages, which gets to be a bit overwhelming.

image23

More Complex Scenario-based Learning

So far, we are making the learner do something.  We then store their performance in a variable called ‘score’.  We use the value of the variable to branch in one direction if the score is low and in another direction if the score meets or exceeds a number.

That is a very basic building block.  It’s like Legos.  A Lego® brick is a simple thing, but Lego® bricks can be combined to form ever more complex shapes.  So too in eLearning.

As a culminating example, let me describe a project we recently completed.  The basic strategy of storing values in variables was used in a highly interactive learning module that we created to teach the use of LinkedIn for business development.

image24

With the use of variables, we were able to track learner performance through four Social Selling Index (SSI) measures: brand, people, insights, and relationships.  If learners acquire the skills to improve their SSI through the learning module, they can apply them directly to LinkedIn and see tangible results.

image26

In the learning module, behind the scenes, there are four variables, each matched to an SSI metric.  As learners expand their LinkedIn network, respond appropriately to notifications, build their profile, and so on, they increase their SSI.  Each activity is tied to one of the variables.

The Function

We started with the humble variable, and then saw it used in branching logic.  Variables are also frequently used with functions.

A function is a group of instructions used by programming languages to return a single result or a set of results or simply to do something.

Because LodeStar automatically tallies student-earned points and reports performance to the learning management system, in our example, we use functions to override that behavior by setting the user score and total score to our SSI metrics or to anything we want.

Let’s look at functions in general, and then at how our example uses them.

As mentioned, the function either does something or gives you a result based on some input.  In LodeStar, functions are just something you use rather than define.  But if you looked at a function from a programmer's point of view, it would look like the following function named addValues.  (Functions are often named this way, with the first letter in lower case.)

function addValues(value1, value2){
    let sum = value1 + value2;
    return sum;
}

'value1' and 'value2' are inputs (or arguments, in technical speak).

The body of the function falls inside the curly braces {}.  The body of the function adds the two inputs and spits out a result — a return value.  Notice how we assign the sum of ‘value1’ and ‘value2’ to a variable?
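Calling the function with two inputs returns their sum:

let total = addValues(2, 3);   // total is now 5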

Our use of a function is simpler.  We don’t need to define functions.  That work has been done for us.  We just need to use them.  We need two functions to override the default behavior of LodeStar.  As mentioned, the default behavior is that LodeStar automatically tallies up the student performance points in all of the different question types and reports that to the learning management system.  But we don’t want that.  We want to report the SSI score.

A perfect SSI score is 100, so that becomes the total score.  The sum of brand, people, insights, and relationships becomes the user score.

We use the function named setCustomUserScore(value) to set the user score.  We use setCustomTotalScore(value) to set the total score.
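Put together, the override is only a couple of lines.  The sketch below uses the two LodeStar functions named above and the variable names from our example; the numeric values are invented for illustration.

// The four SSI variables tracked by the module (values are illustrative).
let brand = 20, people = 18, insights = 15, relationships = 22;

// Report the learner's SSI instead of LodeStar's default tally.
setCustomUserScore(brand + people + insights + relationships);
setCustomTotalScore(100);   // a perfect SSI score is 100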

Once we do that, all of the module's learning activities are tied to this real-world performance measure.  Finally, and most importantly, all of the activities simulate real-world LinkedIn actions.

Conclusion

So, for our project, it all started with the humble variable.  We asked how LinkedIn measures proficiency.  The answer is the SSI.  We then asked how we would capture the learner's performance in an SSI metric.  The answer is four variables named brand, people, insights, and relationships.  We then asked how we could bring up different types of content in the form of notifications, messages, and so forth.  The answer was in the use of variables and some conditional logic.  Finally, how would we report the SSI to the learning management system?  The answer was… the function.

Instructional Designers traditionally think about text, graphics, audio and other types of media.  These elements alone lead to very linear designs.  The addition of variables, logic, and functions frees up our designs from the constraints of these linear models and allows us to add variability, surprise, realism and other things that enrich the learning experience.

So, start simple.

Geolocation Storytelling: Van Gogh In Arles

Introduction:

Because this is so personal, I’ll introduce myself.  I am Robert “Bob” Bilyk,  founder of LodeStar Learning.  I am passionate about the project I am about to describe and a proponent of instructional technology in general.

I recently heard an interview with Christopher Kimball, formerly of America's Test Kitchen.  Two things he said stuck with me. First, he described himself as a home cook rather than a chef.  Second, he talked about introducing other home cooks to recipes that were slightly beyond their comfort zone and knowledge, but not far beyond reach.

My efforts are a modest version of that.  I’m interested in helping online instructors reach out and embrace new ways of interacting with their students.  I’m trying to connect to that inner instructional designer in all online teachers. And I’m trying to introduce strategies that are within reach but may require a stretch.

Geolocation storytelling is one such strategy.  It’s an incredible strategy that, I believe, is within reach of all online instructors.  Geolocation storytelling works for a broad range of disciplines: literature, history, biology, environmental studies, communications, urban planning, and on and on – wherever location is relevant. I use the term storytelling very loosely.  It can be fiction or non-fiction.

Geolocation storytelling reveals something about a location when the student visits the site either physically or virtually.   The student can see or hear the narrative on her smartphone when she physically visits a site or clicks on a map marker.

In this article I intend to share a project that I’m currently working on.  I intend to disclose the inspiration of the project, the brainstorming, and the nuts and bolts of how I am putting it all together.  It’s not completed. It truly is a work in progress.

 

Screenshot of a Geolocation project.

Screenshot of one page of a LodeStar Geolocation storytelling project situated in Arles, France, and focused on Vincent Van Gogh, the Dutch painter.

 

The idea

Recently my wife and I traveled to Iceland and France.  We had several ideas in mind for geolocation stories — ideas that would match up to educational needs.  Some of our ideas turned out to be impractical because of cell phone coverage issues. But one of our ideas hit the jackpot.

The keys to a good geolocation story are a) locations where there is a strong cellular signal, b) exterior locations with line of sight to the sky for the Global Positioning System (GPS) signal, c) a strong educational objective that is tied to location, and d) somewhere to house the project, such as a learning management system.

For us, all of the elements came together in Arles, France.  Before arriving in Provence, in southern France where Arles is located, I imagined a GPS-guided walking tour of all the places that Vincent Van Gogh painted and sketched in Arles.  But I didn’t know whether or not it would be practical.

As it turned out, it was not only a practical idea (cell phone coverage was great and the buildings didn’t obstruct the satellite signal) but one that needed to be done.

The need

I'm sure there are dozens of guidebooks, brochures, and pamphlets on Van Gogh's Arles.  We didn't immediately find any. The tourist office had a nicely illustrated guide in French, which we didn't buy.  Instead, we thought we'd start with the obvious starting point: Fondation Vincent Van Gogh.

The mission of Fondation Vincent Van Gogh is wonderful — but it houses only a few of Van Gogh’s paintings.   If you are fresh off the train, boat or motorway, full of anticipation of all things Van Gogh, the Fondation is a bit of a disappointment.  (They do sell rubber ear erasers, however.)

We then thought of the next thing we knew.  The Yellow House!  That’s where Van Gogh stayed and painted and decorated in anticipation of the arrival of a fellow artist: Paul Gauguin.

As we soon learned, the Yellow House doesn’t exist.  We asked around. No Yellow House.

Arles is a wonderful place.  But it is difficult, at first, to make that Van Gogh connection.   If you know where to go, you’ll find panels of Van Gogh’s work at the locations where he painted some of his most famous works.  However, you need a guide to find them. Arles is a big place. The panels are helpful but you need to know something about Van Gogh to really appreciate them.

The opportunity

So here is the crux of the thing.  Van Gogh painted in locations. Location — with its people, rooted in the farms and neighborhoods, its colors, patterns, streets, trees, and flora — is an important part of the story.  As important is the perspective and knowledge of the educator. What the educator can bring to the story, superimposed on location, is the opportunity.   In our project, visitors to Arles would be guided by the story to important places and then presented with information related to the places.

The intrepid educator

I’m not a Vincent Van Gogh scholar.  In contrast, I think of the scholarship of educators with whom I have worked.  I think of educators like Dr. Carolyn Whitson, at Metropolitan State University, who recently published an eBook titled  ‘Understanding Medieval Last Judgment Art’* and I imagine what they could do with geolocation story telling. This strategy is within reach of educators like Dr. Whitson because she teaches online, she uses technology, and she has already embraced eBook technology (and other technologies) to make her text and photography accessible to a wide range of students.  (The link to her book can be found at the end of this post.)

I’m not a Van Gogh scholar, but I am an enthusiast.  Since I was a teen, I’ve been drawn to his sketches, paintings and personal life.  His ministry in the Borinage coal mining district, ‘The Potato Eaters’ and the sketch ‘Sorrow’ with its accompanying tortured love story hooked me from an early age.   His hope of renewal in Arles and the vibrancy of his paintings and the eventual devastation of his dreams and aspirations, in various ways, inspired me. I carved wood, painted, and wrote stories under the same melancholic humor as the artist.

And so it was with much enthusiasm that I approached this geolocation story-telling project.  But recognizing that I am not a Van Gogh scholar I limited myself to these few simple elements:  location, Van Gogh’s own words and paintings, photography, and (sparingly) some shared insights from an art historian, the late Jean Leymarie.   I added a few details to help bring significance to the location but kept those to a minimum.

Less is More

From an instructional perspective, less is more.  Writers like Leymarie can bring boatloads of insight to the subject, but what do the paintings and locations evoke in students?  Too much information in geolocation storytelling cuts off the blood supply. The student needs to be aware of her surroundings, with a modicum of interpretive assistance.  At several of the Arles locations, what is interesting is the contrast between the scene and the paintings.  How might students account for the contrast? In some places, like the Rhone River, the scene is not nearly as interesting as the painting.  In other places, life imitated art: the hospital garden (now the library garden) and the Cafe Van Gogh had to be decorated to match the paintings. In short, geolocation can be the convergence of location, media, the educator's perspective, and the students' own thinking and imagination.

The Nuts and Bolts

The coordinates

To produce the geolocation tour of Arles, I used the ARMaker template in LodeStar 7.3.  Other tools are available that will create similar projects, but I’ll describe the tool that I designed and know.

Each page produced with the ARMaker template includes a rich text editor and geolocation fields that I’ll explain in a minute.  In the authoring tool, a page looks like this:

 

YellowHousePage

 

Note where the content sits, and where the coordinates are held.

To the student, the page will look like this:

 

yellowhousepageforstudents

 

The images that appear as thumbnails in the authoring tool are now rendered in full size in a slide viewer.  The coordinates now appear as markers on a map.

map

The student can either walk to the site and have the page content called up or, if the instructor allows, the student can simply click on a marker to bring up the content associated with the marker.

In other words, geolocation story telling can require students to visit sites or it can help organize content in a virtual tour that students can take from the comfort of the library or their homes.

In our project, we actually traveled to Arles to see the sights first hand and designed the application for a guided walking tour.  We meandered the streets, took photographs, took GPS readings, and absorbed the sights and sounds.  But a lot of this can be assembled by the instructor without leaving her office.

The GPS readings can just as accurately be obtained from Google Maps.  In the screenshot below, I invoked the popup by pressing and holding the mouse button on a location.

If you are interested in this approach, bring up a Google Map, click and keep your mouse button down.  If nothing pops up, click on a street away from any existing Google markers or building outlines.

The numbers that appear at the bottom of the popup are a coordinate pair.  For example, 43.678610, 4.630738 means roughly 43.7 degrees latitude and 4.6 degrees longitude.  These coordinates have six digits to the right of the decimal point.  You need this level of precision so that your coordinates fall within a few feet of your target location; one degree of latitude spans roughly 69 miles, so the sixth decimal place corresponds to a matter of inches.  Click on the coordinate and it appears at the top left of the screen, in a format that is easy to copy to your clipboard.

GoogleMap

Google map with the coordinates popup. Incidentally, La Maison Jaune is not the Yellow House and we only encountered Gilets Jaunes once and not in Arles.

 

The following is a screenshot of the LodeStar page with the coordinates pasted in. The next thing to add is proximity, which determines how close students need to be to the location before they cross an invisible geofence that triggers the display of the content.

coordinates
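Behind the scenes, a proximity check amounts to comparing the student's position with the page's coordinates and triggering the content when the distance falls inside the radius.  The sketch below, written with the standard browser geolocation API and the haversine formula, is only an illustration; LodeStar handles this for you, and showPageContent() and the 30-meter radius are assumptions.

function distanceInMeters(lat1, lng1, lat2, lng2) {
  const R = 6371000;                          // Earth's radius in meters
  const toRad = deg => deg * Math.PI / 180;
  const dLat = toRad(lat2 - lat1);
  const dLng = toRad(lng2 - lng1);
  const a = Math.sin(dLat / 2) ** 2 +
            Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLng / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}

navigator.geolocation.watchPosition(pos => {
  const d = distanceInMeters(pos.coords.latitude, pos.coords.longitude,
                             43.678610, 4.630738);   // the coordinate pasted in above
  if (d < 30) {                                      // inside a 30-meter geofence
    showPageContent();                               // hypothetical display call
  }
});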

The content

The content can be in the form of audio, imagery, text, timelines, questions, and other assessment exercises.

In the screenshot below, the page features text and an inserted widget.  I clicked on the black sprocket icon (widget_sprocket), which brought up all of the widgets that can be inserted into the text.  I chose the image slider widget.

LodeStarWidgetDropDown

 

From there I could insert my images, caption them and dictate how they would be displayed – with a display list or without.

LodeStarImageWidget

 

The result could be something like this:

 

LodeStarScreenShot.png

 

Audio can be added with the help of the audio icon at the top right of a text page and the audio dialog, which supports the import of mp3 files. (Note that autoplay policies in browsers prevent sound files from playing automatically unless the user has interacted with the application first. Browser policies differ.)

 

audiodialog
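The autoplay note is worth taking seriously.  In plain JavaScript, a common defensive pattern looks something like the sketch below (the file name is hypothetical): try to play, and if the browser blocks it, wait for the first user gesture.

const audio = new Audio("narration.mp3");   // hypothetical mp3 file

audio.play().catch(() => {
  // Autoplay was blocked; retry on the first click or tap.
  document.addEventListener("click", () => audio.play(), { once: true });
});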

 

Finally,  ARMaker (our template) is built on Google technology and so it supports what Google has afforded us, including the ability to map our location and mark it.  In this case, I scaled way up to a global view.  My current position is the black dot.  Arles is the red marker.  Normally, the student uses ‘My Location’ to mark how close they are to one of the locations.  The screenshot below shows that I’m 28,073,020 feet away from the nearest location, which is the Langlois Bridge, on the outskirts of Arles.  I have a bit of a walk ahead of me.

 

MyLocation

 

Google technology also allows us, in many locations, to switch to the satellite view or to drop down to the street view.

 

Satellite View

satelliteView

Satellite view of Arles

 

Street View

streetview

Street view of Place du Forum, in Arles

 

The red marker was placed on the street view by our coordinates in the LodeStar tool.  (LodeStar interacts with the Google Map.)  The white arrows and our mouse clicks enable us to navigate the streets.  In this view, we are in the Place du Forum, a plaza that dates back to Roman times. We are facing the Café Van Gogh (the yellow building), the setting of a very famous and wonderful Van Gogh painting, 'Café Terrace at Night', which the artist described in a letter to his brother.  The second story of the Café recreates the scene of another famous painting, 'The Night Café'.  The original site, the Café de la Gare, was near the Yellow House and is now gone.

Conclusion

All of this can be housed in the instructor's learning management system: D2L Brightspace, Moodle, Blackboard, Canvas, Schoology, wherever.  In fact, in order for the application to be able to receive location data, it must be launched from an address that begins with https://.  The 's' means secure.  All learning management systems use this protocol to secure student data.

So, technical stuff aside, imagine the possibilities.  With the combination of location and the instructor's perspective or firsthand information shared through text, imagery, and audio, educators can use geolocation storytelling to transport their students to another place, or they can get online students out of the house and into a neighborhood location that is of scientific, social, historical or artistic interest.

Again, the possibilities are endless.

As for ‘Van Gogh in Arles’, this project will be completed and published shortly after Thanksgiving, 2019.  You won’t need to go to Arles to view it  — but I highly recommend the trip.

 


The Problem with Simulations

Introduction:

As a student, I can be told about central tendency in statistics and the properties of a normal distribution.  I can memorize the difference between mean, mode, and median.  I might even do well on an exam, if asked to calculate the standard deviation.  I could, by rote, follow the four steps and produce an accurate number – and yet have no concept of variance, no sense of the significance of samples and sample sizes, and no complete picture of central tendency.  Or I could play with a simple simulation as illustrated on the following website:

http://onlinestatbook.com/stat_sim/sampling_dist/index.html

In this simulation I see the real population distribution and I see the output of mean, median and mode – and standard deviation.  I can see that I need a lot of samples in order to start to see the low frequency outliers from the mean.   Rather than being told something, or memorizing a formula, I get to manipulate numbers and see the story of central tendency and variation play out.
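The same story can be told in a few lines of code.  This is not the onlinestatbook simulation, just a rough sketch that samples from an invented normal population (mean 100, standard deviation 15) and shows how the spread of the sample means shrinks as the sample size grows:

```typescript
// Quick sketch (not the onlinestatbook simulation): draw repeated samples from a
// population and watch the spread of the sample means shrink as sample size grows.

function mean(xs: number[]): number {
  return xs.reduce((s, x) => s + x, 0) / xs.length;
}

function stdDev(xs: number[]): number {
  const m = mean(xs);
  return Math.sqrt(xs.reduce((s, x) => s + (x - m) ** 2, 0) / xs.length);
}

// A normally distributed population value via the Box-Muller transform.
function normalSample(mu: number, sigma: number): number {
  const u = Math.random() || 1e-12;
  const v = Math.random();
  return mu + sigma * Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
}

for (const sampleSize of [2, 10, 50]) {
  // 1,000 samples of the given size, each reduced to its mean.
  const sampleMeans = Array.from({ length: 1000 }, () =>
    mean(Array.from({ length: sampleSize }, () => normalSample(100, 15)))
  );
  console.log(`n=${sampleSize}: spread of sample means ≈ ${stdDev(sampleMeans).toFixed(2)}`);
}
```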

I’ve moved from the lower level memorization of a definition and the lower level performance of a procedure to the higher level conceptual understanding that is so important in the field of statistics.

The problem with the term simulation

Perhaps my example doesn't quite measure up to your idea of a simulation.  That indeed is one problem associated with the concept of 'simulation'.  We have one word that describes a wide range of things.  A simulation can be any number of things, ranging from this 'smart' animation of samples to an immersive virtual world, complete with body suits and headgear.  The Inuit reportedly have different words for snow, including "matsaaruti" for wet snow and "pukka" for powdery snow.  Instructional designers have one inadequate term for a full range of activities: the simulation.

But that’s not the only problem.

I’ll set out in this post to outline categories of simulations, champion their value, and help clear away some of the obstacles to their adoption.

The need for higher order strategies

Despite their value, simulations represent a very small percentage of online learning activities.  Many business, medical and engineering programs engage their students in simulations, but the ratio of simulation-based activities to all online learning is small.

More than ten years ago, researchers at Cornell University (Bell, 2008) cited a study that places simulations at a 'relatively small percentage (approximately 2-3%) of the total e-learning industry'.  The study states that the costs of producing simulations are high and that the effectiveness of simulations has received mixed reviews.  That's the heart of the problem.  The authors of the study suggest that "instructional designers are left with little guidance on how to develop an effective system because the factors that influence the effectiveness of simulation-based training remain unclear."  Not a very promising start.

But, in my view, simulations are an important strategy for online instructors.  In order for online learning to have any significant impact on learning performance, we need instructors to be skilled at selecting strategies that promote higher order thinking.  Too much of online learning replicates the worst of the classroom experience, in which students passively receive a lecture.  The interactive portion is relegated to a quiz.  There are significant alternatives – but they are not easy to implement.  The simulation, as a strategy, is the most challenging.

Simulations are effective because students enjoy engaging in simulations and being challenged to think.  Instructional designers often prescribe or design simulations to promote higher order thinking that helps students synthesize facts, concepts, principles, rules and procedures.

Educational psychologists recognize the value of simulations to promote cognitive complexity – which is the student’s ability to detect nuances and subtle differences that might impact their decisions or judgement.

In a meta-analysis conducted by Dr. Traci Sitzmann at the University of Colorado, computer-based simulation games promoted students' retention of content, belief in their own capacity to complete the tasks, recall of facts, and procedural knowledge (Sitzmann, 2011).

But what is a simulation – and how can busy, online learning instructors leverage this strategy?

As mentioned, the term ‘simulation’ covers a broad range of activities – from the very simple, to the very sophisticated.  We’ve all seen the complex training simulators used in space and flight training.  A commercial aircraft simulator can run from ½ million to several million dollars.  Clearly outside of our budget.  We’re also familiar with high fidelity simulations in nursing.  They range from virtual reality systems to high fidelity mannequins.  These are two categories of simulations that require significant investment.  There are other types of simulations, however, that are simpler and affordable.  And they can positively impact every discipline.

A Range of Types

Under this umbrella of simple and affordable, we can include a range of simulation types.  In past articles, I’ve written about interactive case studies.  In interactive case studies, students are presented with a case and some resources. They have to do something as a result such as create a business plan, solve a problem, uncover underlying issues…whatever.  In the past, I contributed to a team working on an interactive case study that involved assessing a student’s eligibility for credit for prior learning.

In decision-making scenarios (a type of interactive case study), a student is placed in a situation, must collect information, make a decision and then evaluate that decision based on the expert answer, which may come in the form of feedback from a coach or from the revealed consequence of the decision.  I've written about a decision-making scenario that placed the student in Abraham Lincoln's shoes when southern states were threatening to secede.  As a student, you consult the same advisers whom Lincoln consulted.  You make a decision and then contrast that with what Lincoln actually did.  The whole idea behind this decision-making activity came from a professor of history at Tulane University.

Kognito  (https://kognito.com/) produces a wide variety of simulations for different audiences, including mental health professionals and school personnel.

One of their products educates faculty, staff, and students about mental health and suicide prevention.  In their simulations, the company employs a variety of strategies: users interact in an environment made up of virtual characters and virtual settings.  The learners role-play by selecting the most appropriate thing to say in a simulated conversation.  Learners get immediate, personalized feedback as they engage in decision making in an interactive case study.

Another type of simulation involves students tweaking the values of parameters and seeing the result graphed.  For example, an Isle Royale simulation has students tweaking the initial number of wolves and moose on an island.  After the simulation is started, students watch the wolf and moose populations rise and fall until they settle into a pattern.  The InsightMaker site hosts thousands of simulations of this type.
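To make the 'tweak the parameters and watch' idea concrete, here is a bare-bones predator-prey sketch.  It is not the Isle Royale model itself; it is the classic Lotka-Volterra equations with invented rates, but it produces the same kind of rise-and-fall pattern:

```typescript
// Bare-bones predator-prey sketch (classic Lotka-Volterra equations with
// invented rates), not the Isle Royale model itself.  Whatever the starting
// numbers, the two populations settle into a repeating rise-and-fall pattern.

function simulate(moose: number, wolves: number, years: number): void {
  const mooseGrowth = 0.3;   // moose births per moose per year
  const predation = 0.0125;  // per moose, per wolf, per year
  const wolfGain = 0.0002;   // wolves gained per moose, per wolf, per year
  const wolfDeath = 0.2;     // wolf deaths per wolf per year
  const dt = 0.01;           // time step in years
  const stepsPerYear = Math.round(1 / dt);

  for (let step = 0; step <= years * stepsPerYear; step++) {
    if (step % stepsPerYear === 0) {
      console.log(`year ${step / stepsPerYear}: moose ≈ ${moose.toFixed(0)}, wolves ≈ ${wolves.toFixed(0)}`);
    }
    const dMoose = (mooseGrowth * moose - predation * moose * wolves) * dt;
    const dWolves = (wolfGain * moose * wolves - wolfDeath * wolves) * dt;
    moose += dMoose;
    wolves += dWolves;
  }
}

simulate(1200, 20, 50); // tweak the initial numbers and rerun
```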

Another popular type of simulation is found in the interactions featured at the University of Colorado (https://phet.colorado.edu/).  Students learn concepts by changing parameters.  In learning about Ohm's Law, students can increase or decrease voltage, increase or decrease resistance, and then see the resulting amperage.  The display is highly visual, with all parts of Ohm's Law graphically illustrated.  Even the equation is illustrated, with parts that grow and shrink in size.

 

[Screenshot: LodeStar activity with an embedded InsightMaker SIR model of infectious diseases]

A Range of Purpose

Simulations fulfill a range of purposes or functions.  The purposes aren’t mutually exclusive.  Simulations may involve one, several or all of these.

Functional or Procedure simulations help learners perform a function in a given situation.   Software simulations, for example, require learners to perform tasks in the software environment.  Vehicle simulators and high fidelity mannequins require learners to do the right thing at the right time.

Conceptual simulations help learners view a concept in isolation and, in some cases, change the parameters, see the effect and learn to recognize the concept in action.  For example, in a simulation of predator-prey relationships, students see a unique pattern that always develops regardless of the initial number of predators or the initial number of prey.

Process Oriented simulations often include underlying mathematical models – mathematical representations of a real-world system. ‘What-if’ process simulations ask students to make a change to a process and see its outcome.  Students change inputs and immediately view outputs.

Synthesis Oriented simulations involve learners in gathering information, making observations, recalling key principles, concepts and facts and then putting it all together to make the appropriate choices.   Decision-making and interactive case studies are examples.

Behavior Oriented simulations engage students in the affective domain and require students to choose the appropriate behaviors and demonstrate the right attitude given a situation.  Choosing to recycle garbage or choosing to manage time are examples.

In short, types of simulations align nicely with types of knowledge.  Less important is the technology – virtual world, versus two-dimensional animation, versus text narrative – and more important is the behavioral and cognitive change.

By focusing on what is important and eliminating what is not important, we can pare away cost and remove one of the obstacles to using simulations in our curriculum.

Definition

It is difficult to sum up simulations in a single definition and so I offer these attributes.

An educational simulation:

  • Loosely or closely represents reality (low versus high fidelity)
  • Represents or models the behavior or characteristics of a system
  • Mimics the outcomes that happen in the natural world
  • Pares away unnecessary detail
  • Stimulates a response in the learner
  • Presents learners with a situation that causes them to think – that is, draw upon their knowledge and procedural and analytical skills to make decisions, to form hypotheses, to draw conclusions, to state rules or act in some way
  • Provides feedback

Under this broader definition, a disease model that shows a population that is susceptible to, infected by, and recovered from a disease is a simulation.  It is a particularly useful simulation if its underlying math and logic represent a real-world phenomenon – even if it is an over-simplification.  It is also useful if it allows the student to change parameters of the model, such as population size, the number who are initially infected, the proximity of members of the population and so forth, and then make inferences about the outcome.  In this way, the simulation invites learners to ask 'what if' questions.  The results of student input cause learners to think and, perhaps, draw their own conclusions about general rules and principles.  Changing parameters and running the simulation provides immediate feedback.
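Here is a stripped-down sketch of the kind of math such a model might rest on: a basic SIR (susceptible-infected-recovered) model with invented rates.  Changing the population size, the number initially infected, or the contact rate and rerunning it is exactly the 'what if' exercise described above:

```typescript
// Stripped-down SIR (susceptible-infected-recovered) model -- the kind of
// underlying math a disease simulation might use, simplified for illustration.

function runSIR(population: number, initiallyInfected: number, contactRate: number, recoveryRate: number, days: number): void {
  let s = population - initiallyInfected;
  let i = initiallyInfected;
  let r = 0;

  for (let day = 0; day <= days; day++) {
    if (day % 10 === 0) {
      console.log(`day ${day}: susceptible ${s.toFixed(0)}, infected ${i.toFixed(0)}, recovered ${r.toFixed(0)}`);
    }
    const newInfections = (contactRate * s * i) / population; // new cases today
    const newRecoveries = recoveryRate * i;                   // cases resolving today
    s -= newInfections;
    i += newInfections - newRecoveries;
    r += newRecoveries;
  }
}

// 'What if' questions: change the population size, the number initially
// infected, or the contact rate, and compare the outcomes.
runSIR(10_000, 10, 0.4, 0.1, 120);
```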

General attributes that make simulations an effective learning strategy

From a meta-analysis (Cook, 2013)  focused on simulations involving virtual worlds, high fidelity mannequins, and even human cadavers, we learn about the positive effects of key learning strategies including:

  • range of difficulty
  • repetitive practice
  • multiple learning strategies
  • individualized learning
  • feedback
  • longer time

In short, students benefit from interactions that vary in difficulty, present opportunities for repeated practice, engage them in different ways, adapt to student performance and confidence level, give them time, and, importantly, provide meaningful feedback.  Those are useful characteristics of any eLearning.

Much of eLearning doesn’t include any of these characteristics — not one!   A lot of eLearning is built on voice over PowerPoints that have been imported into an eLearning authoring tool.  The feedback is limited to a score on a final quiz.  More finessed eLearning comes in the form of talking head videos with chapter quizzes.  Many of the learning platforms that allow instructors to market their courses don’t even bother with the import of interactive learning objects.  They support video and audio files and PDFs – that is, presentation formats, not interaction formats.

By necessity, the corporate world relies on voice-over PowerPoints.  High-end eLearning development shops bristle at the prospect of creating a voice-over PowerPoint.  They are often engaged in making highly creative learning objects that impact a lot of employees and yield a high return on the investment.  When I worked for these companies, we developed six figure learning objects that would reduce service calls, for example, and save a company tens of thousands of dollars or cut down on the use of natural gas, to cite another example,  and save a utility tens of thousands of dollars.  But the economics don’t always support such high-cost investments.  The continuing education industry for medical and accounting professionals, for example, is characterized by literally thousands of voice-over PowerPoints.  These industries change so fast.  The demand far outpaces our ability to create quality learning experiences.

Instructors may recognize or accept that simulations are important, but don’t know where to begin. Obviously, building a half-million dollar simulator is out-of-reach, but there is something that instructors can do to make use of this strategy.  The next section is dedicated to some practical suggestions.

Simulation tools

There are a number of web sites that provide free authoring, hosting, and viewing of simulations. One of my favorite cloud-based simulation tools is InsightMaker. (https://insightmaker.com) InsightMaker supports a variety of different simulation types.  Instructors can build their own simulations and models or use one of thousands that have been created across many disciplines. I want to emphasize that last point.  You will be able to find a simulation that you can use – but it may take a little patience and perseverance.

In biology, an instructor can find simulations on food chain, prey/predator population dynamics and much more.  In business, one might find sales forecasting, or marketing simulations.

In ecology, an instructor can simulate the tipping effect of climate change when shrinking icecaps accelerate climate change with bodies of water absorbing radiation rather than reflecting it.   Students can change the values of parameters and see change accelerate.

Here are other sites and examples worth investigating:

https://blog.cathy-moore.com/resources/elearning-samples/

https://phet.colorado.edu/en/simulations/category/chemistry

https://elearning.cpp.edu/learning-objects/organic-chemistry/tlc/

 

And for the engineer:

https://www.youtube.com/watch?v=iOmqgewj5XI

https://www.youtube.com/watch?v=CFwrfoyRE6c

 

Conclusion

There are a number of ways to get started using simulations.  Finding simulation websites is one; finding cloud-based modelling tools is another.

There are a lot of elements to a simulation.   The authors of the Cornell study suggest that all too often we focus on the technology of simulation rather than on the critical educational elements that are found in the content, the level of immersion (fidelity related to the real world), the interaction, and communication.  The cost is strongly associated with the design and the production of content – the imagery, music, the interface, etc.  The interaction, however, may be accomplished relatively inexpensively with text narratives and decision-making (supported by authoring tools).  The last element, communication, can certainly be facilitated through the learning management system discussion board or group discussion in the classroom.  If we can study these elements discretely and evaluate their impact on learning, as instructional designers, we can separate high cost artwork and media production (that may have little instructional value) from low-cost instructional strategies that provide great value in terms of learning outcomes.

References

Bell, B. S., Kanar, A. M. & Kozlowski, S. W. J. (2008). Current issues and future directions in simulation-based training (CAHRS Working Paper #08-13). Ithaca, NY: Cornell University, School of Industrial and Labor Relations, Center for Advanced Human Resource Studies. http://digitalcommons.ilr.cornell.edu/cahrswp/492

Sitzmann, T. (2011). A meta-analytic examination of the instructional effectiveness of computer-based simulation games. Personnel Psychology, 64(2), 489–528.

Cook, D. A., Hamstra, S. J., Brydges, R., Zendejas, B., Szostek, J. H., Wang, A. T., Erwin, P. J., & Hatala, R. (2013). Comparative effectiveness of instructional design features in simulation-based education: Systematic review and meta-analysis. Medical Teacher, 35(1), e867–e898. doi: 10.3109/0142159X.2012.714886

 

Postscript: A Proposed Low-Fidelity, Low-Cost Simulation

On a personal note, for the last several years I’ve been thinking about low-cost simulations that pay high dividends in terms of student outcomes.  As mentioned, I’ve written about decision-making scenarios and interactive case studies.

My latest experiment has been with a model that I call a State Response Engine (SRE).  In the future I hope to write extensively about it.  Briefly, SRE presents the learner with a randomized state and requires the appropriate response.

To better understand SRE, let’s imagine this eLearning activity.  The learner is an online instructor.  The situation is that the college dean has presented the instructor with a set of learning goals.  The online instructor must follow the appropriate process in order to select, develop and evaluate activities and assessments that will align to the goal and help students achieve that goal.

The random state comes in the form of a specific student audience and learning goal.  The engine (the computer program) selects an audience (e.g., non-majors versus majors, or freshmen versus capstone students).  From that point forward, all of the learner's responses and any future random states relate to that first choice.  If the computer chooses senior students completing their capstone, all of the future states relate to senior students.  All of the resources that appear relate to senior students.  The learners can then investigate the resources for key situational factors.  The engine then randomly selects a learning goal.  The goal might involve the capstone students in promoting conceptual knowledge or putting it all together – but a goal of recalling some basic facts and figures would not be in the selection pool.

The engine then displays resources connected to the state and options in the form of learner responses.  Some of the options or choices would be valid regardless of the goal and student audience.  Others would be valid only for a specific type of knowledge or a class of learner.

The learner progresses through phases or categories.  The phases might be specific stages in a process or something else.  In this case, the phases relate to recognizing situational factors, developing objectives, designing assessments, and designing activities.  In short, a backward design process.  Some of the response options will be correct; others will be incorrect based on the randomly chosen state.  At every stage, learners will be shown links to resources that will help them make the right decisions.  After learners have chosen what they judge to be the right responses, they submit them for evaluation.  They then receive a response-by-response critique and an overall score.
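In code, the core of the idea is small.  The sketch below is only an illustration of the concept as described here, not a working SRE; the audiences, goals, and function names are invented:

```typescript
// Highly simplified sketch of a State Response Engine -- not a working
// implementation; the names and data are invented for illustration only.

interface State {
  audience: string;
  goal: string;
}

interface ResponseOption {
  text: string;
  // Returns true when the option is a valid response for the given state.
  validFor: (state: State) => boolean;
}

const audiences = ['non-majors', 'majors', 'freshmen', 'capstone students'];
const goalsByAudience: Record<string, string[]> = {
  'capstone students': ['promote conceptual knowledge', 'put it all together'],
  'freshmen': ['recall basic facts and figures', 'promote conceptual knowledge'],
  'majors': ['promote conceptual knowledge', 'put it all together'],
  'non-majors': ['recall basic facts and figures'],
};

function pick<T>(items: T[]): T {
  return items[Math.floor(Math.random() * items.length)];
}

// The engine randomly selects an audience first; every later choice depends on it.
function drawState(): State {
  const audience = pick(audiences);
  return { audience, goal: pick(goalsByAudience[audience]) };
}

// Critique each selection against the drawn state and report an overall score.
function evaluate(state: State, options: ResponseOption[], selected: boolean[]): number {
  let correct = 0;
  options.forEach((option, i) => {
    const shouldSelect = option.validFor(state);
    if (shouldSelect === selected[i]) correct++;
    console.log(`${option.text}: ${shouldSelect === selected[i] ? 'correct' : 'incorrect'} for ${state.audience}`);
  });
  return correct / options.length;
}
```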

That’s it in a nutshell.  It may or may not be a useful arrow in the instructor’s quiver, but we must continue to search for low-cost high-yield strategies that promote higher-order thinking.  I’ll continue in this pursuit and celebrate other attempts to create effective strategies.

Online Learning Trends: Risks and Opportunities

Introduction:

Our web journal focuses on specific instructional design strategies for online learning.  But in this post, I step back and address something much more fundamental – and at risk.

Online learning has tremendous potential.  I am encouraged by faculty who really want to do a great job in their online courses and continuously strive to do better.  Chances are very good that you are in that group.  You are taking the time to read this blog and explore new ways of engaging students.

Next month I’m retiring from my position as Director of the Center for Online Learning from a state university. This gives me occasion to reflect on the eight years I’ve served in this role and on current trends.  As trends would indicate, the immediate future presents faculty with both risks and opportunities.  Faculty who are invested in quality online learning should think about the immediate future very carefully and help direct policy and best practices at their institutions that advance the state of teaching and learning in this relatively new medium.

Online learning can be an instrument of good.  But because of its technological nature, it is susceptible to scale, mechanization and bad practice. At risk, at the very least,  is the autonomy and self-determination of faculty.

In our university, faculty make the critical decisions related to their courses.  They are free to make choices related to activities, assessments, instructional materials, teaching methods and course support.  When faculty are free to decide and exercise that freedom, individually and collectively, they exercise self-determination.  With self-determination comes the leveraging of faculty strengths and the recognition of their own limitations; responsibility for decisions; and substantial personal reward for success.  Self-determination means faculty can apply their competency and effect positive change in their students.

Risks to self-determination may appear in many forms.  Today, a few of the potential sources of risk include:

  • Highly competitive and large-scale online programs that discourage or eliminate fledgling entrants
  • A billion dollar Online Program Management industry that can dictate the design of courses from entrance requirements to curriculum and course design.
  • Turn-key publisher platforms that demote the decision-making of instructors

Competition, Online Program Management (OPM), and publisher resources are not inherently bad things.  I view them as risks only when they subvert faculty control. OPMs, for example,  have successfully ramped up online programs and built university enrollment.  Publisher platforms have provided course content and resources where, perhaps,  none existed.  Each of these trends, however, does impact faculty self-determination and needs to be carefully considered.

 

[Photo: adults collaborating at computers.  Photo credit: Christina Morillo, Creative Commons CC0 1.0 Universal Public Domain]

 

The Nature of Change

The nature of change in online learning can be misleading.

Many changes in this space get hyped and then disregarded when they don't achieve immediate, high impact.  But then, over time, they have profound, long-lasting impact.  The MOOC is a good example.  2012 was the hype year.  2013 was the year of disillusionment.  Today, MOOCs are a vital enrollment strategy for many universities.

(See https://en.wikipedia.org/wiki/Hype_cycle for a definition of this phenomenon.)

In a somewhat related manner, many of the changes in the last decade happened incrementally without cataclysmic impact and disruption.  And yet eLearning is in a very different place today because of them.

The Recent Past

It is eye opening to consider just a few things that the past decade has brought to us.  I’ve intentionally omitted a deeper discussion on many things such as Virtual Reality, Augmented Reality, eBooks, artificial intelligence, and so much more.  I’m sticking to a few basic things that have had profound impact on just about everyone.

Online enrollments have steadily increased

The Babson Survey Research Group showed us year after year that distance education enrollments continued to grow, even as overall higher education enrollments declined. Today, nationally, nearly a third of all higher education enrollments are online. (Seaman, Allen & Seaman, 2018).

(For more on the Babson reports see: https://www.onlinelearningsurvey.com/highered.html)

At our state university, nearly a third of our credits are earned by students in fully online classes.  More than forty percent of the credits are earned in either online or hybrid classes.  Most of our students take at least one online class each year.

Over the past eight years, online enrollments kept climbing as did the perception of faculty that online courses were qualitatively on par with face-to-face courses.  As more faculty became engaged in online learning, perceptions changed in favor of online learning.

Today, imagine the negative impact on your university if online enrollments were removed overnight.

Tools have become cloud-based

In addition to online enrollment increases, most of our tools today have become cloud-based.  Our IT department, in a metaphoric sense, is spread across the many for-profit companies that host our learning management system, media system, collaboration tools, office applications, remote proctoring, and more.  What you won't easily find as a cloud-based service is how to improve teaching and learning experiences for your own students.  Universities will need to keep online pedagogy/andragogy in their wheelhouse of expertise.

(See article that recognizes shift away from technology-focused professional development to pedagogical-focused:  https://www.insidehighered.com/digital-learning/article/2018/02/28/centers-teaching-and-learning-serve-hub-improving-teaching)

Accessibility, Mobility and Interoperability have become critical

In the past decade, legislation and compassion have demanded that we pay greater attention to accessibility for all students, including those who are visually and hearing impaired.  Our courses play on mobile devices and are adaptable to smart phones, tablets, and desktop computers.  Cloud-based services talk to one another.  The learning management system survived obsolescence by partnering with other service providers.   Our university learning management system, because of integrations with other providers, can display media from a library, check originality of student papers, remotely proctor, engage students in a discussion over a PowerPoint, and perform other services that are not innate to the platform.

It is a different world – and yet it didn’t seem to change overnight or particularly startle anyone with its abruptness.  It didn’t feel like an eruption or disruption.

The Near Future

Current trends suggest that the future won’t be any different.  It will change incrementally, but one day instructors will wonder what happened!  Related to faculty autonomy and self-determination, specifically, here are some of the critical market forces faculty should observe:

Market dominance

The annual Babson report tells us that nearly half of online students are served by five percent of higher education institutions.   Only 47 universities enroll almost one-quarter of fully online students.  Those universities will presumably have the resources to reinvest in curriculum development, instructional design, enrollment management and aggressive digital marketing.  Smaller institutions and new entrants to the marketplace may be forced out or forced to partner with each other and with external organizations in order to compete.  The challenge to faculty comes with a perceived gap between well-resourced and under-resourced programs, unnatural alliances and forced partnerships.

On a side note, the encouraging news for smaller public universities is that the majority of online students take at least one course on campus.  Most online students come from within 50 miles of campus.  Distance education is local, which means that the university can cultivate relationships with partnering two-year colleges, local employers, and community groups and market through both traditional and digital methods.

In short there is hope for smaller institutions – but only if the following are diligently and vigorously supported:

  • Strong faculty support for online development, both pedagogically and technically (instructional designers, instructional technologists, learning management specialists)
  • Strong student support (orientations, mentoring, advising, tutoring, high impact practices like first year seminar and electronic portfolio)
  • Integrated, team-based approaches to enrollment management, marketing, advising, online program development and professional development.
  • Communities of practice that encourage faculty to share best practices with one another and especially with other members of their discipline

In my opinion, the days of working in silos are numbered.  If programs are developed without market analysis and attention to enrollment/communication strategies from the start, they will not compete and will not be available to faculty and students in the future.

Instructional Design Support

In the past, the tide of instructional design has ebbed and flowed.  Today and toward the future, it is cresting.  A quick scan of Indeed.com will convince you of that.  The best programs now have a phalanx of instructional designers.  My chats with educational leaders have underscored the fact that instructional designers provide university programs with a competitive advantage.

The Online Learning Consortium (OLC) reports that as online learning has grown there has been an equivalent increase in demand for instructional designers in higher education institutions (Barrett, 2016).

(To learn more about OLC and the evolving field of instructional design, visit https://olc-wordpress-assets.s3.amazonaws.com/uploads/2018/07/Instructional-Design-in-Higher-Education-Defining-an-Evolving-Field.pdf)

Fulfilling that demand has not been consistent across universities.   In a recent survey, fewer than half of those who taught online said they had worked with an instructional designer.  The following article provides one interesting approach to sizing the number of designers to the institution.

https://www.insidehighered.com/blogs/technology-and-learning/many-instructional-designers-librarians

In my opinion, we typically don't have enough instructional designers.  Designers play a critical role in helping faculty match instructional strategies to the level and type of learning and can draw from a tool chest of techniques, applications, methods and evidence-based practices.  A recent survey of instructional designers, cited by OLC, showed that 87% of respondents have master's degrees and 32% have doctoral degrees.  Most higher education instructional designers provide faculty with direct support in design and professional development (Intentional Futures, 2016).  The result is increased student performance and satisfaction as evidenced by research studies on specific practices.

At our university, through extensive professional development we saw a growing body of faculty adopt the skill set of instructional designers.  We saw faculty who could critically evaluate online courses and discuss issues of course alignment, integrated course design, accessibility, student engagement and many of the issues that concern instructional designers and make a difference to students.

In the past, in instructional design and other areas of online learning, higher ed institutions failed to build their core competence.  Several sources identify the number of instructional designers employed by colleges and universities as 13,000. But, as the report from the Online Learning Consortium states, “There is still a certain mystery surrounding who instructional designers are.”

In short, instructional designers in a good relationship with faculty will strengthen the faculty’s ability to make good decisions and produce a good, impactful course.  Over time, faculty who design and develop online courses should acquire many of the skills of an instructional designer.  That can happen through seminars and workshops and communities of practice, learning circles, brown bag lunch sessions – all of it sponsored by faculty groups and the centers focused on faculty development and online learning.

Online Program Management

Wherever we have failed to build our core competence, external providers are ready to flood in and assist us at great cost to the university.

One category of external provider is the online program management company.  Online Program Management companies (OPMs) provide expertise and services in instructional design, enrollment management, digital marketing and other areas in support of online learning.  They provide the support through a number of revenue-sharing mechanisms.  An online program manager, for example, might help plan a program, design courses, produce courses and manage enrollment and marketing.  In exchange for these services, the Online Program Management company might receive revenue equivalent to 40 to 60 percent of the tuition dollars earned from the program for a contracted number of years.  A typical number is 10 years.

The following Eliterate article estimates that 27 companies currently provide Online Program Management.

https://eliterate.us/online-program-management-market-landscape-s2018/

The alternative is that there are external providers who will provide a needed service for a fee.  For example, if the university is weak in digital marketing, an external fee-for-service organization can help. In this arrangement, the university pays the fee up front but keeps the tuition revenue.  A growing number of companies provide services and then recover the fees through tuition revenue sharing – but only until the initial costs are covered.
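The arithmetic behind the two arrangements is worth sketching, even with invented figures.  Only the 40 to 60 percent revenue-share range and the ten-year term come from the description above; the tuition and fee amounts below are hypothetical:

```typescript
// Back-of-the-envelope comparison of the two arrangements described above.
// All dollar figures are hypothetical; only the revenue-share range and the
// ten-year term come from the text.

function revenueShareCost(annualTuition: number, sharePercent: number, years: number): number {
  return annualTuition * (sharePercent / 100) * years;
}

const annualTuition = 1_000_000; // hypothetical tuition revenue per year for one program
const upFrontFee = 1_500_000;    // hypothetical fee-for-service cost to launch the same program

console.log('OPM at 40% for 10 years:', revenueShareCost(annualTuition, 40, 10)); // 4,000,000
console.log('OPM at 60% for 10 years:', revenueShareCost(annualTuition, 60, 10)); // 6,000,000
console.log('Fee-for-service up front:', upFrontFee);
```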

Faculty need to be aware of all of these flavors of services because faculty are invested in the future of the university, and it is their autonomy that is at stake.

One of the founders of the original Online Program Management companies (but who now has a vested interest in a different business model) describes a growing dissatisfaction with the OPM revenue-sharing model:

“He compared revenue-share OPMs to the businesses in the early 2000s that built websites for millions of dollars. At the time, they were the only people who knew how to do it, but as more workers learned HTML, these companies went from ‘very valuable to pretty much out of business’ in a very short span, he said.”

Inside Higher Education, 2018

 

According to Inside Higher Ed, the bottom line is one that all faculty should recognize:

“To launch a successful online degree, institutions need expertise in instructional design, must be skilled in identifying areas where there is student demand, and must have enough funds to develop and market the program, which several sources said could cost upward of $1 million each.”

 

https://www.insidehighered.com/digital-learning/article/2018/06/04/shakeout-coming-online-program-management-companies

Publisher Platforms

Business analysts predict that the US digital education publishing market will register a compound annual growth rate of close to 12% by 2023. (Research And Markets, 2019) The digital education business is a huge and growing market.

Online faculty can choose to use digital publisher resources for part or all of their courses.  Textbooks often come with a publisher-based online learning platform where students can engage with course material.  In many cases the publisher platform is integrated with the university learning management system.  Students log in to their university online course and seamlessly connect to the publisher resources without a second log in and in many cases with no awareness that they are accessing the publisher platform. In some cases, the reverse is true.

Key players in the U.S. digital education publishing market are Cengage Learning, Inc., Houghton Mifflin, McGraw-Hill Education, and Pearson.

The upside to publisher platforms is that they save instructors time and that publishers are continuously improving their offerings, which, in some cases, include adaptive learning.  (McGraw Hill’s LearnSmart, for example.)  The downside is that, for some platforms, answers to quizzes and solutions to problems are discoverable on sites that students use in order to cheat on their assignments and exams.

The more insidious downside to publisher platforms is that they can lead instructors to acquiesce to all of the critical design decisions of a course.  In some, hopefully rare, cases instructors substitute publisher PowerPoints for their own advance organizers, explanations, guiding questions, graphical illustrations, and materials that are contextualized for the specific circumstances of the students, program and environment.

As one online program manager cautions:  “Never allow publisher-made materials to be the meat of your course!”

Learning House

Adaptive Learning

Adaptive Learning has huge potential and should be continuously monitored and repeatedly evaluated – but again, the role of the faculty member should be carefully considered.

Contrasted with traditional Learning Management System content, adaptive is not a ‘one size fits all’ learning product.  Typically,  we structure topics within a learning management system in a sequence.  All students, regardless of knowledge, experience or ability move through the same sequence.  Adaptive Learning, in contrast, assesses students on what they know and what they need to learn.  Students then view or engage in the content that they need.  If students miss items or lack confidence, then the adaptive system connects them to the appropriate prerequisite skills.
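Set the products aside for a moment; the underlying idea can be sketched in a few lines.  The topic names and the mastery data below are invented for illustration: each topic declares its prerequisites, and the system serves the first unmastered topic whose prerequisites have been met.

```typescript
// Minimal sketch of the adaptive idea -- not how any particular product works.
// Each topic declares its prerequisites; the system routes the student to the
// first topic whose prerequisites are mastered but which is not yet mastered itself.

interface Topic {
  id: string;
  prerequisites: string[]; // ids of topics that must be mastered first
}

function nextTopic(topics: Topic[], mastered: Set<string>): Topic | undefined {
  return topics.find(
    (t) => !mastered.has(t.id) && t.prerequisites.every((p) => mastered.has(p))
  );
}

// Hypothetical pre-algebra-to-algebra pathway.
const pathway: Topic[] = [
  { id: 'fractions', prerequisites: [] },
  { id: 'ratios', prerequisites: ['fractions'] },
  { id: 'linear-equations', prerequisites: ['fractions', 'ratios'] },
];

// A placement assessment shows the student already knows fractions,
// so the system serves ratios next instead of starting from the beginning.
console.log(nextTopic(pathway, new Set(['fractions']))); // -> ratios
```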

Adaptive Learning solutions are available in a variety of forms.  For one, they are available as turnkey systems.  McGraw Hill's ALEKS is a popular product that assesses and teaches math subjects that range from pre-algebra to calculus.  They are also available as open platforms in which an instructor or department can build content and sequence learning pathways that capture the prerequisite relationships between topics.  Examples of open adaptive learning systems include Acrobatiq, CogBooks, and Brightspace LeaP.  Many of these platforms can be integrated with learning management systems through an interoperability standard called LTI (Learning Tools Interoperability).

(For a glimpse into adaptive learning, visit: https://campustechnology.com/articles/2019/04/24/new-frontiers-of-adaptive-learning.aspx)

Once the adaptive system has been designed/adopted and deployed, faculty need training on how to facilitate a group of students who are progressing at their own pace but still need the academic and social support of their peers and instructor.  There are many design decisions related to how an adaptive system dovetails into a course – and faculty need to be at the center of that decision-making.

Open Educational Resources (OER)

Open Educational Resources are already impacting us in so many ways.  You might be surprised to hear faculty denounce open textbooks, for example, and yet find them in your book store.  Faculty can engage with OER on so many levels.  They can find open resources cataloged in dozens of repositories such as OER Commons (https://www.oercommons.org/ ) and Merlot (https://www.merlot.org/merlot/).  They can purchase completely assembled OER-based courses from, ironically, publishers who earn more from their digital platforms than from underwriting and maintaining original content.  They can use repositories like OpenStax (https://openstax.org/ ) to find complete textbooks or sign up for a free account in OpenStax CNX (https://cnx.org/), which gives granular access to open material at the page and module level.  Finally, faculty can participate in the creation of OER by creating content, assessments, learning objects and supplementary material and posting them to a repository.  In our state, we’ve just launched Opendora (http://www.opendora.com/ ) that houses materials created by MinnState faculty.  Faculty can also participate in textbook reviews.   In other words, faculty can engage in the use of OER in many ways before even considering authoring a book and making their intellectual property freely available.

Conclusion

Current trends and practices offer support to faculty, but also have the potential of rendering instructors passive bystanders in their own courses.  The online learning space is becoming more competitive and expensive.  To many, this seems counter-intuitive. After all, online learning should be opening up new markets and it should be cheaper.  Universities can decrease their physical footprint!

The reality is that universities will either invest internally in multifaceted teams in support of strategic program development or pay outsiders to design, build and market online programs.  Potentially, instructors could be supported or sidelined.   We will either invest in instructors populating adaptive systems or purchase off-the-shelf solutions that may not, in the end, be well adapted to our learners.  We will either support rich curriculum development or populate online courses with publisher materials and, in the end, pass on the cost to students.   We will either use OER in new ways of engaging students or purchase turn-key solutions built entirely on OER.

Faculty have the greatest stake in the future direction of the university and the impact of these key trends.  Their own autonomy and academic freedom is at stake.  Faculty need to be aware of the issues and be present wherever decisions that impact curriculum development are made.

References:

Michael Feldstein’s Blog (industry observer) eLiterate
https://eliterate.us/

Phil Hill’s Blog (industry observer)
https://philonedtech.com

Wil Thalheimer’s Debunker Club (research to practice)
https://debunker.club/debunking-resources/

Online Learning Consortium
https://onlinelearningconsortium.org/

Inside Higher Ed
https://www.insidehighered.com/digital-learning/views/2018/04/04/are-we-giving-online-students-education-all-nuance-and-complexity

Publishing Market Research
https://www.researchandmarkets.com/reports/4764929/digital-education-publishing-market-in-the-us

The Challenge of Online Learning is Challenge

Introduction: Placing Students at the Center

We know the capacity for good movies to stimulate our curiosity, make us alert, shock us, and tug on our emotions.  Movies are carefully scripted to evoke those audience experiences and maintain our interest.  The script and the production can’t be contrived without careful attention to their impact on the audience.

In higher education, we ought to be thinking about eLearning and our impact on students in a similar way but, evidently, we don't.  We ought to be thinking about student emotion, curiosity, and motivation.  Instead, we focus on content and appeal to the student's sense of order and stability.  We reduce surprise and meet expectations.  In a somewhat cynical sense, it is a transaction.  Students need to know what to expect to plan their time, block out their schedules, and reduce the risk of failure.  If they do these things correctly and put in the time, they get credit.

And so, we design accordingly.  As instructors, we introduce ourselves.  We then focus on good housekeeping.  We tell students about the goals and objectives.  We tell them what they are expected to do.  Each assignment has a clear due date and a point value.  And so on.

We think that good eLearning should have few surprises.  Good eLearning should meet all of the criteria of a good housekeeping rubric.  Objectives are stated.  They are measurable.  They align to activities and assessments.  All materials support the goals.  Technical support is identified.  And so on.

And it’s not that these things aren’t important.  It’s just that our rubrics don’t probe the depth of student experience in an online course.  In one quality review rubric, the learning activities occupy only one section.  The goal of active learning is but one essential standard out of many.

Some of our thought leaders decry the present state of online learning.  M. D. Merrill called it shovel-ware.  Michael Allen calls it boring.   Cathy Moore calls it an information dump.  Will Thalheimer says that ‘eLearning has had a reputation for being boring and ineffective at the same time it is wildly hyped by vendors and eLearning evangelists.’

(https://www.worklearning.com/wp-content/uploads/2017/10/Does-eLearning-Work-Full-Research-Report-FINAL2.pdf)

But more and more of higher education is being delivered online.  Students demand it.  So then, what is the remedy to this boring, ineffective, information dump hyped by eLearning evangelists like me?

The critics give us the answers – if we would only listen.  Michael Allen proposes learner challenge as a source of motivation and interest.  M. David Merrill, in his First Principles of Instruction, puts learner problem solving at the center.  Others have written about discovery or inquiry or active learning or constructivism.  All of these things put the learner at the center, and not the content.  As Michael Allen says, 'Content is not king.'  No, content is not king; it's not even prince.

Will Thalheimer, based on his extensive research writes:

In general, providing learners with realistic decision making and authentic tasks, providing feedback on these activities, and spreading repetitions of these activities over time produces large benefits.

https://www.worklearning.com/wp-content/uploads/2017/10/Does-eLearning-Work-Full-Research-Report-FINAL2.pdf

So, what can we do as instructors to think in terms of student experiences, active learning, problem-based learning, emotion, curiosity, surprise, novelty, realistic decision making, authentic tasks, constructive feedback, repetition – and all of the things that place learners at the center of our design rather than content?

One answer lies in challenge.

Mystery Skull Interactive Challenge

The screenshot below represents an activity that engages students with a challenge designed by the Smithsonian Institution.  In the activity, students drag skulls into boxes, rotate them, and try to identify the species of each skull.  When stumped, students can ask for hints.

The same content without the challenge is covered in hundreds of courses.  You can picture the familiar old pattern.  You can anticipate that there would be a topic named 'Homo habilis'.  The topic would feature several paragraphs of text, perhaps with pictures, that describe the distinguishing features of this species.  Homo erectus would be dutifully covered and then on to Homo neanderthalensis and Homo sapiens.  The instructor might even link to the Smithsonian activity.

What if, instead, the instructor designed the course with the challenge at the beginning or at the metaphorical center of the course?  The course content would serve as a resource to help students master the challenge.  In the challenge, students examine and compare skulls.  When stumped, they consult the hints – and look up resources.  In this scenario students play an investigative role.  They are immediately challenged and immersed in the heart of the course.  Their natural curiosity is piqued.  They experience the 'pain' of failure when they make incorrect choices.  Their imaginations are stirred as they role-play the scientist.

http://humanorigins.si.edu/evidence/human-fossils/mystery-skull-interactive

It may be difficult to imagine instructors designing challenges.  The Smithsonian Institution obviously had a budget.  It is evident in the media.  The skulls are in 3D.  Students can rotate them.  The interface is beautiful.  The learning object was done in Flash, which took some scripting.

Let’s set aside the media production for a moment. (We’ll return to that in the conclusion.)  Let’s focus first on the value of challenge.  If we had the resources and the creativity to design our courses around a challenge, is there value in that?

[Screenshot: Smithsonian Institution Mystery Skull Interactive]

 

CCAF Model

Michael Allen would say there is tremendous value in challenge – and challenge is an important element in his CCAF model of design.  CCAF represents context, challenge, activity and feedback.

The CCAF model should be a source of inspiration to instructors who design online courses.  In brief, CCAF is where the fun and, forgive me, the challenge of instructional design begins.  Let’s explore CCAF for a minute.

Context

What motivates your students?  What is the situation in which they will be able to use and apply the learning?  Your course isn't some abstract, impractical thing.  It has relevance.  It has meaning.  It will impact your students' lives.  It will make a difference.  Imagine the context in which these things are true.  Imagine the setting that makes the course material relevant, interesting, appealing, and life-like.

Dr. Linda Rening, an instructional designer, writes “What would the learner see, feel, and experience while he or she performed the correct behaviors?”

In nursing, we might place students in the context of a surgery or an outpatient's home.  In managerial accounting, we might place students in an organization and ask them to gather and analyze information in support of the organization's strategic goals.  In history, we might place students in Britain in the 1930s, faced with the challenge of appeasement versus aggression.

Challenge

‘Challenge’ turns traditional course design on its ear.   As I’ve said, many courses follow a tired pattern.  That pattern invites this prescription:  provide housekeeping details, state objectives, present content and assign readings, elicit performance, provide meaningful feedback (sometimes), and assess.

A ‘Challenge’ activity engages students immediately in thinking about the course content and using it in some way – perhaps unsuccessfully at first.  When developed artfully and skillfully, the ‘Challenge’ will immediately cause the learner to ‘feel’ the relevance of the course material and recognize the difference between what they know and don’t know. They will feel pain.   If the ‘Challenge’ presents too little pain, the student may develop a false sense of security about what they know.  If too much pain, the student may be scared off.

Challenge addresses all aspects of motivation.  Challenge causes students to act – to solve problems, to make decisions, to consult resources, etc.  The right level of challenge causes students to persist.  They engage because the challenge is not too easy and not too difficult.  It’s the Goldilocks engagement.  Just the right level.  Finally, Challenge engages students with a level of intensity.  That vigorous engagement promotes retention.  The things we work harder on are those that are remembered.

In a course on public leadership, the challenge might be to write a testimony in support of a provision in a legislative bill.  Students would have to draw upon their knowledge of history, the law, the public sentiment, and other things in support of their testimony.  In computer science, the challenge might be to write code to perform a task in as few lines as possible.   In law enforcement, students might play the role of a parole officer who needs to assess risk of recidivism without offending the client.

In all of these cases, when students are challenged early in the course, they might recognize what they don’t know and be more open to learning.

Activity

The challenge connects to activities that students must do to increase their level of knowledge and skill.  In a sense, the challenge provides enough cognitive dissonance to motivate the learner to learn.  Cognitive dissonance is a perceived inconsistency between what the learner knows and ought to know to realize the course outcomes.  That difference leads to discomfort that the learner is motivated to reduce.  Too much dissonance can be debilitating.  The right amount is motivating.

Loosely, Dr. Allen’s CCAF model is like M. D. Merrill’s ‘First Principles of Instruction’, which begins with the principle that learning is promoted when learners are engaged in solving real-world problems.   The models are similar in that they are problem-centered. The challenge causes learners to pull in knowledge as needed.  Enough cognitive dissonance is created that motivates learners to seek the resources that will lessen the discomfort of ‘not knowing’.  This is quite different than and an improvement over, for example, a presentation event of instruction. ‘Presentation’ suggests a push to students of relevant information, much like Robert Gagne’s ‘Present the content’ event of instruction.

In sharp contrast, many of our courses are content-centered and not challenge- or problem-centered.  Many of our online courses start the same way.  Students meet the online instructor through some form of an announcement and then get promptly led to the course housekeeping documents that spell out course title, instructor contact info, course prereqs, description, objectives, reading list, etc.  Another section or document may spell out what is due at what time and for what number of points.  The transaction.  Then there's the boilerplate technical and disability support information and so on.

Before even reaching course content, students have run the gauntlet of course housekeeping information.  One horrific development is that occasionally students will run the gauntlet only to find in the main course content a series of publisher PowerPoints, interspersed with quizzes and a major project.

A cynic might look upon online courses as entirely transactional.  Students will say ‘if I do the work, I’ll get the grade.’  Or ‘I take online courses because it’s convenient.’   Today’s students balance work, life, family, and … school.  They understand that a certificate or a degree will lead to a job or better pay.  They will exchange their time and effort for earned credits.  They will accrue enough credits to graduate.  And then they will redeem that time and effort in the form of credits for a job or better pay.

Unlike the cynic, our inner instructional designer says that we can create a better learning experience for students online.  Why better?   We can individualize instruction.  We can programmatically add instruction that will help learners overcome obstacles.   We can challenge students in a ‘safe’ environment where their lack of knowledge isn’t exposed.  We can encourage students to take chances without the risk of embarrassment.

Will Thalheimer writes that online and hybrid courses often outperform traditional face-to-face courses.  He asserts that it's not the modality of learning that makes the difference.  It is the teaching and learning methods used in the course.

The bottom line is that eLearning in the real world tends to outperform classroom instruction because eLearning programs tend to utilize more effective learning methods than classroom instruction, which still tends to rely on relatively ineffective lectures as the prime instructional method.  (Thalheimer)

Feedback

Feedback that is detailed and specific and directly related to the learner’s action is a critical element to any learning and particularly important in online instruction.  In the CCAF model, feedback is best when it can be applied to future actions.   Generally, in online learning, students benefit when they receive feedback from one quiz, project or activity that they can apply to the next.   In a CCAF challenge, students act and then receive guiding feedback that will help them with future actions.

So how do I apply CCAF to my course?

Briefly, a strategically placed challenge toward the beginning of the course might provide the level of motivation that causes students to act, to persist, and to work toward their goals with intensity.  This challenge might take the form of a case study, a decision-making scenario, or an analysis.

Early challenges can be holistic and realistic.  By holistic, I mean that they resemble life itself and bring together the entire scope of the course in terms of facts, principles, rules, concepts, and problems.  Developmental challenges can be more focused on some subset of the course – building skill that can later be applied to the holistic challenge.  Students should get better with repeated attempts as they draw upon the course content.  Realistic challenges should help students transfer skills from the classroom to the real world.

Returning to the Fossil Challenge

Granted, the Smithsonian Institution’s Fossil Challenge is both engaging and beautifully designed.  But its real value is in getting students to think.  Low-tech alternatives can engage students just as effectively.

With a little ingenuity, most instructors already have the tools they need.  With the following skills, one can incorporate challenges into learning management system content pages:

  • Creative storytelling
  • Editing and importing images
  • Editing and importing audio
  • Creating hyperlinks
  • Embedding Web 2.0 content from cloud-based applications

 

Instead of rotating objects, for example, instructors can embed a SlideShare viewer into their course or creatively display a series of photos in a film strip.

In the example below, still photos were dropped into a PowerPoint Online template, saved to a public OneDrive folder,  and then embedded into a course.
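For readers who want to see what this looks like under the hood, here is a generic sketch of an embedded viewer.  The actual embed code is generated by the application’s own Share or Embed dialog (PowerPoint Online, OneDrive, or SlideShare); the src value below is only a placeholder, not a working address.

  <!-- Generic embed sketch: paste the code generated by the application's
       Share/Embed dialog into the HTML view of an LMS content page.
       The src value is a placeholder. -->
  <iframe src="https://onedrive.live.com/embed?resid=YOUR-FILE-ID"
          width="610" height="367" frameborder="0" allowfullscreen>
  </iframe>

Most learning management systems accept this kind of snippet through the HTML (source code) view of their content editor.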

 

Screenshot of the ‘Mystery Skull’ PowerPoint Online template, which can be saved to a public OneDrive folder and embedded into a content page

 

Conclusion

Context, Challenge, Activity, and Feedback are all critical to motivating and effective online learning.  Online instructors can reorganize their courses with the contextually relevant challenge at the center of the course, complete with activities and feedback to build student skill.  Most of the ingenuity is in the storytelling, the setting up of scenarios, and enabling students to make choices.  If challenges cause students to think or motivate them to learn more, then the online course will be effective and students will benefit.

 

Aligning Strategies to Types of Knowledge

The challenge:

Cross-country skiers use one type of wax for all conditions.  After all, snow is snow.

That statement is obviously absurd.  Snow varies in age and moisture.  Waxes behave differently given the temperature.  Skiers have different objectives; they may want their skis to grip or glide or both.

Similarly, instruction is instruction.  Course content dictates what must be taught and how.  Again, obviously absurd – but perhaps not so ‘obvious’.  Instructional strategies should vary based on the students, the situational factors as well as the level of learning and type of knowledge represented by the learning outcome.  Instructors need different waxes and techniques based on the conditions.

Successful online course design requires a fundamental shift from instructors being content-centric to being aware of the snow, the temperature and the outcomes.

When I attended university, there was little attention to the pedagogy of instruction.  Snow was snow. It was up to the student to work out the strategies for success.  The professors were knowledgeable and inspiring.  The best of them provided coherent and sometimes fascinating lectures.  History teachers could conjure up the 1905 Winter Palace revolution; biology teachers got animated over the hermaphroditic activity of earthworms; and so-on.  The stage was set but it was up to us to make sense of the lectures, strategize on how to understand, remember, and recall the pertinent information; and perform well on the assignments and exams.  It was college. It was expected.

Given that more than one-quarter of students drop out of college after their freshman year, clearly something isn’t working.  The reasons might be primarily social and financial, but they certainly include the academic.  Students who don’t have the strategies to learn in a university environment become academically disconnected very quickly.

Online learning doesn’t inherently help the situation.  In fact, it might accelerate a student’s problems.  Online faculty find it more challenging than traditional on-campus instructors to facilitate genuine discourse among students and to engage students with the subject matter.

Online faculty also find it more challenging to gauge how their students are doing.  Faculty don’t get the immediate feedback from students online that they do in the classroom.  Is this content reaching students?  Is it going over their heads?  What questions do they have?  That immediacy doesn’t inherently exist in an asynchronous online environment.

It is therefore more critical than ever to take a teaching and learning approach to online instruction.  By ‘teaching and learning’, I mean that we need to understand the component skills (Ambrose, Bridges, DiPietro, Lovett, and Norman) that we are trying to develop in our students.  We need to understand what type of knowledge those skills require and what strategies are best matched to those types of knowledge (Smith & Ragan).  What level of learning are we hoping our students will achieve?  Are they to remember key facts, understand important concepts, apply their learning to new situations?  Are we trying to promote retention of information or application of knowledge in novel situations?  What precisely are we trying to do?

Online environments, because of their remoteness, require that students practice and perform.  They require that students receive periodic feedback – feedback that they can apply to future assignments.  So, rather than one high stakes test, an online course might include multiple assignments that help the students develop in stages.

Instructors may need to be analytical about the course content.  What levels of learning are involved: remembering, understanding, applying, analyzing, evaluating, creating?  What types of knowledge: declarative, conceptual, procedural, attitudinal, and/or strategic?  What strategies will promote that knowledge?

This post provides a simple example based on photography.  The art and science of taking good photographs involves many types of knowledge and thereby invites different instructional strategies to help students acquire that knowledge.  Hopefully, you’ve taken pictures and enjoy looking at photographs.  Some simple technical elements are introduced in this example.  Many people will recognize them.  But for those who don’t, I’ll provide a short explanation along the way.

Camera diagram with two labels: aperture and shutter

Declarative knowledge

The camera diagram above presents two labels:  aperture and shutter.   Both of these things feature prominently in the making of photographs.  At a declarative knowledge level, students should be able to identify an aperture and shutter, given an illustration.  This alone, however, is unlikely to be the end goal of instruction.  Labeling parts of an engine doesn’t mean you can fix an engine.  Labeling parts of a camera doesn’t mean that you take creative photographs.  The ability to label is a ‘stepping stone’ type of objective – but a necessary stepping stone to understanding the concepts of exposure and depth of field and the use of those concepts in the composition of a photograph.

In our design, we might choose to focus first on labels and definitions and then on concepts.  Or we might choose to deal with concepts and definitions concurrently, in a more integrated manner.  In the former, we might choose to reduce cognitive load so as not to overwhelm students.  In the latter, we might want to show the immediate relevance of these elements to a conceptual understanding.  It obviously depends on the students and the context.  Those are decisions that an instructor is in the best position to make.

Whether or not we tackle declarative and conceptual knowledge as discrete instructional steps, we must recognize that they are separate.

A student demonstrates declarative knowledge when s/he can point to the opening in a camera lens and identify it as an aperture, or see an illustration of a shutter and identify it as such.  When a student can define an aperture as a controllable, variable opening in a lens, or a shutter as a device that lets light pass through for a precise length of time, then the student is demonstrating declarative knowledge.  In fact, the student can include these terms in organized discourse that makes him or her appear very knowledgeable.  Use ‘focal plane shutter’ in a sentence and you sound quite technical.

Yet organized discourse can be quite misleading.  A student may have no knowledge of the underlying concept of exposure or how to use aperture and shutter strategically to solve a composition or exposure problem.  An assessment that requires students to label and define things or use the terms in an essay might only be assessing declarative knowledge.  Again, that is probably not the end goal.

 

Conceptual knowledge

As mentioned, both aperture and shutter relate to the concept of exposure.  Exposure, in photography, is the amount of light that reaches a digital camera sensor or the light-sensitive crystals on film.  Controlling exposure with aperture and shutter is a balancing act.  The larger the lens opening (aperture), the shorter the time the shutter should stay open (shutter speed) to achieve proper exposure.  The smaller the lens opening, the longer the shutter should stay open.  If the shutter were opened for too long without a balancing small aperture, or for too short a time without a balancing large aperture, then the picture would be over- or underexposed.  That is the concept of exposure and its related measure, exposure value.  It can be understood mathematically as EV = log₂(N²/t), where N is the f-number and t is the shutter time in seconds, or metaphorically as a balancing seesaw.  In either case the understanding is a conceptual understanding.
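To make the formula concrete, here is a quick worked example with my own numbers (not drawn from a referenced source).  At f/8 with a shutter time of 1/125 of a second:

  EV = log₂(N²/t) = log₂(8² ÷ (1/125)) = log₂(8000) ≈ 13

Open the aperture one stop to f/5.6 and halve the shutter time to 1/250 of a second, and the exposure value stays essentially the same – the seesaw remains balanced.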

The strategies for declarative and conceptual knowledge will be different.  For declarative knowledge, we might help students relate to what they already know: the pupil of an eye for the aperture; a window shutter for a camera shutter.  We’ll also come up with strategies for retention and retrieval.  For conceptual knowledge, we might use the analogy of a seesaw or have students craft an equation that requires an increase in one variable to offset the decrease in another.

In addition to exposure, aperture and shutter speed have a significant impact on the composition of a photograph.  The larger the aperture, the shallower the depth of field, which means that objects in the foreground and background will be blurrier.  The longer (slower) the shutter speed, the blurrier moving objects will be.  If you want to focus on a goldenrod and blur out the plants in the foreground and background (left), you choose a large aperture.  If you want to focus on a single branch and blur the background (top right), you choose a large aperture.  If you want to sweep across pines against the sunset and essentially paint with light, you set the shutter to a very slow speed (bottom left) and prevent overexposure with a very small aperture.


Compositions with aperture and shutter speed.  Left and top right images show small depth of field.  Bottom left image shows slow shutter speed.

Students can be presented with photographs that show these concepts in play.  They can be asked to guess at the aperture and shutter speed settings.  They will look for exposure and blurriness in the foreground, background and subject.  This is analysis.  The types of knowledge (declarative and conceptual) now interrelate with remembering terms, understanding concepts, applying concepts and analyzing.  We remember the definition of aperture; we understand that exposure value is a relationship of aperture to shutter speed; we apply our knowledge of aperture to blur a background; and we analyze a photograph for evidence of camera settings.  In short, levels of learning (Bloom’s taxonomy) intersect with types of knowledge.  Richard Mayer wrote about this in Applying the Science of Learning.  Patricia Smith and Tillman Ragan wrote about this in Instructional Design, as did many after them.

Strategic knowledge

Declarative knowledge supports conceptual knowledge, which supports strategic knowledge.  We might present the students with a composition problem that can only be solved by using aperture and shutter speed strategically.  Perhaps it is an unsolvable problem that requires yet another element: film speed or film sensitivity (ISO).

Some instructors might choose to begin with a composition problem – requiring students to work backwards to the underlying concepts and underlying declarative knowledge.  Some instructors will combine types of knowledge and reveal the interrelationships of things sooner rather than later.  Whatever the overall strategy, a clear awareness of types of knowledge will help in the instructional design.

Conclusion

When instructors think about the component skills, levels of learning and types of knowledge and all of the factors that will impact students acquiring, assimilating and applying new knowledge, instructors are practicing instructional design.  Instructional design places the learner rather than the content at the center of focus.  Intentional, instructional design promotes better courses and increases the probability that students will be successfully engaged in achieving the course outcomes.

 

References

Smith, P. L., & Ragan, T. J. (2005). Instructional design. Hoboken, NJ: J. Wiley & Sons.

Ambrose, S. A., Bridges, M. W., DiPietro, M., Lovett, M. C., & Norman, M. K. (2010). How learning works: Seven research-based principles for smart teaching. San Francisco, CA: Jossey-Bass.

Mayer, R. E. (2011). Applying the science of learning. Boston, MA: Pearson/Allyn & Bacon.

Interactive Case Studies: First Steps

Introduction

The complaint against eLearning is all too common:  eLearning applications are boring page turners.  The implication is that students flip through the material, learn enough to pass the exam and move on.  The experience is transactional; not transformational.  No behavioral change.  No cognitive change.

Interactive case studies are one strategy to remedy the problem – but, frankly, they are a bit of a challenge to create.  In past articles, I’ve introduced some of the research that supports the use of case studies.  I also introduced interactive fiction as a way of getting started.  If you haven’t read those posts, no matter: I’ll introduce a new example in this article and then move on to ‘first steps’.

Interactive Case Studies aren’t a recent tech fad.  The example that I cite in this post dates back to 2006, but it is as relevant today as it was then.  The strategy stands the test of time.  More importantly, the ‘interactive’ nature of the case study is easy to reproduce technically.  I chose this example because it demonstrates that even the simplest approaches can be effective.

The example is taken from case studies that were created in the Department of Rheumatology, School of Medicine, University of Birmingham.  Thirty interactive case studies were created in all.  The following is a description of one of them.  Several critical points are illustrated by this example.  Hopefully, they will motivate you to take the first steps in creating your own case study.

Background

The authors developed an interactive learning tool for teaching rheumatology. Their reason for doing so is best explained in their own words:

“Problem solving and decision analysis are essential skills for medical students and practitioners alike. The existing medical curriculum requires that medical students have a large factual knowledge base, and as such teaching has traditionally been through lectures and rote memorization paying little attention to nurturing key problem-solving skills.” 1


1. Wilson, S., Goodall, J. E., Ambrosini, G., Carruthers, D. M., Chan, H., Ong, S. G., Gordon, C., & Young, S. P.

Rheumatology, Volume 45, Issue 9, 1 September 2006, Pages 1158–1161, https://doi.org/10.1093/rheumatology/kel077

 

Description of case studies

The rheumatology cases are short, reducing the burden on both authors and students.  In the graphical user interface, button clicks bring up resources.

The skill required to place buttons or hyperlinks on a web page is minimal.  Many authoring tools (Adobe Captivate, Articulate Storyline, and LodeStar) provide the ability to connect pages through button clicks or links.  Alternatively, you can partner with your computer science, technical communications or web design department and request a student who knows HTML and is comfortable with some basic JavaScript coding. (JavaScript is a popular scripting language that is commonly taught in schools.)

In the rheumatology case studies, buttons link to a physician letter, or a library that provides a range of background information.

Screenshot of a rheumatology case study featuring links to resources (Department of Rheumatology, School of Medicine, University of Birmingham)

As pictured in the screenshot above, students can request patient details, ask questions, examine the patient, order tests and so forth.  In thinking about how you might replicate this in your own course, you should know that this is relatively simple to produce.  Patient details can be listed on a web page, contributing to the complete picture the student needs in order to make a diagnosis.

In the case study, the student navigates through a series of screens, each providing critical clinical information.

The user can order tests, but they come with a ‘real-world’ consequence:  a financial cost is incurred that gets tallied by the program.  This type of thing requires some simple JavaScript coding.   The costs are assigned to a variable that is shared by all pages.  If you wish to avoid that technical hurdle, you can state the cost of a test and still make an impact on the learner.
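For those curious about what that ‘simple JavaScript coding’ might look like, the following is a minimal sketch of one way to keep a running cost total shared by all pages of a case study.  It is my own illustration, not the Birmingham team’s code; the element id, storage key, and test cost are hypothetical.  The browser’s sessionStorage holds the tally so that it survives as the learner moves from page to page.

  <p id="costTally">Expenditures so far: $0</p>
  <button onclick="orderTest(25)">Order rheumatoid factor test ($25)</button>

  <script>
    // Read the running total; zero if no tests have been ordered yet.
    function getTotalCost() {
      return Number(sessionStorage.getItem("caseStudyCost") || 0);
    }

    // Add a test's cost to the shared total and refresh the on-screen tally.
    function orderTest(cost) {
      sessionStorage.setItem("caseStudyCost", getTotalCost() + cost);
      document.getElementById("costTally").textContent =
        "Expenditures so far: $" + getTotalCost();
    }

    // Show the current total whenever a page loads.
    document.getElementById("costTally").textContent =
      "Expenditures so far: $" + getTotalCost();
  </script>

At the end of the case study, the same running total can be reported back to the learner along with the feedback.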

After the student collects information on the patient, s/he makes a diagnosis and prescribes treatment.  After completing the case study, the student is provided with feedback and a tally of the expenditures.

Formative Assessment

Students also take an assessment in the form of multiple-choice questions that test their knowledge about rheumatology. The student can repeat components that match missed test items.

Summative Assessment

Undergraduate students were asked to produce a written report based on one of the clinical cases.

“As part of this assessment the students were expected to:

Apply their investigative skills to diagnose a range of clinical rheumatological conditions.

Explain the use of a range of clinical and scientific investigations that are required to make a successful diagnosis.

Reports were marked by two independent rheumatologists according to the reporting of the approximately 30 pieces of information or actions relevant to the case, which they were able to find, and the explanation of how these were used in the diagnosis and treatment. Student marks ranged from 55 to 95% with the majority of students gaining 70% or more on their assignment reports. All students achieved both the learning outcomes, indicating the usefulness of the approach used.”

Student satisfaction

In a survey, twenty-eight undergraduate students out of thirty-one responded positively to the interactive case study.  Only thirty-eight out of fifty-three graduate students found the program useful enough to use in the future.  The sharp difference between undergraduate and graduate students may be attributed to access that students had to the case studies.  Graduate students were restricted to one case study.

“Both groups agreed that the program was well organized and clear, the cases were of appropriate difficulty (complexity), that it was realistic and that they had learnt from it.”

Now it’s your turn

This and past posts have made it evident that interactive case studies can be useful.  But given your time and technology constraints, how can you create your own case studies?

To get started writing interactive case studies, follow these suggestions:

  1. Consider patterning your first case study on one that is offered through Open Educational Resources (OER). In most cases, the author has thought through the case study and has done the hard work of including just enough detail to make the case educative and realistic.
  2. Keep it simple. Use button clicks or hyperlinks to enable students to navigate through the case or bring up resources.
  3. Include an analysis activity that requires the learner to consider the ‘evidence’ of the case and offer an opinion.
  4. Include the ability of the learner to compare his/her analysis with that of the expert or peers.
  5. Use the case study to prompt discussion.

Authoring an interactive case study might be a challenge at first.  It’s a bit like creative writing – crafting a story that reveals critical information at the right time.  Terse, yet engaging.  Focused on one important requirement: the case study must help the student achieve an outcome.

Interactive Case Studies require ingenuity, time and a little technical know-how.  To help faculty and instructional designers get started, I offer a simplified method.  The intent is to get students immersed in the story, drawing upon their knowledge to choose paths, make decisions, offer an analysis and share with other students.

Interactive case studies can offer lots of bells and whistles.  In contrast, this is a simplified approach – more like an interactive story or a choose-your-own adventure.  Our inspiration came from a finance professor at our university.  We started with an Open Educational Resource titled Personal Finance by Rachel Siegel, which our finance professor selected.

An important side note:  Personal Finance is now in its 3rd edition, and is available from Flatworld.  Flatworld’s stated mission is:  We are rewriting the rules of textbook economics to make textbooks affordable again.

Personal Finance begins with the story of Bryon and Tomika, a young couple who are currently in school and plan to get married soon.  Both will earn at least $30,000 in their first jobs after graduation and will likely double their salaries in fifteen years – but they are worried about the economy and about their job prospects after graduation.  They have critical decisions to make to secure their financial future.

Rachel Siegel follows the case study with questions that the young couple or a financial advisor should answer about their situation.  She then proceeds to outline the macro and micro factors that affect thinking about finances.

Set a Learning Goal

Before getting to work on patterning an interactive case study on the story in the text, we need to be clear on the learning goal.  You shouldn’t start any eLearning development without a clear goal in mind. You need to answer what learning outcomes the learner will achieve by engaging in the case study.   Rachel Siegel’s intent was to use the case study to make a point: there are a lot of factors to consider.  Our goal in the example was to use the case study to help the learner identify the macro and micro factors that affect finances.  In other words, we narrowed the scope.

Find an existing case study

In an effort to keep things simple, we patterned our case study closely on the one that the author had already thought through and communicated well.  The same approach might help you get started.  Find a case study narrative in your own field and pattern your own case study closely on it.  If it’s a good case study, it will be short but include enough detail to provide interest and a learning situation.

Case studies are found in open educational resources (eBooks, PDFs, learning repositories like Merlot.org).  They are also found on case study websites like:

Science Cases

AMA Case Studies

But be careful.  Case studies are usually copyrighted.  Seek permission or ensure that the case study or text is offered under a Creative Commons license.

So, to recap, the first step is to set a learning goal.  The second step is to find a case study that already exists in the literature or on the web that you can pattern your case study on.

The design of our interactive case study provides the reader with a story (closely patterned on the text) and challenges the reader to determine which factors threaten the financial health of the characters.

This is a stepping-stone case.  It is not a ‘putting it all together’ case, where there are numerous factors, no clear-cut right and wrong answers, and plenty of room for interpretation.  In our case study, the learner is presented with the facts; parts of the story are revealed based on learner choice; and, at the conclusion, the learner answers some objective questions, performs an analysis, submits the analysis, and compares his/her submission to the ‘expert’ analysis, which is then revealed.  Alternatively, the end product could have been an analysis submitted to a drop box or to an online discussion.

How we built it

First, we didn’t use Rachel Siegel’s story verbatim but one closely based on it.  As an easy first step, you have the option of converting an existing case study into an interactive case study or creating a derivative case study that changes some critical details to challenge the learner.  If there is any doubt about the legality or the ethics of copying the intellectual property, contact the owner of the creative work.

Once we chose our characters, we licensed images to match the characters.  Alternatively, you can use Wikimedia or find images licensed under Creative Commons.    I did a search at https://search.creativecommons.org/  and immediately found images of student couples that I could easily have used.

On the first page, we provided instructions.  The instructions tell learners about the built-in notepad and the transcript button.  Neither is strictly necessary: students can take notes in any way they prefer, and the transcript button simply shows a report of all of the feedback.  These items are conveniences for the learner; traditional methods are just as effective.

Screenshot of instructions and first introduction to Chris and Divya, the characters in the case study.

Provide Choice of Paths

Put the learner in control.   The characters Chris and Divya share a lot of personal financial details.   The readers (learners) of the interactive story can decide what details they want to read and pay attention to.  As depicted in the screen below, the reader can read details from Chris’ or Divya’s background or decide at any time to assess their situation.  The reader will obviously not provide an accurate or meaningful analysis until s/he reads most or all of the facts of the case.

The essential thing here is choice.  Adult learners like choice.  The more complicated the case study, the more that choice matters.   Given the objective, the answers to some questions will be important to pursue; others will be irrelevant.

Depending on the software that the interactive case study author chooses, choices can be presented as hyperlinks, buttons or hot spots.

If the author uses a tool that can export to PDF or HTML (like Word or PowerPoint), the choices can be presented to the reader as hyperlinks.  In Captivate or Storyline, the choices can be presented as hot spots (clickable areas).  In LodeStar, the choices can be presented as hyperlinks or buttons.
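In plain HTML, for instance, the choice pattern can be as simple as a list of hyperlinks, each leading to a page that reveals part of the story.  The sketch below uses hypothetical file names; it illustrates the pattern rather than the actual case study markup.

  <p>What would you like to do next?</p>
  <ul>
    <li><a href="chris-background.html">Ask Chris about his financial background</a></li>
    <li><a href="divya-background.html">Ask Divya about her financial background</a></li>
    <li><a href="economic-data.html">Review the current economic data</a></li>
    <li><a href="analysis.html">Assess their situation now</a></li>
  </ul>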

Screenshot that shows choices presented to the reader as hyperlinks.


 

Make Resources Available

In the case study, one of the important financial factors comes from economic data. Economic data is represented as a resource that is always available to the reader.

In the screenshot above, economic data can be accessed with a button click.  The button is visible on the screen at all times.

(In some of our more evolved case studies, resources are shown only when they are needed. Some behind-the-scenes scripting allows us to show the right resources at the right time.)
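As an illustration of that kind of behind-the-scenes scripting, the sketch below keeps a resource button hidden until the learner has visited the relevant pages.  The page name, element id, and storage keys are hypothetical; this is one possible approach, not the code we actually used.

  <button id="economicData" style="display:none"
          onclick="location.href='economic-data.html'">View economic data</button>

  <script>
    // Each background page would set its own flag when visited, for example:
    //   sessionStorage.setItem("readChris", "yes");
    // This page reveals the resource button only after both flags are present.
    if (sessionStorage.getItem("readChris") === "yes" &&
        sessionStorage.getItem("readDivya") === "yes") {
      document.getElementById("economicData").style.display = "inline-block";
    }
  </script>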

Performance and Feedback

At any point in the case study, the learner can opt to complete the assessment of the characters’ financial situation.  A link to the final analysis exists on every page.  It can also be presented as a permanent button on the screen.  This is akin to a supermarket: the shopper can go down any aisle in any order and check out whenever he or she is ready.

Screenshot featuring Objective questions


In this case, the ‘checkout’ is the final analysis.  It can be presented as a series of multiple-choice questions (objective), an essay question, or both.  In our example, we chose both.

The objective items provide quick feedback.  The essay item comes up after the learner clicks on ‘Complete your analysis’.  The essay question reads as follows:

Write a very short report in the space below.  Include the macro and micro factors that are likely to contribute to Chris and Divya’s financial security and what factors represent a possible threat to their security.  Click on the ‘Submit’ button when you are done.  You can always amend your report and re-submit.

The learner can consult his or her notes and base an opinion on the facts.  The learner can cite the case study to support the findings.  When the learner clicks on the ‘submit’ button, the expert analysis is revealed.
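Technically, the reveal can be as simple as un-hiding a block of expert commentary when the learner clicks Submit.  The sketch below shows the idea with hypothetical element ids and placeholder text; it is not the actual course code, and it does not capture the learner’s submission anywhere.

  <textarea id="learnerReport" rows="8" cols="60"></textarea>

  <button onclick="document.getElementById('expertAnalysis').style.display = 'block'">
    Submit
  </button>

  <div id="expertAnalysis" style="display:none">
    <h3>Expert analysis</h3>
    <p>The expert's discussion of the relevant macro and micro factors appears here.</p>
  </div>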

At our university, students may see immediate feedback, or they are asked to copy and paste their analysis into either a discussion post or an assignment folder text field (a drop box that allows not only attachments but also text entries).

Both of these options can be supported with the same basic technical know-how used in the rest of the case study.   If an eLearning authoring system like LodeStar were used, the essay submission could appear in the SCORM report of the learning management system.

Screenshot showing a prompt for an open-ended analysis based on the case.


Conclusion

The interactive case study presented here relies more on storytelling than it does on technical know-how.  In this type of case study, learners can choose the information that they wish to read and the questions that they wish to ask.  In response to their decisions, information is revealed that will be used in the final analysis.

The final analysis can include objective test questions or essay items or both.  In a simple low-tech situation, the learner can write the essay in Word and then submit that to a drop box or assignment folder.  The interactive case study simply provides the background information and the essay prompts.

Low-tech or high-tech, the learner is asked to examine the information and consider its importance in the final analysis related to the learning outcome.  The learner is being asked to ‘activate’ thinking rather than mindlessly store and retrieve content.  The result is better outcomes for students.

 

Online Instructors and Design Thinking

Introduction:

For me, the excitement of building online courses comes from the power of design.  I love the idea of designing with intention.  Perhaps that is why I’m drawn to the work of Frank Lloyd Wright, Apple Computer, MIT Media Lab, modern architecture and, as you read in my last post, art galleries.   When faculty treat online courses less as the assembly of course documents and more as the product of thoughtful design, students benefit.

Stanford’s d.school (Design School) with its origins in mechanical engineering may seem like an odd source of inspiration for instructors who design online courses.  However, it turns out to be not only inspirational but quite practical.

d.school is the fountainhead of Design Thinking.  Design Thinking helps us to apply human-centered techniques to solve problems in a creative way.  It is used to make art, design products, solve business problems – and even to create online courses.

 


The Five Steps of Design Thinking: Empathize, Define, Ideate, Prototype, Test

 

What is Design Thinking?

Stripped down to its essentials, Design Thinking requires empathy – it requires us, for example, to ask who our current or prospective students really are, what they need, what drives them, what they know, and what their constraints are.

Secondly, it requires definition.  After information gathering on student needs, program needs and employer needs, what is the problem that the course is intended to solve?  What will the students be able to do that they haven’t been able to do before – cognitively, physically, emotionally?

Thirdly, it requires idea generation.  What are all the strategies available to help students master a type of knowledge or skill at a particular level to a defined degree of success?

Fourthly, it requires playing around with ideas – sketching on white boards or on paper.

Finally, we need to test the usability and effectiveness of our ideas on real people.

That is Design Thinking in a nutshell:  Empathize, Define, Ideate, Prototype, and Test.

 

Design Thinking and Instructional Design

For the last several years, instructional designers have written about Design Thinking and its interrelationship with various traditional and agile design approaches. Corporations have used it in building user-friendly products that meet needs.  But the benefits of Design Thinking and even of Instructional Design have bypassed online learning instructors.  Why?

For one, online instructors can be fiercely independent.  They are the subject matter experts – the content experts.  Of the more than two thousand faculty members who responded to Inside Higher Ed’s 2017 Survey of Faculty Attitudes on Technology, only 25 percent said they had worked with an instructional designer to develop or revise an online course.  That is a very low number, but not completely unexpected.  Jean Dimeo, in her article ‘Why Instructional Designers Are Underutilized’, cites possible reasons why:

  • Faculty are busy
  • Institutions have few or no instructional designers and/or learning support personnel
  • Instructors may be unaware that instructional design services exist
  • Faculty don’t want to be told how to teach

 

Design Thinking Applied

In a Design Thinking approach, with the help of an instructional designer, faculty don’t need to develop a course alone.  At our university, we have a conference space surrounded by white boards.  Our training space is clad in white boards.  The instructor can invite colleagues, and we can invite team members who understand design, technology, and media – and who can help get things done.

Empathize

The instructor, with some help, can gather background information on the students, the curriculum, the program goals, the employer and community needs, and whatever else will drive the curriculum.  A large part of this is human factors.  The table of contents of a textbook may not be the best place to start.  Understanding the learner is a much better starting point.  Dee Fink describes this as shifting the center from content to the learner.  José Antonio Bowen describes this as finding the entry point – that is, the student’s entry point.  Instructors already know and love their content; but how will students first be introduced … and hooked?

Define

The definition phase is like holding a magnifying glass to paper on a sunny day.  It is where something as broad and diffuse as goals, aspirations, needs, and requirements sharpens to a focal point.  The course author focuses on the objectives of the course, or the problem that must be solved, or the task that students must master.  M. David Merrill, in his First Principles of Instruction, places the problem at the center of everything.  The activation of prior knowledge, the presentation of new information, the practice and feedback, the application of knowledge outside of the course, and so on are all centered on the definition of the objective, task or problem.  This is tricky work.  Most of our less stellar efforts can be traced to poor definition of what the course or module or learning object needed to do.

Ideate and Prototype

After this hard work, the fun begins.  The white boards come alive with ideas and quick prototype sketches.  Instructors can benefit from folks who really understand the breadth of strategies that can help students achieve an outcome.  In our conference space, we talk about everything from journals to electronic portfolios, VoiceThreads, interactive case studies, simulations, electronic books, OER, publisher resources, to whatever.  The challenge is to find strategies that help students with a certain level of learning (apply, analyze, synthesize, etc.) and a type of knowledge (procedural, problem-based, conceptual, etc.) and degree of mastery.  Is this something we are introducing, reinforcing or, indeed, mastering?  Do we involve students in discussion?  Does the instructor model a practice?  Does she observe student performance and provide feedback?  Do students interact with the content – check their mastery, build their skills?  Faculty may have one or two favorite strategies.  Centers for Online Learning or Centers for Teaching and Learning (CTL) or Centers for Teaching Excellence or eTeaching Services or Innovation labs — or whatever they are called — have a much deeper tool chest to choose from than the individual instructor.  Seeking their help is a critical first step.

Test

These ideas then need to be tested.  We can design websites or interactive content and theorize how effectively students will use them.   The validation comes with the testing.  We can simply observe students interacting with course elements.  We can assess students for performance and survey them for attitudes.  We can do simple control and experiment group comparisons.  The scope and effort will vary but we need some form of validation and feedback before faculty commit to full development of the project.  A recent faculty project featured a very long survey.  It is one thing to anticipate and imagine the wear on students after many minutes of survey taking; it is quite another to observe students complete a long survey.

The First Step

The first step for some faculty can be to seek out their institution’s instructional designers.  Many professionals with different titles play the instructional designer role.  In some places, instructional technologists, learning management specialists or curriculum specialists may be instructional designers. As mentioned, they also live in places with different names.  Seek out the places with all of the whiteboards. Finding the instructional designers may lead to finding other professionals who can help with idea generation.  Oftentimes, the instructional designers can bring the right people together.

Faculty can begin with Define and Ideate.  An instructional designer and her colleagues can help them sharpen objectives and brainstorm strategies that help students achieve the outcome.  Think of it as hanging out with people and brainstorming, with two very important requirements: faculty must do their homework, and they must supply the critical background information.

Next Steps

From there, faculty can engage the instructional design team to whatever level they feel comfortable.  Maybe they walk away after getting some ideas.  Perhaps they engage in the testing of ideas.  If the instructor’s locus of control is respected, more of the benefit of Design Thinking will be realized.

The beneficiary is always the student.