Open Textbooks and ePub

A new convergence of technology has survived the early stages of Gartner’s hype cycle. It has survived the peak of inflated expectations and the trough of disillusionment. It is slowly ascending the slope of enlightenment and will soon achieve the plateau of productivity, bringing benefit to schools and their students.

I am writing about the convergence of the open textbook and the interactive eBook (i.e. ePub specification 3.0).

Open textbooks have been around for a while. The original inspiration, according to Richard Baraniuk, a computer science professor at Rice University, came from the open source software movement. But in the early days, the scale of the open textbook movement was understandably small. Professors in any discipline had to look long and hard for a quality textbook that could be re-purposed for their students. Today, however, the promise of open textbooks shines brighter than ever. Open textbooks are ascending the slope of enlightenment. OpenStax alone has more than fifteen hundred books representing six broad curricular areas. Closer to home, the University of Minnesota Open Textbook Library carries more than 200 open books.

In a speech to the Minnesota eLearning Summit in 2013, Creative Commons’ Cable Green reported that over the last thirty years, textbooks have increased in cost by 800%. Today, to offset the high cost of college textbooks, students have multiple options: they can rent or purchase used books, or they can purchase digital versions directly from companies such as Boundless, which provides alternatives to expensive traditional textbooks. They may be given yet another option: the open textbook. The largest disruptor, in my view, is the open textbook.

One visit to the OpenStax site will tell you why. Richard Baraniuk founded Connexions at Rice University in 1999 to provide students with free educational materials. Connexions has since been rebranded as OpenStax. At the conclusion of this article, you will find several resources. One of them is a TED talk delivered by Dr. Baraniuk on the subject of open source material.

In a nutshell, Dr. Baraniuk likens textbook pages to learning objects or Lego™ blocks. Textbook pages can be reassembled, reorganized, and blended with new material to serve the different needs of students. Several years ago, I scanned the list of available books. Perhaps I fell into the trough of disillusionment. Today, I am utterly astounded by the breadth and quality of materials. And all free of charge to students, at least in digital form.

This is all happening at a time when a new eBook specification has emerged. The specification is called ePub, and it has been around in its initial form since 2007. Until now, eBooks were available but not widely adopted. Only 28% of people polled by Pew Research claimed to have read at least one eBook. That statistic is about to change.

Today, the latest specification is ePub 3 and it is a game changer. ePub 3 is the convergence of text, audio, accessibility, imagery, video, MathML and interactivity in a digital book.

OpenStax resources are available as ePubs, as well as PDF and HTML. ePubs play beautifully across a wide variety of eBook readers. ePub 3 books play well on iPads using either the iBooks reader or the Gitden reader.

In support of this technology, LodeStar Learning has just released a beta version of ePub3Maker. The LodeStar 7 workflow that instructors use to create learning objects for learning management systems like Moodle, D2L and Blackboard can now be used to create interactive eBooks that follow the ePub 3 specification.

It’s an exciting time, because so many great technologies are converging. (I hope to cover the specifics of that convergence in a future article.)

Screenshot of iBooks library


For now, here are two simple examples of eBooks created with LodeStar. Both examples are for demonstration purposes only. Download iBooks (or Gitden) onto your iPad from the App Store. iBooks supports ePub 3. Then click on the links below from your iPad.

The first eBook combines text, imagery, questions and video.

The second is a demonstration of OpenStax content blended with interactive questions. Precalculus.epub



TED Talk



If you are pressed for time and want to walk away from this article with something immediately useful, skip to the subtopic ‘A Remembering Design Pattern’. If you are an instructor or instructional designer who is interested in the ‘bigger picture’ related to design, then please read the entire article. I would love your input.

A Basic Instructional Design Pattern

By ‘Patterns’ (the title of this journal entry), I mean instructional design patterns. I want to make the case for instructional design patterns and solicit your input.

In my last post, I began with the statement that

“Design patterns is a concept borrowed from software design. A design pattern is essentially a general solution or an approach to a common task. A design pattern serves as a template for how a problem can be solved or a task can be completed. Sophisticated programmers use design patterns.”

My pitch is that sophisticated online learning developers might want to consider adding instructional design patterns to the way we think and talk about online learning design.

The introduction of instructional design patterns into our conversation has the potential to broaden our thinking about online learning and the strategies that we use to engage students.

This article explains the reasoning behind introducing yet another jargony word into our vocabulary, explores its benefits and provides a simple example in the form of a learning object.

Why Talk about Instructional Design Patterns

One would not expect our thinking about online learning to be limited in any way. The blogosphere is filled with buzz about gamification, adaptive learning systems, big data, augmented reality, mobile learning, and more. And yet, in our experience, many online instructors are constrained by the basic challenges of the learning management system tool set and by their own knowledge of productivity tools like word processors, spreadsheets and presentation software.

In our experience, instructors are concerned about getting their ‘content’ into the learning management system and aligning it with outcomes, assessments and discussions. The most popular authoring tool is PowerPoint. Tools like Captivate, Articulate and iSpring, which support the conversion of PowerPoint content into a format the LMS supports, are also extremely popular.

Periodically, we hear from instructors whose imaginations have been lit by one tool or another – which embody some strategy. VoiceThread allows students to comment and converse asynchronously about a presentation posted by the instructor or another student. Adobe Connect enables instructors to keep online office hours and extend their classroom virtually. Adobe Captivate allows instructors to record voice-over presentations. The advent of Web 2.0, which enables us to be active creators of the web, has spawned hundreds of tools, which instructors can use.

Our conversations about online and teaching have become tool-centric. Outside of the LMS tools, when an instructor finds an exciting new thing – it is usually presented as a tool rather than a strategy.

Our introduction of instructional design patterns is an effort to shift the conversation from tools to strategy. The PowerPoint to HTML conversion, for example, may involve PowerPoint, some conversion tool, Audacity (for audio editing) and video captioning but the end experience is the same for students: a presentation that students passively watch. We can add in Photoshop images, a character pack, animated characters, lip synching, and other things, but we are still left with a presentation that students passively watch. Oftentimes, we do want students to watch a presentation, but we need to expand our options on how we engage students during and after a presentation.

Instructional Design Patterns may help us do that. They will help us to define student experiences first, in a tool-agnostic way. How would an instructor define the VoiceThread experience without using a ‘product’ name? What is that thing that is characterized by an instructor posting a presentation and then prompting students to respond with text, voice or video? FlipGrid is another tool that embodies a similar strategy. Is there a label that would help us categorize the two tools under the same general strategy?

If so, then we can talk about these things not in terms of products but in terms of pedagogy. Not as consumers but as instructors. Not in terms of whiz bang technology, but in terms of outcomes and relevance to students.

“A design pattern is an effective means to convey/communicate what has been learned about high-quality designs”. – Kuchana, 2004

This, of course, is ironic coming from a tool maker. Our tool, however, has always been centered on instructional strategies. The conversation about patterns is an attempt to raise the bar on what’s possible in online learning, from simple strategies to complex strategies. We’re full of ideas for new templates but, frankly, instructors and designers need to recognize the pattern that each template supports before the templates will be extensively used. A webquest template is one example, but our thinking is not limited to webquests.

Strategies matched to Learning Outcomes

For each level of learning (e.g. remembering, understanding, applying, etc.) and for each type of knowledge (e.g. declarative, procedural, conceptual, problem-based) we need a list of strategies that help students with knowledge acquisition, matched to a level and type of knowledge.

Instructional Design Patterns, in our application of the concept, may include one or more strategies. In our previous post, we began to define the term.

A WebQuest, to use that example again, is an organized activity that follows a precise pattern. In making a WebQuest, we assign a task, describe a process for completing the task, provide a method for evaluating it, and list web-based resources that students can explore. Students collaborate with one another and conduct focused research. The task may involve writing, public speaking, and problem-solving. Clearly, a WebQuest is an activity that coordinates multiple strategies. Clearly, a WebQuest follows a pattern.

Webquests have entered into the vocabulary of many K12 instructors – but, in our experience, not many post-secondary instructors. There are many good examples of Webquests in post-secondary, but when I poll post-secondary faculty, not many have heard of them.

Imagine if WebQuests were part of our everyday vocabulary. A colleague or an instructional designer could suggest a WebQuest and a faculty member would instantly recognize what that meant. The activity would take time to create and time to complete. Its greatest benefit is in support of higher order thinking like analysis and synthesis. It is indeed worth the effort. But higher ed instructors need to know what it is, as well as its benefit and how to create one efficiently, before they will invest the effort.

Webquests support higher order thinking.

We shouldn’t refer to the WebQuest design pattern by tool name. We should all recognize the term WebQuest, understand its teaching and learning implications, and, perhaps, enlist the help of an instructional technologist to list the top three tools that help instructors create WebQuests.

A Remembering Design Pattern

So let’s start far down the cognitive ladder. The first rung of Bloom’s taxonomy, to use one example, is ‘remembering’ or the knowledge of facts. What instructional design pattern supports “remembering”? One candidate would be the variable interval performance queue.

Variable Interval Performance Queuing has been around for a long time. I first learned about it in a hand-out on good study habits in grade school. Then I read about it in the early 90s in a book titled Computer-based Instruction, by Stephen Alessi and Stanley Trollip. (The book is now retitled as ‘Multimedia for Learning’ and is in its third edition.)

The variable interval queue design pattern presents students with questions that challenge them to recall facts (medical terms, for example). Correctly answered questions get removed from the queue; missed questions get returned to the queue at variable intervals (i.e. spaced further and further apart).
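To make the pattern concrete, here is a minimal sketch of a variable interval queue in Python. This is a hypothetical illustration, not code from any particular tool; the interval-doubling rule is just one common way to space missed questions further and further apart.

```python
from collections import deque

def drill(items, ask):
    """Run a variable interval performance queue.

    items: a list of question identifiers.
    ask(item) -> True if the student answers correctly.
    Correct answers remove the item from the queue; missed items
    are reinserted at a growing offset, so each failure pushes the
    next encounter further into the queue.
    """
    queue = deque(items)
    interval = {item: 1 for item in items}   # current spacing per item
    while queue:
        item = queue.popleft()
        if ask(item):
            continue                          # correct: item leaves the queue
        # incorrect: reinsert at the current interval, then widen it
        pos = min(interval[item], len(queue))
        queue.insert(pos, item)
        interval[item] *= 2
```

In practice, `ask()` would display the question, collect the student’s response, and show corrective feedback; here it simply stands in for that interaction.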

As we play around with the concept of “Instructional Design Pattern”, one might argue that variable interval queuing represents a strategy or a methodology rather than a pattern. That is probably true, but as we discuss engaging students in increasingly sophisticated ways, we definitely move out of the realm of a single strategy or methodology into that of a complex pattern. We need to define the attributes of a design pattern carefully.

I recently viewed a graphic that showed Bloom’s taxonomy and tools matched to each level of the taxonomy. CoboCards (a Flashcard generator) was matched to ‘Remembering’, for example. Wikispaces was matched to ‘Creating’. We need a similar graphic that matches Instructional Design Patterns to the levels of learning.

But what are some activities that might be candidates for Instructional Design Patterns? We need patterns that are effective in helping students achieve outcomes, support the efficient development of activities, and are memorable in that they hold a place in our vocabulary.

I’ve already mentioned WebQuests and Variable Interval Performance Queue. The first has a catchy name and can easily enter our vocabulary; the second is effective, but sounds rather technical.

In an earlier post, I described the Decision-Making Scenario. That certainly qualifies as an instructional design pattern. The scenario follows a pattern of providing background information, presenting a problem, making resources available to help solve the problem, and comparing/contrasting the student’s solution to that of the expert. Read more about Decision-Making Scenarios in that earlier post.

There may be some approaches to online case studies that would merit inclusion into the design pattern classification.

Let’s Start Simple

A far simpler instructional design pattern than the Decision Making Scenario is what I call a Present and Check. Perhaps it is too simple — but it is worth discussing.

Recently, I spoke to an instructor who was excited about getting his presentations uploaded to YouTube and captioned. He had no plan for checking the students’ understanding. Present and Check improves on that.

In the following example, the activity follows a simple but effective pattern. The example introduces Bloom’s Taxonomy through an embedded video from the Center for Academic Success at LSU.

The activity does not end with the presentation. Students are presented with a set of instructional activities created by the University of Texas at Austin and asked to identify the correct levels of learning represented by each activity.

The activity employs an interesting strategy in that correctly answered items are removed from the queue and incorrectly answered items provide corrective feedback and are returned to the queue. Students continue through the queue until all items are answered correctly. (Not quite variable interval performance queuing, but close.)

This latter strategy is not a necessary component of Present and Check. Also optional is the look and feel, the reporting at the end, and the reference item that appears at the bottom of the screen, titled “Bloom Info Graphic”.

Lastly, it is important to note that although this is a simple multiple choice self-assessment, the student is asked to do some analysis of the instructional activities and draw upon his/her knowledge from the presentation to make the right choice. This is better than simply embedding a video.


The Variable Interval Performance Queue, WebQuest, Decision Making Scenario, and Present and Check are all possible examples of the Instructional Design Pattern. They share the following attributes:

  • embody one or more instructional strategies

  • present a distinctive pattern of activity

  • support student learning at one or more levels

  • help us communicate evidence-based practice

We would love to hear your ideas or follow your links to other Instructional Design Patterns. It would be great to engage in discourse that is not tool-centric but centered on enhancing the ability of students to learn.


New Features, New Strategies for Instructors


Each build of LodeStar includes a new feature that deepens the online instructor’s tool chest. Three new important features accompanied the recent updates to both the LodeStar authoring tool and its most useful template, ActivityMaker.

I’ll discuss the three most recent additions with the help of a Literature challenge that was mashed up for demonstration purposes only. When I say ‘mashed up’, in this case, I mean that three strategies have been thrown together that don’t necessarily complement one another well. Two of the strategies, the repeating question and the crossword, are more suited to supporting the acquisition of declarative or verbal knowledge. The Long Answer example supports higher order thinking in the form of analysis. So, these three strategies make strange bedfellows. Nevertheless, the example demonstrates their use.


The Repeating Question

The first strategy is the repeating question. I must admit that I’m not a fan of the question type that resets itself when answered incorrectly. In my observation, responsible students use such questions to their advantage, and the students who need the help most misuse them.

In the Literature Challenge example, the student is asked to identify a piece of writing. The student is presented with five choices or distractors, which display in random order. If the student answers incorrectly, feedback is shown. After the student clicks the next button, the question is reset and presented again to the student with the re-shuffled distractors.

For some students, this will work great. If they answer incorrectly, a form of cognitive dissonance sets in. They realize that, perhaps, they don’t know everything. Consequently, they may pay extra attention to the feedback and then apply their new knowledge.

I recently benefited from this strategy in a Quality Matters lesson. I read a paragraph and then did a quick self-check. If I got the answer right, then I received a little confirmation and a little more confidence. If I answered incorrectly, I paid more attention.

But, frankly, I am a motivated student.

A less motivated student would mindlessly click until being able to move on. So much of computer-based training in the past was designed that way and I spent a lot of time observing students move through it.

As I’ve echoed Dee Fink* repeatedly in the past, situational factors (e.g. who the students are) matter a lot when selecting the appropriate strategy.

* Fink, L. D. (2003). Creating significant learning experiences: an integrated approach to designing college courses. San Francisco, Calif.: Jossey-Bass.

To cause a question to repeat using LodeStar, instructors click on the branching options icon that sits at the top of a page or next to an answer option. The branching options include the following:

No Action
Go To Next Page
Go To Previous Page
Jump To Page
Open URL
Add Overlay
Set Value
Append Value
Reset Page

Instructors choose ‘Reset Page’ to repeat a question (i.e. redisplay the page) when a particular answer is chosen or, if a page level branching option is used, when the answer or any combination of answers is incorrect.
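As a rough illustration (the function and state names here are invented; this is not LodeStar’s internal code), the branching options listed above amount to a small dispatch over a page index and a set of named values:

```python
def apply_branch(option, state, target=None, value=None):
    """Interpret one branching option against a simple activity state.

    state holds 'page' (the current page index) and 'vars' (named values).
    'Reset Page' flags the current page for redisplay, which is how a
    question is repeated with re-shuffled answer options.
    """
    if option == "Go To Next Page":
        state["page"] += 1
    elif option == "Go To Previous Page":
        state["page"] -= 1
    elif option == "Jump To Page":
        state["page"] = target
    elif option == "Set Value":
        state["vars"][target] = value
    elif option == "Append Value":
        state["vars"][target] = state["vars"].get(target, 0) + value
    elif option == "Reset Page":
        state["redisplay"] = True
    # 'No Action', 'Open URL' and 'Add Overlay' are omitted from this sketch
    return state
```

The point of the sketch is that each option is a small, composable action on the activity’s state, which is what makes branching on answers, pages, and scores possible.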

Long Answer

The second strategy is the long answer question type, supported by an optional journal page that compiles all of a student’s open responses. In the example, the learning activity asks students to identify how the language differs from passage to passage. This is an open question. Students can type in a short essay.

The caveat is that learning management systems handle this question type differently. D2L (Brightspace), for example, places the student’s answer in the SCORM report, which may be a little difficult to find. For learning management systems that can’t handle this question type at all, instructors are encouraged to add a journal page. With the latest operating systems, students can ‘print’ the journal page right into a PDF, or cut and paste it into a Word document, and submit the finished piece to a drop box.

The advantage to the student is that the learning activity can chunk out the parts and have students concentrate on only one part at a time. For example, in a grant writing course that I’ve recently seen designed, students could work on the ‘Objectives’ section after learning about writing objectives. The activity could then lead them into how to write a proposal abstract and then have them write an abstract. In the end, the journal page puts it all together for the student.
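The journal idea can be pictured with a short sketch: each long-answer prompt and response is collected, then joined into a single printable report. This is a hypothetical illustration of the concept, not the tool’s actual output format.

```python
def compile_journal(responses):
    """Join (prompt, answer) pairs into one printable report.

    responses: a list of (prompt, answer) tuples gathered from the
    long answer pages, in the order the student completed them.
    """
    sections = []
    for prompt, answer in responses:
        sections.append(prompt + "\n" + "-" * len(prompt) + "\n" + answer)
    return "\n\n".join(sections)
```

A student could then print a report like this to PDF, or paste it into a document and submit it to a drop box.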


The Crossword

The third new strategy is the crossword. In our example, students were ‘warmed up’ by identifying authors in ‘repeating’ questions. The main activity was the long answer analysis. The ‘cool down’ activity is a crossword that challenges the student to identify the authors that match the literary titles provided as crossword hints.

Instructors who wish to compile crosswords for their students can simply choose the Crossword page layout in the ActivityMaker template. Instructors type in a word, hit return, and then type in a short hint. The compiler can easily handle ten or a dozen words.


The three strategies outlined in this article contribute to the rich set of options that are available to instructors in the latest release of LodeStar and its main template, ActivityMaker. We know from feedback and experience that students love engaging activities and benefit from them.

The Example


Creating ‘The Instructional Design for eLearning Challenge’


The Instructional Design for eLearning Challenge highlights several features of the latest LodeStar 7.0 build. This article explores those features and exposes some of the design decisions made in constructing the challenge.


Screenshot of the Introduction to the Instructional Design for eLearning Challenge

First, the topic of instructional design for eLearning is immensely broad. Complicating the field is the designer’s own philosophy of the nature and scope of knowledge. Nevertheless, we can assume an instructional designer’s vocabulary includes Benjamin Bloom’s taxonomy, Robert Gagne’s Events of Instruction, an understanding of key terms and practices in eLearning design, and an acquaintance with some of the theories that inform design. This challenge was organized after an attempt to survey the discourse between designers and capture some of the key terms, practices and even theories communicated in that discourse.

Digital Badges

In designing the challenge, I decided to award a digital badge to recognize a participant’s accomplishment in achieving 85% mastery of the content. I named the badge ‘eLearning Terms’ because all of the questions in the challenge can be categorized as declarative knowledge; even the identification of a theory is no more than declarative knowledge. The badge, in other words, is nothing more than a test of one’s knowledge of terms used in the field of eLearning design.


A digital badge example

For designing and hosting a badge, I had several options. One was to stage the badge issuing process following the Mozilla Open Badge Infrastructure. The second was to use a known badge issuer like Basno, which, for example, issues the badge for completing the Boston Marathon.

The Mozilla Open Badge Infrastructure requires LodeStar Learning to host the badge issuing process on its own servers, which we may very well do in the future. The Basno process was much easier to get up and running quickly: one creates a badge and enables participants to apply for it.

ActivityMaker Template

For this challenge, I decided to use the ActivityMaker template. An instructor authoring with LodeStar selects a template, converts it into a project, adds content, saves, previews and then exports the activity to a Learning Management System or anywhere on the web or in the cloud. I chose ActivityMaker because it is a Swiss army knife. In other words, at the time of this writing, I can use one template to incorporate many different types of activities. A list of activities that the ActivityMaker template supports includes the following:

  • Text with embedded questions and images
  • Images
  • Multiple Choice
  • Multiple Select
  • Matching
  • Categorizing activity
  • Short Answer, with regular expression support
  • Branching at multiple levels: answers, page scores, and section scores
  • Menu
  • Long Answers (Open Answers)
  • Interview Questions (for interactive scenarios and case studies)
  • Video
  • Flashcards
  • Journals (Compiles all of the Open Answers into a one page report)

In addition, I can embed Web 2.0 content on any text page.

On the Use of Gates

The next decision was whether or not to create levels for the challenge with performance gates. LodeStar enables authors to create performance gates based on absolute points or percentages. Users who meet or exceed the performance score threshold branch in one direction; users who fail to meet the threshold branch in a different direction.

For this challenge, I decided to create levels, but not gates. At the conclusion of each level, the participant sees his or her score and receives a list of resources that match missed items. At that point, the participant can stop the challenge and review the resources or continue on. At the conclusion of the challenge, the participant is branched to the success page, which provides badge information, or to the failure page.


Before I got started inputting the questions, I decided on a theme. Starting with Build 22, LodeStar introduces new themes that are fully configurable. If you are working on an older project, please apply a theme to the project. This will lead to faster downloads and an improved experience.


Screenshot of Themes Dialog

I also applied a transition effect to the project. The tool offers a variety of transition effects that govern the transitions between pages.

Answer Types (Page Layouts)

For the questions themselves, the authoring tool gives the option not to reveal the answers. I chose to reveal the answers. If I had chosen not to disclose them, I would have needed to redirect participants to resources where they could learn the answers before retaking the challenge. In some contexts, this would have been preferable.

Now to the content.

The introduction is a Text page type.


View of the LodeStar HTML editor

The first question was constructed with the layout type of Question (Simple Layout). With this layout type, I could have chosen to provide unique feedback for every distractor (answer option). Instead, I used a new LodeStar feature, which is to offer page level feedback. If the question were answered incorrectly, the feedback would display a resource that would help the participant learn more about the term, practice or theory.

Regardless of the layout type, I assigned each question either 1, 2 or 5 points to match the level of difficulty or effort required to answer the question.


Screenshot of multiple choice question


In Level Two, I created slightly more involved question types. I used the Text layout but added questions to the text page by using a simple mark-up.

The markup is <s> for question stem, the question itself.

<d> is for distractor, or answer option.

<d>* indicates the correct answer.

The following screenshot shows the markup in action. This allows for any number of distractors, and any content including images and mathematics mark-up using MathML.


Question markup on the Text page
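Based on the rules just described, an embedded question on a Text page might look like this (the question wording here is invented for illustration):

```
<s>Which level of Bloom's taxonomy involves recalling facts?
<d>*Remembering
<d>Understanding
<d>Creating
<d>Evaluating
```

Because the markup is plain text, any number of distractors can be added, and the stem can carry images or MathML.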

To give participants an update on their score and to compile the corrective feedback in one place, I added report pages. You may notice, if you take the challenge, that each report page displays an error pop-up message. This message occurs when LodeStar attempts to send the score to the underlying Learning Management System. In this case, there is no Learning Management System. I could have chosen to make the error silent, but I wanted to display a LodeStar object at work trying to communicate the participant’s score.

In Level Three of the Challenge, I included some categorization and matching questions. To generate these, authors simply type in the question or the directions and then type in a term on the left and its match on the right. LodeStar then randomly rearranges the matches on the right.


Matching Page Type.

Finally, we have the Gate that decides whether or not the participant gets the badge application information. If the participant scored 85% or above, the badge information displays. If not, the gate branches to an apologetic message.

In the screenshot below, we see that the Gate plays multiple roles.


Screenshot of the Gate page type.

From top to bottom: the instructor uses a Gate to decide whether or not the score is reset. The instructor, for example, can decide to branch the user to a remedial section, with a reset score so that the student can try again.

I didn’t reset the score.

Next, the Gate offers the option to enable the Back and Next buttons. The Gate will automatically disable the Back and Next buttons, unless the instructor explicitly allows them.

The Gate supports custom scores. The instructor can choose to track a certain category of questions and then branch based on a custom variable assigned to a group of questions. I could have chosen to assign a score to a custom variable called ‘Basic’ versus ‘Intermediate’ or ‘Advanced’. Level One questions could have appended a value to ‘Basic’. Level Two questions could have appended a value to ‘Intermediate’ and so on. At some point, I could have branched on the variable.   But I didn’t.

Gates decide whether students pass or fail. If they pass, they follow the branch options spelled out in ‘Pass Message/Branch Options’. If they fail, they follow the branch options spelled out in ‘Fail Message/Branch Options’. The options are found under the blue ‘branch’ button.

The Gate also allows students to go down the Pass branch regardless of score.

In my case, I typed in ’85’ and checked off ‘Use Percentage’. This means that students will only follow the Pass branch if they scored 85% or greater.

Finally, if I wanted to reset a custom variable, I would have typed in the name of the variable (case sensitive) in the field labeled ‘Reset comma-separated List of Variables’.
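Taken together, the Gate settings described above boil down to logic like the following sketch. The function and parameter names are hypothetical, chosen for illustration; this is not LodeStar’s actual implementation.

```python
def evaluate_gate(score, threshold, use_percentage=False, max_score=None,
                  reset_vars=(), variables=None):
    """Decide whether a student follows the Pass or Fail branch.

    With use_percentage=True, the threshold (e.g. 85) is compared against
    score / max_score * 100; otherwise it is compared against raw points.
    Any custom variables named in reset_vars (case sensitive) are reset
    to zero as the gate is evaluated.
    """
    if variables is not None:
        for name in reset_vars:      # e.g. 'Basic', 'Intermediate'
            variables[name] = 0
    achieved = score / max_score * 100 if use_percentage else score
    return "Pass" if achieved >= threshold else "Fail"
```

With a threshold of 85 and ‘Use Percentage’ checked, a participant scoring 17 of 20 points (85%) would follow the Pass branch to the badge information.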

The branch options include the ability to display feedback and to branch or do the following:

No action
Go To Next Page
Go to Previous Page
Jump To Page
Open URL
Add Overlay
Set Value
Append Value

The last two options relate to the custom variables or scores to which I referred earlier.


Screenshot of Branch Options In the LodeStar eLearning authoring tool



In conclusion, the current build of LodeStar 7 allows me to select a nice theme and modify it, choose a page transition, choose from many different page types, report scores, and branch according to performance.

The current build also includes an improved audio player and the ability to randomize distractors (answer options).

So, we’re moving in the right direction. Support us. Let us hear from you. Make comments. Tell others.

Finally, take the challenge at

Designing a Simple Lesson


This is a lesson about creating a lesson. The goal is to create a mini lesson targeted at casual picture takers who wish to increase the appeal of their pictures. I personally love creating photographs, and have learned some tips along the way to make my photographs more interesting. I also love the process of instructional design. It is a process of discovery. As I think through this lesson, I’ll disclose my thoughts and my strategies, my steps and my missteps.


This isn’t a typical course in which I have a context that includes a learning situation and learner characteristics. For one, this is a mini lesson (i.e. one learning object) and not a course. I’m not beginning with set course goals and a set of learners. I’m establishing my own goals and hypothetically inviting anyone who would benefit from those goals to participate.

I’m also not writing about activities that an instructional designer would design and hand off to a team that includes a graphic designer, programmer, media specialist and other support. I’m writing about ideas that any instructor can use with the right tools. Obviously, I am biased towards the LodeStar tool because I’m its parent, but these strategies can be produced with any third-party tool that plays well in learning management systems like Blackboard, Moodle, Desire2Learn and others.

The intended audience for this blog post is instructors who want to improve their online learning but have little time, few technical skills, and little help. If that sounds familiar, you’ll want to read on.

I chose a lesson on photography because that is a topic familiar to most people. I could have chosen Canadian literature or Object-Oriented Programming but the topic of photography would appeal to a broader audience. At first blush, it will seem like the activities (strategies) I use only apply to a technical subject. That just isn’t the case. Most online learning in a variety of disciplines would benefit from activities that engage students in different ways. You’ll need to imagine how this lesson might apply to your own discipline. For some it will seem like a bit of a leap of imagination.

Starting with Situational Factors

If I were designing a lesson for a real audience, I would have to understand much more about the intended course outcomes and where the students are. I would have to consider more carefully the size of the class, the constraints of the teaching environment, the specific challenges of the curriculum, the skills and attitudes of the learners, and more.

I am designing this lesson for a particular learner: one who has little technical knowledge but has an interest in improving the quality of his or her photographs. These are picture-takers who may possess anything from a cell phone to a digital single lens reflex camera. My design decisions need to reflect that.

Forming Learning Objectives

Forming learning objectives is an important next step. One really needs to understand the type of learning outcome and then choose strategies or activities that support that learning outcome. My favorite text on this subject is written by Patricia Smith and Tillman Ragan. I cite the source later in this article.

Learning objectives should be specific, measurable, and achievable. The learning objectives will cover basic facts (declarative knowledge), concepts, and principles. The intent is for the learner to acquire the foundational knowledge that he or she will need in order to recognize key principles of composition when shown some photographs.

So here is a first pass at defining some objectives for my mini-lesson.

The learner will:

  • Define key terms such as aperture and shutter speed.
  • State how exposure is affected by aperture and shutter speed.
  • List five principles of composition that raise the quality of photographs.
  • Identify the illustrated principles of composition when shown a series of photographs.
  • Identify which principles of composition he/she is already using.
  • Choose to apply new principles to his/her own picture-taking.


Injecting Instructor Presence

My next consideration is instructor presence. How much of me will I inject into this lesson? I decided to be informal and to mention my past. My decision is based in part on Richard Mayer’s work in Multimedia Learning.

Here is the principle that research shows has a positive impact, cited from Multimedia Learning.

Personalization principle: People learn better when the words are in conversational style rather than formal style.

Mayer, Richard E. Multimedia Learning. Cambridge: Cambridge University Press, 2001. Print.


Considering Characteristics of the Learning Objective

At the end of the day, what will matter most to learners is that they can apply the principles of composition and exposure to their own picture-making. The learning object should bring them as close as possible to this outcome. My immediate thought was a simulation – one in which learners could change the settings and the camera angle and see the effect. For this subject matter, simulations exist. I plan to include one: CameraSim. It is brilliant.

The challenge, however, is that for most instructors, simulations are not really an option. If one doesn’t already exist on the web, the opportunity to include a simulation is rare. Too much online learning is created on a shoestring budget. Faculty are given a small stipend and little training, and are asked to command the attention of their students with what little mastery they have of media. Exceptions exist.

Some university media labs build cool things for faculty to incorporate into their online learning. In our own backyard, the University of Minnesota’s LT Lab is an example.

If you do have time, there are ways to construct simulations without knowing computer code. One great way is with an application called ZebraZapps, a cloud-based authoring tool. Again, I am biased here because for a time I served on the ZebraZapps team under Dr. Michael Allen. In fact, I did create a small simulation that helped to demonstrate the concept of ‘Depth of Field’. Had I more time, I could have used ZebraZapps to produce something similar to CameraSim.

Photographic composition as a subject is not unlike what one can imagine in university courses. Students should have the opportunity to apply foundational knowledge, concepts, and principles to making key decisions, solving problems, evaluating options, and building things. As designers, we have to ask whether that foundational knowledge is needed as a prerequisite building block or can be learned in the context of decision-making or problem solving. Do we give learners the facts in a supplantive manner, or do we help them infer the facts, and the underlying concepts and principles, for themselves?

It is paramount that instructors understand what kind of learning objective they hope learners will achieve. Is it declarative knowledge (facts that students recall) or a concept, principle, procedure, or problem? Is the learning objective cognitive, psychomotor (e.g. gymnastics), affective (attitude), or metacognitive (learning about learning)? That makes a difference because, given the objective, I draw from a set of strategies that experience and research tell me are effective for that type of outcome.

Deciding on The Opening

The opening message is always an interesting dilemma. I realize that I am only creating a mini-lesson, but let’s think about online courses for a moment. Most post-secondary online courses start with the syllabus, schedule, and housekeeping details. Students appreciate that information. They are reassured that they are going into a house that is in order and that they won’t encounter a Norman Bates who will slash into their GPA. I personally believe that the housekeeping documents are important, but I long for something else as the first beat. I want the hook, the ‘thing’ that assures me that this course is relevant and will impact my life. I want to be persuaded that, as a result of this course, I will be able to do something I haven’t been able to do before – and it is something that will help me.

Similarly, for this lesson, I need to think about the first screen. I suppose it could offer navigation instructions or housekeeping, but for the simple interface I’ve chosen, that is not really necessary.

For visually impaired learners, the navigation is identified by ARIA roles – information that reveals the navigational controls to assistive technologies. This feature is built into the tool and doesn’t require instructor action.
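For context, ARIA-labeled navigation in HTML looks something like the following. This is a generic sketch, not LodeStar’s actual generated markup:

```html
<!-- A screen reader announces this landmark and its controls by name -->
<nav role="navigation" aria-label="Lesson pages">
  <button type="button" aria-label="Previous page">&#8249;</button>
  <button type="button" aria-label="Next page">&#8250;</button>
</nav>
```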

Outside of instructions, I have many options. I could embed a video of me, as the instructor, making the case. Or I could embed a YouTube, Vimeo, or MediaSpace video that uses personality, imagery, sound, and a remote site to demonstrate the relevance of the topic. Notable instructional designers frequently write about establishing relevance in their own distinctive ways. Specifically, refer to the writings and presentations of Dr. Michael Allen, Sivasailam ‘Thiagi’ Thiagarajan, Richard Mayer, David Merrill, Patricia Smith, and Tillman Ragan. Establishing relevance is a critical early step.

Or, I could create some cognitive dissonance. I could challenge a student who thinks he knows everything about the topic with a problem. The inability to solve the problem can make the student more receptive to the instruction that follows…or it could totally incapacitate the student and lead to a course drop. The careful use of cognitive dissonance can be an effective strategy or, if misused, can be counter-productive.

In the past, I have chosen all of the above options. I’ve also thrown the students into a simulation and asked them to generate the rules. This technique is known as generative (versus supplantive). Smith and Ragan write about that in their book “Instructional Design”.

Smith, Patricia L., and Tillman J. Ragan. Instructional design. 3rd ed. Hoboken, N.J.: J. Wiley & Sons, 2005. Print.

In this case, I chose to get personal. I have recently been struck by some simple approaches used by faculty at the university where I work. One was a professor of Non-Profit and Public Administration. He launched his course with a personal disclosure of why this course was important to him. The instructor was reaching out through this online medium and connecting with students who shared the same values and aspirations. It was effective.

So I start the lesson in the simplest manner possible, by injecting a little of me onto the page.

Composition 1

Bringing in Video

But very quickly in the lesson, we need to hear from someone with more credibility than I have in this field. I chose Michael Browne because he re-enacts and exposes his decision-making as he ‘finds’ pictures.

There are several benefits to including Michael Browne’s video. One benefit is that the instructor models the thinking process; students benefit from watching instructors work through problems (Smith and Ragan). Secondly, through video, we are transported to a place beyond the ‘page’. In this video, we travel to two sites and listen to Michael Browne as he plans his shot.

Recently, I heard a short presentation by an ed tech innovator who lamented that learning management systems were ‘frozen’. The world has moved on with social networking and the interoperability of applications. I know precisely what he meant, but at the same time I appreciate that the learning management systems offer us a blank window in which we can display any content we wish that ranges from embedded content, to linked tools through Learning Tools Interoperability to imported learning objects. The LMS can provide the overall authentication, administration and accountability functions but anything can happen in that blank window.

To embed Michael Browne’s video, I benefit from LodeStar (www.LodeStarLearning), which allows me to do this easily. I simply choose the video layout page type and paste in the URL of the YouTube video.


Composition 2

But there is a bigger principle at play here: embedding videos, or any media that supports embedding.

If I didn’t have LodeStar, I would look for the word ‘Share’ or ‘Embed’ near the media asset. For example, in YouTube, I click on ‘Embed’ below the video and that provides me with a code snippet that I can paste into a learning management system like Blackboard or Moodle.

In this case, the embed code looks like:
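A YouTube embed snippet generally has this shape – the video ID below is a placeholder, not the actual code for Michael Browne’s video:

```html
<iframe width="853" height="480"
        src="https://www.youtube.com/embed/VIDEO_ID"
        frameborder="0" allowfullscreen></iframe>
```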


This means that you are embedding a frame onto the page through which the video will display. Technically, this is called an iFrame. The width of the iFrame will be 853 pixels and its height will be 480 pixels. The rest of the information spells out the source of the video and some of the video and frame property settings – for example, border or no border.

Here is what that looks like. Many, many things can be embedded.

Composition 3

In LodeStar, an alternative way of embedding is to click on the <> icon in the html editor. Then just paste in the iFrame code snippet.

Checking for Understanding

After displaying a video, or even a simulation, I find it helpful to check for understanding. This is a very basic practice that we do in the classroom and neglect to do online. Perhaps the learning management systems (LMSs) make it too unwieldy to generate quick formative assessments. We tend to use the LMS quizzing tools for summative assessments, which Dee Fink describes as audit-ive rather than educative.

David M. Merrill and other researchers tell us that students benefit from frequent feedback that they can apply to future tasks or assignments. Dee Fink emphasizes the importance of frequent and immediate feedback in what he calls FIDeLity feedback: Frequent, Immediate, Discriminating, and Lovingly delivered. By discriminating, he means that feedback should show the difference between poor and acceptable work.

Composition 4

Before we can launch into the main part of the lesson, which discusses principles of composition, the learners need prerequisite, foundational knowledge. The learners must understand aperture, shutter speed, ISO and their relationship to both depth of field and exposure. Originally, I introduced the composition principle of ‘background’ first and then introduced the technical aspects of depth of field. I believe this was a misstep. I intended to help students understand why they needed to understand Depth of Field and the concept of aperture and then launch into a technical discussion. This, however, really broke up the flow of the presentation on principles. The technical discussion seemed more of a digression.

And so I reordered the lesson and changed my transitional statements.

Designing Learner Engagement

Students often talk about ‘learning by doing’. Both ‘learning by doing’ and ‘learning by thinking’ are forms of engagement, and engagement can happen on multiple levels. We think of students as ‘engaged’ when they interact with other students, the instructor, and/or the content in a way that makes them think. That thinking might involve defining, classifying, categorizing, applying, problem-solving, evaluating, creating, or any one of numerous actions that appear in teaching and learning taxonomies.

In this lesson the camera simulation will invite student interaction. Students will be able to adjust aperture and other settings and get immediate feedback. Do they have the knowledge, however, to make sense of what they are seeing?

I could design the camera simulation activity as a generative activity. In other words, I could ask students to change the settings, observe the effects, and infer some key principles. Alternatively, I could scaffold the activity: isolate the key principle and then introduce the simulation. I chose the latter. I quickly developed a low-tech simulation in ZebraZapps and then embedded it in the same way that I described earlier – with an embed code snippet. ZebraZapps offers several options for doing this.

Composition 5

I wanted to simplify the relationship between aperture (size of lens opening) and depth of field. In reality, it is not quite as simple as that. When you change aperture, you must change the shutter speed reciprocally or you affect exposure. I included a YouTube video to describe the triangular relationship between aperture, shutter speed, and ISO. The primary concept related to composition is ‘depth of field’. If students understand depth of field, then what follows will be easier to understand.
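That reciprocal relationship can be made concrete with the standard exposure value formula, EV = log2(N² / t), where N is the f-number and t is the shutter speed in seconds. Here is a small JavaScript sketch of that textbook formula (my own illustration, not something from the lesson itself):

```javascript
// Exposure value (at a fixed ISO) from aperture and shutter speed.
// Settings with the same EV produce the same exposure.
function exposureValue(fNumber, shutterSeconds) {
  return Math.log2((fNumber * fNumber) / shutterSeconds);
}

// Opening up one stop (f/8 -> f/5.6) while halving the exposure
// time (1/125 s -> 1/250 s) leaves the exposure value unchanged
// (within rounding, since f/5.6 is itself a rounded f-number).
```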

Presenting Content

At the heart of the lesson are seven principles of composition that anyone can apply to improve their photographs. The presentation combines text and imagery.

Composition 6

The presentation is followed by a short exhibition of photographs. The learners are asked to recall the principles of composition as they view each photograph, and then they are held accountable by the challenge.

Composition 7


Designing The Challenge

The final activity is the challenge. In the challenge, the learner is shown a series of photographs and asked to identify the main principle applied. The learner gets immediate feedback for each answer and sees his or her progress in the meter.

Composition 9

At the end, a report displays the final score and all of the feedback, which can include links to resources or references to chapters that will help the learner.

Composition 8

Finally, the learner passes through a gate and receives customized feedback based on whether he or she passed or failed.

Composition 10

In Summary

The following is a summary of the strategies that were used in this lesson.

  • Instructor presence
  • Embedding video
  • Embedding simple simulation through ZebraZapps
  • Embedding a third-party simulation
  • Checking for understanding
  • Adding a challenge that provides feedback and holds students accountable for the principles covered in the presentation

Link to the Lesson





On the Eve of a Major Release

For those of you who have stumbled upon this journal entry, our web journal is dedicated to instructional design and how the LodeStar authoring tool supports instructors in that endeavor.

Our firm belief is that, with the right tool set and training, instructors can remain the authors and orchestrators of highly engaging and effective online learning experiences for their students.

The LodeStar authoring tool enables instructors to create interactive learning objects for a variety of learning environments, including all major learning management systems (e.g. Desire2Learn, Moodle, Blackboard), Curriki, eFolioWorld, MyEfolio, and any website to which an instructor has access.


I am excited to report that the latest version of LodeStar is about to be released:  LodeStar 7 Lite.  Over the course of LodeStar’s twelve-year history, I’ve observed how instructors interact with authoring tools and where some of the stumbling points lie.

I have also recognized where LodeStar can’t compete.  Some tools offer decent screen capture; and others do a good job with PowerPoint conversion.  Some tools offer the full expressiveness of a powerful programming language but without the complexity.  ZebraZapps is an example.

Instructors need multiple tools, however.  LodeStar complements the best tools found in academia.  The future of LodeStar lies in its ability to offer instructors a rich range of engaging strategies and ease of use.  When instructors wish to create, for example,  an interactive, simulated interview in which students must make key decisions based on their content knowledge, LodeStar should be the tool of choice.

LodeStar 7 was a complete re-visioning of that value proposition. For years we glued on features and sometimes left the instructor behind. LodeStar 6.7 had a powerful vector graphics drawing capability that supported drag-and-drop and click/touch interactivity for all devices. But unless instructors were specifically trained, they didn’t tap into the richness of LodeStar 6.7’s abilities.

LodeStar 7 changes that. LodeStar 7 Lite pares away features and re-introduces them in a more integrated, intuitive manner. Not all of the features will be immediately available in LodeStar 7 Lite. Spell check is missing. Drawing is missing. But, in time, those features will return. In the near future, drawing and interactive graphics will be first-class citizens of the tool, not add-ons.

LodeStar 7 Lite is currently in alpha testing.  The first workshop is scheduled for the end of October.  If all goes according to schedule, it will then make its debut in a statewide webinar in November and be featured at the Minnesota Government IT Symposium in December.

The missing features will hopefully be repatriated by the end of February. Then the focus will be on creating insanely brilliant templates that instructors will absolutely love.

In the meantime, here is a sneak preview of some of LodeStar 7 Lite’s  features.

HTML5 templates picked from an easy-to-use template viewer

LodeStar Template Viewer

HTML Editor cleanly separated from page and layout management and other navigational controls

Quick-access Layout Manager offers a broad range of options

LodeStar eLearning Authoring tool Layout Manager


Built-in previewer that avoids security issues

LodeStar Built in Previewer


Full HTML5 audio support

Audio Importer


Full Embed support with the ability to play back in the editor

Embed Support in an eLearning Authoring Tool


Extremely simple Export

LodeStar exports to any SCORM 1.3 (SCORM 2004 release 3) Learning Management System
