Designing with Animation

Animation can enrich the learning experience and, when used appropriately, improve learning outcomes.  New standards and new technology under the hood of the modern browser give learning experience designers a whole new set of tools and techniques to apply to their designs.  I’ll parse out the different types of animations and their uses, discuss the underlying technologies and present examples. 

To start, let’s reprise Richard Mayer, the author of Multimedia Learning. 

Mayer defines a concise narrated animation. He explains that concise refers to a focus on the essential steps in the process. If the objective is related to understanding how a four-cycle engine works, then a concise animation would include only the details that relate to the objective. For example, I might include a simple animation of a piston traveling up and down a cylinder, compressing gas, power stroking from the combustion, exhausting the spent fuel and then refilling with fresh gas.

Wikimedia Commons (CC BY-SA 3.0)

Concise in this example means that we focus on the crankshaft, the traveling piston, the ignition, the intake and the exhaust. The learner is not distracted by anything not related to the objective.

The image above is an animated GIF. Most authoring tools can easily import an animated GIF. This technology, however, has its uses and its limitations. It is difficult to use an animated GIF in a concise narrated animation because it is hard to synchronize the four cycles with the narration. If you understand four-cycle engines, the GIF makes sense. If not, then narration will help learners understand each cycle of the process. Fortunately, there are easy ways to sync animation with narration. This brings us to the timeline animation.

Timeline Animation

The basic idea behind a timeline is that you control animation effects according to time.  In tools like Storyline, each row of the timeline represents a different screen element.  You can then apply entrance and exit animations to the screen element at specific times.

The timeline in our authoring tool, LodeStar, works differently. LodeStar displays one timeline per graphical element. Each row represents a different property such as left, top, opacity, rotation and scale. With these five properties you can control the position of an element on the x and y axes at a specific time, and you can control fade-in and fade-out, rotation and size. These are the properties that designers typically want to animate.

In the screenshot below, you can see that I have two gears: a left gear and a right gear. When I select a gear or choose it from the lower right pull-down menu, I get its corresponding timeline.

As you can see on the timeline, each row represents a different property.

LodeStar Animation Editor with SVG graphic

To understand the positioning properties, we need to understand the difference between the SVG image and its sub-elements.  We also need to understand other types of images like PNGs, GIFs, and JPEGs.  These are called bitmapped or raster graphics because that is precisely what the graphic is:  a map of binary digit (bit) values for every colored pixel.  The rules for positioning bitmapped graphics and SVG elements differ.

To understand this, let’s first tackle the SVG graphic.

SVG

SVG stands for Scalable Vector Graphics. It is a vector image format that can be scaled up or down without losing any image quality. This is different from the bitmap or raster graphics which are made up of pixels and can become pixelated or blurry when they are resized.

SVG graphics are great for logos, icons, and other types of graphics that need to be scalable and look crisp at any size. They can also be easily edited with text editors or graphic design software, and they support interactivity and animation through script.
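To make the format concrete, here is a minimal, hypothetical SVG file. The element ids are invented for illustration; the point is that each shape is a separately addressable element and the whole drawing scales cleanly:

<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 200 100">
  <!-- the viewBox defines the drawing's coordinate system;
       the browser scales it to whatever size the page requests -->
  <rect id="carBody" x="20" y="30" width="120" height="40" fill="steelblue"/>
  <circle id="rearTire" cx="50" cy="80" r="15" fill="black"/>
  <circle id="frontTire" cx="120" cy="80" r="15" fill="black"/>
</svg>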

We can think of an SVG graphic in two ways: as a whole image or as a collection of elements.  Examine the SVG image of an old Buick below.  The automobile can be animated from left to right, for example, like any bitmapped graphic.

SVG Graphic on a LodeStar page

We can do that with a timeline:

Timeline applied to graphic

The timeline shows a duration of 5 seconds.  At 0 seconds the car’s left property is 1.  That means 1% of the width of the window.  At 5 seconds, the left value is 100%.  This means that the left edge of the graphic will be at the right edge of the window – in other words the car rolls off the screen.
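As a point of reference, the browser's own animation machinery expresses the same idea. The sketch below is a generic Web Animations API equivalent, not the code LodeStar actually generates, and "car" is a hypothetical element id:

// Move an absolutely positioned element from left: 1% to left: 100% over 5 seconds.
const car = document.getElementById("car");   // hypothetical id for the Buick graphic
car.style.position = "absolute";              // 'left' only moves a positioned element
car.animate(
  [ { left: "1%" }, { left: "100%" } ],       // keyframes at 0 and 5 seconds
  { duration: 5000, fill: "forwards" }        // 5000 ms; hold the final position
);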

This animation would be less than satisfactory because the tires don’t rotate.  So, if we only animated the entire graphic we’d get an inferior result.

That now brings us to SVG elements. Loosely described, these are the sub-elements of the SVG graphic. They consist of polygons, lines, rectangles, ellipses, paths, layers, groups, and more. In the screenshot below of LodeStar's SVG editor, we see that the rear tire is selected. It has a cryptic id, which we can change to an easier name. Whatever the name, this element is both programmatically addressable (meaning we can change it with a simple script) and separately animate-able. For example, we can rotate the tire. Now we can move the whole graphic from left to right and rotate two of its sub-elements to improve the animation. (You will see this in an animation sampler introduced in the conclusion.)

SVG graphic with element selected
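Because each SVG element carries its own id, a few lines of standard DOM script can animate just that part. This is a generic illustration of the technique, not LodeStar's internal code, and "rearTire" is a hypothetical id:

// Rotate one sub-element of an SVG graphic while the whole graphic moves.
const tire = document.getElementById("rearTire");   // hypothetical element id
tire.style.transformBox = "fill-box";               // measure the origin against the element's own box
tire.style.transformOrigin = "center";              // spin around the tire's center
tire.animate(
  [ { transform: "rotate(0deg)" }, { transform: "rotate(360deg)" } ],
  { duration: 1000, iterations: 5 }                 // five full turns over five seconds
);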

Bitmapped Graphics

Bitmapped graphics, also known as raster graphics, are digital images made up of tiny colored squares called pixels. Each pixel represents a small portion of the image and can be assigned a specific color value. Generally, only programs like Photoshop allow us to manipulate bitmapped graphics at the pixel level. Examples of bitmapped graphics are PNGs, JPEGs, and GIFs.

With bitmapped graphics we can animate the entire graphic’s position, opacity, rotation and scale.  But we can’t take one of its subparts (a small sub-section of pixels) and independently animate that section.  At least, not without sophisticated code. Nevertheless, bitmapped graphics have their advantages. Any photorealistic image is best captured in a bitmapped graphic.

The animate-able properties

In LodeStar, the meanings of left and top differ between SVG graphics and bitmapped graphics. To best explain this, we need to place images in three categories: the entire SVG graphic, an SVG element (sub-element), and bitmapped graphics that are not inside an SVG graphic.

For SVG elements, left means a translation or change along the x axis.  1 means that the graphic has shifted 1 pixel to the right.  100 means that the graphic has shifted 100 pixels to the right.

For images (including JPEGs, PNGs, GIFs and the SVG graphic as a whole), left means a percentage of the window. 0 means that the graphic is painted at the very left of the window. 50 means that the graphic is painted half-way across the window on the x axis. The reason for the difference is that LodeStar projects maintain their responsiveness to devices with different screen widths whenever possible.

Technically, when we reposition an image, we are removing it from its normal place in the HTML document.  When we assign a timeline to an image, we are removing it from the HTML flow and assigning it an absolute position. 

If you didn’t want to remove the image from the flow (its position in the document), then you can lock the image position in the image dialog.

An absolute position means a position relative to its parent. If its parent is 1000 pixels wide, a left position of 10 places the image 100 pixels to the right of its parent's left edge (10% of 1000 pixels). If the image is positioned beyond its parent's boundaries, it is hidden.

SVG elements are displayed inside a viewbox.  We transform the position, scale and rotation of these elements without removing them from the flow. They are painted or shown inside the viewbox.  If we shift the position beyond the boundaries of the viewbox, the element is clipped or hidden.

For SVG elements it is important, when adding a left, top or rotate keyframe, to also add a keyframe at the same time offset for the other two properties. SVG transformations (change of position and rotation) are defined by all three properties.
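The reason is that an SVG element's position and rotation are typically combined into a single transform value, so a keyframe that restates only one of them effectively resets the others. A simplified sketch in plain SVG terms (hypothetical id; not LodeStar's implementation):

// Translation and rotation live together in one transform attribute.
const piston = document.getElementById("piston");               // hypothetical element id
piston.setAttribute("transform", "translate(100 40) rotate(15)");

// Writing only the rotation later would silently discard the translation:
piston.setAttribute("transform", "rotate(30)");                 // element jumps back to (0, 0)

// So every keyframe should restate all of the values together:
piston.setAttribute("transform", "translate(100 40) rotate(30)");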

Controlling animations with script

LodeStar animations can be controlled by the timeline, as we’ve seen in the example above.  They can also be controlled by script or by a combination of timeline and script.

Let's return to the gears example. In this example there are two SVG elements inside an SVG graphic. In the example, the SVG graphic is not animated at all. However, its elements (the gears) are positioned and rotated. One gear is rotated from 0 to 360 degrees; the other gear is rotated from 35 to -325 degrees. This causes the gears to rotate in opposite directions at a slight offset from one another so that they mesh.

A separate timeline for each graphic or SVG element

In the example we also positioned a rectangle with rounded corners at the bottom of the viewbox.  We are treating this graphic as a button. We added a branch option to the rectangle, which converts it into a button that responds to clicks.

The branch option that we applied to the rectangle is called a ‘Select Branch Option’.  When clicked, the button executes the following script:

appendValue("rate", 1);
var rate = getValue("rate");
updateAnimation("1681686408319", "play", "", "", 10, rate);

In this script, we are adding 1 (appending) to a stored value named “rate”.  We then get that value from storage and assign it to a variable named ‘rate’.  ‘var’ means variable. 

In the third line we use the variable in a function called updateAnimation().  This function allows us to

  1. Identify a page by a unique identifier called a UID.
  2. Set the state of the animation: play, pause, or reverse.
  3. Optionally, set the current time in the animation. By default it starts at 0 seconds. That is why we use "" in the function and don't bother setting the current time.
  4. Optionally, state the duration of the animation. By default, the duration is set by the timeline, so we use "" in the function. We could shorten or lengthen the duration.
  5. Set the number of iterations, or the number of times the animation repeats.
  6. Lastly, set the rate. A rate of 1 is standard. A rate of 2 is twice as fast. Every time we click on the button, the animation speeds up. The gears turn faster.
Controlling animation with LodeStar Script
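For comparison, the standard Web Animations API exposes the same idea through an animation's playbackRate property. The following is a generic browser sketch with hypothetical ids, not the code behind updateAnimation():

// Speed up a running browser animation each time a button is clicked.
const gear = document.getElementById("leftGear");                  // hypothetical id
const spin = gear.animate(
  [ { transform: "rotate(0deg)" }, { transform: "rotate(360deg)" } ],
  { duration: 2000, iterations: Infinity }
);

let rate = 1;
document.getElementById("fasterButton").addEventListener("click", () => {
  rate += 1;                  // same idea as appendValue("rate", 1)
  spin.playbackRate = rate;   // a rate of 2 runs twice as fast
});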

Animation synced with narration

In the next example, graphics are synced to different points in the narration with the use of the timeline.

Modern browsers now offer us fast and efficient animations and audio support with the ability to synchronize the two.  This is a significant development in the web platform.

In the example below, the author added a narration and an SVG graphic to the same LodeStar page. In the SVG graphic, the piston, connecting rod, crankshaft, valve, etc. are all SVG elements. As you can see in the screenshot, the crankshaft is selected and shows an ID of 'crank2'. This helps to identify the element in the animation editor.

Synching animation to narration

After the audio narration was imported and the SVG graphic created, the author launched the animation editor.  The play button now plays the audio narration and the animation.  The author can pause the narration and add keyframes to control the position of the piston, the position and rotation of the connecting rod and so forth.

The pivot point or anchor point of the connecting rod is changed with the following buttons: TL, TR, C, BL, BR. These buttons place the pivot point top-left, top-right, center, bottom-left, and bottom-right respectively. Essentially, we are pinning down the center or a corner so that the rotation happens around this point. Under the hood, we are really changing the transformation origin. The transformation origin is the point around which a transformation such as a rotation is applied.

Listening to narration in Animation Editor and adding keyframes to control position, opacity, scale and rotation
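In standard CSS terms, those buttons correspond to the transform-origin property. A small generic sketch (with a hypothetical id) of pinning a rotation to the bottom-left corner:

// Pin the connecting rod's rotation to its bottom-left corner (the BL button).
const rod = document.getElementById("connectingRod");   // hypothetical element id
rod.style.transformBox = "fill-box";                    // measure against the element's own box
rod.style.transformOrigin = "bottom left";
rod.animate(
  [ { transform: "rotate(0deg)" }, { transform: "rotate(-20deg)" } ],
  { duration: 500, direction: "alternate", iterations: Infinity }
);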

Finally, once a timeline is created for an element it can be given a unique identifier and applied to any element with script. For example, if I rotated a rectangle with a timeline on one page, I could apply that animation to a triangle on another page — with the use of script. The script function is webAnimate. (See appendix A)

Conclusion

In the sampler linked below, we can see multiple uses of animation.  On page one, we see a simple decorative animation of an attitude indicator or artificial horizon used in airplanes.  You can easily imagine how this can be applied to a simulation.

On page two, we illustrate how an SVG graphic is moved from left to right while its elements (the tires) are rotated.

On page three is the gear example. Click on the faster button repeatedly to see a demonstration of how we controlled speed programmatically by changing the rate.

On page four we have a simple graphic with foreground and background synced to an audio file.

On page five, we can immerse the viewer in a scene with the use of parallax. Parallax is a visual effect where the background of a web page appears to stay still or move in the opposite direction of the foreground.

Finally, on page six we show a narration synced to an animation.  Pausing or replaying the narration causes the animation to pause or reset.  The narration and animation are synchronized.

(Best viewed in Chrome, Edge, and Safari)

Animation Sampler

https://lodestarlearning.github.io/Animations/index.htm

Author’s Note:

Animations were done in LodeStar 10 (Beta Build 5). The script for the four-cycle engine explanation was generated by ChatGPT, and the narration was produced with text-to-speech using Amazon Polly.

Appendix A

webanimate(elementid, timelineid, duration (optional), direction (optional), currentTime (optional), position (optional), callBack (optional))

Animates the element's CSS properties based on a timeline created with the animation editor. Here, elementid is the element's ID (without the hash prefix), timelineid is the id of an existing timeline created in the editor, duration is the length of the animation in seconds, currentTime is where to start the animation in seconds, position is the CSS position property (usually set to 'absolute' to support top and left movement), and callBack is the name of a page whose branch options will be called when the animation finishes.
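A hypothetical call, based only on the signature above (the element id, timeline id, and page name are invented for illustration):

webanimate("triangle1", "spinTimeline", 3, "", 0, "absolute", "summaryPage");

This would play the previously created 'spinTimeline' timeline on the element with id 'triangle1' over 3 seconds, starting at 0 seconds, with absolute positioning, and would run the branch options of a page named 'summaryPage' when the animation finishes.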

Top Influences on the Development of LodeStar 10

Introduction

Modern web pages offer designers a rich palette of media types and standards to create engaging learning experiences. The web page has become an amazing success story. It started as a battleground of competing standards and self-interests and has arguably matured to become a meeting ground.

If you aren't on the bleeding edge, you'll benefit from the convergence of standards. On CanIUse.com, browser support for many enabling technologies such as SVG and WebGL (explained later) shows up as green tiles across the table of browsers. Green means these technologies are commonly supported, which is good news because learning experience designers can put these technologies to work.

caniuse.com by Alexis Deveria, available under a Creative Commons Attribution 4.0 License

Many of today’s eLearning projects are essentially webpage applications with additional standards that support communication to learning management systems or learner record stores. Many of the technologies that make the web interactive, responsive, accessible, and expressive are the same technologies used in eLearning applications.  Most of the major eLearning authoring systems are web page design systems for web pages that are hosted in learning management or content management systems.  There are many exceptions, of course, which include augmented reality systems, gaming engines and environments, and other virtual spaces that are not built on HTML5.  But let’s stay focused, for a moment, on the web.

For maturing standards, the web has become a place of agreement. In the not-too-distant past, basic HTML markup and styling had to address the many differences between browsers and how they interpreted the World Wide Web Consortium (W3C) standards. Even for a technology most of us take for granted, the audio file, there was once no single file format that every browser could play. Designers had to choose both an audio format and a fallback format. Thankfully that has changed. All browsers can now legally play the .mp3 file or the Ogg Vorbis audio format.

caniuse.com by Alexis Deveria, available under a Creative Commons Attribution 4.0 License

Soon the .m4a audio file (AAC) will be supported by all browsers and offer even higher quality audio at a lower data cost.

caniuse.com by Alexis Deveria, available under a Creative Commons Attribution 4.0 License

But audio is only the beginning.  All modern browsers (IE 11 excluded) support GIF, animated GIF, JPEG, PNG images, animated PNG, and motion video in the MPEG-4/H.264 video format. 

All browsers support the language features of the last major revision to JavaScript.  JavaScript is the code that makes the web interactive.  It is the code that makes eLearning projects interactive.  Standardization allows all of us to benefit from the interactions that eLearning authoring tools produce with less worry about browser and device differences.  (I emphasize less worry because there is always something to worry about.)

Interactive 3D has become a new frontier for eLearning. All major browsers support WebGL, which is a method of generating 3D graphics using JavaScript and hardware acceleration. In the early '90s, when I first created 3D worlds, I needed an entire lab of computers dedicated to rendering three-dimensional meshes into an animation of three-dimensional images that we would transfer onto a laser disc. Today, WebGL enables us to render a mesh into a rotatable, scalable image in real time, all in a browser. If you're not familiar with WebGL, please read on.

In short, Learning Experience designers, instructors and trainers can now use audio, video, imagery, text, three-dimensional graphics, scalable vector graphics, math mark-up, interactivity, and logic to realize their grandest designs and create engaging experiences for their learners.

On the eve of LodeStar 10’s release, I am taking stock of these standards and other influences that had a strong bearing on where our product is headed.  Like all toolmakers, I am keeping an eye on effective strategies as well as emerging and maturing technologies and am imagining the opportunities for designers as we work to make these technologies practical and accessible.

Here is a list of standards and strategies that are central to LodeStar’s current development.

Scalable Vector Graphics

A lot of our development has focused on Scalable Vector Graphics.  SVG offers the designer many benefits.  Simple graphics such as the famous SVG tiger pictured here keep their sharpness regardless of the display size and the resolution. They are scalable.  They also offer more opportunity for accessibility.  Scaling can help learners with low vision.  The SVG title is readable by most screen readers. Also importantly, the SVG graphic is made up of individual elements whose properties can be changed by program code or user interaction.

LodeStar displays SVG graphic

In the screenshot below, the tiger graphic is opened in an SVG editor in LodeStar.  The author has right-clicked on an eyeball and can now choose branch options based on selection, deselection, drag, hover over and hover out.  All of LodeStar’s branching options and script can be executed based on any of the above events.  For example, based on the click of an eye, things can happen: the eye color changes, an audio description plays, an overlay appears with a complete description of a tiger’s vision and so on.

With LodeStar, designers edit SVG graphics and add interactivity

Importing PowerPoint as SVG

We've never been huge fans of starting an eLearning project as a PowerPoint. That hasn't changed, but LodeStar 10 does support importing a single PowerPoint slide or an entire PowerPoint presentation as a series of SVG pages.

PowerPoint supports exporting a slide or series of slides as SVG.

PowerPoint Presentation

LodeStar 10 adds support for importing a single SVG image or an entire folder of SVG images. LodeStar interrogates each slide and looks for things like Base64-encoded images. PowerPoint converts imported images to a long string of characters called Base64. This is a great format for transporting images inside a single file, but browsers tend to load and render Base64-encoded images very slowly. LodeStar detects the Base64 encoding and then translates the characters back into an image file that is loaded into the project.
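As a rough illustration of that conversion (a generic browser technique, not LodeStar's actual code), a Base64 data URI can be decoded back into a binary image in a few lines:

// Convert a Base64 data URI (as PowerPoint embeds them) back into a binary image blob.
function dataUriToBlob(dataUri) {                        // hypothetical helper name
  const [header, base64] = dataUri.split(",");           // e.g. "data:image/png;base64,iVBOR..."
  const mime = header.match(/data:(.*?);base64/)[1];     // recover the image MIME type
  const bytes = atob(base64);                            // decode Base64 into a byte string
  const buffer = new Uint8Array(bytes.length);
  for (let i = 0; i < bytes.length; i++) {
    buffer[i] = bytes.charCodeAt(i);
  }
  return new Blob([buffer], { type: mime });             // the browser treats this like an image file
}

// The blob can then be referenced like an ordinary image, which renders far faster:
// imgElement.src = URL.createObjectURL(dataUriToBlob(longBase64String));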

The result is that vector graphics are editable as SVG elements, and embedded images load and display quickly.  The designer can display the slide as is, edit elements and add branch options to elements.

Designer edits a PowerPoint slide in SVG editor

MathML

For a short while, all browsers supported the MathML markup language, enabling markup without the need for add-ons.

Rendered MathML in LodeStar HTML editor

But there have been setbacks. We’re looking forward to when MathML is once again available in all browsers. Given the likelihood of that, LodeStar continues to support MathML.

Support for MathML

MathML (Mathematical Markup Language) is supported by the W3C as the preferred way of displaying mathematics on a web page or eLearning application. MathML describes the structure and content of mathematical notation and provides a higher level of accessibility than simply displaying an image. Designers can quickly edit and manipulate the size of a MathML expression. This is an improvement over taking a picture of an equation, for example, and pasting the image into a presentation. In the past, LodeStar automatically converted expressions into images or used the MathJax library to convert expressions written in LaTeX to MathML. But now we're banking on full support for MathML in the near future.
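As a minimal example, a simple fraction marked up in MathML carries structure that a screen reader can announce as "a plus b over two" rather than describing a picture:

<math xmlns="http://www.w3.org/1998/Math/MathML">
  <mfrac>
    <mrow>
      <mi>a</mi><mo>+</mo><mi>b</mi>
    </mrow>
    <mn>2</mn>
  </mfrac>
</math>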

SimpleSim

For years, LodeStar offered the Interviewer Page Type to support what we called decision-making scenarios and simple simulations.  We continue to offer that page type but have expanded the number of layout options for interactive decision-making. 

For starters, we added a new page type called the SimpleSim. This page type supports graphics, interactive widgets, text and whatever else is needed to set the scene. At center stage are the situational prompt and three decision options (as pictured below). All of LodeStar's branching options can be invoked based on the learner's choice. For example, the 'Jump to Page' branch option can bring up a scene that matches the choice and advances the narrative. Branching options also allow us to add feedback, keep track of points, collect user responses and so forth.

To style the scene shown below, the author used a palette for the color scheme, added a header graphic through Tools>Themes, selected a layout style that set the window width and navigation at the top, and added a background graphic.   The use of palettes, themes, layouts and page types enables the author to control every aspect of this simple simulation, including the interactivity.

Look and feel is controlled by Layout, Theme, and Palette

CCAF

It's no secret that we are huge fans of Dr. Michael Allen's Context-Challenge-Activity-Feedback model. In a project that was intended to improve employees' Social Selling Index (SSI) in LinkedIn, we set the context as a simulated LinkedIn. For the challenge, the learner must improve the main character's SSI score by providing the right advice and interacting with a simulated profile, notifications, messaging, etc. – just like LinkedIn!

LinkedIn Simulation

CCAF projects are not page turners or Present-and-Checks.  They can be quite advanced.  To support a more sophisticated interaction than the display of content and multiple-choice questions, LodeStar offers LodeStarScript, which can be written in the Execute Command branch option.

LodeStarScript enables designers to change the properties of graphics on the fly, including SVG graphics. Properties can include color, position, image source, rotation, opacity, etc. LodeStarScript offers the designer the power of conditional logic, loops, local and global variables, and a long list of functions.

In the simulation below, the learner can select a camera aperture and control exposure.  The effects of exposure are simulated with the simple change of the color and opacity properties of an SVG element.

Camera simulation with LodeStarScript
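In generic web terms (not LodeStarScript itself), the exposure effect boils down to changing two properties of an SVG element. The element id, helper name, and the mapping from aperture to darkness below are all invented for illustration:

// Simulate exposure by darkening or lightening an overlay element in the SVG scene.
function setExposure(fNumber) {                                    // hypothetical helper
  const overlay = document.getElementById("exposureOverlay");      // hypothetical element id
  // A larger f-number (smaller aperture) lets in less light, so darken the scene more.
  const darkness = Math.min(1, fNumber / 22);                      // crude mapping, illustration only
  overlay.setAttribute("fill", "black");
  overlay.setAttribute("opacity", String(darkness));
}

setExposure(16);   // e.g. f/16 renders a fairly dark scene in this toy model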

xAPI/CMI5

Megan Torrance, a veteran of learning design, authored a research paper sponsored by the Learning Guild.  I won’t steal her thunder and encourage you to read the paper for yourself, but I’ll cite two statistics from her research that tell the story of xAPI.

In a survey of 368 respondents, the majority of whom belong to organizations that create or purchase learning solutions, 44.9% of the respondents indicated that 'We are interested in xAPI but have not used it at all.'

Version 1.0 of xAPI was released way back in 2013, and yet 10 years later adoption is not widespread.

So what is xAPI, how does it relate to CMI5, and why are we so interested in it? In short, xAPI and CMI5 are game changers. They are not the same thing, but they are close cousins. An eLearning activity that uses CMI5 can generate an xAPI statement, which gets recorded in a Learning Record Store. CMI5 can also tell the LMS whether the learner passed or failed.

So, let me be a little more specific.

With these technologies, I can store my eLearning projects in my own repository — GitHub for example. I can then import a very lean and simple file to the Learning Management System, which tells the LMS where to launch the activity from. The LMS then passes learner information and a token for secure communication to the activity.

CMI5 uses xAPI technology, but it also understands the vocabulary that LMSs require: Pass/Fail, Incomplete/Complete. xAPI reports to a Learning Record Store any statement that the designer has added to the eLearning activity: 'Learner has reached Level Two. Learner completed a video. Learner attempted Level Three four times.' CMI5 can generate any kind of xAPI statement in the form of learner actions. In addition, CMI5 can tell the LMS whether the learner passed and/or completed the module.
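To make the idea concrete, here is the general shape of an xAPI statement: an actor, a verb, and an object, with an optional result. The learner, activity id, and score below are made up for illustration:

{
  "actor":  { "name": "Example Learner", "mbox": "mailto:learner@example.com" },
  "verb":   { "id": "http://adlnet.gov/expapi/verbs/passed",
              "display": { "en-US": "passed" } },
  "object": { "id": "https://example.com/activities/linkedin-ssi-module",
              "definition": { "name": { "en-US": "LinkedIn SSI Module" } } },
  "result": { "score": { "raw": 200 }, "success": true, "completion": true }
}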

But among the reasons why people don't yet use it are a lack of knowledge, the lack of a Learning Record Store, and LMSs that do not support it.

I am extremely fortunate in that our Learning Management System is Prolaera.  It is designed for the CPA industry.  Prolaera can import a CMI5 activity.  As a result, I can do the following:

  1. Send a statement about the learner reaching Level 5 to the Learning Record Store.
xAPI statement

  2. Read a list of learner experiences from the Learning Management System's Learning Record Store. (The learner's name has been erased from the screenshot.)

Learning Record Store

From the screenshot above, you can see that we can report on any learner experience.  For example, the learner first experienced the results page with a score of 200 points.  We can also see that the learner passed, satisfied the requirements, completed the module and terminated the activity.  These are all terms that the Learning Management System understands.

It may take time, but CMI5/xAPI will eventually be widely adopted. These standards are incredibly important to the advancement of eLearning. It begins with awareness. The more designers learn about these standards, the more they can encourage their learning management system vendors to support them. In the meantime, we are ready for them!

3D

Glen Fox's Littlest Tokyo is a great example of what is possible with three-dimensional objects viewed in a browser. The object is beautifully detailed, with a running streetcar animation as an integral part of the 3D object.

Littlest Tokyo, by Glen Fox

Designers will be able to use free tools like Blender, TinkerCAD, SketchUp or even their smartphones to produce 3D meshes.

Smartphones like the iPhone 12 come equipped with LIDAR. LIDAR emits a laser pulse that reflects off of solid surfaces and returns to a sensor on the smartphone. The round-trip duration is noted. From that, the software can accurately position the solid surface in three-dimensional space. LIDAR has been available in specialty instruments for a long time, but for designers to be able to use this technology practically, the software needed to improve.

In whatever way the 3D model gets created (3D graphics software, downloaded from a warehouse, generated by LIDAR) it can then be loaded into a viewer and manipulated (scaled, rotated, navigated) by the learner.  Imagine vital organs or historical places or complicated machines as manipulable objects. 

Currently, we’re working on a loader and viewer for 3D Models.  The first LodeStar 10 release won’t include a 3D model viewer, but we’ll introduce it later in a minor release.

In the meantime, we do support photospheres.  Photospheres use the same underlying technology: WebGL. WebGL enables hardware-accelerated physics and image processing and rendering onto the HTML5 canvas.  The hardware is a dedicated processor called the Graphical Processing Unit or GPU.
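The underlying idea is straightforward: the spherical photo is mapped onto the inside of a large sphere and the camera sits at its center. Below is a rough sketch using the popular three.js library as an illustration; LodeStar's own viewer may well be built differently, and the image file name is hypothetical:

import * as THREE from "three";

// Place the camera at the center of a sphere whose inside face carries the photosphere image.
const scene    = new THREE.Scene();
const camera   = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1100);
const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

const geometry = new THREE.SphereGeometry(500, 60, 40);
geometry.scale(-1, 1, 1);                                         // flip the faces so the texture shows on the inside
const texture  = new THREE.TextureLoader().load("gallery.jpg");   // hypothetical equirectangular image
const sphere   = new THREE.Mesh(geometry, new THREE.MeshBasicMaterial({ map: texture }));
scene.add(sphere);

function render() {
  requestAnimationFrame(render);
  camera.rotation.y += 0.001;   // slow look-around; a real viewer would track drag input instead
  renderer.render(scene, camera);
}
render();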

The photosphere that appears in the screenshot shows a distorted view of an art gallery.  The first art gallery image (shown below) was produced in Blender.  The second art gallery image was taken with an iPhone at the Minnesota Marine Art Museum in Winona.

Photosphere created in Blender
Photosphere created in iPhone

The image appears distorted – in fact, spherical.

Once in LodeStar, the designer can add images, markers, and hotspots to the photosphere.  All of these things get correctly positioned on the sphere.

In the LodeStar editor below, I am adding Lawren Harris’ paintings to the gallery as well as hotspots.  A hotspot click takes the learner to another room in the gallery.  A click on the painting brings up an image overlay.  A click on the video graphic starts a video. 

LodeStar editor adds interactivity to Photosphere

The end result:

Interactive Art Gallery on the Group of Seven

Conclusion

2023 marks the twentieth anniversary of LodeStar Learning. We filed with the Minnesota Secretary of State on March 11, 2003. I'm pleased that LodeStar has adapted to all of the technology changes over the years. LodeStar began as code embedded in Lotus' LearningSpace. It then enabled instructors to create rich learning activities in ActionScript and Flash. In 2013, LodeStar Learning pivoted to a whole new generation of software that used HTML5. LodeStar 10 continues that progression and harnesses the power of HTML5, SVG, 3D and so much more to help designers create great learning experiences.

Visual Design for eLearning

Introduction

In eLearning, good visual design is yet another challenge.  As instructors, we want our interactive lessons to look good – but we aren’t trained in layout and graphic design.  In many of my own projects, I’ve relied on graphic designers – but often I’ve had to make do with my own limited skills.  I’ve learned a couple of things over the years and am happy to share what little I know – more as a starting than an ending point.

Let’s begin with the premise that we want our pages to be visually appealing to students.  Of course, more importantly, we want our pages and layouts to support our instructional objectives.  We want things to look good and function well.  At the very least, we don’t want our design to distract the students or confuse them.

Fortunately, visual design is a combination of art and science.   We can draw from a body of knowledge that is evidence-based and not as subjective as we might imagine.

To describe visual design, I can start with the basic concepts of  flow, color, style, reading order, consistency, contrast and structure.

When in doubt, simplify

Whenever I’m in any doubt about visual design, I think about the art gallery.  In most galleries, the walls don’t compete with the art work.  Plain walls.  Open spaces. Strategically lit rooms.  The labels and interpretive text are positioned so the information is easily associated with the art work. The label doesn’t compete and isn’t crammed.  The text is printed in high contrast to the background.  I can move easily from piece to piece all around the room and then onto the next.  The flow is well thought out.


Tufts University Art Gallery

Our interactive lessons can be designed similarly.  Text can be cleanly separated  from imagery – with an adequate margin between text and image.  Margins can provide clean separation of the other page elements. The page background can be selected to not compete or distract from the lesson.  The developer can be intentional about guiding the eye from one thing to the next.

Or not

Or sometimes, for effect, we can do the exact opposite.  Agitate, provoke, move students out of their comfort zones.  But, in either case, visual design requires intentionality.

Visual flow

Screen elements have different visual weights or powers of attraction based on their size, color, and even shape. Unusual things attract the student's attention.

Instructors should decide where students should look first.  If one element is larger than the others, students’ eyes might be drawn there.  If all elements are in black and white but there is a splash of color somewhere on the page, the student’s eye will go there.  We’ve known these things for some time, but recently, usability labs have provided us with eye tracking sensors, which produce heat maps. Heat maps graphically display how people look at a software screen, for example, and which elements they look at. Areas that attract the most attention appear in hot red.

From usability studies and from age-old observation, we know that visual designs have an entry point. We need to plan or consider where that entry point might be.

We also know that visual designs can have unintended exit points. As an example, hyperlinks can be hugely counterproductive to visual flow control.  For good reason, we think of hyperlinked information as being highly useful to students (another resource) but they introduce the risk of students losing the flow, being distracted, perhaps never returning to the lesson.

If our visual design is a simple text page, our job is easier.  We can use headings, sub-headings, text wrapped around images as well as size, italics and color to signal very important information.  If a page is a free-form layout, we need to plan visual flow more carefully.  In that planning, we need to note that the eye is attracted to color, strong contrasts, and follows along thick lines or elements that are composed in a way that suggests directionality.

Color

Color can be used to direct the eye and to attract the student's attention to key information. Richard Mayer, in his book Multimedia Learning (Cambridge Press, 2001), describes the signaling principle. The signaling principle states that people learn better when cues that highlight the organization of the essential material are added. Instructors can use color to provide that cue, but color-blind students will not benefit. Multiple cues are needed to highlight essential material – italics, for example.


Color used sparingly to draw the eye.  Layout created by Clint Clarkson

I’ve always been cautious of the ‘circus’ effect of too many colors.  One color will clearly signal important information or draw the student’s attention if s/he is not color blind.  Two and three colors can be used effectively.  Introducing more colors leans toward a circus effect, where color ceases to attract attention.  Graphic design sites describe a 60-30-10 rule, which states that:

The dominant color should be used 60% of the time, your secondary color 30% of the time, and an accent color 10% of the time. Typically, the most dominant color should also remain the least saturated color, while your bold or highly saturated accent color should be saved for your most important content.

http://www.eyequant.com/blog/2013/06/27/capturing-user-attention-with-color

Style

Style may be the most fickle thing to embrace in your visual design approach.

In the early 20th century, graphic designers were influenced by modern art, the Bauhaus school, posters, the De Stijl movement (think Piet Mondrian), constructivism, architecture and more. Today graphic designers are as likely to be influenced by styles on the web.

Just a couple of years ago, instructional screens featured gradients, beveled buttons, drop shadows, textured backgrounds and an attempt to imitate the material world in the digital medium. Microsoft and Apple, in the redesign of their graphical user interfaces, reflected the sudden change away from material-world imitation. Buttons lost their three-dimensionality and became flat, single-color, textureless features. The new look became, in a sense, minimalist and, perhaps, more functional. The rise in mobile computing favored flat designs over texture, minute detail and other features that didn't translate well to the small-screen smartphone.


Apple Interface: Shift to a flat design

Flat design is a thing.

“Flat design is a minimalistic design approach that emphasizes usability. It features clean, open space, crisp edges, bright colours and two-dimensional illustrations.”  –Tom May, 2018

But styles change.  So, what is an instructor to do?  My hunch is that we should focus on evidence-based practices and embrace minimalism not for its trendy appeal but for its functionality.    We should probably pay attention to the world around us.  Pay attention to styles on the web.  Pick your favorite website and think about the underlying elements that make it visually appealing and functional.  Visit the website of a college of art and design.  Follow it over time.  But don’t get too hung up on style.  It is a black hole.  Once you pass the event horizon, you’ll never return to creating anything useful for your students.

Reading Order

Focus instead on some simple things – such as reading order. Highlight important words to 'signal' their importance. Use headings and sub-headings to expose the organizational structure of your page and to help students with visual disabilities who rely on a screen reader. (Students with screen readers scan pages by moving from heading to heading. A blind student who uses JAWS, a popular screen reader, can hit the 1 key to navigate to a level 1 heading and get a sense of the structure and organization of the document. He can hit the 2 key to move to a level 2 heading.)

Use bulleted lists and numbered lists where appropriate and reduce the amount of writing. The traditional wisdom was to 'chunk' writing by separating it into pages – but mobile devices may be affecting students' habits. They are accustomed to endless scrolls. More research is needed on the cognitive load of endlessly scrolling pages.

Again, when in doubt, simplicity is preferable.

Consistency

Consistency is key. As students navigate the lesson, they shouldn’t burn brain cells on figuring out each page.   Pages that function the same should be styled the same.   For example, imagine that your page summarizes key concepts with a bulleted list.  Summarizing key concepts is an important strategy.  Our  pages may dive deeply into the details – but we want students to emerge with a clear map of the key ideas.  A bulleted list can be set off to the side of the page (left or right) or placed underneath, separated by space, color, and possibly a border.   The placement should be consistent so that students know where to find the summary in each part of the lesson.  They’ll look for it.

Contrast

At all times we need a strong contrast between the text and the background.  Lack of contrast affects readability.   Strong contrast also directs the eye.   I break this rule too often when I style hyperlinks to be colored in something other than the standard, boring blue with no decorative underline.  And I always regret it.  I strive for elegance and create a problem instead.

Some of these key principles relate to work done on perception by the Gestalt psychologists of the early twentieth century. One of their principles, 'Figure-Ground', relates to an object and its surroundings. Photographers embrace this principle when they want the subject of a photograph to be clearly known – in other words, separation of the subject from the background. Photographers will use a large aperture setting to blur the background (reduced depth of field) and thus create a clear distinction between figure and ground. All elements in the lesson need to be distinct from the background – and that especially applies to text and the background.

Structure

Structure relates to the organization of elements on the screen. It is concerned with proportion, symmetry, asymmetry, and balance. These concepts are expressed in many ways. In photography, artists may think in terms of the rule of thirds – whether they are following or breaking the rule. Two-thirds land; one-third sky. One-third rocky foreground; two-thirds blurred valley background. Two-thirds of blank space on the left; one-third of birds on the right. Halves, in symmetry, have quite a different effect and can be a statement in and of themselves. The parliament buildings of London reflected in perfect symmetry in the Thames, for example.

We can make similar decisions with the placement of images on the page.  They can be set with a width of 66%, which means that they will always scale to two-thirds of the page, regardless of page size.  Or the image can be set to 33% with text wrapping the image and taking up the remaining space.  Or they can be wrapped in negative space (e.g. white background) with the ratio of image to negative space a very deliberate choice.  Again, photographers might subdivide the plane in a three by three grid, which gives them 9 spaces in which to organize the structural elements of the photograph.  Traditional layout artists, similarly, had grids that subdivided the page.  Instructors can get a sense of their layout by abstracting the visual elements on the page as shapes.  The paragraph becomes a dark block.  The negative space becomes a white block.  What proportion of the overall space do the blocks occupy?  What is their relationship to one another?  Are they pleasing and pure?  Are they distracting and confusing?

Ratios or proportions reduced to formulas probably don't explain why some layouts are pleasing to the eye and others are not – but it is still interesting to consider the use of math in the pursuit of beauty. The divine proportion, or the golden ratio, was probably used to plan some of the great pyramids, and it is evidently being used today to construct websites. We know that from, again, abstracting web elements into dark and light shapes. The ratio is defined by a simple equation:

a/b = (a+b)/a ≈ 1.618 (the golden ratio)

So, if our text block was denoted by ‘a’ and our image block was denoted by ‘b’, the ratio of text to image would be the same as the ratio of text plus image to text alone.  So, the secret to all good learning is in the golden ratio?  Not quite.  The only point I am making is that the proportion of things will have an effect.  We should at least be aware of how things are laid out on the screen. Proportion matters.
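For example (with made-up numbers), a 640-pixel text block beside a 396-pixel image gives 640/396 ≈ 1.62 and (640 + 396)/640 ≈ 1.62, so the two ratios roughly agree and the layout approximates the golden ratio.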


Layout created by Lauren Franza

Conclusion

The instructor who consciously and conscientiously includes visual design in the planning of his or her eLearning lesson will reap the reward.  Students will benefit from being guided through the lesson, and not being distracted by colors, crammed elements, inconsistency, poor readability, and an off-putting layout.  Visual design is a large study – but the application of a few principles will greatly improve one’s eLearning design.