Stairway to Expertise
– Show me, coach me, test me, let me, congratulate
Back in the last century, people learned to operate computer software by reading thick manuals laden with obscure text and scant pictures. Or they attended training classes where they squinted at the instructor breezily demonstrating barely recognizable procedures. Or they clickety-click-clicked their way through the Help file, gleaning snippets of information but never weaving them into a coherent tapestry.
Today, computer users can learn from a personal tutor who demonstrates the program, guides them through their initial efforts, monitors their growing skills, and certifies their mastery. Tools like Captivate, Camtasia, and TurboDemo make it possible for teachers and communicators to create effective software simulations without programming. Even simple presentation tools, such as PowerPoint, can create truly interactive simulations.
Software simulations have great potential to teach and are within the capabilities of most technical communicators, tech-support technicians, trainers, and other user-assistance professionals. However, that remarkable potential to inform and educate will not be realized until simulations are systematically designed to build users' expertise from first acquaintance to fluent, expert use.
I'll come right out and say it. Most software simulations fail, not because they lack potential but because they engender about as much true interactivity as the worst gobbledygook-laden manuals.
Demonstrations are not enough
If you search the product-support sites for most software vendors, you will notice that about 80% of what passes for software simulations are not true simulations at all. They are just demonstrations. Don't get me wrong, I like a good demonstration as well as the next lazy user, but demonstrations put the user in an inherently passive role. There is more mental activity in watching a Law and Order rerun than in watching a software demonstration.
Demos are not true simulations
Although the term simulation is used broadly to cover a range of activities, it is important to make a distinction between demonstrations and true simulations. The contrast is especially important in education and training. The key distinction is that, in true simulations, the user controls the course of events. Take a look at this side-by-side comparison.
In a demonstration, the user watches passively as the task is performed. It is like looking over the teacher's shoulder. The user may start and stop the demonstration, but does not actually perform the steps being shown. In a true simulation, the user decides and acts. The simulation may animate the system's responses to the user's actions, but it is up to the user to decide what to do and how to carry out those actions.
In a demonstration, the user learns from a clear explanation. A logical commentary or narration is essential in a demonstration. In a true simulation, learning relies primarily on authentic feedback, like that provided by the actual system or by a coach experienced in using the system. Simulations let users practice as much as they want or need to.
Demonstrations are highly effective for selling products and for informing users. They can make clear the great results possible with the product. And they can teach already experienced users how to extend their existing skills. True simulations are ideal for serious training and education because retention is greater and understanding is deeper.
These distinctions between demonstrations and simulations are crucial for two reasons. First, you should never send a demonstration to do a simulation's work. Or vice versa. Each has its uses but they are not interchangeable. They are, however, complementary. A good demonstration prepares and motivates a user to engage in a true simulation. Users who have used simulations to learn basic operations can often pick up additional skills by watching well designed demonstrations.
Levels of interactivity
Software simulations afford different levels of interactivity with the user. In some, the user is quite passive and in others the user is almost as active and self-reliant as with the real system. Let's look at different levels of interactivity possible in simulations and see where each level is appropriate.
Levels of simulation activities
I define four distinct types of simulation activities along a scale of increasing self-reliance by the user. Each different level of self-reliance corresponds to a different type of simulation.
At the bottom of the scale is the show-me simulation in which the user just watches the simulated task being performed.
Up the scale is the coach-me simulation. In this type of simulation, users follow instructions to perform the task themselves within the simulation.
Further up the scale is the test-me simulation, which measures the user's ability to perform the task within the simulation without prompting.
Finally, at the top of the scale is the let-me simulation, which instructs the user to perform the task with the real software.
How do these levels correspond to terms commonly used to refer to these activities? The show-me activity is what we usually refer to as a demonstration or demo. The term simulation is commonly used to refer to the coach-me and test-me activities. The let-me level is not usually thought of as anything other than just using the product.
Why is this scale important? It provides a staircase for educating users. Novice users start out at the bottom of the scale, because they know little about the simulated program. For them to become productive, proficient users, they must climb to the top level of the scale.
Within each level of activity are more specific levels of self-reliance, which I will spell out later.
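The four levels form an ordered scale of self-reliance, which can be summarized in a minimal Python sketch. The `SimulationLevel` name and numeric values here are mine, purely illustrative, and not part of any authoring tool:

```python
from enum import IntEnum

class SimulationLevel(IntEnum):
    """The four activity levels, ordered by increasing user self-reliance."""
    SHOW_ME = 1   # user watches the simulated task being performed
    COACH_ME = 2  # user performs the task, with prompting and feedback
    TEST_ME = 3   # user performs the task unprompted; performance is measured
    LET_ME = 4    # user performs the task with the real software

# Novices start at the bottom and climb toward full self-reliance.
assert SimulationLevel.SHOW_ME < SimulationLevel.LET_ME
```

Making the ordering explicit like this also makes it easy to sequence activities, or to let confident users skip a level.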
Different levels of learning
If your goal is to boost users from no knowledge of a computer application to skillful use of the application, you may need a range of activities at the different levels of interactivity.
Here is an example of a learning object that contains four levels of simulations.
You can visit this example at horton.com/sims. This example offers four levels of activities, each behind a separate tab. NOTE: This example requires the Flash plugin for your browser.
The Show me tab triggers a show-me demonstration that illustrates how to perform the task. The Coach me tab starts a coach-me simulation that prompts the user through the task and provides feedback. The Test me tab starts a test-me simulation to measure users' readiness to perform the task on their own. And the Let me tab provides a problem for users to solve using the real software.
This approach provides a stairway of progressive challenge. Users may select the activities in sequence. Or they may skip over a level if they feel confident they do not need it.
Show-me activities
A show-me activity lets users watch a clear, convincing demonstration of a task, procedure, or feature. In the show-me activity, the demonstration performs the actions while the user watches. Actions are explained by commentary provided as displayed text, spoken narration, or both.
Here is an example of a show-me activity:
In the show-me activity, there is no direct interaction with the user. The user may click a Play button to begin the demonstration and a Continue button when it pauses, but that is about all.
This lack of interactivity is both the strength and the weakness of the show-me activity. It is a strength because it requires little skill or motivation on the part of the user. Users who can play a VCR can watch a demonstration. But watching is not the best way to learn a complex activity. Without practice, the user may not remember much of what is shown in a demonstration. Still, all in all, the show-me activity makes a fine start in a learning sequence that continues with a coach-me activity.
Several types or genres of show-me activities are possible. One popular type demonstrates a scenario. It shows a real use of the program to accomplish a particular piece of work. Here is an example:
This example starts with a personal and pragmatic goal. It includes sample data describing a realistically complex situation. The rest of the scenario shows how the goal was accomplished–a real-life activity.
Use a scenario approach to demonstrate a practical result the user can achieve with the software. A scenario can also be used to motivate further learning. In either case, it prepares the user to engage in a coach-me activity.
In creating scenario-style show-me simulations:
- Pick an example that is realistic and meaningful to users.
- Tell users what they are seeing rather than how they are to do it.
- Narrate in the first or third person to make clear that the user is watching someone else perform the task. The first-person narration might say: "Here's how I performed the task." The third-person narration would say something like: "Here's how an experienced user might perform the task."
The user-interface tour introduces users to parts of the screen they will need to interact with to operate a computer program. This example points out the main areas of the screen and provides commentary on each area:
The user-interface tour helps orient and entice users. It answers questions such as:
- What's there?
- What can the program do?
- What do these icons mean?
The user-interface tour covers the main areas of the interface and shows features crucial to its use. It provides users with a clear overview of the interface without overwhelming them in detail. The successful user-interface tour avoids providing too much detail too soon. It may, however, offer links to demonstrations of specific features. That way, users can choose how much detail to see.
Another approach in show-me activities is the feature demonstration. It shows an important part of a computer program, such as a complex command or a valuable capability. This example shows the start of a demonstration of the feature of specifying people for a timer-picker to select from:
Feature demonstrations usually show a simple way to use an individual capability of a computer program to perform a task–a task that many users perform frequently.
The feature demonstration may concentrate on a single command or tool in the program. Or it may describe an individual dialog box where the user may need to take several actions.
Because the feature demonstration provides details on specific parts of a program, it often follows the user-interface tour which provides an overview. Or the user-interface tour may provide links to feature demonstrations that elaborate on the components mentioned in the user-interface tour.
What goes in Show-me activities
Here is a job aid I developed to guide designers in creating show-me, coach-me, and test-me activities.
Action occurs in three phases. First, the user is introduced to the task. Next, the user performs the steps of the task. Finally, the user examines the results and reflects on what was learned.
For each phase, components are listed. This is a rather complete list. The circles at the right flag which components are used in which levels of simulations.
This list helps when developing multiple levels of simulations for a task. It aids in planning the overall development process by identifying components that can be reused in different levels.
Coach-me activities
The coach-me activity guides the user in performing the simulated task. It provides clear prompts for each step in the procedure and meaningful feedback on the success of each step the user takes.
Within the coach-me activity, you can provide varying levels of support or scaffolding. A coach-me activity for beginners might provide explicit instructions for each step ("Click here. Type ‘123' and press Enter"). Or, it might provide only general instructions ("Enter the postal code."). For advanced tasks, the coach-me activity might withhold instructions until requested by the user.
The key design principle of the coach-me activity is that the user should never succeed without thinking or fail for lack of information.
What is coaching?
Coaching learners through a task requires giving them freedom to act and advice when they need it. Following is an example of how coaching was applied in one simulation.
The example shows the user at an intermediate step in a procedure. The goal for the procedure is displayed at the bottom of the simulation, just above the navigation buttons.
But what if the user cannot figure out the next step? The user can then click the Hint button to receive a suggestion.
Notice that the hint is phrased as a question. A hint does not tell the user what to do. Instead it directs the learner's thinking in a way that helps the learner recall or discover the solution. A hint may define an ambiguous term, direct attention to a specific part of the display, or highlight a part of the original goal.
Clicking the Show how button reveals explicit instructions.
Experienced users may be able to complete the procedure without ever requesting assistance. Novices may have to continually punch the Hint and Show how buttons. Each user gets the assistance needed when and where it is needed. And that is the nature of coaching.
Architecture of coach-me simulations
Coach-me simulations have a more complex organization than show-me demos. This organization, or architecture, is designed to embed the necessary prompting and feedback. Let's look at the architecture for the coach-me activity I just described. This architecture enables users to attempt the task on their own. It also allows them to receive hints or explicit directions if they request them.
This diagram shows the structure for each step in the simulated task. At the start of the step, the user sees a simulated screen, without any prompting. If the user makes the correct response, the simulation continues with the next step. (If the step is complex, you may want to display a brief note confirming the correct response.)
Users who need just a little assistance can click a Hint button to reveal a suggestion for what to do. The suggestion does not tell the user exactly what to do, but does guide the user to think of the solution. A correct response then puts the user back on track.
If the hint is not sufficient, the user can click the Show how button to receive precise instructions on where to click and what to type. Once the user follows the directions, the simulation continues.
In this architecture, the user does not have to request a hint before receiving explicit instructions. The user can always click the Show how button to receive explicit directions.
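The per-step logic just described can be sketched as a simple loop. This is a hypothetical sketch, with names (`Step`, `run_step`, `get_action`) of my own invention; authoring tools such as Captivate generate equivalent logic without programming:

```python
from dataclasses import dataclass

@dataclass
class Step:
    correct_action: str  # the response that completes this step
    hint: str            # a guiding question, revealed only on request
    show_how: str        # explicit directions, revealed only on request

def run_step(step: Step, get_action) -> list:
    """Loop until the user performs the correct action.

    get_action() returns the user's next input: an action name,
    "hint", or "show how". Returns the assistance that was shown.
    """
    assistance = []
    while True:
        action = get_action()
        if action == "hint":
            assistance.append(step.hint)      # nudge thinking, don't tell
        elif action == "show how":
            assistance.append(step.show_how)  # precise instructions
        elif action == step.correct_action:
            return assistance                 # on to the next step
        # any other action: the user simply tries again
```

Note that nothing forces the user to request a hint first; the Show how branch is always available, just as in the architecture described above.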
The example shown here is but one possible architecture for coach-me activities. You might choose to have three levels of assistance available–or only one. You might choose to display prompts or hints without requiring the user to request them. Best practices do not ordain any particular architecture. Best practices suggest you select an architecture based on the difficulty of the task you are teaching and on users' experience performing similar tasks. Remember, users should be challenged, but not frustrated.
Test-me activities
The test-me activity gauges the user's ability to perform the simulated task without assistance. Although users are still within the safe confines of the simulation, test-me activities require them to act more independently than coach-me activities do. In the test-me activity, the user is assigned a realistic task to perform. It may be a simple or a complex task, but it is the kind of task the user would be expected to perform when using the software back on the job.
Here is a frame from a test-me simulation. It looks like the coach-me activity except that there are no navigation buttons. Navigation occurs only by successful completion of steps of the procedure.
In test-me simulations, minimal feedback is provided to the user. Remember, this is testing, not teaching or documentation. Generally, the only feedback is that provided by the actual user interface of the program being learned. Sometimes users may be told when they have made a mistake and instructed on how to continue.
Test-me activities may or may not record a test score for the user's performance. If you are required to certify users' readiness to perform tasks with the program, you will probably want to record a score or at least a pass-fail indicator. In any case, share results with users. They will appreciate knowing if they have learned sufficiently well to begin using the program on their own.
If you are providing minimal feedback, why use a simulation at all? Why not have learners use the product itself, as in a let-me activity? Well, you may choose a simulation because it can record data on users' progress and because the simulation is safer than operating the real system.
Architecture of test-me activities
The architecture of test-me activities is relatively simple. Each step of the task follows a common pattern.
At the start of each step, the simulation waits for the user to respond. It does not provide any prompting other than what the user would receive using the real product.
If the user makes the correct response, the simulation shows how the system would react to that response. And the simulation moves on to the next step in the task.
If the user makes an incorrect response, an error message may be displayed. This message typically points out the error, records the error, and tells the user what to do to continue with the simulation. Once the user makes the correct response, the simulation continues as if the user made the correct response initially. However, no credit is given for a correct response after an error.
By requiring a correct response to continue, you help the user to learn to avoid the mistake in the future. If you do not require the correct response, the user may very well remember the incorrect response rather than the correct one.
If several different errors are common at one point in a task, you may want to include separate error pages for each. That way your feedback can be more specific.
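The scoring rule for one test-me step, including the no-credit-after-error policy, fits in a few lines. A hypothetical sketch, assuming each step loops until the correct response arrives:

```python
def score_step(responses, correct_response,
               error_message="That is not correct. Try again."):
    """Consume user responses until the correct one appears.

    Returns (credit, errors): credit is 1 only if the very first
    response was correct; errors is the list of messages shown.
    """
    errors = []
    for response in responses:
        if response == correct_response:
            return (1 if not errors else 0), errors
        errors.append(error_message)  # point out and record the error
    raise RuntimeError("user gave up before completing the step")
```

Requiring the correct response before moving on mirrors the rule stated above: the user should leave the step remembering the right action, not the wrong one.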
I have shown one possible architecture. Another architecture for test-me activities foregoes error messages altogether and just has the simulation respond as the real system would. This makes the test much more realistic but less instructive. The choice will depend on the degree of rigor you demand in your test and on the user's stage of learning the task. If users are just beginning to perform the procedure on their own, feedback on errors can help them perfect their learning and correct lingering misconceptions.
Let-me activities
The let-me activity bridges the gap between simulation and application. It requires users to perform a task with the real software. Let-me activities provide a clear assignment and standards for gauging success.
Here is an example of the first step in a let-me activity. It is the page that prepares the user to begin the activity. Buttons at the bottom display other parts of the let-me activity.
The core of the let-me activity is a clear goal. The goal should be appropriately challenging but in no way dangerous to the user, the computer, or precious data.
Typically, the let-me activity provides a scenario complete with source data. It may include instructions for getting started. For example, it may tell the user how to download files, set up needed directories, and install resources.
Because the let-me activity is performed with the actual software, you may want to include criteria that users can employ to evaluate how successful they were in completing the procedure.
Architecture of let-me activities
Let-me activities can have any architecture you wish provided they give users the instructions they need to begin using the software on their own. But just giving open-ended instructions may not suffice to produce a successful experience.
Let's look at one structure that provides the support needed for users to perform a let-me activity with the real software.
Components of Let-me activities
Here is a more detailed job-aid listing components commonly found within let-me activities.
Notice that let-me activities are divided into four phases: prepare, perform, evaluate, and reflect. For each phase, the job aid lists the actions the user performs and the resources the user requires.
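The four-phase structure can be summarized as a simple outline. The phase names come from the job aid discussed above; the entries under each are drawn from the surrounding text and are illustrative, not a complete component list:

```python
# Hypothetical outline of a let-me activity; entries are illustrative.
LET_ME_ACTIVITY = {
    "prepare": [
        "clear, safe goal",
        "scenario with source data",
        "getting-started instructions (files, directories, resources)",
    ],
    "perform": ["the task itself, carried out in the real software"],
    "evaluate": ["criteria for judging how successfully the task was done"],
    "reflect": ["review of what was learned"],
}

assert list(LET_ME_ACTIVITY) == ["prepare", "perform", "evaluate", "reflect"]
```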
Plan progressive interactivity
To lift users from first acquaintance with the application to expert usage, you must lead them through a series of progressively more difficult activities, each activity building on the last and providing appropriate challenge. Think of it as designing a ladder where you must decide the correct spacing of the rungs. Part of your task as a designer is to select the levels of simulation necessary to accomplish your goals. Should you encourage users to take small, easy steps or bold strides?
Let's consider what kinds of specific activities users will engage in as they progress through the various levels of simulation activities. Here are some possibilities arranged along the scale of user self-reliance you saw earlier.
At the bottom of the scale of user self-reliance, at the foot of the show-me level, the user just watches a demonstration passively from beginning to end.
Just above that, at the top edge of the show-me level, the user controls the playback, typically by clicking a Next or Continue button each time the demonstration pauses.
One step up, at the bottom of the coach-me level, the user performs the simulated procedure, but is given explicit prompts every step of the way, for example, "Click here" and "Press Enter."
A more challenging coach-me activity might offer prompts but require the user to request them.
Still more self-reliance results when the coach-me activity requires users to follow open-ended prompts that suggest what to do but not exactly how.
Further up the scale, coach-me activities provide no prompts, but let users request hints when needed.
At the upper limit of the coach-me activity, users receive no assistance beyond hints that appear when they make mistakes.
Test-me activities provide very little assistance other than feedback on errors. At the bottom end of the test-me level, the feedback might occur immediately after an error.
At the upper end of the test-me activity, the user might receive feedback only after completing the task or giving up.
Let-me activities can be performed at two levels of self-reliance. At the bottom level, the user performs a task prescribed by you. You may structure the task to make it easier for the user to perform. You might also include some hints or cautions to keep the user on track.
At the top end of the scale, at the most self-reliant end of the let-me activity, the user performs a task of their own choosing. This level is indistinguishable from real usage and represents complete self-reliance. Hum the graduation song and toss your caps in the air!
Unless you have unlimited budget and your users have unlimited patience, you will need to pare down to just a few levels of interactivity.
Let's look at an example from a recent project. Here our goal was to teach engineers to use a moderately complex piece of software.
We decided to start at the bottom show-me level by having users watch a demonstration of the complete task. This would ensure that they saw the big picture before they had to confront the details. That way, nothing they saw later would come as a complete surprise.
Next, we had them attempt the procedure with minimal prompting in a coach-me simulation. We knew that the leap from watching the task being performed to attempting it was large. So we designed some fall-back levels into the coach-me activity. Users could, when stumped, ask for detailed hints. If that level was not simple enough, they could then request explicit instructions on where to click and what to type. This way, one coach-me activity provided three levels of interactivity spanning the full range of coach-me simulations.
After some prototyping, we decided to omit test-me activities, as they added little to the coach-me activity done with minimal prompting.
We did include a let-me activity to have users perform a task with the real software on their own. We provided them with a specific goal, sample data, general instructions, and tips to evaluate success.
Your situation will be different, of course. But I hope you will go through the same thought process in deciding what levels of interactivity you need.
William Horton is a recognized international authority on appropriate uses of new electronic media. William Horton is author of nine books on technical communication, including Designing and Writing Online Documentation, The Web Page Design Cookbook, and Designing Web-based Training. He is a Fellow of the Society for Technical Communication, recipient of the ACM SIGDOC's Rigo Award for advances to software documentation, and winner of the IEEE Professional Communications Society's Goldsmith Award. He has delivered invited talks in China, Sweden, Germany, France, Denmark, Brazil, Canada, and the Philippines. Still not impressed? The kitchen that William and his wife, Kit, designed for their house in Boulder, Colorado, was twice featured in Better Homes and Gardens.