Quiz engineering: How our LMS makes sure knowledge is retained
Training | Features
By Dan Bignold | Feb 14th, 2025
From question design principles to optimal average quiz scores, there’s a science to making sure our learning software is cementing knowledge and building staff expertise.

Users get quite heated about our lesson quizzes.

If we do get negative lesson feedback, more often than not it’s because of some perceived injustice in a quiz question.

  • “Badly worded!”
  • “The quiz asked something that wasn’t in the lesson. NOT fair.”
  • “Your ‘correct’ answer is completely wrong.”

Which is great. It’s exactly why our learning management software allows lesson feedback – to fix errors, tweak questions, and generally improve the training for the next learner.

After all, we’re not in the business of trying to catch people out. When it comes to staff learning and development, no one needs a “gotcha!”

At the same time, there is little value in a “gimme”.

Quizzes are designed to help staff retain knowledge and cement learning. If they’re too easy, they won’t engage the user properly, and nobody learns. It’s a fine balance.

This is how our online platform does it:

1. Multiple-choice questions (MCQ), with about 3-4 questions per five-minute lesson. Answer options can be text or image – see the sketch after this list.

  • MCQs mean our LMS can test instantly, at scale, without requiring admins or managers of user groups to moderate or mark responses.
  • 3-4 questions means we can cover the key messages without dragging the lesson out and risking drop-offs – engagement and lesson completion are key. Fewer than three, too brief; more than four, a drag.
  • With 3-4 questions, we focus on testing key messages and, in the case of brand training, include one wider category question. For example, if it’s a lesson about Brand X’s Chardonnay, we might include one question about how climate factors influence the final wine (but still keep that category question relevant to the brand – again, no “gotchas”).
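
To make that structure concrete, here’s a minimal sketch of how a lesson quiz like this might be modelled in TypeScript. The type and field names are illustrative assumptions, not our actual schema:

```ts
// Illustrative sketch only - not our production schema.
// A lesson carries 3-4 multiple-choice questions; each question offers
// three answer options, which can be text or an image.

type AnswerOption =
  | { kind: "text"; label: string }
  | { kind: "image"; imageUrl: string; altText: string };

interface QuizQuestion {
  prompt: string;
  // Exactly three options per question (see point 3 below).
  options: [AnswerOption, AnswerOption, AnswerOption];
  correctIndex: 0 | 1 | 2; // which of the three options is right
}

interface LessonQuiz {
  lessonId: string;
  questions: QuizQuestion[]; // 3-4 per five-minute lesson
}
```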

2. Test key messages, not brand trivia.

  • Trivia can be engaging, and good anecdotes can show customers that staff are in the know (which is why they’re included elsewhere in the lesson). But quizzes need to focus on knowledge that’s essential for the majority of sales interactions with customers in retail or bars: flavour profile and tasting notes, drinking occasion (including customer benefit), and price (including position in its category).

3. With three answer options per question, make “wrong” answers plausible.

  • We don’t want guessing, but we also don’t want to gift-wrap the correct answer by making the other two ridiculous.
  • We want learners to recall what they’ve read, so we allow them to go back and review the lesson content before answering. Again, if the goal is learning, we want to give them every opportunity.

4. Avoid ambiguity and tricks.

  • Clear, direct, and no double negatives – keep it simple, so the focus is on knowledge retention, not on solving a puzzle.
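
Taken together, points 1-4 read like an authoring checklist, and parts of it can be checked mechanically. Below is a rough sketch building on the types above; the function and its rules are our illustration, not a description of the platform’s actual tooling – and question wording and answer plausibility will always need a human eye:

```ts
// Hypothetical authoring lint for the rules above - a sketch, not real tooling.
function lintQuiz(quiz: LessonQuiz): string[] {
  const problems: string[] = [];

  // Point 1: 3-4 questions per lesson.
  if (quiz.questions.length < 3 || quiz.questions.length > 4) {
    problems.push(`Expected 3-4 questions, found ${quiz.questions.length}.`);
  }

  quiz.questions.forEach((q, i) => {
    // Point 4: flag likely double negatives for a human to reword.
    // Crude heuristic: two or more negation words in one prompt.
    const negations =
      q.prompt.toLowerCase().match(/\b(not|never|no|none|isn't|doesn't)\b/g) ?? [];
    if (negations.length >= 2) {
      problems.push(`Question ${i + 1}: possible double negative - reword it.`);
    }
  });

  // Point 3's "exactly three options" is already enforced by the tuple type.
  return problems;
}
```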

How does our learning system measure whether it’s working?

If we’re doing our job right, we’ll see an average quiz score between 70% and 90%.

Lower than that? The quiz is too hard and the messages aren’t landing. Higher? Too easy: not enough users are having to engage properly to pass, or the training isn’t giving enough learners new knowledge.

Either result shows there’s room for something – wording, complexity, information – to be improved.
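
In code, that health check is just a range test on the running average. A minimal sketch, assuming per-user scores arrive as percentages; the thresholds are the 70-90% band above, and the names are ours:

```ts
// Classify a quiz by average score - a sketch of the 70-90% rule above.
type QuizHealth = "too hard" | "in range" | "too easy";

function quizHealth(scores: number[]): QuizHealth {
  if (scores.length === 0) throw new Error("No scores recorded yet.");
  const average = scores.reduce((sum, s) => sum + s, 0) / scores.length;
  if (average < 70) return "too hard"; // messages aren't landing
  if (average > 90) return "too easy"; // not enough engagement or new knowledge
  return "in range";
}

// Example: quizHealth([80, 75, 95, 70]) returns "in range" (average 80).
```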

And how do we fix problems?

We monitor feedback and average MCQ scores for all lessons to make sure there are no specific issues and that scoring stays within range.

Brands also have the option to view this live lesson data – including average quiz score, retake rate and user feedback – on their Brand Dashboard. This means they can spot problems and course-correct when a quiz isn’t performing well and therefore isn’t doing its job properly.
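
As a sketch of the kind of rollup behind that dashboard – the record shape and field names here are assumptions for illustration, not our actual data model:

```ts
// Hypothetical attempt record and per-lesson rollup - illustrative only.
interface QuizAttempt {
  userId: string;
  lessonId: string;
  scorePercent: number; // final score for this attempt, 0-100
  isRetake: boolean;    // true if this user had already attempted the quiz
}

// Assumes at least one attempt has been recorded for the lesson.
function lessonStats(attempts: QuizAttempt[]) {
  const total = attempts.length;
  const averageScore =
    attempts.reduce((sum, a) => sum + a.scorePercent, 0) / total;
  const retakeRate = attempts.filter((a) => a.isRetake).length / total;
  return { averageScore, retakeRate }; // feeds the 70-90% check above
}
```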