Training & burgers
Picture this: four plated burgers, their buns softly steaming. A quad of hungry lunchers. One very frustrated office canine (our very own Director of Mood, Jessy).
The scene? The Talent Hacks HQ. The mission? To demonstrate to professionals participating in one of our Bespoke Academy HiPo programs that standardized, structured and effective evaluation was both possible to design and applicable in myriad other contexts.
Our immediate task was to lead four half-day Virtual Workshops: to observe, understand and evaluate behavior in order to determine person-job fit. We would also explore the structure required to standardize the evaluation process.
So, we thought, what better way of putting our learning and development training to the grill than by using the magic of standardization to assess and evaluate… something in parallel. Something completely different. As in, not real-life human beings in a corporate context.
Something like… the best burger.
Before you go thinking this was hunger taking our methodology hostage, you should know that we, at Talent Hacks, are also diehard nerds. We love nothing better than impossible questions that generate 100+ possible answers.
In this case, the impossible question was: “How do you evaluate a burger so that a group of people (us) reach a consensus about which burger – in this case, out of four brands – is the best?”
The Burger Assessment Project (aka BurP) was born.
Here's to standardization!
What happened next?
Well, firstly we needed some essentials in place:
- Four assessors: Each of us was an assessor, and given we all knew the concepts of behavioral evaluation, we were good to go. The challenge was to achieve inter-rater reliability, so that each assessor’s scores were consistent with everyone else’s.
- Four burgers (x4): At the end of each Virtual Workshop, four burgers of a single brand – one per assessor – would be delivered for lunch. Brand A on day 1, brand B on day 2, and so on.
- An external helper: Someone other than an assessor had to handle the logistics, i.e. randomly pick one brand of burger – unfamiliar to them and to us – and order one for each of us, per day. They would also need to remove all branding and packaging and plate each burger. This minimized any consumer bias we might have had.
- Evaluation criteria & scoring sheet: Here we ended up with four categories (in a training context, these would be competencies…) with which to assess each burger: Ingredients, Cook, Flavor, and Build. A range of dimensions under each category ensured it would be assessed thoroughly.
- A scoring mechanism: Each dimension would be scored on a 1-6 scale. With no midpoint, every score had to lean either negative or positive – no safe middle ground. The extreme ends of the scale were described in detail.
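For the fellow nerds out there, here's how a score sheet like ours could be sketched in code. The four category names are straight from our sheet; the dimensions, the sample scores and the function names are illustrative stand-ins, not our actual evaluation material.

```python
from statistics import mean

# The four categories from our score sheet; the dimensions under each
# are illustrative placeholders, not our real evaluation dimensions.
CATEGORIES = {
    "Ingredients": ["freshness", "quality"],
    "Cook": ["doneness", "temperature"],
    "Flavor": ["seasoning", "balance"],
    "Build": ["structure", "bun-to-patty ratio"],
}

def validate_score(score: int) -> int:
    """Enforce the 1-6 scale: with no midpoint, every score
    leans negative (1-3) or positive (4-6)."""
    if not 1 <= score <= 6:
        raise ValueError(f"score {score} is outside the 1-6 scale")
    return score

def burger_total(sheet: dict[str, dict[str, int]]) -> float:
    """Average one assessor's dimension scores into a single number."""
    scores = [
        validate_score(s)
        for dims in sheet.values()
        for s in dims.values()
    ]
    return mean(scores)

# One assessor's (entirely made-up) sheet for one burger:
sheet = {
    "Ingredients": {"freshness": 5, "quality": 4},
    "Cook": {"doneness": 6, "temperature": 5},
    "Flavor": {"seasoning": 4, "balance": 5},
    "Build": {"structure": 3, "bun-to-patty ratio": 4},
}
print(burger_total(sheet))  # 4.5
```

Collapsing a sheet into one average is just one way to compare burgers; weighting categories differently would work too.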
The gruelling process...
Once these were in the bag, it was time to test, record and compare results. And that’s exactly what we did.
After each half-day workshop, we would sit down and eat our burger of the day. No talking, nods or facial expressions. No nom-nom sounds. Each assessor had to examine, consume, and then evaluate their burger on their score sheet – without sharing any details with their lunch companions. Next day, same process: rinse and repeat for four days. Pure, clinical assessment.
On Day Four, after the final burger, we held a consensus meeting to go over all our scores. And, wouldn’t you know it? We were unanimous on which cheeseburger was best.
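If you're curious what that Day Four tally could look like, here's a minimal sketch – with entirely made-up numbers, not our real scores. Each anonymized brand maps to the four assessors' overall averages; a small spread between assessors hints at good inter-rater reliability, and the highest mean takes the crown.

```python
from statistics import mean, stdev

# Hypothetical per-burger averages from each of the four assessors
# (brands anonymized as A-D, just like on tasting days).
scores = {
    "A": [4.5, 4.2, 4.8, 4.4],
    "B": [3.1, 3.4, 2.9, 3.2],
    "C": [5.2, 5.0, 5.4, 5.1],
    "D": [3.8, 4.0, 3.6, 3.9],
}

for brand, vals in scores.items():
    # A low spread between assessors suggests the score sheet
    # achieved decent inter-rater reliability.
    print(f"Burger {brand}: mean={mean(vals):.2f}, spread={stdev(vals):.2f}")

winner = max(scores, key=lambda b: mean(scores[b]))
print(f"Winner: burger {winner}")
```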
Loosening our belts a few notches, we declared mission accomplished. What’s more, we had proved to our workshop participants that an effective, standardized and structured evaluation process could bring clarity and consensus to almost any scenario.
Even cheeseburger ranking.
Of course, it may be a while before burgers feature on the Talent Hacks lunch menu again. Assuming our Director of Mood, Jessy, ever forgives us for the agonizing experiment in the first place…
Make sure you check out our other blog post, Burger Assessment Project – The video, to see how we made this project a reality.