For this week’s episode of The Skytap Virtual Training Podcast, we sat down with Clark Quinn at the 2016 Learning Solutions Conference and Expo after attending his session, “Measurement Matters: The How and Why of eLearning Metrics.” Learn why Clark believes “L&D has the opportunity to be the most valuable part of your business” and what L&D has to stop doing in order to get there.
Can’t listen right now? Feel free to download to your mobile device for later listening, or check out the full transcript below!
Noel: Clark, in your session yesterday, which was titled, “Measurement Matters: The How and Why of eLearning Metrics,” you talked about how you evangelize about measurement a lot. You mentioned that the learning community isn’t doing a good enough job with measurement, and I wanted to get your take on where L&D is failing. Is it how data is being collected, what kind of data should or shouldn’t be collected, how it’s being reported back to the business? Things like that.
Clark: Sure, Noel. To me, we’re collecting the wrong data. We’re not measuring the right things. What we see reported in the ASTD State of the Industry Report, and others, are metrics about how much it costs to have a bum in a seat for an hour. That’s all well and good after you’ve had an impact, but if you’re measuring that, and only that, and benchmarking yourself against everybody else, what you’re missing is whether that bum in that seat for that hour is actually doing anything.
What you really have to measure is the thing you’re trying to change, and that depends on which business unit you’re assisting. If it’s sales, it could be the cost of sales, or the time to close, or the hit rate. Those are the things people need to move, and those are what you should be measuring first.
You determine what you need to change, and then you figure out what would be different in the workplace. Then you figure out, “What intervention do I do?” And you can measure all the way back up. At the end of the day, you want to see if that needle’s moving, and if it’s not, your learning isn’t achieving anything.
Ralph: In our business, we see a lot of technical hands-on training for products. To advance customers’ competencies with those products, we see metrics like Net Promoter Scores, which you may be familiar with. Those seem to measure how well students enjoyed the class, or how well the instructor was prepared for it, but, again, they don’t really capture what the training means to the business.
What do folks need to do if they’re only collecting things like Net Promoter Scores? What data would you suggest they collect directly from the student that would give them the kind of metrics they actually need?
Clark: The problem with asking people’s opinions of the quality of the learning experience is that the correlation between their assessment of the experience and the actual quality of the experience is essentially zero. It’s .09, which is zero with a rounding error, okay? It’s just not valuable.
You can ask them whether they’re applying it in the workplace, and whether they’re getting support. Will Thalheimer has written a new book about smile sheets that talks about improving them. There are valid things you can ask, but that’s still not enough. It’s still not really going to tell you whether the training is making any difference in what it’s supposed to accomplish.
If you can only ask the learners, the first thing is that you don’t ask right after the learning experience; you ask after they’ve been back on the job for a certain amount of time and say, “Are you now applying this? How is it making you change?” You want them reporting on what’s actually happening, not their opinion of it, if at all possible.
Noel: Something else came up yesterday during your session that I really liked. I like when people get up and actually make statements that ruffle feathers and make people think. You said, “Learning and Development has the opportunity to be the most valuable part of the business.” I think that’s probably because of the business impact it can bring about, but I wanted to get your take on how you see L&D being able to be that essential, and then maybe how it conveys that message to the rest of the business.
Clark: That value proposition really doesn’t come from training. The premise is that the world is changing faster, and the ability to plan, prepare, and execute against a predictable world is no longer really valid. What’s going to be critical to organizational success is the ability to be nimble, to be continually innovating and continually learning.
Really, when you’re innovating, problem-solving, or troubleshooting, you’re learning, because inherently you don’t know the answer when you start. Those are forms of learning as well, and being able to learn more effectively and efficiently is really an opportunity for L&D to seize. I have a simple statement: “L&D isn’t doing near what it could and should, and what it is doing, it’s doing badly. Other than that, they’re fine.”
On the point of not doing near what they could and should: they could and should be facilitating and ensuring that the culture and the practices are right, so people get the maximum from their knowledge work, working alone and together as effectively as possible. That’s the opportunity for L&D. If they’re the ones making the things the organization needs to do, the innovation and the troubleshooting and all those “agility activities,” as effective as possible, that is the most critical contribution to the organization.
IT is a valuable tool, but using it in ways more closely aligned with how we think, work, and learn is the real opportunity. That’s where I see L&D seizing the day, and if they don’t, they’re decreasingly relevant to what the organization needs to do.
Noel: One of the ideas for how they can start to do that, and you mentioned this a little bit earlier, was starting with business impact and prioritizing that, and even incorporating business impact into design.
For organizations that agree with that but haven’t started, and have never even contemplated doing it that way, what are some first steps they can take to not just think about business impact first but, like you said, actually make it part of the design process?
Clark: Let me go back to that split between optimal execution, which is only the cost of entry anymore, and continual innovation.
Starting with optimal execution: too often we take the request for a course and develop a course without any consideration of what that course is really supposed to achieve. And more importantly, what is the real problem? The first thing to do is to stop just saying, “Okay, you want a course, great,” and instead ask, “What’s the problem you’re trying to solve?” That means moving to performance consulting, where you ask, “Is it really a skill these people don’t have and need, or is it that they’re not able to access the knowledge they need right now? Or are they perfectly capable of doing it, but they believe they’re supposed to be doing something else, or the reward system is structured to have them do something else?”
You need to get down to the root cause of the problem, and the root gap in performance, and then ask, “What’s the right intervention?” Part of that ends up being about measurement, because you ask, “How do you know this isn’t where it’s supposed to be?” You use their metrics, and as you start doing that, you build credibility for having a demonstrable impact on the business.
Then you move to the continual innovation side and start facilitating that, but you also start looking for metrics on innovation. Are we reducing the number of errors in our production through new processes? What’s our rate of new product ideas, and is it increasing along with the quality of those ideas? Are we reducing code errors? Are we troubleshooting faster?
There are lots of ways. It’s more challenging, admittedly. People talk about intangible benefits, but I think there are things you know are better signs of doing the right stuff if they start happening more frequently; you just have to figure out what those things are. I say build credibility by first applying these measurement practices to the optimal execution side, and that will give you credibility you can leverage to start addressing the continual innovation side.
Ralph: Let me see if I understand that response and apply it to something we see in our field, mostly technical training. The theory is that giving appropriate instruction on skill sets will turn your customers into repeat buyers, get subscriptions renewed at higher amounts, or even reduce the burden they put on your technical support staff. Are you saying, then, that instead of looking at it as, “I need to teach these certain behaviors, or these certain practices, within my product,” I should go back and look at my top ten support cases, and perhaps create curriculum around the things that people stumble over the most?
Clark: Absolutely. The reason you go back to those top ten support cases is because those are the problems people have. It might even indicate a need for interface or product redesign, because fundamentally it’s just not matching the way people’s brains work. Many times, and I don’t know about you, but when I use technical products like my iPhone, I’ve never had formal training on them. I figured out how to use it from the interface, and when I have trouble I go to the site and try to use the self-help tools. If I don’t remember afterward what I did to fix it, it doesn’t matter. Learning doesn’t have to be the outcome for something to be the solution. When people watch videos about how to do something, like fixing their dryer, which I did not that long ago, I don’t remember now what I did, but I solved the problem in the moment.
So, yes, go back and figure out where the barriers are. Now, if people need to use your technical software and it’s complex, and I figure anything more complex than roughly a digital clock probably warrants some training, but if you’re doing it well … and too much technical training is woefully abysmal, by the way, because it tends to walk you through all the menu items instead of giving you the model behind what’s happening. I can go on a whole diatribe about that.
The point being: yes, you should have measurable impacts, but they should show up as a reduction in the number of support calls, and the savings should indeed be lower support costs. That’s what you expect.
Of course, Kathy Sierra has a recent book, “Badass,” that talks really nicely about helping people be successful and building raving fans, not because of your product, but because of what they can do because they have your product. Then they become really loyal to the product, but that comes about from design, not training. There is some learning affiliated with it, but you try to build that into the product itself.
Noel: The last question I had for you: you also said yesterday during your session, “People who care deeply about metrics that matter are taking an engineering approach to training.” There are a lot of sessions at this show about metrics, proving training’s value, and staying relevant. It seems like a lot of these suggestions help with that, in that you’re not just going to the CFO or the business and saying you’ve reduced the cost of training, or that you were able to train more people … This engineering approach shows that you really did impact the business, not just training alone.
Clark: There’s a lot more to the learning engineering approach, but at the end of the day it’s being able to make a real argument that, “By design, we solved this problem.” Too often, when somebody comes to ask you to make a course, you go look at the material, you talk to the subject matter expert, and whatever they tell you, you put into the course and you finish it. There are a number of reasons why that’s just broken.
First of all, subject matter experts don’t actually have access to 70% of what they do. It’s compiled away, which is part of our brain architecture, so just taking what they tell you doesn’t work. Then just presenting that information and doing a knowledge test, which is what these tools and this pressure for speed produce, isn’t actually going to lead to any sustained ability to do anything differently. If you really want to take an engineering approach, think about what somebody yesterday was describing: when you go to your mechanic, he says, “What’s the problem?” You talk about the sounds, and he does some diagnosis. You want to find out what the problem is, and then you work backwards and say, “Well, what was the change?”
There’s some art to it, too. You can systematically design the engagement in, and there’s still a little bit of creativity there, but even thinking about engaging the heart as well as the mind means going deeper into the cognitive underpinnings, and that’s the main problem: we don’t design learning for the way our brains really work. When you work backwards, then design forwards, and test and verify and measure and tune until you get it right, that’s an engineering approach.
That’s going to lead to real, measurable impact, and that’s the story you’re going to be able to take back to the CEO at the end of the day and say, “I helped the company this way.” Then you can go in and do the ROI calculation and say, “It only cost me X to move the needle Y,” but now it’s not this faith-based approach to learning that we have.
It’s basically, “If I build it, it is good.” Most of corporate America’s attitude toward training is, “Well, we know school, and I went to school, and this looks like school, so it must be good.” Unfortunately, it’s not.
Ralph: We rarely get to talk to folks who are experts at metrics around training. I was curious whether, in your travels, you’ve seen any research on the effectiveness of training that isn’t only lecture but also provides a hands-on, interactive experience for the student, whether it’s a technical lab that supplements the instructor-led training or something they actually go and do by way of demonstration. Have you run across research on the effectiveness of training in those scenarios?
Clark: Yeah, there have been a variety of studies, well done, done properly, with the right ratio of content presentation to practice, meaningful challenges, and retrieval practice. Yes, you get significant effects from it.
[Clark later referenced “The Science of Training and Development in Organizations: What Matters in Practice” as one such study demonstrating the benefits of interactive, hands-on learning.]
In particular, the work of David Johnston on problem-based learning, and the meta-analysis that Strobel and van Barneveld did, shows that if you give people real problems to solve, they may not perform as well on an immediate test afterwards, but they retain it much longer and they’re able to transfer it better.
We know that giving people challenges that drive them to the knowledge first, then require them to draw upon it, retrieve it in a context, and apply it, and, if you give the right contexts, let them abstract across them, works; we have strong evidence that that’s much better than trying to just dump a bunch of knowledge in their heads. Too often the ratio tends to be 80% content presentation and 20% practice, and I think we have to roughly flip that to get real sustained abilities, but yeah, hands-on is the right way to go.
Like what you heard? Click here to download or stream other episodes of The Skytap Virtual Training Podcast, and look for new episodes with other awesome guests to post soon!