Reflections on #lrnchat: Exploring Data

Image use courtesy of lrnchat and Kevin Thorn (@LearnNuggets)

Each week that I am able to participate in the #lrnchat discussion, I post a summary of the discussion to my blog. I do this both for my personal development and to share with the Learning and Development profession at large. This summary is based on my own interpretations of the chat; others who participated may have differing opinions or interpretations. I welcome them to add their ideas in the comments.

The topic of this week’s #lrnchat session was “Data: What Do We Need To Know, and What Do We Learn From It?”.

Looking at the questions used to loosely guide the chat is a nice way to see its overall theme. Here are the discussion questions that were presented to the group:

Q1. For all the data that is available, how much gets used? Do we learn from the data?
Q2. What are the shortcomings of the data we have? What data do we actually need?
Q3. Why do we collect so much activity data but so little performance data? Is activity data meaningful?
Q4. With the coming onslaught of more, “bigger” data, what are the risks? How have you seen data used for evil?
Q5. Do we do a good job finding the stories that our data tells us? How can we be better at making data useful?

I have a love/hate relationship with data. There’s tremendous power in data. Data often provides evidence for something that, without it, may be only a theory. It’s data that enables us to take action with confidence that we are on the right path.

That is… assuming we actually collect the right data, and actually USE it.

That’s where the hate side of my feelings towards data comes in, especially in the field of learning and development. We collect data at times, but do we actually use it? And honestly… is the data we collect, technically speaking, a bunch of crap?

Case in point: During an introductory meeting with one of the leaders of the training team at a former employer, I recall asking what the series of binders on the shelf in the office contained. I was told that they held the post-workshop evaluations from the org’s programs.

As I was brought in to help raise the bar of the organization’s training, this data could potentially help me. Unfortunately, its potential use fell apart after a few quick questions…

“Have the responses been entered into any sort of database?”
“Have the responses been analyzed in any way?”
…”Um… no.”
“So what do we do with these?”
…”We file them in the binders.”
“And then what?”
…”Well, sometimes we look through them when the annual performance reviews come up.”

We’re talking about ten to twelve three-inch binders full of information that was collected and never used. That doesn’t even take into account the fact that the information being collected in the first place wasn’t very useful.

Unfortunately, this story isn’t all that unique. Between smile sheets, test scores, and attendance rosters, training departments collect a great deal of data. But how much of that data is actually being used effectively? Do we actually learn anything useful from it? Here are some of the more common takeaways I hear drawn from the data training departments collect:

  • 88% of those enrolled completed the eLearning course.
  • The average test score was 92.6%.
  • We had 160 people enrolled in the program.
  • 93% of participants said the program exceeded or far exceeded their expectations.

These four points have a few things in common, but I’d like to focus on one specific one that represents a huge gap in the way learning and development looks at data. The bullets above are stats that you might expect to find in a spreadsheet. Is that data? Sure, but it’s not effective use of data.
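As a quick illustration (with made-up scores, not real program data), an average like the one above can hide exactly the part of the story that matters:

```python
from statistics import mean

# Hypothetical post-test scores for 20 learners (illustrative data only)
scores = [100, 100, 98, 99, 100, 97, 100, 98, 100, 99,
          100, 97, 99, 100, 98, 62, 58, 65, 60, 71]

# The stat a training department typically reports
print(f"Average score: {mean(scores):.1f}%")

# A simple segment breakdown starts to tell the story the average hides:
# a cluster of learners who clearly struggled
struggling = [s for s in scores if s < 80]
print(f"Learners below 80%: {len(struggling)} of {len(scores)}")
```

A respectable-looking average can coexist with a quarter of the class effectively failing; the aggregate stat alone would never surface that.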

Training departments have a history of using data to share statistics; we need to start looking at data differently and seeing how we can use data to tell stories. Of course, before we start looking at data for telling stories, we should first consider if we’re collecting the right types of data in the first place.

I think learning and development professionals need to take a fresh look at data. For starters, I don’t think most of us even consider what types of data we need. In the vast majority of cases, we simply use the data that is handed to us, either by our LMS or via some sort of well-established smile-sheet standard. The data used most commonly is easy to use, because it’s really just statistics.

But is that the data we need? As much as we often call ourselves learning professionals, I’ve never felt our primary role is to help people in our organization ‘learn’. In most cases, our goal is to support employee performance, with learning being a subset of that goal. And yet, most of the data we collect is learning activity; we don’t collect very much data related to actual performance. We have data about what people did to prepare for the work, but we need data about them actually working to truly measure if we’re being effective.

Some might say that things will get easier in the future, as the age of big data starts to affect the world of learning and development. The level of data detail available via big data is tremendous, and will enable us to tell data-driven stories that were previously unavailable to us. From that perspective, it’s an exciting time to be in the field.

Of course, there are always two sides to every coin. Usually when a new opportunity emerges, it is accompanied by new risks. Big data’s impact on learning and development is really no different. Here are two risks I see associated with this influx of data.

First, an existing risk will increase tremendously. Data itself is usually set; it is what it is. But data is often nothing more than information put together in a certain way to make a point or to tell a story. The new level of detail available via big data should enable us to increase the quantity and quality of the stories we are able to tell.

Sounds great, right? And ideally it should be. But let’s go back to the “information that is put together in a certain way to make a point or tell a story” part. As much as it would be great to think that all the stories are there to empower people and drive the organization forward for a greater good, the reality is often not so positive. Many people serve their own agendas. Too often, people start with a preconceived conclusion, then hunt for data that supports it rather than letting the data itself tell the story. These human motivations exist today, and there’s no reason to think they won’t continue to be an issue with the expanded data sets of the future.

The second risk? That one, I think, is on us as learning and development professionals. If new data enables us to better measure the effectiveness of the programs we run, it brings a new level of accountability to our field. I think that’s a good thing, but for the many professionals who have been enjoying the low accountability of cultures that simply count butts-in-seats and test scores to measure training department effectiveness, big data may represent a painfully eye-opening level of accountability.

Learning professionals need to start looking at data differently, before their organizations do it for them. We need to be at the forefront of data analysis and start looking at our own data in new ways. Historically we have treated data as endpoints, examining the learning that has taken place at the conclusion of a program. When you look at data from a storytelling viewpoint, those endpoints are only one small piece of an ongoing stream of data. We need to look at this continuous stream differently, spotting the trends that ultimately tell a story.
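Here’s a minimal sketch of that shift, using made-up numbers (a hypothetical weekly error rate pulled from a performance system, not anything a typical LMS exports today). A single endpoint reading says one thing; a rolling view of the stream shows the trend behind it:

```python
from statistics import mean

# Hypothetical weekly error rates (%) for a team in the months after a
# training program -- illustrative data only
weekly_error_rate = [5.1, 4.8, 4.9, 4.4, 4.1, 3.9, 4.0, 3.6, 3.4, 3.2, 3.1, 2.9]

# Smooth the stream with a 4-week rolling average to surface the trend
window = 4
rolling = [mean(weekly_error_rate[i - window:i])
           for i in range(window, len(weekly_error_rate) + 1)]

# The endpoint alone says "2.9% last week"; the rolling view tells the
# story: a steady decline in errors following the program
print([round(r, 2) for r in rolling])
```

The point isn’t the tooling; it’s that the story (“errors have fallen steadily since the program”) lives in the stream, not in the final data point.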

Of course, this all starts with collecting data that is actually relevant. The fact that employees liked the donuts we served but would prefer healthier options as well? Nice, but fairly useless. We need to ask ourselves what types of data we truly need. Where is the story that needs to be told?

If we don’t start there, chances are our data journey will be headed in the wrong direction.
