Reflections on #lrnchat: Learning Analytics

Image use courtesy of lrnchat and Kevin Thorn (@LearnNuggets)

Each week that I am able to participate in the #lrnchat discussion, I post a summary of the discussion to my blog. I do this both for my personal development and to share with the Learning and Development profession at large. This summary is based on my own interpretations of the chat; others who participated may have differing opinions or interpretations of the discussion. I welcome those who do to add their ideas in the comments.
 

The topic of this week’s #lrnchat session was “Learning Analytics”.
I always find that looking at the questions used to loosely guide the chat is a nice way to see its overall theme. Here are the discussion questions that were presented to the group:
Q1) What learning data does your org collect? Why? What problem is org trying to solve?
Q2) Are there ethical issues in collecting learning data?
Q3) How are you collecting formative and summative data now?
Q4) What about informal and social learning? Can we/should we try to measure that? How?
If you want to gauge the effectiveness or progress of something, you need to collect some sort of data that can be analyzed. Sometimes the data is very simple, like the reading from a fuel gauge that shows you how much gas is left in your tank. Sometimes the data collected is much more detailed and robust, like the seemingly endless statistics collected by sports organizations.
Collecting data for analysis as a means of gauging performance is a practice that exists in almost every aspect of life, including the learning and performance departments of organizations.
When you hear about the effectiveness of a learning program, the discussion often focuses on the effects of the program. You’ll hear about changes in performance, the effect those changes are having on the organization, and perhaps a discussion of the program’s ROI.
What you don’t hear about as often is the data that was collected and analyzed to enable the learning department to come to these conclusions. An analysis is only as credible as the data behind it.
This week’s #lrnchat focused on the data collection methods used in Learning and Performance departments.
The discussion started by sharing the types of data being collected by learning departments, as well as exploring the purposes for collecting the data.
The majority of the responses confirmed some of the shackles we place on ourselves within our organizations. What we are tracking are mostly widgets: number of course completions, test grades, and the proverbial “butts in the seats”. The other data collected by many was reaction data from program participation, commonly referred to as “Smile Sheets”.
Think about that for a moment. The two most commonly collected forms of data are respected so little by organizations that there are commonly known phrases that openly mock their value. That’s a problem.
I think part of the problem, historically, is that the learning function provides the data that is asked for rather than the data that is needed. If you don’t understand the metrics of a field, chances are you’re going to ask about the obvious widgets associated with the topic.
In the case of learning, the most common widget is ‘butts in the seats’. I point the finger of blame for this in two directions.
The first area I see causing this issue is Compliance Training. In many organizations, this is the first, and possibly only, learning program for which data is requested. It’s also training that is often less about performance improvement and more about being able to report that the required training took place.
In my experience, most stakeholders ask compliance-related questions that fall under the theme of “Did everyone complete it?” and not “Is everyone’s performance meeting the compliance standards?” The data being asked for, by both the organization and the regulators that enforce the compliance standards, is ‘butts in the seats’.
The other area I see as contributing to the data problem is the learning field itself. We should not be waiting to be asked for data; we should always be showing the value being created by our efforts and by learning in general.
In the organizations I have been a part of, there is not a great understanding of the link between learning and performance. There’s an accepted connection between the two, but not an understanding of the pathway that leads from one to the other.
The organization cares about performance, not learning. Those who do not understand the pathway from learning to performance will likely ask about the numbers.
I rarely share learning ‘data’. Honestly, I don’t have much interest in the reports I am able to get from my LMS, and if I don’t have any interest in them, why should I expect anyone else to?
I do collect data, but unless pushed for it, I don’t share it. It’s not an ownership issue, though that is something that will be considered later in this post. I just don’t think the data itself has value. The value of learning isn’t something you will find on a spreadsheet. The value of the data I collect is in the story it enables me to tell a stakeholder.
Stories feel much more real than sterile and ultimately non-valued data.  In my experience, most of the stakeholders value and trust the story more than the data, though such trust does need to be earned.  For stakeholders that are interested in the data behind the story, I have that and can share it too.
The main point here is that I’m not going to wait for someone to tell me what data to provide. Ideally that would be part of the needs assessment conversation, but in some organizations that’s not standard practice. Even if I do not walk away with set metrics to report back on, I will walk away from those initial discussions with an understanding of the types of metrics the program needs to impact. That understanding gives me the framework for the story I need to tell.
I believe that in organizations with a more mature learning culture, much of what I’m describing is formally built into the structure of the workflow, and that’s a good thing. In the absence of that, though, learning professionals need to step up and fill the gap.
From here the discussion moved on to the ethical issues that may exist with the very concept of collecting learning data.  For me, this is a question that needs to be analyzed beyond the initial knee-jerk reaction.
I think for many, and this was represented in the discussion, the immediate reaction is to say that collecting learning data is unethical; it goes against the principles of organic growth.
I think there’s truth in the ethical concerns, but I don’t think it has to do with the data collection itself. Learning data is just that: data. It’s not the data itself that is unethical; it’s how we collect it and how we use it that raises ethical concerns.
I think it starts with the plan, specifically, having one related to your data. Any time data is collected without a set plan for its use, you’re opening the door to ethical issues. For example, many managers use the data for “Gotcha Management”; they use the data to hold people accountable and punish those who have not completed courses.
Don’t get me wrong: accountability is a huge piece of the performance improvement puzzle. In the context of data, though, it’s important that the data is used effectively. Much of the data I collect is supplied directly or indirectly by the participants. If we use the information they supply against them, how forthcoming will they be in the future?
Another pet peeve I have related to the ethics of data collection has to do with what we tell learners about the data we collect. How many organizations make learners complete Level 1 evaluations, even explaining that their feedback is important and will impact future programs, and then do nothing with the data?
Is it ethical to tell someone their opinions matter when in truth the data is not changing anything? I’m always amazed by the amount of data that is collected and never used. Make a decision regarding your data: either collect the data you need and use it, or don’t collect it at all. Your time and the time of learners are too valuable to waste collecting unneeded data.
The discussion then moved towards the methods we are using to collect our formative and summative data. Of course, in order to answer that we need to understand the difference between formative and summative data.
While there are a number of levels at which we could explore the difference, at the core the difference between the two is not about the data itself; it’s more about when the data is being collected. Formative data is data that is actively collected during a learning program, whereas summative data is data that is collected after the program is over.
While I do see a place for formal data collection methods like focus groups, surveys, and assessments, they are not my primary tools for data collection.  I cannot overemphasize the fact that the most important data collection tools we have are simply our eyes and our ears.
If I want to collect data on how participants are learning and performing, my first step is quite simply to stop talking. Getting people to talk about their learning in reflective ways, hearing that they see the connections between their learning and their work, and listening to them share ideas with each other on how to apply their new or enhanced skills provides me with more powerful data than I would likely ever get from an assessment or a smile sheet.
The other tool I mentioned is our eyes. Harold Jarche simplified learning to its core when he said “work is learning and learning is the work.” That being the case, if I want to see if someone is learning, one of the best ways I can do that is to watch as the person is working. Not only does it provide learning professionals with some of the most accurate data related to performance, it also provides an excellent opportunity for real-time performance support and coaching. It fosters continuous improvement.
I also believe that it’s important that we define work in this context. A work-based role-play or a simulation of the work environment is not a representation of work. These are excellent tools for reinforcing performance, but the only way to truly collect data related to work performance is to observe the work itself where it happens. Anything else really pales in comparison.
Need proof of this? Ask yourself how many times you have heard “That’s great, but that’s not how it works in the real world” from a participant. Learners already know this is true; we just need to follow their lead.
The chat concluded with an exploration of informal and social learning, and what, if any, data we should collect regarding it. What surprised me about this discussion was how much of it centered on a different discussion: whether one type of learning is better than another.
To me that subtheme underscored one of the main issues related to learning analytics.  The learning is actually irrelevant in most cases.  What really matters is the performance. 
In most cases, we don’t have a Social Learning Program or an Informal Learning Program; we have a learning program that incorporates formal, informal, and social learning techniques. It’s the collective influence of the program and all other external factors that ultimately leads to the performance.
Ultimately we design learning with the desired performance in mind.  When the time comes to collect summative data, we should be collecting data related to the performance, not the learning.  We can always backtrack to what factors influenced any performance change, including the effectiveness of each aspect of the learning program.
As I mentioned earlier, when I talk about the results of a learning program with stakeholders, I’m sharing a story, not data. Therefore, I rarely talk about measurements of the learning program. I’ll talk about measurements related to performance, and then I’ll build credible connections between the performance and the learning program.
Until next week #lrnchat-ers!
