
How do we know what (and how well) participants learn from short educational programs? The answer, at least in my experience, is too often: "We don't, but we're sure they do!"

But the question is a serious one for those of us who work in non-profits and design educational programming, and it is quite reasonable for donors to want to see some evidence of learning from the programs they fund.

It's an issue that I struggle with because, on the one hand, I believe that learning is difficult to measure, particularly in the short term. On the other hand, I hope donors want to know what is happening in the programs they support and that they will seek out some evidence of impact, and it's my responsibility to provide them with that evidence.

Several weeks ago, I was fortunate enough to participate in a meeting with a number of other non-profit professionals who work in educational programming. Caren Oberg, of Oberg Research, led a workshop for us on measuring educational impact, and one of the most important things I learned was that this question - how do we know what participants learn? - is too general. To measure impact, it's important to first look at the questions we're asking, determine why we want to know the answers and how we will use them once we find them, and consider some of the assumptions we make when we design our programs.

Caren conducts free online webinars that introduce many of the topics we covered in the workshop, and she is also willing to work with institutions and individuals to help them design measurement for their programs. It was a great workshop, and, while we still haven't narrowed our questions enough, I know my colleagues and I are looking forward to working with Caren again as we try to find reasonable ways to measure our impact in educational programming.

