Teaching is a classic wicked problem. Anyone who thinks otherwise has never tried teaching.
The variables in the teaching/learning relationship are infinite, often unknowable and constantly subject to change.
This is what makes teaching the most challenging and involving thing I’ve ever done. It is far more complicated than, say … writing a book, which is a long and involved process, but which is also generally explicable and static while you’re doing the work. With teaching, every individual student in the room carries the potential for throwing a wrench into what you thought was going to work.
Again, I don’t know what this says about me, but I loved that about teaching, and it’s the thing I miss most now that my teaching is sporadic and most often takes the form of one-off workshops.
A recent article by Beth McMurtrie at The Chronicle of Higher Education unpacks some of the fundamental tensions when it comes to how we view and resource teaching in higher education. She notes that as far as the public is concerned, the quality of teaching is very important. We also know that good teaching is very important to ultimate student outcomes.
Unfortunately, the structures and incentives inside higher education institutions are largely aligned against making good teaching a paramount priority on most campuses.
To me, by far the biggest issue is the lack of resources given over to supporting teaching. I encourage readers to go deep with McMurtrie’s article to see the many ways this manifests itself inside institutions.
Those structural factors are going to be hard to change, even though I believe making those changes is an existential necessity for higher education. If we cannot deliver on the promise to educate—and not just credential—students, then what the heck are we doing?
There is one area where positive change would require a shift in what’s valued, but that shift costs us nothing and stands to do some real good when it comes to helping instructors teach better. We need to change the kind of research about teaching and learning that is valued by the academy.
In short, we gotta go qualitative over quantitative in a big way. Because teaching is a wicked problem, creating valid quantitative studies related to instruction often requires either ignoring or sanding away many of the complexities that inevitably exist in teaching. Consider the research that has now become conventional wisdom: that it is “better” to take notes by hand than on the computer. The 2014 study that is often cited as proof of this truism is careful, elegant and utterly useless in a real-world teaching and learning environment that goes beyond taking quizzes.
(As with lots of other social science research, there are questions about replication, but even setting those aside, the practical upshot of the research as applied to teaching in a truly meaningful way is pretty much nil.)
I’m willing to believe that taking handwritten notes on material presented in class may result in higher quiz scores on average, but of course not everyone is average, and taking quizzes is not the same thing as learning something meaningful. Additionally, a significant part of effective teaching should be focused on helping students understand what works for them and why, so they can build their individual knowledge-acquisition practices. So much education research winds up exploring the trivial because it’s the trivial we can corral and measure. To genuinely better understand how teaching works, we need to go bigger and smaller at the same time.
Smaller means centering the lens at the class/student level. Perhaps thanks to my prior experience in market research, I ran each of my courses as a semester-long qualitative research project. I established my objectives around student learning, set up my plan for fulfilling those objectives and measured the results against them both during and at the end of the semester.
Those measurements consisted of three main sources of data:
- My observations of what and how students were learning.
- The evidence contained in student-produced work, primarily their writing but also class activities such as discussion.
- Student reflections about their learning.
Based on these findings, I would adjust semester to semester, sometimes quite significantly, such as overhauling my entire writing pedagogy or moving to an alternative grading approach. My experiments were mine alone, though I would often adjust based on sharing experiences with other instructors via informal shop talk. I’ve lost count of the number of things I borrowed from someone else based on an offhand conversation in someone’s office or in the hallway.
Some of what I learned is now collected in my books, and there are a few blog posts scattered through the archives, but I sometimes think how helpful it would’ve been to have all of my colleagues sharing their experiments in ways beyond that shop talk. What would happen if we were incentivized to make the problem of teaching a central concern of our work?
Going bigger means generating much more of this stuff, so all this individual qualitative work can start to reveal useful patterns and practices.
I’m pleased to be able to share an example of exactly the kind of inquiry, data collection and research reporting I’m talking about in the form of a newsletter by Emily Pitts Donahoe, who currently works as the associate director of instructional support in the University of Mississippi Center for Excellence in Teaching and Learning.
Donahoe decided to formally record her experiment in alternative grading in her writing course, creating a series of journal entries concurrent with the course. Following the completion of the course, she published her experiences to her newsletter, Unmaking the Grade.
The experiment starts with her choice to take up the challenge of Robert Talbert, another proponent of alternative grading, to “show us the details” of what it means to try to ungrade a course. She establishes her framework, goals and approach at the outset.
From there she does a series of posts, describing her activities and then sharing student perceptions and responses. The informal feedback from students is fairly constant, but several times she also requires students to do a more formal self-assessment, which serves not only as a tool for student reflection but also as a source of data for Donahoe.
Read entry to entry, the experiment takes on a narrative form, which not only makes for more compelling reading but also provides a lens for Donahoe to reflect on what’s happening in her class. We see the layers of complexity at play in the teaching experiment.
The story is not all puppies and rainbows. There are inevitable struggles and setbacks (see above about wicked problems), and adjustments must be made on the fly. Donahoe examines a number of core questions around alternative grading practices regarding workload, student honesty, attendance and assignment completion, the kinds of questions that I have invariably been asked when I talk about my own alternative grading experiments.
I particularly appreciated the entry on “Ungrading Guilt,” in which Donahoe wrestles with the ways the results have fallen short of her desires. It is a sentiment I know well, and was often plagued by, but seeing someone else voice these feelings about their work makes it apparent that much of the self-recrimination is rooted in hindsight bias. If I believe that Donahoe should give herself more slack, maybe the same is true of me.
As with any good experiment, Donahoe shares her conclusions in the form of a “start, stop, continue” post where she outlines what aspects of her approach will be jettisoned, what will be continued and what changes she will make the next time around. As qualitative research, it is a wonderful example of running an experiment and extracting as much learning from it as possible.
No, we cannot judge how the approach will scale. We cannot measure it against a single dependent variable. But what we do have is something, to my mind, much more useful: an experiment rooted in experience that can inform the work of others as they engage in their own challenge of corralling this wicked problem.
Doing this requires giving instructors the time, space and incentive to engage in these practices and publish their results, but the resources required would likely be relatively modest, perhaps one course release per year or a summer stipend for writing up the findings.
Good teaching happens all the time, so it’s not as though we have no idea what it looks like. Even as the necessary structural changes are happening, we can start producing the data that will inform the real work in the classroom.