Originally published on September 27, 2018
When I was working on my Ph.D. and had to choose a topic for my dissertation, I was a teacher and still a little bitter that in NYS, as in many states, teachers cannot be promoted to administration without certification. At the time, it felt a lot like a hoop I had to jump through, or a money-making scheme that said, "as long as you pay the state and the college the money, you can be an administrator."
In any event, when it came time for me to select a topic for my dissertation, I decided I wanted to study the impact of leadership preparation on leadership practice (if you're interested, you can read the first few pages of my dissertation here). I wanted to explore whether one's administrative abilities truly grow as a result of formal leadership training or whether, as I suspected, the formal training was just a formality. This exploration had, at its heart, a focus on how the theory of leadership interacts with the practice of leadership, and as a result, I find myself endlessly attuned to this intersection.
I think this is why I was really struck by the opening to the book Unstuck, which I wrote about in my last blog post. On page 3, under the heading, "Getting from Knowing Better to Doing Better," the authors write:
The first point we must acknowledge is that doing anything well--sometimes referred to as closing the knowing-doing gap--is no small feat. Like most organizations, schools have yet to perfect the art of implementation. Case in point: A few years ago, the U.S. government supported more than two dozen scientific studies of popular interventions through its Regional Educational Laboratory (REL) program. The hope was that in doing so, the so-called regional labs could separate the wheat from the chaff and add the good programs to the newly created What Works Clearinghouse, an online repository designed to provide educators with gold-star reassurance of which programs they could go forth and use with confidence.
There was just one problem with the raft of studies: In every case but one, the popular widespread approaches, when put under glass, were found to have no positive effects on student achievement (Goodwin, 2011b). If we dig more deeply into those published studies, though, it becomes apparent that in most cases, the programs studied were so poorly implemented that researchers were unable to discern whether the fault lay with the program itself or poor deliverology. (p. 3)
Goodwin et al. then go on to list four examples of large-scale studies where the programmatic expectations were not followed, thereby causing the results to lack validity and reliability. "Of the more than two dozen programs rigorously studied by the regional labs, only one was found to yield significant results..." (p. 4)
I find this research aligns with my own experiences and observations (theory and practice). Most of us in the field of education, when talking about curriculum, cite Marzano's work stating that curriculum should be "guaranteed" and "viable": guaranteed, meaning everyone has access to the same curriculum, and viable, meaning the curriculum contains the information that is important to learn to meet the standards. I would absolutely agree with both of these components, but I have found that guaranteed and viable are not enough. The third, equally important component to consider is "context," or the setting, resources, and training that will be needed for the proper delivery of that curriculum.
My classic illustration of the importance of context is a district that selected a curriculum for its middle school math classes. The curriculum was going to be used by all middle school math teachers (guaranteed), and it was absolutely aligned to the state standards (viable). Unfortunately, the lessons in the curriculum were 60 minutes in length and premised on 60 minutes of math instruction daily. This was not considered when selecting the curriculum, since the district's middle school math block was only 39 minutes. Clearly there is a discrepancy, and clearly considering only whether the curriculum is guaranteed and viable is insufficient.
This is one example of how the knowing-doing gap happens: the people who design what to do are designing it in a context that doesn't match the one in which you are working. It's the classic case of theory versus practice. The research wasn't conducted in your school or classroom. Your students are different. Your training, if you were lucky enough to get any, is different from the training those who were studied received. More than likely, there wasn't an investment in all of the resources. There is a litany of reasons that could cause the gap, and it's probably more than one in any given scenario. I am not looking to make excuses for a lack of fidelity or lower results; I am trying to explain how this happens.
This leads me to think about the science and art of teaching, which I'll save for another post. Suffice it to say, it is my belief that the best practice is informed by theory. Ideally, the theory matches your context; however, that is often not the case. In that event, the best we can do is adapt the theory to fit the context.
Which brings me back to my dissertation...so what did the research illuminate regarding leadership preparation? To my disappointment at the time, I learned that good leaders are made. This does not mean that some people aren't naturally wired for leadership or more adept at it. But even the best of us grow when given training--and the better the training, the better the result. The best leaders are those who are able to enact what they have learned with intention, thought, and purpose. In the end, I find that to be good news, and it encourages me to continually learn and grow!