This month's Journal of Extension has a really good article, "A Call to Embrace Program Innovation" (July 2015). It identifies some great tips and processes by which Cooperative Extension can really fulfill its primary mission of leading through innovative programming. One of the things that really jumped out at me is that innovation can emerge at many different stages of program development: in the planning, the undertaking, and the evaluation. I tend to think of the program itself as the "innovation," but this article talks about looking at the whole process and seeing where there are places for innovation to occur.
One of the keywords that I really like is "tinkerer." How many of us think of ourselves as experts at tinkering? Looking at the ways many of you provide programming on the same topic but in different ways, I think a strong Extension expert is very good at this. If you are not good at tinkering, you may not find the level of enjoyment in your job that you would expect. However, I tend to think that for most of us the tinkering is adaptive rather than planned: we adjust as we go. I wonder whether building purposeful tinkering back into our work would help us innovate in ways that differentiate our product more substantially.
Permission to tinker, as this article suggests, also entails a certain amount of acceptable failure. A work culture that accepts failure (as opposed to demanding constant success, because that's all we can report) is different from most organizations' cultures, except for truly innovative ones (irony!). I have yet to find a reporting system that celebrates failure. In fact, most of us are rated only on continuing success, not on attempts at innovation that end in failure. Some of the most innovative Extension programming I've attended doesn't bear the marks of "success" by which we measure: fewer people attend, and evaluations may show less knowledge gained. Yet it is innovative in a way that differentiates the unique product we offer as Cooperative Extension. On the flip side, Plans of Work (POW) and long-range program planning do tend to give credit for innovation when they are created, but when evaluation time comes, the credit for success is usually still in the numbers rather than in the attempt to create programming innovation. I tend to agree with this observation from the authors:
Program development models often employed in Extension are risky in such uncertain conditions because they tend to be linear in implementation and thus 1) underemphasize attention toward changing conditions and 2) overemphasize efforts to mature programming.
Most of us don't have a box that asks how many new programs we've tried to implement, only boxes for how many participants attended or hours of contact. It will most likely come down to whether a supervisor (or an organization) chooses to recognize those contributions as valuable. There is therefore a danger in not having an organizational culture of innovation that upper management recognizes as a tangible measure through some process. The process could be as simple as supervisor training and evaluation documents that ask for measures of innovation, or an organizational culture that designates a percentage of time in which it is acceptable to innovate.
In casual conversation, most upper management would say they value innovation highly, but there are likely no direct statements in the organization's mission or vision documents that speak to empowering employees to be innovative. The expectation is that it occurs organically; however, I suspect employees are somewhat reluctant to take on risky innovation in case they cannot show a reportable success at the end of the program. I think that in programming we encourage the "tinkering" that can get more numbers, rather than the "tinkering" that might improve the product. That's not to say that high participation numbers are not due to high-quality programming. The question is more about the initial development of innovative, front-of-the-wave programming. Truly successful innovative programming might lead to high participation numbers, but it may not. Some high-quality programming may have consistently smaller numbers and more challenges. Again, it's a question of how we measure success and failure.
I like the idea of looking at individual programs and assessing them along their trajectory (without the expectation that the trajectory is linear). There are start-up programs that could really take to some of the methods suggested in this article, and some long-running programs that, while successful, might benefit from re-evaluating the intersection of innovation and the CES mission.
What do you think? How important is a culture of innovation to creating innovative programs? Do we have processes in place that encourage the tinkering?