Panel Comments: Debra Rog

I’m very pleased to be here in front of such an esteemed panel. What I tried to do, rather than go through each talk one by one, was to cut across them and pull out some themes, and what I think they say for our association. First, I want to thank George for stepping in at the last minute and providing an overview, which I think was a very good one, setting the stage for us to understand more broadly what evaluation policy is all about. In particular, what he did, and you heard it through a number of the other talks, was to identify the external forces that are pushing on evaluation at a moment when the time is right for evaluation policy and for us to have a role.

We heard about evaluation policy in three different settings, and I think they really represented a range, from more of a normative practice without a formal policy to the much more detailed, mandated policy that Bernice talked about. The settings also had a range of approaches and, although the settings were different, there were lots of similarities. I think what you also heard across them is that they are all affected by forces, some internal, some external, that push the direction of evaluation. Tom spoke most clearly about some of the internal forces: the culture of the organization, the emphasis on surveillance coming from its public health focus, and how that shapes evaluation. We heard a lot about the external forces: GPRA, PART, GAO (I can’t read my notes here), and other forces that help to strengthen evaluation.

One thing I also heard, going back to setting the stage and thinking about these forces, is the patience we have to have in putting this together. Hearing that GPRA took 12 years to get started tells you that progress can happen, but you’ve got to have hope and optimism.

In terms of roles, despite the differences among the organizations, there was remarkable similarity in the roles that AEA can play, ones that we are prepared to take on. I heard a role as critical advisor: to be a sounding board, to support high standards, to provide a good housekeeping seal of approval when it makes sense and when it’s appropriate, and to act as a peer reviewer, looking both at specific kinds of programs and initiatives and, more globally, at what the organizations are doing. I also saw a role as educator, clarifying what evaluation is and is not. Again, Tom said it well: evaluation is not just performance measurement; it’s broader than that. Our role is to help clarify and educate.

I heard a lot of what we’ve been hearing in a number of sessions. What is really nice is that none of these organizations says that evaluation is one thing, or that there is some hierarchy of priority among methods; rather, there is a real need to keep reinforcing and educating about how we make method choices: matching methods to questions, matching methods to contexts, matching methods to a variety of those factors.

We also heard about evaluators as trainers who go and actually train the people doing evaluation, both through the mechanisms we already have in place and perhaps by going inside organizations to do more internal training, covering new and cutting-edge approaches as well as the standard practices already in use.

Lastly, there was a role I heard something of: being a partner. In particular, one example was that maybe we need to talk about joint publications and working together, again to help bring the language and culture of evaluation inside these organizations.

I did hear some challenges. One is moving up the demand for evaluation; Tom talked about moving it from a secondary demand to a primary demand. Another is working through the complexity of some of these organizations. We saw a very complicated chart, with evaluation somewhere inside it, and started figuring out: we’ve got three organizations here, we want to go to different policy places, so how do you start to negotiate that? There are also places that lack processes for evaluation, the need to negotiate the different languages and how folks talk about things, and the question, as we head into a new administration, of how much openness there will be and how we’re going to deal with that.

Then, as one of the panelists said at the end, we need to not be co-opted, to guard against misuse of evaluation, and to keep holding to our standards as we get involved. So I’m very hopeful listening to this. With greater involvement and collaboration, we might be able to avoid some of the divisiveness that has been created by other policies, ones that were misguided or misunderstood. It also provides for evaluations that have a greater potential to be methodologically sound, appropriate to the questions and settings, and more likely to lead to good use.

I want to stop there with my comments, but I want to start with the first question.

Questions and Answers