From John Gargani, AEA President
Over the past year, it has been my honor to represent the membership and promote evaluation around the world as your AEA president. In that short span, I believe the association has made some important strides. We completed two successful conferences, collaborated with other communities of practice, built international alliances, created new opportunities for local affiliates and topical interest groups, and began implementing an ambitious strategic plan. There is a great deal of work ahead, and in my last column as president, I would like to focus on a critical part of it.
Professionalization remains one of our most important and contested areas of AEA’s work. As the recent U.S. presidential election reminds us, the role evaluation plays in the public and private sectors is neither certain nor predictable. Professionalization is how we make it otherwise. I define professionalization as the collection of efforts that we, as a community, undertake to ensure that evaluation more effectively improves the lives of people. It includes not only strengthening the quality of practice, but strengthening the efforts of commissioners, policy makers, local communities, funders, and other stakeholders, all of whom affect and are affected by evaluation.
Among them are evaluation associations. Around the world, local, national, and regional associations are building their capacities and professionalizing. As our association works to professionalize itself, not its membership or the field, it is critical that AEA retains what makes it special—the open and inclusive nature of our community. We aren’t perfect. We may sometimes debate ideas too hotly, hesitate to embrace those we believe too different, or act in ways that reflect inequalities that, ironically, our professional lives are dedicated to overcoming. Yet, I have been impressed over my 17 years as a member by how well we notice our own imperfections, bring them to each other’s attention, and work together to improve. It’s evaluation at its best, and I never cease to find it inspiring.
There is new urgency to professionalizing the association. It calls for the optimistic, playful, and iterative approaches of the designer who rapidly learns by doing; the courage of the social entrepreneur who is willing to take risks to accomplish good; and the financial savvy of the social investor who acts strategically to ensure organizational sustainability. We have the talent to undertake this great effort and succeed. Do we have the will to act? And to do so quickly?
I hope so. Evaluation has never mattered more.
From Zachary Grays, AEA Staff, and the 2016-17 GEDI Cohort
Before discussing the Design Loft sessions, we should explain who we are. The Graduate Evaluation Diversity Internship (GEDI) program is an American Evaluation Association (AEA) internship whose goals are to expand the pool of graduate students of color in the field of evaluation, stimulate the thinking in and practice of culturally responsive evaluation, and deepen the field of evaluation’s ability to work in diverse settings. The program grew annually from a few GEDIs to include 15 cohort members this year. For our educational component, we attended Claremont Graduate University’s Professional Development Workshop Series in Evaluation and Applied Research Methods in August, where we learned about evaluation approaches. For the practicum, each GEDI cohort member works a few hours a week at an industry or government partner site. Some of those sites are United Way of the Bay Area, University of Southern California’s Shoah Foundation, and Partners in School Innovation.
In September of 2016, the newest GEDI cohort was tasked with evaluating the innovative Design Loft (DL) sessions at the AEA conference in Atlanta, Georgia. The DL sessions were the brainchild of Dr. Cameron Norman of Cense Research and Design and current AEA president Dr. John Gargani of Gargani + Company. Drs. Gargani and Norman conceptualized the DL sessions as a way to bring design thinking into evaluation work through short, interactive, and hands-on group activities. During each session, Dr. Norman taught a group of attendees one design skill or technique that could be folded into evaluation practice.
But isn’t design thinking different from evaluation?
The DL sessions were designed to dissolve that separation. While we may think of designers as the architects of websites or the creators of programs, evaluators assess the worth and social impact of programs. Evaluators design evaluations, including making creative decisions about data collection and representation. Moreover, evaluators often make recommendations that amend existing programmatic components or create new ones for clients. Evaluators do design work.
One of the most intriguing DL sessions was titled “A Day in the Life.” This scenario-based design thinking tool provides the evaluator with a means by which to gauge program participants’ lived experiences. Participants draw pictures or write text to lay bare what a day in their lives actually looks and feels like. This information can then be used to design a tool that fits into people’s lives. Evaluators can use it to learn about how or why a program intervention is or is not affecting participants’ lives.
The GEDI cohort is currently in the data analysis phase of the DL evaluation. Initial findings support the conclusion that the DL was executed as designed, that there was a transfer of tangible skills/tools, and that many participants found it valuable. In the December AEA newsletter we plan to share a summary of our findings. For the moment, it would seem that Drs. Norman and Gargani were right: Design thinking does have a place in evaluation work.
From Sheila B. Robinson, Potent Presentations Initiative Coordinator
“A sword never kills anybody; it is a tool in the killer's hand.”
-Lucius Annaeus Seneca
Fortunately, the Roman statesman Seneca the Younger never had to sit through a bad PowerPoint presentation. But, then again, he was born in the year 4 BCE. I’m sure he still endured his share of boring Iron Age presentations!
As far back as 2003, marketing guru Seth Godin was talking and writing about bad PowerPoint. In a 2011 blog post, Godin advocated for his atomic method of creating a PowerPoint presentation in which you, “Force yourself to break each concept into the smallest possible atom.” And then a little closer to home (evaluation, that is), Stephanie Evergreen wrote about atomic slide development and offered “before” and “after” examples. More and more, innovators are actively seeking ways to enhance slide design.
In a recent p2i newsletter article, I covered SlideShare, one of my favorite sources for slide design inspiration. Now I’d like to introduce you to Note & Point. I have to admit that it took me a bit to get the title: I was so focused on the two words as verbs and making sense of them in the context of presentations, that at first I failed to see that they are simply short for Keynote and PowerPoint! The tagline on the site’s home page says all you need to know about the site: “KILLER SLIDE DECKS.” That’s exactly what you’ll find on Note & Point. Scroll through pages and pages of slide decks on all sorts of topics that, according to the site, fill “the gaping void of inspiration for those of us who use projectors.”
This site is as user-friendly as it gets. One can sort by PowerPoint- or Keynote-created slide decks, view them, and post comments on them. I found the most success using the available tags to find content that interests me.
Note & Point does have its glitches, however. I found an interesting slide deck, but was disappointed when the link took me back to Note & Point’s home page. Another brought me to a 404 Error page. The site isn’t perfect, but it is an indication that we are moving in the right direction.
Back to Seth Godin. If you want the history of Really Bad PowerPoint, Godin shares it in this 2007 blog post, a reprint of his original eBook from 2003. Also worth checking out here are the links to other blog posts that refer back to this one.
Want a little comedy with your inspiration? Try The Greatest PowerPoint Presentation Ever. It is supposedly designed by “Anyone in corporate America,” but revealed in the comments to be the work of Don McMillan, a comedian well-known for his sketch, “Life After Death by PowerPoint.”
Leslie Goodyear, Ph.D., is a principal research scientist at Education Development Center (EDC) in Waltham, Massachusetts, just outside of Boston. Her passion is in helping clients understand the value of evaluation for documenting and demonstrating program value, but also for personal and organizational reflection and learning. She loves helping clients think about their vision and goals and figure out, together, what information will help them succeed. And she loves working with committed, passionate colleagues and clients who create exciting experiences for youth and their families.
At EDC, she leads evaluations of national and local education initiatives, develops evaluation capacity building systems and serves as an evaluation advisor for clients and colleagues. Over her career, she’s conducted evaluations in formal and informal educational settings, and has experience evaluating programs in youth development, civic engagement, afterschool, STEM education and, most recently, broadening participation in STEM. While most of her time these days is spent managing people and projects, she loves getting out in the field, observing programs and meeting the people who make them happen.
From 2009–2012, Leslie took a leave from EDC to serve as a program officer at the National Science Foundation, where she administered national grants programs, supervised evaluation and research contracts, developed directorate and division-level evaluation policy, and learned about federal agencies and STEM policy. While at NSF, she also stoked a latent interest in all things science, from learning about the cutting-edge research in Antarctica and the discoveries of the Hadron Collider to talking with grantees about their cool ideas for helping kids get excited about science, like theater shows that explain climate change or golf courses that foster physics understanding.
Leslie is finishing her term as associate editor of the American Journal of Evaluation, and is a former AEA board member and past chair of the AEA Ethics Committee. Her publications and presentations focus on mixed methods and qualitative inquiry, evaluation use, ethics, and evaluation capacity building.
Leslie received her M.S. and Ph.D. in program evaluation and planning from Cornell University. And when she isn’t designing or implementing evaluations, she’s running (half marathons are her zone), cooking, listening to 70’s funk, and enjoying Boston.
In her ballot statement, Dr. Goodyear stated, “I would be honored to serve as your president. If elected, I will work tirelessly to promote an AEA that a) supports evaluators with practical, timely resources and the theoretical grounding to ensure quality; b) makes the most of communication and collaboration platforms to engage diverse evaluators in ongoing dialogue; and c) increases member participation at all levels of the association and, in particular, develops clear pathways to leadership positions.”
AEA honored four individuals and one organization at its 2016 Awards Luncheon in Atlanta. Honored this year were recipients in five categories who are involved in cutting-edge evaluation/research initiatives that have impacted citizens around the world. We will spotlight each award winner in upcoming issues. In this issue, we extend our congratulations to Marvin Alkin.
Marvin Alkin, Emeritus Professor, Graduate School of Education, University of California, Los Angeles
2016 AEA Research on Evaluation Award
Marvin C. Alkin is emeritus professor in the Social Research Methodology Division of the Graduate School of Education & Information Studies. Since receiving his doctorate from Stanford University in 1964, he has been a member of the UCLA faculty. Dr. Alkin has, at various times, served as chair of the Education Department and Associate Dean of the School.
Dr. Alkin was one of the founders of the Center for the Study of Evaluation and was its Director for seven years. He is a leading authority in the field of evaluation. He has published important research studies on the topic of the use of evaluation information in decision-making and is considered to be one of the primary researchers in this area. He is also noted for his work related to comparative evaluation theory. Alkin was the recipient of the American Evaluation Association's Paul F. Lazarsfeld Award for Evaluation Theory. Dr. Alkin's publications list includes seven books on evaluation, and over 150 journal articles, book chapters, monographs, and technical reports. Books include Using Evaluations, Debates on Evaluation, Evaluation Essentials and Evaluation Roots. He was the editor-in-chief of the four-volume Encyclopedia of Educational Research (6th edition), published by Macmillan in 1992.
Dr. Alkin was associate editor of Studies in Educational Evaluation from its inception in 1975 to 2010, and associate editor of Evaluation Review. He previously had been editor of the journal Educational Evaluation and Policy Analysis and co-section editor for the American Journal of Evaluation.
Alkin has been a consultant to numerous national governments and directed program evaluations in 14 different countries.
Last night, the Chicagoland Evaluation Association (CEA) held its Annual AEA Conference recap event. For many years, this annual social and professional gathering of CEA members has provided a forum for participants to reflect on the Annual Conference themes, showcase AEA conference presentations to local colleagues who could not attend the conference, and promote upcoming AEA initiatives and themes.
The conversation at the CEA recap is often lively, reflective, and engaging, and is accompanied by food, companionship, and laughter. The group of 10 participants was diverse, representing the racial diversity of Chicago as well as a variety of professions and evaluation contexts. This year’s recap event, however, provided a more expansive conversation for participants: The discussion moved from evaluation and design, to the recent presidential election, to the political and social rhetoric, and to our role, as well as the role of our profession, in these conversations. The event provided a forum for the group to discuss personal feelings about the election, as well as the role of our profession as it relates to Evaluation and Action, next year’s conference theme. It became clear over the course of the conversation that evaluation and evaluators are deeply driven by values, diversity, public good, and action, no matter their personal affiliations. The recap participants did not always agree, but the discussion was an excellent place to consider and reflect on profound issues in our country.
The conversation, in many ways, demonstrated the best of competencies and principles of the evaluation profession. AEA Guiding Principles (2003) indicate that:
- Principle #4: Evaluators respect the security, dignity, and self-worth of respondents, program participants, clients, and other evaluation stakeholders.
- Principle #5: Evaluators articulate and take into account the diversity of general and public interests and values that may be related to the evaluation.
Regardless of political affiliation, the recent presidential campaigns and elections have produced strong emotional reactions for many. The two guiding principles above indicate that evaluation can provide a safe forum in which to discuss difficult issues. Evaluators should strive to create these spaces for conversation in their practice, or support them as they occur, as they did at the CEA recap event. The theme of Evaluation and Action will unfold with many opportunities to engage locally and nationally in the next year. Please consider participating in upcoming AEA and local affiliate activities linking evaluation to action. The results can be rewarding for individuals, communities, and the profession.
Asma M. Ali PhD (ABD) is a program evaluation and applied social science research professional. At the American Society for Clinical Pathology, she leads a team that develops data collection and reporting procedures to measure learning, predict change, and measure outcomes for complex, multi-stakeholder continuing medical education initiatives. She enjoys partnering with stakeholders so that evaluations reflect their program and initiative values, and develops creative ways to measure impact and change.
Her clients have varied from large government agencies and universities to private consulting firms and non-profit community organizations. She has 15+ years of evaluation and research management experience in diverse industries such as philanthropy, medical education, health services research, K-12 and higher education, evaluation studies, nonprofit programming, survey research, and community development.